======================================================
WARNING: possible circular locking dependency detected
6.13.0-syzkaller-00603-g3d3a9c8b89d4 #0 Not tainted
------------------------------------------------------
syz-execprog/5305 is trying to acquire lock:
ffff888000e21438 (&q->q_usage_counter(io)#37){++++}-{0:0}, at: __submit_bio+0x2c6/0x560 block/blk-core.c:629

but task is already holding lock:
ffffffff8ea39440 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
ffffffff8ea39440 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3951

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3867
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4070 [inline]
       slab_alloc_node mm/slub.c:4148 [inline]
       __do_kmalloc_node mm/slub.c:4297 [inline]
       __kmalloc_node_noprof+0xb2/0x4d0 mm/slub.c:4304
       __kvmalloc_node_noprof+0x72/0x190 mm/util.c:645
       sbitmap_init_node+0x2d4/0x670 lib/sbitmap.c:132
       scsi_realloc_sdev_budget_map+0x2a7/0x460 drivers/scsi/scsi_scan.c:246
       scsi_add_lun drivers/scsi/scsi_scan.c:1106 [inline]
       scsi_probe_and_add_lun+0x3173/0x4bd0 drivers/scsi/scsi_scan.c:1287
       __scsi_add_device+0x228/0x2f0 drivers/scsi/scsi_scan.c:1622
       ata_scsi_scan_host+0x236/0x740 drivers/ata/libata-scsi.c:4575
       async_run_entry_fn+0xa8/0x420 kernel/async.c:129
       process_one_work kernel/workqueue.c:3236 [inline]
       process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#37){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       swap_writepage_bdev_async mm/page_io.c:451 [inline]
       __swap_writepage+0x747/0x14d0 mm/page_io.c:474
       swap_writepage+0x6ee/0xce0 mm/page_io.c:289
       pageout mm/vmscan.c:696 [inline]
       shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1374
       evict_folios+0x3c92/0x58c0 mm/vmscan.c:4600
       try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4799
       shrink_one+0x3b9/0x850 mm/vmscan.c:4844
       shrink_many mm/vmscan.c:4907 [inline]
       lru_gen_shrink_node mm/vmscan.c:4985 [inline]
       shrink_node+0x37c5/0x3e50 mm/vmscan.c:5966
       shrink_zones mm/vmscan.c:6225 [inline]
       do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6287
       try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6537
       __perform_reclaim mm/page_alloc.c:3929 [inline]
       __alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3951
       __alloc_pages_slowpath+0x811/0x10b0 mm/page_alloc.c:4382
       __alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4766
       alloc_pages_mpol_noprof+0x3e1/0x780 mm/mempolicy.c:2269
       folio_alloc_mpol_noprof mm/mempolicy.c:2288 [inline]
       vma_alloc_folio_noprof+0x12e/0x230 mm/mempolicy.c:2318
       folio_prealloc+0x2e/0x170
       alloc_anon_folio mm/memory.c:4752 [inline]
       do_anonymous_page mm/memory.c:4809 [inline]
       do_pte_missing mm/memory.c:3977 [inline]
       handle_pte_fault+0x2c98/0x5ed0 mm/memory.c:5801
       __handle_mm_fault mm/memory.c:5944 [inline]
       handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
       do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
       handle_page_fault arch/x86/mm/fault.c:1481 [inline]
       exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#37);
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io)#37);

 *** DEADLOCK ***

2 locks held by syz-execprog/5305:
 #0: ffff888042c7cc40 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff888042c7cc40 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x34b/0x790 mm/memory.c:6278
 #1: ffffffff8ea39440 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8ea39440 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3951

stack backtrace:
CPU: 0 UID: 0 PID: 5305 Comm: syz-execprog Not tainted 6.13.0-syzkaller-00603-g3d3a9c8b89d4 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 bio_queue_enter block/blk.h:75 [inline]
 blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
 __submit_bio+0x2c6/0x560 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
 swap_writepage_bdev_async mm/page_io.c:451 [inline]
 __swap_writepage+0x747/0x14d0 mm/page_io.c:474
 swap_writepage+0x6ee/0xce0 mm/page_io.c:289
 pageout mm/vmscan.c:696 [inline]
 shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1374
 evict_folios+0x3c92/0x58c0 mm/vmscan.c:4600
 try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4799
 shrink_one+0x3b9/0x850 mm/vmscan.c:4844
 shrink_many mm/vmscan.c:4907 [inline]
 lru_gen_shrink_node mm/vmscan.c:4985 [inline]
 shrink_node+0x37c5/0x3e50 mm/vmscan.c:5966
 shrink_zones mm/vmscan.c:6225 [inline]
 do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6287
 try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6537
 __perform_reclaim mm/page_alloc.c:3929 [inline]
 __alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3951
 __alloc_pages_slowpath+0x811/0x10b0 mm/page_alloc.c:4382
 __alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4766
 alloc_pages_mpol_noprof+0x3e1/0x780 mm/mempolicy.c:2269
 folio_alloc_mpol_noprof mm/mempolicy.c:2288 [inline]
 vma_alloc_folio_noprof+0x12e/0x230 mm/mempolicy.c:2318
 folio_prealloc+0x2e/0x170
 alloc_anon_folio mm/memory.c:4752 [inline]
 do_anonymous_page mm/memory.c:4809 [inline]
 do_pte_missing mm/memory.c:3977 [inline]
 handle_pte_fault+0x2c98/0x5ed0 mm/memory.c:5801
 __handle_mm_fault mm/memory.c:5944 [inline]
 handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
 do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x474e6b
Code: 1f f8 c3 f3 44 0f 7f 3f f3 44 0f 7f 7c 1f f0 c3 f3 44 0f 7f 3f f3 44 0f 7f 7f 10 f3 44 0f 7f 7c 1f e0 f3 44 0f 7f 7c 1f f0 c3 44 0f 7f 3f f3 44 0f 7f 7f 10 f3 44 0f 7f 7f 20 f3 44 0f 7f 7f
RSP: 002b:000000c0030266e0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000080 RCX: 0000000000000010
RDX: 000000c007a45f80 RSI: 0000000000000000 RDI: 000000c007a45f80
RBP: 000000c0030266f8 R08: 0000000000000001 R09: 000000c007a44000
R10: 0000000000000001 R11: 00007f0a668ed000 R12: 00000000011c1f00
R13: 00007f0a6684f1c8 R14: 000000c001b11500 R15: 000000000000000f
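
The cycle lockdep reports is: direct reclaim (fs_reclaim held) swaps a folio out and has to enter the request queue via bio_queue_enter (q->q_usage_counter), while in chain #1 an allocation made with the queue usage counter effectively pinned (the GFP_KERNEL kvmalloc in scsi_realloc_sdev_budget_map -> sbitmap_init_node) can itself recurse into fs reclaim. One common way to break this class of cycle is to make such allocations non-reclaiming with the memalloc_noio scope API. The sketch below is only a hedged illustration of that pattern, not the actual fix for this report; memalloc_noio_save()/memalloc_noio_restore() and kvmalloc() are real kernel interfaces, but the wrapper function is hypothetical.

/*
 * Minimal sketch, assuming the fix direction is "no reclaim-driven I/O
 * while q->q_usage_counter is pinned": memalloc_noio_save() marks the
 * current task so that nested allocations have __GFP_IO/__GFP_FS masked
 * off, which keeps reclaim from re-entering the block layer while the
 * caller already holds the queue (e.g. with the queue frozen).
 */
#include <linux/sched/mm.h>	/* memalloc_noio_save(), memalloc_noio_restore() */
#include <linux/slab.h>		/* kvmalloc(), kvfree() */

/* Hypothetical helper, for illustration only. */
static void *alloc_while_holding_queue(size_t size)
{
	unsigned int noio_flags;
	void *buf;

	noio_flags = memalloc_noio_save();	/* enter NOIO allocation scope */
	buf = kvmalloc(size, GFP_KERNEL);	/* effectively allocated as GFP_NOIO */
	memalloc_noio_restore(noio_flags);	/* leave the NOIO scope */

	return buf;
}

Passing GFP_NOIO directly to the one allocation would have a similar effect; the scope API is usually preferred because it also covers allocations made inside helpers such as sbitmap_init_node() without changing their callers.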