======================================================
WARNING: possible circular locking dependency detected
6.13.0-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor/5337 is trying to acquire lock:
ffff888000739438 (&q->q_usage_counter(io)#37){++++}-{0:0}, at: __submit_bio+0x2c6/0x560 block/blk-core.c:629

but task is already holding lock:
ffffffff8ea39140 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
ffffffff8ea39140 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3951

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3867
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4070 [inline]
       slab_alloc_node mm/slub.c:4148 [inline]
       __do_kmalloc_node mm/slub.c:4297 [inline]
       __kmalloc_node_noprof+0xb2/0x4d0 mm/slub.c:4304
       __kvmalloc_node_noprof+0x72/0x190 mm/util.c:645
       sbitmap_init_node+0x2d4/0x670 lib/sbitmap.c:132
       scsi_realloc_sdev_budget_map+0x2a7/0x460 drivers/scsi/scsi_scan.c:246
       scsi_add_lun drivers/scsi/scsi_scan.c:1106 [inline]
       scsi_probe_and_add_lun+0x3173/0x4bd0 drivers/scsi/scsi_scan.c:1287
       __scsi_add_device+0x228/0x2f0 drivers/scsi/scsi_scan.c:1622
       ata_scsi_scan_host+0x236/0x740 drivers/ata/libata-scsi.c:4575
       async_run_entry_fn+0xa8/0x420 kernel/async.c:129
       process_one_work kernel/workqueue.c:3236 [inline]
       process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#37){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       swap_writepage_bdev_async mm/page_io.c:451 [inline]
       __swap_writepage+0x747/0x14d0 mm/page_io.c:474
       swap_writepage+0x6ee/0xce0 mm/page_io.c:289
       pageout mm/vmscan.c:696 [inline]
       shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1374
       evict_folios+0x3c92/0x58c0 mm/vmscan.c:4600
       try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4799
       shrink_one+0x3b9/0x850 mm/vmscan.c:4844
       shrink_many mm/vmscan.c:4907 [inline]
       lru_gen_shrink_node mm/vmscan.c:4985 [inline]
       shrink_node+0x37c5/0x3e50 mm/vmscan.c:5966
       shrink_zones mm/vmscan.c:6225 [inline]
       do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6287
       try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6537
       __perform_reclaim mm/page_alloc.c:3929 [inline]
       __alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3951
       __alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4382
       __alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4766
       alloc_pages_mpol_noprof+0x3e1/0x780 mm/mempolicy.c:2269
       folio_alloc_mpol_noprof mm/mempolicy.c:2288 [inline]
       vma_alloc_folio_noprof+0x12e/0x230 mm/mempolicy.c:2318
       folio_prealloc+0x2e/0x170
       wp_page_copy mm/memory.c:3367 [inline]
       do_wp_page+0x1253/0x49b0 mm/memory.c:3759
       handle_pte_fault+0xfa5/0x5ed0 mm/memory.c:5817
       __handle_mm_fault mm/memory.c:5944 [inline]
       handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
       do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
       handle_page_fault arch/x86/mm/fault.c:1481 [inline]
       exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#37);
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io)#37);

 *** DEADLOCK ***

2 locks held by syz-executor/5337:
 #0: ffff88801ef48ec8 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff88801ef48ec8 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x34b/0x790 mm/memory.c:6278
 #1: ffffffff8ea39140 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #1: ffffffff8ea39140 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3951

stack backtrace:
CPU: 0 UID: 0 PID: 5337 Comm: syz-executor Not tainted 6.13.0-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 bio_queue_enter block/blk.h:75 [inline]
 blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
 __submit_bio+0x2c6/0x560 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
 swap_writepage_bdev_async mm/page_io.c:451 [inline]
 __swap_writepage+0x747/0x14d0 mm/page_io.c:474
 swap_writepage+0x6ee/0xce0 mm/page_io.c:289
 pageout mm/vmscan.c:696 [inline]
 shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1374
 evict_folios+0x3c92/0x58c0 mm/vmscan.c:4600
 try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4799
 shrink_one+0x3b9/0x850 mm/vmscan.c:4844
 shrink_many mm/vmscan.c:4907 [inline]
 lru_gen_shrink_node mm/vmscan.c:4985 [inline]
 shrink_node+0x37c5/0x3e50 mm/vmscan.c:5966
 shrink_zones mm/vmscan.c:6225 [inline]
 do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6287
 try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6537
 __perform_reclaim mm/page_alloc.c:3929 [inline]
 __alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3951
 __alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4382
 __alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4766
 alloc_pages_mpol_noprof+0x3e1/0x780 mm/mempolicy.c:2269
 folio_alloc_mpol_noprof mm/mempolicy.c:2288 [inline]
 vma_alloc_folio_noprof+0x12e/0x230 mm/mempolicy.c:2318
 folio_prealloc+0x2e/0x170
 wp_page_copy mm/memory.c:3367 [inline]
 do_wp_page+0x1253/0x49b0 mm/memory.c:3759
 handle_pte_fault+0xfa5/0x5ed0 mm/memory.c:5817
 __handle_mm_fault mm/memory.c:5944 [inline]
 handle_mm_fault+0x1106/0x1bb0 mm/memory.c:6112
 do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f089837c108
Code: 84 e4 74 66 e8 89 04 00 00 41 89 c4 85 c0 0f 84 4e 01 00 00 49 c7 c5 a8 ff ff ff 64 45 8b 75 00 48 89 da 89 ee bf 02 00 00 00 93 09 00 00 45 85 e4 79 05 64 45 89 75 00 48 8b 84 24 c8 00 00
RSP: 002b:00007ffd18fab2f0 EFLAGS: 00010202
RAX: 0000000000000002 RBX: 0000000000000000 RCX: 00007f089837c593
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000002
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
R10: 000055558e52f7d0 R11: 0000000000000246 R12: 0000000000000002
R13: ffffffffffffffa8 R14: 0000000000000006 R15: 0000000000000000
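In short, lockdep is flagging an inverted acquisition order between fs_reclaim and the request queue's q_usage_counter(io): dependency #1 shows the SCSI scan path entering fs_reclaim (kvmalloc inside scsi_realloc_sdev_budget_map) while the queue's q_usage_counter(io) is already held, whereas dependency #0 shows direct reclaim, already inside fs_reclaim, swapping out a page and taking q_usage_counter(io) via bio_queue_enter. The snippet below is only an illustrative userspace analogue of that ABBA pattern using plain pthread mutexes; the names scan_path/reclaim_path are made up here and the kernel primitives involved (fs_reclaim annotation, percpu-ref queue freezing) are not mutexes.

/* Illustrative ABBA lock-order inversion, analogous to the report above:
 * "reclaim" stands in for fs_reclaim, "queue" for q_usage_counter(io).
 * Userspace sketch only, not kernel code. Build with: gcc -pthread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reclaim = PTHREAD_MUTEX_INITIALIZER; /* ~ fs_reclaim */
static pthread_mutex_t queue   = PTHREAD_MUTEX_INITIALIZER; /* ~ q_usage_counter(io) */

/* Analogue of dependency #1: the scan path holds the queue, then
 * allocates memory, which may enter reclaim (queue -> reclaim). */
static void *scan_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&queue);
	pthread_mutex_lock(&reclaim);
	pthread_mutex_unlock(&reclaim);
	pthread_mutex_unlock(&queue);
	return NULL;
}

/* Analogue of dependency #0: reclaim writes a swap page, which submits a
 * bio and enters the queue (reclaim -> queue: the inverted order). */
static void *reclaim_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&reclaim);
	pthread_mutex_lock(&queue);
	pthread_mutex_unlock(&queue);
	pthread_mutex_unlock(&reclaim);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, scan_path, NULL);
	pthread_create(&b, NULL, reclaim_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	/* If each thread grabs its first lock before the other takes its
	 * second, both block forever: the deadlock lockdep warns about. */
	puts("done");
	return 0;
}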