INFO: task bch-reclaim/loo:6638 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc1-syzkaller-g0af2f6be1b42 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:bch-reclaim/loo state:D stack:0 pid:6638 tgid:6638 ppid:2 task_flags:0x200040 flags:0x00000008
Call trace:
 __switch_to+0x414/0x788 arch/arm64/kernel/process.c:701 (T)
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x16a4/0x2c80 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xbc/0x238 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6917
 __mutex_lock_common+0xda8/0x2604 kernel/locking/mutex.c:678
 __mutex_lock kernel/locking/mutex.c:746 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:798
 bch2_journal_reclaim_thread+0x15c/0x504 fs/bcachefs/journal_reclaim.c:761
 kthread+0x674/0x7dc kernel/kthread.c:464
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:862
INFO: task bch-reclaim/loo:6638 is blocked on a mutex likely owned by task syz-executor:6480.
task:syz-executor state:D stack:0 pid:6480 tgid:6480 ppid:1 task_flags:0x400140 flags:0x0000000d
Call trace:
 __switch_to+0x414/0x788 arch/arm64/kernel/process.c:701 (T)
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x16a4/0x2c80 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xbc/0x238 kernel/sched/core.c:6860
 __closure_sync+0x1a0/0x2d8 lib/closure.c:146
 closure_sync include/linux/closure.h:195 [inline]
 __bch2_wait_on_allocator+0x1c4/0x23c fs/bcachefs/alloc_foreground.c:1753
 bch2_wait_on_allocator fs/bcachefs/alloc_foreground.h:254 [inline]
 bch2_btree_update_start+0x114c/0x1604 fs/bcachefs/btree_update_interior.c:1251
 bch2_btree_split_leaf+0x120/0x804 fs/bcachefs/btree_update_interior.c:1864
 bch2_trans_commit_error+0x174/0x1174 fs/bcachefs/btree_trans_commit.c:904
 __bch2_trans_commit+0x1c4c/0x6910 fs/bcachefs/btree_trans_commit.c:1069
 bch2_trans_commit fs/bcachefs/btree_update.h:195 [inline]
 bch2_btree_write_buffer_flush_locked+0x25f4/0x2df4 fs/bcachefs/btree_write_buffer.c:452
 btree_write_buffer_flush_seq+0x119c/0x12f4 fs/bcachefs/btree_write_buffer.c:570
 bch2_btree_write_buffer_journal_flush+0xcc/0x154 fs/bcachefs/btree_write_buffer.c:586
 journal_flush_pins+0x71c/0xcc0 fs/bcachefs/journal_reclaim.c:589
 journal_flush_pins_or_still_flushing fs/bcachefs/journal_reclaim.c:859 [inline]
 journal_flush_done+0x100/0x6c8 fs/bcachefs/journal_reclaim.c:877
 bch2_journal_flush_pins+0x208/0x34c fs/bcachefs/journal_reclaim.c:909
 bch2_journal_flush_all_pins fs/bcachefs/journal_reclaim.h:76 [inline]
 __bch2_fs_read_only+0x134/0x450 fs/bcachefs/super.c:274
 bch2_fs_read_only+0x4ac/0xabc fs/bcachefs/super.c:356
 __bch2_fs_stop+0x104/0x560 fs/bcachefs/super.c:641
 bch2_put_super+0x40/0x50 fs/bcachefs/fs.c:2065
 generic_shutdown_super+0x12c/0x2bc fs/super.c:642
 bch2_kill_sb+0x40/0x58 fs/bcachefs/fs.c:2303
 deactivate_locked_super+0xc4/0x12c fs/super.c:473
 deactivate_super+0xe0/0x100 fs/super.c:506
 cleanup_mnt+0x34c/0x3dc fs/namespace.c:1435
 __cleanup_mnt+0x20/0x30 fs/namespace.c:1442
 task_work_run+0x230/0x2e0 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 do_notify_resume+0x178/0x1f4 arch/arm64/kernel/entry-common.c:151
 exit_to_user_mode_prepare arch/arm64/kernel/entry-common.c:169 [inline]
 exit_to_user_mode arch/arm64/kernel/entry-common.c:178 [inline]
 el0_svc+0xac/0x168 arch/arm64/kernel/entry-common.c:745
 el0t_64_sync_handler+0x84/0x108 arch/arm64/kernel/entry-common.c:762
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

Showing all locks held in the system:
3 locks held by kworker/0:0/9:
 #0: ffff0000c0028d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x674/0x1638 kernel/workqueue.c:3212
 #1: ffff800098217b80 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x708/0x1638 kernel/workqueue.c:3212
 #2: ffff8000930815a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
1 lock held by khungtaskd/32:
 #0: ffff800090127de0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x4/0x48 include/linux/rcupdate.h:330
5 locks held by kworker/u8:6/740:
3 locks held by kworker/0:2/1794:
2 locks held by dhcpcd/6140:
 #0: ffff800093065f08 (vlan_ioctl_mutex){+.+.}-{4:4}, at: sock_ioctl+0x53c/0x858 net/socket.c:1273
 #1: ffff8000930815a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
2 locks held by getty/6226:
 #0: ffff0000d7c1a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
 #1: ffff80009c1eb2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x46c/0x123c drivers/tty/n_tty.c:2222
6 locks held by syz-executor/6480:
 #0: ffff0000d8db80e0 (&type->s_umount_key#56){+.+.}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff0000d8db80e0 (&type->s_umount_key#56){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff0000d8db80e0 (&type->s_umount_key#56){+.+.}-{4:4}, at: deactivate_super+0xd8/0x100 fs/super.c:505
 #1: ffff0000f3200278 (&c->state_lock){+.+.}-{4:4}, at: __bch2_fs_stop+0xfc/0x560 fs/bcachefs/super.c:640
 #2: ffff0000f324ace8 (&j->reclaim_lock){+.+.}-{4:4}, at: journal_flush_done+0x90/0x6c8 fs/bcachefs/journal_reclaim.c:872
 #3: ffff0000f3204210 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire+0x18/0x54 include/linux/srcu.h:160
 #4: ffff0000f32045b8 (&wb->flushing.lock){+.+.}-{4:4}, at: btree_write_buffer_flush_seq+0x1194/0x12f4 fs/bcachefs/btree_write_buffer.c:569
 #5: ffff0000f3226550 (&c->gc_lock){.+.+}-{4:4}, at: bch2_btree_update_start+0x544/0x1604 fs/bcachefs/btree_update_interior.c:1179
1 lock held by bch-reclaim/loo/6638:
 #0: ffff0000f324ace8 (&j->reclaim_lock){+.+.}-{4:4}, at: bch2_journal_reclaim_thread+0x15c/0x504 fs/bcachefs/journal_reclaim.c:761
2 locks held by syz-executor/9515:
 #0: ffff80008ff422a8 (&ops->srcu#2){.+.+}-{0:0}, at: srcu_lock_acquire+0x18/0x54 include/linux/srcu.h:160
 #1: ffff8000930815a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffff8000930815a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffff8000930815a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xacc/0x185c net/core/rtnetlink.c:4064
1 lock held by syz.3.815/9552:
1 lock held by syz.5.817/9555:
 #0: ffff0001b3702958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:605 [inline]
 #0: ffff0001b3702958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1513 [inline]
 #0: ffff0001b3702958 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1837 [inline]
 #0: ffff0001b3702958 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x34c/0x2c80 kernel/sched/core.c:6691
4 locks held by syz.5.817/9567:
1 lock held by syz.2.816/9565:
 #0: ffff0000d9f212d0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_write_lock_killable include/linux/mmap_lock.h:146 [inline]
 #0: ffff0000d9f212d0 (&mm->mmap_lock){++++}-{4:4}, at: vm_mmap_pgoff+0x1c0/0x4ac mm/util.c:577
4 locks held by syz.2.816/9566:
1 lock held by syz.0.818/9568:
 #0: ffff0000c2f2ac50 (&mm->mmap_lock){++++}-{4:4}, at: mmap_write_lock_killable include/linux/mmap_lock.h:146 [inline]
 #0: ffff0000c2f2ac50 (&mm->mmap_lock){++++}-{4:4}, at: vm_mmap_pgoff+0x1c0/0x4ac mm/util.c:577
1 lock held by syz.0.818/9569:

=============================================