INFO: task bch-reclaim/loo:6753 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:bch-reclaim/loo state:D stack:26696 pid:6753 tgid:6753 ppid:2 task_flags:0x200840 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6940
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x724/0xe80 kernel/locking/mutex.c:747
 btree_write_buffer_flush_seq+0x18bd/0x1a40 fs/bcachefs/btree_write_buffer.c:571
 bch2_btree_write_buffer_journal_flush+0x69/0xb0 fs/bcachefs/btree_write_buffer.c:588
 journal_flush_pins+0x8e3/0xe90 fs/bcachefs/journal_reclaim.c:598
 __bch2_journal_reclaim+0x8e9/0xea0 fs/bcachefs/journal_reclaim.c:726
 bch2_journal_reclaim_thread+0x177/0x4f0 fs/bcachefs/journal_reclaim.c:768
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Showing all locks held in the system:
3 locks held by kworker/0:0/9:
 #0: ffff88801a480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc900000e7bc0 ((fqdir_free_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900000e7bc0 ((fqdir_free_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8e144b40 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3786
3 locks held by kworker/u8:0/12:
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90000117bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000117bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
3 locks held by kworker/u8:1/13:
1 lock held by khungtaskd/31:
 #0: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
4 locks held by kworker/u8:4/59:
 #0: ffff88801b2fb948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b2fb948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000210fbc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000210fbc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f4fe950 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x800 net/core/net_namespace.c:662
 #3: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: tc_action_net_exit include/net/act_api.h:173 [inline]
 #3: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: gate_exit_net+0x30/0x110 net/sched/act_gate.c:653
3 locks held by kworker/1:2/977:
 #0: ffff88801a480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000390fbc0 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000390fbc0 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
3 locks held by kworker/u8:8/2970:
 #0: ffff888030754148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff888030754148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000b887bc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000b887bc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x112/0x14b0 net/ipv6/addrconf.c:4198
2 locks held by dhcpcd/5502:
 #0: ffff88802bc4e6d0 (nlk_cb_mutex-ROUTE){+.+.}-{4:4}, at: __netlink_dump_start+0xfe/0x7e0 net/netlink/af_netlink.c:2388
 #1: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dumpit+0x92/0x200 net/core/rtnetlink.c:6812
2 locks held by getty/5596:
 #0: ffff8880354890a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000333b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
5 locks held by syz-executor/5835:
 #0: ffff88802a2c20e0 (&type->s_umount_key#82){+.+.}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88802a2c20e0 (&type->s_umount_key#82){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88802a2c20e0 (&type->s_umount_key#82){+.+.}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:506
 #1: ffff888050200278 (&c->state_lock){++++}-{4:4}, at: __bch2_fs_stop+0xf8/0x900 fs/bcachefs/super.c:676
 #2: ffff888050204740 (&wb->flushing.lock){+.+.}-{4:4}, at: btree_write_buffer_flush_seq+0x18bd/0x1a40 fs/bcachefs/btree_write_buffer.c:571
 #3: ffff888050204398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #3: ffff888050204398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #3: ffff888050204398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: bch2_trans_srcu_lock+0xaf/0x220 fs/bcachefs/btree_iter.c:3299
 #4: ffff888050226710 (&c->gc_lock){++++}-{4:4}, at: bch2_btree_update_start+0x542/0x1de0 fs/bcachefs/btree_update_interior.c:1211
3 locks held by udevd/5839:
3 locks held by bch-reclaim/loo/6753:
 #0: ffff88805024af68 (&j->reclaim_lock){+.+.}-{4:4}, at: bch2_journal_reclaim_thread+0x16b/0x4f0 fs/bcachefs/journal_reclaim.c:767
 #1: ffff888050204398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #1: ffff888050204398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #1: ffff888050204398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7f4/0xd80 fs/bcachefs/btree_iter.c:3505
 #2: ffff888050204740 (&wb->flushing.lock){+.+.}-{4:4}, at: btree_write_buffer_flush_seq+0x18bd/0x1a40 fs/bcachefs/btree_write_buffer.c:571
3 locks held by syz-executor/9102:
 #0: ffffffff8ec92aa0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8ec92aa0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8ec92aa0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f50b548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054
 #2: ffffffff8e144c78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #2: ffffffff8e144c78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
3 locks held by syz.6.377/9312:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 9312 Comm: syz.6.377 Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0033:0x7fb73e250d4b
Code: cd 48 01 c1 49 39 4f 08 72 4c 8d 4d ff 85 ed 74 33 66 0f 1f 44 00 00 48 39 f0 72 1b 4d 8b 07 49 89 c1 49 29 f1 47 0f b6 0c 08 <45> 84 c9 74 08 45 88 0c 00 49 8b 47 10 48 83 c0 01 49 89 47 10 83
RSP: 002b:00007fb73f20d4a0 EFLAGS: 00000206
RAX: 0000000000b99896 RBX: 00007fb73f20d540 RCX: 0000000000000071
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007fb73f20d5e0
RBP: 0000000000000102 R08: 00007fb733c00000 R09: 0000000000000000
R10: 0000000000000000 R11: 00007fb73f20d550 R12: 0000000000000001
R13: 00007fb73e42c3a0 R14: 0000000000000000 R15: 00007fb73f20d5e0
FS: 00007fb73f20e6c0 GS: 0000000000000000