INFO: task bch-reclaim/loo:5998 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:bch-reclaim/loo state:D stack:14704 pid:5998 tgid:5998 ppid:2 task_flags:0x200040 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1b18/0x50e0 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 __mutex_lock_common kernel/locking/mutex.c:664 [inline]
 __mutex_lock+0x7fa/0x1000 kernel/locking/mutex.c:732
 bch2_journal_reclaim_thread+0x16d/0x570 fs/bcachefs/journal_reclaim.c:763
 kthread+0x7ab/0x920 kernel/kthread.c:464
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x30/0x180 kernel/locking/lockdep.c:6761
2 locks held by getty/5587:
 #0: ffff888034aa20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fd62f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x53d/0x16b0 drivers/tty/n_tty.c:2211
6 locks held by syz-executor/5844:
 #0: ffff888057a800e0 (&type->s_umount_key#61){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888057a800e0 (&type->s_umount_key#61){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888057a800e0 (&type->s_umount_key#61){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffff888053a80278 (&c->state_lock){+.+.}-{4:4}, at: __bch2_fs_stop+0xff/0x5b0 fs/bcachefs/super.c:633
 #2: ffff888053acb028 (&j->reclaim_lock){+.+.}-{4:4}, at: journal_flush_done+0x85/0x820 fs/bcachefs/journal_reclaim.c:874
 #3: ffff888053a84378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #3: ffff888053a84378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #3: ffff888053a84378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7ed/0xd40 fs/bcachefs/btree_iter.c:3408
 #4: ffff888053a84720 (&wb->flushing.lock){+.+.}-{4:4}, at: btree_write_buffer_flush_seq+0x1c9e/0x1e70 fs/bcachefs/btree_write_buffer.c:569
 #5: ffff888053aa66d0 (&c->gc_lock){.+.+}-{4:4}, at: bch2_btree_update_start+0x68a/0x1680 fs/bcachefs/btree_update_interior.c:1182
1 lock held by bch-reclaim/loo/5998:
 #0: ffff888053acb028 (&j->reclaim_lock){+.+.}-{4:4}, at: bch2_journal_reclaim_thread+0x16d/0x570 fs/bcachefs/journal_reclaim.c:763
1 lock held by syz.4.231/6767:
 #0: ffff888057a800e0 (&type->s_umount_key#61){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888057a800e0 (&type->s_umount_key#61){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
6 locks held by kworker/u8:17/8079:
 #0: ffff88801baee148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801baee148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9001043fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9001043fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17c/0xd60 net/core/net_namespace.c:606
 #3: ffff8880560920e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #3: ffff8880560920e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff8880560920e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x13d/0x450 net/devlink/core.c:506
 #4: ffff8880566e2250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff8880566e2250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff8880566e2250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x14f/0x450 net/devlink/core.c:506
 #5: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: nsim_destroy+0xa4/0x620 drivers/net/netdevsim/netdev.c:1016
1 lock held by syz.2.748/8765:
 #0: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #0: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x454/0x830 kernel/rcu/tree_exp.h:998

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x4ab/0x4e0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7ab/0x920 kernel/kthread.c:464
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 7234 Comm: kworker/0:11 Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: events drain_vmap_area_work
RIP: 0010:debug_check_no_obj_freed+0x2dd/0x590 lib/debugobjects.c:1129
Code: 00 00 49 89 c4 49 89 c5 49 c1 ed 03 48 b8 00 00 00 00 00 fc ff df 41 80 7c 05 00 00 74 08 4c 89 e7 e8 17 39 28 fd 49 8b 04 24 <48> 89 44 24 38 49 8d 5c 24 18 48 89 d8 48 c1 e8 03 48 b9 00 00 00
RSP: 0018:ffffc900047075c0 EFLAGS: 00000046
RAX: ffff888028134738 RBX: ffff888028b396a8 RCX: dffffc0000000000
RDX: dffffc0000000000 RSI: 0000000000000004 RDI: ffffc900047074a0
RBP: ffffc90004707718 R08: 0000000000000003 R09: fffff520008e0e94
R10: dffffc0000000000 R11: fffff520008e0e94 R12: ffff888027f10d58
R13: 1ffff11004fe21ab R14: ffff8880292902b8 R15: 0000000000000011
FS:  0000000000000000(0000) GS:ffff888125224000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000110c39b423 CR3: 0000000031f84000 CR4: 0000000000350ef0
Call Trace:
 free_pages_prepare mm/page_alloc.c:1134 [inline]
 free_frozen_pages+0x4c2/0x10f0 mm/page_alloc.c:2660
 kasan_depopulate_vmalloc_pte+0x74/0x90 mm/kasan/shadow.c:408
 apply_to_pte_range mm/memory.c:2910 [inline]
 apply_to_pmd_range mm/memory.c:2954 [inline]
 apply_to_pud_range mm/memory.c:2990 [inline]
 apply_to_p4d_range mm/memory.c:3026 [inline]
 __apply_to_page_range+0x80a/0xde0 mm/memory.c:3062
 kasan_release_vmalloc+0xa5/0xd0 mm/kasan/shadow.c:529
 kasan_release_vmalloc_node mm/vmalloc.c:2196 [inline]
 purge_vmap_node+0x231/0x8e0 mm/vmalloc.c:2213
 __purge_vmap_area_lazy+0x753/0xb20 mm/vmalloc.c:2304
 drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2338
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac5/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7ab/0x920 kernel/kthread.c:464
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245