INFO: task syz.6.931:10113 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc6-syzkaller-gc919f08732cc #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.931 state:D stack:0 pid:10113 tgid:10112 ppid:7397 task_flags:0x400140 flags:0x00000019
Call trace:
 __switch_to+0x414/0x834 arch/arm64/kernel/process.c:735 (T)
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x13b0/0x28d4 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xb4/0x230 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6917
 __mutex_lock_common+0xbd0/0x2190 kernel/locking/mutex.c:678
 __mutex_lock kernel/locking/mutex.c:746 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:798
 bio_find_or_create_slab block/bio.c:122 [inline]
 bioset_init+0x1d0/0x654 block/bio.c:1705
 bch2_fs_io_read_init+0x30/0xcc fs/bcachefs/io_read.c:1384
 bch2_fs_alloc fs/bcachefs/super.c:921 [inline]
 bch2_fs_open+0x1fb4/0x22e8 fs/bcachefs/super.c:2210
 bch2_fs_get_tree+0x384/0xf30 fs/bcachefs/fs.c:2489
 vfs_get_tree+0x90/0x28c fs/super.c:1759
 do_new_mount+0x228/0x814 fs/namespace.c:3881
 path_mount+0x5b4/0xde0 fs/namespace.c:4208
 do_mount fs/namespace.c:4221 [inline]
 __do_sys_mount fs/namespace.c:4432 [inline]
 __se_sys_mount fs/namespace.c:4409 [inline]
 __arm64_sys_mount+0x3e8/0x468 fs/namespace.c:4409
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x17c arch/arm64/kernel/entry-common.c:767
 el0t_64_sync_handler+0x78/0x108 arch/arm64/kernel/entry-common.c:786
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:600

Showing all locks held in the system:
1 lock held by kthreadd/2:
3 locks held by kworker/u8:0/12:
1 lock held by kworker/R-mm_pe/13:
3 locks held by kworker/u8:1/14:
3 locks held by kworker/1:0/24:
2 locks held by kworker/1:1/26:
1 lock held by khungtaskd/32:
 #0: ffff80008f508aa0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x4/0x48 include/linux/rcupdate.h:330
3 locks held by kworker/u8:2/44:
3 locks held by kworker/u8:3/45:
2 locks held by pr/ttyAMA0/46:
3 locks held by kworker/u8:4/115:
3 locks held by kworker/u8:5/308:
3 locks held by kworker/u8:6/470:
3 locks held by kworker/u8:7/779:
2 locks held by kworker/1:2/1783:
3 locks held by kworker/R-ipv6_/4139:
 #0: ffff0000d27bf948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x658/0x156c kernel/workqueue.c:3212
 #1: ffff8000a0597ba0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6ec/0x156c kernel/workqueue.c:3212
 #2: ffff80009248b628 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
2 locks held by kworker/R-bat_e/4220:
2 locks held by udevd/6103:
2 locks held by crond/6232:
2 locks held by getty/6251:
 #0: ffff0000d305c0a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
 #1: ffff80009b59e2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x34c/0xfa0 drivers/tty/n_tty.c:2222
2 locks held by syz-executor/6493:
3 locks held by syz-executor/6497:
1 lock held by kworker/R-wg-cr/6531:
 #0: ffff80008f3b10e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f3b10e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/6532:
 #0: ffff80008f3b10e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f3b10e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/6535:
4 locks held by kworker/1:3/6537:
4 locks held by kworker/0:4/6538:
 #0: ffff0000c8e71d48 ((wq_completion)wg-kex-wg0#12){+.+.}-{0:0}, at: process_one_work+0x658/0x156c kernel/workqueue.c:3212
 #1: ffff8000a2327bc0 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x6ec/0x156c kernel/workqueue.c:3212
 #2: ffff0000d4015308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_response+0x180/0x988 drivers/net/wireguard/noise.c:742
 #3: ffff0000c4e9f030 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_response+0x20c/0x988 drivers/net/wireguard/noise.c:753
2 locks held by kworker/0:5/6539:
2 locks held by kworker/1:6/6563:
3 locks held by syz-executor/7306:
3 locks held by kworker/u8:8/7512:
3 locks held by kworker/u8:9/7863:
2 locks held by udevd/8398:
1 lock held by syz.6.931/10113:
 #0: ffff80008fc23ca8 (bio_slab_lock){+.+.}-{4:4}, at: bio_find_or_create_slab block/bio.c:122 [inline]
 #0: ffff80008fc23ca8 (bio_slab_lock){+.+.}-{4:4}, at: bioset_init+0x1d0/0x654 block/bio.c:1705
3 locks held by cmp/10173:
3 locks held by kworker/u8:10/10176:
4 locks held by kworker/u8:11/10178:
 #0: ffff0000c0031948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x658/0x156c kernel/workqueue.c:3212
 #1: ffff80009dc47bc0 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x6ec/0x156c kernel/workqueue.c:3212
 #2: ffff80009248b628 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
 #3: ffff0000dad10768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6092 [inline]
 #3: ffff0000dad10768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: reg_leave_invalid_chans net/wireless/reg.c:2471 [inline]
 #3: ffff0000dad10768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: reg_check_chans_work+0x11c/0xd88 net/wireless/reg.c:2486
4 locks held by kworker/u8:12/10179:
3 locks held by kworker/u8:13/10180:
4 locks held by kworker/u8:14/10181:
3 locks held by syz-executor/10182:

=============================================