syzbot


INFO: task hung in p9_client_destroy

Status: auto-obsoleted due to no activity on 2025/10/08 18:13
Subsystems: v9fs
First crash: 100d, last: 100d

Sample crash report:
INFO: task syz.0.284:7954 blocked for more than 149 seconds.
      Not tainted 6.16.0-rc5-syzkaller-gec4801305969 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.284       state:D stack:0     pid:7954  tgid:7943  ppid:6524   task_flags:0x400140 flags:0x00000011
Call trace:
 __switch_to+0x414/0x834 arch/arm64/kernel/process.c:742 (T)
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x1414/0x2a28 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0xb4/0x230 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6940
 __mutex_lock_common+0xbd0/0x2190 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 kmem_cache_destroy+0x58/0x178 mm/slab_common.c:511

 p9_client_destroy+0x418/0x46c net/9p/client.c:1088
 v9fs_session_init+0x1208/0x149c fs/9p/v9fs.c:490
 v9fs_mount+0xd0/0x8d4 fs/9p/vfs_super.c:122
 legacy_get_tree+0xd4/0x16c fs/fs_context.c:666
 vfs_get_tree+0x90/0x28c fs/super.c:1804
 do_new_mount+0x228/0x814 fs/namespace.c:3902
 path_mount+0x5b4/0xde0 fs/namespace.c:4226
 do_mount fs/namespace.c:4239 [inline]
 __do_sys_mount fs/namespace.c:4450 [inline]
 __se_sys_mount fs/namespace.c:4427 [inline]
 __arm64_sys_mount+0x3e8/0x468 fs/namespace.c:4427
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596

Showing all locks held in the system:
2 locks held by kthreadd/2:
2 locks held by kworker/0:1/11:
3 locks held by kworker/u8:0/12:
1 lock held by kworker/R-mm_pe/13:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
3 locks held by kworker/u8:1/15:
1 lock held by khungtaskd/32:
 #0: ffff80008f8599c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x4/0x48 include/linux/rcupdate.h:330
4 locks held by kworker/u8:2/41:
 #0: ffff0000c18a2148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3212
 #1: ffff8000990e7bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3212
 #2: ffff8000928a99d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf4/0x6c0 net/core/net_namespace.c:662
 #3: ffff8000928b6568 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
3 locks held by kworker/u8:3/42:
 #0: ffff0000c0031948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3212
 #1: ffff8000990f7bc0 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3212
 #2: ffff8000928b6568 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
2 locks held by pr/ttyAMA0/43:
3 locks held by kworker/u8:4/60:
3 locks held by kworker/u8:5/169:
3 locks held by kworker/u8:6/789:
3 locks held by kworker/u8:7/1638:
4 locks held by kworker/0:2/2325:
3 locks held by kworker/R-ipv6_/4170:
 #0: ffff0000d2732948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3212
 #1: ffff8000a25f7ba0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3212
 #2: ffff8000928b6568 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
2 locks held by kworker/R-bat_e/4251:
1 lock held by klogd/6128:
2 locks held by udevd/6139:
2 locks held by getty/6291:
 #0: ffff0000d2f0d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
 #1: ffff80009ba2e2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x34c/0xfa4 drivers/tty/n_tty.c:2222
3 locks held by sshd-session/6511:
1 lock held by syz-executor/6512:
3 locks held by udevd/6514:
2 locks held by kworker/0:3/6528:
4 locks held by kworker/0:4/6529:
3 locks held by syz-executor/6534:
1 lock held by kworker/R-wg-cr/6564:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/6565:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/6566:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/6567:
1 lock held by kworker/R-wg-cr/6570:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/6572:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/6573:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/6575:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/6576:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/6577:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2678
3 locks held by kworker/0:5/6605:
 #0: ffff0000c0028d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3212
 #1: ffff8000a4397bc0 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3212
 #2: ffff8000928b6568 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
3 locks held by kworker/u8:8/6618:
4 locks held by kworker/1:6/6639:
3 locks held by kworker/1:7/6647:
4 locks held by udevd/6713:
3 locks held by kworker/u8:9/6987:
2 locks held by syz-executor/7511:
1 lock held by kworker/R-wg-cr/7530:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/7531:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/7532:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2678
4 locks held by syz-executor/7547:
1 lock held by kworker/R-wg-cr/7565:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/7566:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/7567:
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f701328 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
2 locks held by syz-executor/7723:
3 locks held by kworker/u8:10/7766:
2 locks held by syz.0.284/7954:
 #0: ffff80008f6f2990 (cpu_hotplug_lock){++++}-{0:0}, at: kmem_cache_destroy+0x48/0x178 mm/slab_common.c:510
 #1: ffff80008f937618 (slab_mutex){+.+.}-{4:4}, at: kmem_cache_destroy+0x58/0x178 mm/slab_common.c:511
2 locks held by kworker/u8:11/7960:
3 locks held by dhcpcd-run-hook/7961:
4 locks held by kworker/u8:12/7962:
3 locks held by kworker/u8:13/7963:

=============================================


Crashes (1):
Time:      2025/07/10 18:03
Kernel:    git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci
Commit:    ec4801305969
Syzkaller: 19d4829f
Config:    .config
Artifacts: console log, report, info, [disk image], [vmlinux], [kernel image]
Manager:   ci-upstream-gce-arm64
Title:     INFO: task hung in p9_client_destroy