syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 688d, last: 1h26m
AI Jobs (1)
ID: a607d1e4-f56a-479f-bd5d-819025c7ef3e (workflow: repro)
Bug: INFO: task hung in nfsd_umount
Created: 2026/03/07 03:10, started: 2026/03/07 03:11, finished: 2026/03/07 03:20
Revision: 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] Monthly nfs report (Jul 2025) 0 (1) 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025) 0 (1) 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount 3 (4) 2024/09/21 07:58

Sample crash report:
INFO: task syz.8.6863:27849 blocked for more than 145 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.6863      state:D stack:24832 pid:27849 tgid:27849 ppid:24195  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7289
 rt_mutex_slowlock_block+0x508/0x680 kernel/locking/rtmutex.c:1647
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked kernel/locking/rtmutex.c:1760 [inline]
 rt_mutex_slowlock+0x2dc/0x7b0 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 __mutex_lock_common kernel/locking/rtmutex_api.c:534 [inline]
 mutex_lock_nested+0x168/0x1d0 kernel/locking/rtmutex_api.c:552
 nfsd_shutdown_threads+0x4e/0xd0 fs/nfsd/nfssvc.c:576
 nfsd_umount+0x41/0x60 fs/nfsd/nfsctl.c:1364
 deactivate_locked_super+0xbc/0x130 fs/super.c:476
 put_fs_context+0x93/0xb00 fs/fs_context.c:495
 fscontext_release+0x65/0x80 fs/fsopen.c:79
 __fput+0x461/0xa90 fs/file_table.c:469
 task_work_run+0x1d9/0x270 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0xed/0x480 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x32d/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fdf22f3c819
RSP: 002b:00007ffed47c0e48 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007fdf231b7da0 RCX: 00007fdf22f3c819
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007fdf231b7da0 R08: 0000000000000006 R09: 0000000000000000
R10: 00007fdf231b7cb0 R11: 0000000000000246 R12: 00000000001bb1fe
R13: 00007fdf231b618c R14: 00000000001bb088 R15: 00007fdf231b6180
 </TASK>

Showing all locks held in the system:
5 locks held by kworker/u8:1/13:
1 lock held by khungtaskd/38:
 #0: ffffffff8ddcb980 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8ddcb980 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8ddcb980 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by getty/5556:
 #0: ffff8880370770a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
3 locks held by kworker/u9:1/24198:
 #0: ffff888029314138 ((wq_completion)hci7){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff888029314138 ((wq_completion)hci7){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc90006ba7c40 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc90006ba7c40 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff888064ad4f80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x1d3/0x400 net/bluetooth/hci_sync.c:331
2 locks held by syz.6.6324/26075:
 #0: ffffffff8f1cb8e0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8e0e6658 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0x13e/0x1630 fs/nfsd/nfsctl.c:1903
2 locks held by syz.8.6863/27849:
 #0: ffff8880354580d0 (&type->s_umount_key#84){+.+.}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880354580d0 (&type->s_umount_key#84){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880354580d0 (&type->s_umount_key#84){+.+.}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:508
 #1: ffffffff8e0e6658 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x4e/0xd0 fs/nfsd/nfssvc.c:576
2 locks held by syz.7.6923/28003:
 #0: ffff8880947bc0d0 (&type->s_umount_key#84){+.+.}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880947bc0d0 (&type->s_umount_key#84){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880947bc0d0 (&type->s_umount_key#84){+.+.}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:508
 #1: ffffffff8e0e6658 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x4e/0xd0 fs/nfsd/nfssvc.c:576
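
The lock dump above can be read as a straightforward mutex pileup: task 26075 holds nfsd_mutex inside nfsd_nl_listener_set_doit(), while the unmounting tasks 27849 and 28003 already hold their superblock's s_umount and are blocked waiting for the same nfsd_mutex in nfsd_shutdown_threads(). The sketch below is a minimal userspace illustration of that pattern (Python standard library only, not kernel code); the function names mirror the kernel symbols from the report, and the Python lock merely stands in for nfsd_mutex.

```python
import threading

# Illustration only: a userspace stand-in for the contention in the lock dump.
# "nl_listener_set_doit" plays task 26075, which holds nfsd_mutex; the umount
# path plays tasks 27849/28003, which block on it in nfsd_shutdown_threads().

nfsd_mutex = threading.Lock()  # stand-in for the kernel's nfsd_mutex

def nl_listener_set_doit():
    """Acquire nfsd_mutex and, in the hung scenario, not release it."""
    nfsd_mutex.acquire()

def nfsd_shutdown_threads():
    """The umount path. A real mutex_lock() would sleep in D state here;
    a non-blocking attempt is used so this sketch terminates."""
    return nfsd_mutex.acquire(blocking=False)

nl_listener_set_doit()                  # task 26075 takes the mutex
blocked = not nfsd_shutdown_threads()   # task 27849 cannot take it
print("umount path would block:", blocked)
```

In the real report the blocked acquisition never completes within the watchdog window, so khungtaskd reports the D-state task after 145 seconds.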

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT_{RT,(full)} 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 13 Comm: kworker/u8:1 Tainted: G             L      syzkaller #0 PREEMPT_{RT,(full)} 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Workqueue: bat_events batadv_iv_send_outstanding_bat_ogm_packet
RIP: 0010:get_current arch/x86/include/asm/current.h:-1 [inline]
RIP: 0010:__lock_acquire+0x2e/0x2cf0 kernel/locking/lockdep.c:5082
Code: 56 41 55 41 54 53 48 81 ec e8 00 00 00 65 48 8b 05 57 a9 ae 10 48 89 84 24 e0 00 00 00 65 48 8b 05 2f a9 ae 10 48 89 44 24 08 <31> ed 83 3d f9 50 ca 0d 00 0f 84 ef 13 00 00 45 89 cf 49 89 f9 48
RSP: 0018:ffffc90000127720 EFLAGS: 00000082
RAX: ffff88801caa5b80 RBX: 0000000000000246 RCX: 0000000000000002
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8ddcb980
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffed100c5a1317 R12: 0000000000000002
R13: ffffffff8ddcb980 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888126332000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055f8a0867168 CR3: 000000000dbba000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
 rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 rcu_read_lock include/linux/rcupdate.h:850 [inline]
 batadv_iv_ogm_slide_own_bcast_window net/batman-adv/bat_iv_ogm.c:764 [inline]
 batadv_iv_ogm_schedule_buff net/batman-adv/bat_iv_ogm.c:836 [inline]
 batadv_iv_ogm_schedule+0x47d/0xf60 net/batman-adv/bat_iv_ogm.c:876
 batadv_iv_send_outstanding_bat_ogm_packet+0x6c8/0x7e0 net/batman-adv/bat_iv_ogm.c:1712
 process_one_work kernel/workqueue.c:3276 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
 worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (3966):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/04/08 12:31 upstream 3036cd0d3328 2c961e87 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/04/08 06:29 upstream 3036cd0d3328 2c961e87 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/08 04:18 upstream 3036cd0d3328 2c961e87 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/08 01:13 upstream 3036cd0d3328 2c961e87 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 17:58 upstream bfe62a454542 628666c6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 09:41 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2026/04/07 06:55 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 02:50 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 00:53 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 14:16 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 08:28 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 06:01 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 03:39 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 01:55 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/05 14:35 upstream 3aae9383f42f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 20:15 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 16:38 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/04/04 14:04 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 13:00 upstream 631919fb12fe 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 05:34 upstream 631919fb12fe 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 18:05 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 16:37 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 15:25 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 10:14 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 08:21 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 02:17 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 16:31 upstream 9147566d8016 91bc79b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 14:18 upstream 9147566d8016 8b15d4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/04/02 12:53 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/04/02 08:39 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 06:57 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 04:59 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 04:12 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 02:22 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/01 21:19 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/01 04:40 upstream d0c3bcd5b897 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 12:52 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 10:31 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 06:13 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 02:23 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 22:42 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 10:27 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 08:31 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 00:49 upstream 0138af2472df 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 09:38 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 07:53 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/06 23:37 upstream 651690480a96 5cb44a80 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/03/29 20:31 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/29 05:26 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/28 00:20 linux-next e77a5a5cfe43 74a13a23 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount