syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 575d, last: 1h47m
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] Monthly nfs report (Jul 2025) 0 (1) 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025) 0 (1) 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount 3 (4) 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:5834 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23448 pid:5834  tgid:5834  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7017
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc69/0x1ca0 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1347
 deactivate_locked_super+0xc1/0x1a0 fs/super.c:474
 deactivate_super fs/super.c:507 [inline]
 deactivate_super+0xde/0x100 fs/super.c:503
 cleanup_mnt+0x225/0x450 fs/namespace.c:1318
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
 exit_to_user_mode_loop+0xfb/0x540 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3f60f90af7
RSP: 002b:00007fffd0c041a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f3f61013d7d RCX: 00007f3f60f90af7
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007fffd0c052f0
RBP: 00007fffd0c052dc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fffd0c052f0
R13: 00007f3f61013d7d R14: 00000000000aa951 R15: 00007fffd0c05330
 </TASK>

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
 #0: ffffffff8e3d4bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:343
1 lock held by khungtaskd/31:
 #0: ffffffff8e3c94a0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3c94a0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e3c94a0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
2 locks held by syz-executor/5834:
 #0: ffff8880406220e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff8880406220e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff8880406220e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff8880406220e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
3 locks held by kworker/0:5/5919:
 #0: ffff88813ff55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x128d/0x1b20 kernel/workqueue.c:3232
 #1: ffffc90004b1fc90 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x914/0x1b20 kernel/workqueue.c:3233
 #2: ffffffff8e3d4bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by getty/14731:
 #0: ffff88814cfb60a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90004bab2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x1510 drivers/tty/n_tty.c:2211
2 locks held by kworker/u8:40/15060:
 #0: ffff88801cfaf148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x128d/0x1b20 kernel/workqueue.c:3232
 #1: ffffc9000b35fc90 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x914/0x1b20 kernel/workqueue.c:3233
2 locks held by syz-executor/15893:
 #0: ffff88802b4d80e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88802b4d80e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88802b4d80e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff88802b4d80e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz.1.2244/16497:
 #0: ffffffff901eb350 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x687/0xbc0 fs/nfsd/nfsctl.c:1590
2 locks held by syz-executor/16770:
 #0: ffff8880769340e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff8880769340e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff8880769340e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff8880769340e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz-executor/17039:
 #0: ffff8880851fe0e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff8880851fe0e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff8880851fe0e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff8880851fe0e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz-executor/17559:
 #0: ffff88807a7800e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88807a7800e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88807a7800e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff88807a7800e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz-executor/17834:
 #0: ffff8880462fa0e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff8880462fa0e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff8880462fa0e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff8880462fa0e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz.5.2509/17948:
 #0: ffffffff901eb350 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x687/0xbc0 fs/nfsd/nfsctl.c:1590
2 locks held by syz-executor/18081:
 #0: ffff88807ca220e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88807ca220e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88807ca220e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff88807ca220e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz-executor/18258:
 #0: ffff88807e1080e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88807e1080e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88807e1080e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff88807e1080e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz-executor/18270:
 #0: ffff88805aca80e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88805aca80e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88805aca80e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:506 [inline]
 #0: ffff88805aca80e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:503
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz.9.2601/18554:
 #0: ffffffff901eb350 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e8011e8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xd5/0x1ae0 fs/nfsd/nfsctl.c:1880
1 lock held by syz-executor/18698:
 #0: ffffffff90144ea8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff90144ea8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x230 drivers/net/tun.c:3436
1 lock held by syz-executor/18765:
 #0: ffffffff90144ea8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff90144ea8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x230 drivers/net/tun.c:3436
1 lock held by syz.3.2666/18915:
3 locks held by syz.5.2718/19204:
 #0: ffffffff9012e6d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x333/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff90144ea8 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x90/0xc80 net/core/dev.c:13022
 #2: ffffffff8e3d4bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by syz.1.2720/19211:
2 locks held by syz.1.2720/19213:
 #0: ffffffff9012e6d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x333/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff90144ea8 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x21d/0x7d0 net/ipv4/ip_tunnel.c:1146

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x133/0x180 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xe66/0x1180 kernel/hung_task.c:515
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
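
A note on what the trace shows: the blocked task holds the superblock's s_umount rwsem (taken in deactivate_super()) and is stuck in nfsd_shutdown_threads() waiting for nfsd_mutex, while the lock listing shows the same nfsd_mutex in the lock sets of the generic-netlink handlers (nfsd_nl_threads_set_doit(), nfsd_nl_listener_set_doit()) and of several other unmounting tasks. Below is a minimal sketch of the implicated path, reconstructed from the symbol and line references above rather than quoted from the tested tree; field and helper names such as nn->nfsd_serv and svc_set_num_threads() are assumptions from the nfsd code and may differ in the exact revision.

/* fs/nfsd/nfsctl.c -- condensed sketch, headers omitted */
static void nfsd_umount(struct super_block *sb)
{
	struct net *net = sb->s_fs_info;

	/* called from deactivate_locked_super() with s_umount held */
	nfsd_shutdown_threads(net);
	/* ... then tear down the superblock and drop the net reference ... */
}

/* fs/nfsd/nfssvc.c -- condensed sketch */
void nfsd_shutdown_threads(struct net *net)
{
	struct nfsd_net *nn = net_generic(net, nfsd_net_id);
	struct svc_serv *serv;

	mutex_lock(&nfsd_mutex);	/* the blocked task never gets past this */
	serv = nn->nfsd_serv;
	if (serv == NULL) {
		mutex_unlock(&nfsd_mutex);
		return;
	}
	/* tear down all nfsd threads for this namespace, then release */
	svc_set_num_threads(serv, NULL, 0);
	mutex_unlock(&nfsd_mutex);
}

Whatever currently owns nfsd_mutex (the netlink thread-count and listener handlers above also run under it) has to release it before any of the queued unmounts can make progress; the hung-task watchdog fires because that does not happen within the 143-second window reported above.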

Crashes (3155):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/12/16 07:15 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/16 04:39 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/16 02:24 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/16 01:04 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/15 22:01 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/15 11:39 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/15 08:33 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/14 22:38 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/14 20:56 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/14 16:02 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/14 14:27 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/13 23:49 upstream 9d9c1cfec01c d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/13 16:44 upstream 9551a26f17d9 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/13 12:11 upstream 9551a26f17d9 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/13 10:54 upstream 9551a26f17d9 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/12 14:01 upstream 187d0801404f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/12/12 10:39 upstream d358e5254674 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/12 06:59 upstream d358e5254674 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/12 04:28 upstream d358e5254674 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/11 21:11 upstream d358e5254674 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/11 04:39 upstream 0048fbb4011e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2025/12/10 23:33 upstream 0048fbb4011e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/10 21:00 upstream 0048fbb4011e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/09 14:41 upstream cb015814f8b6 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/09 11:36 upstream cb015814f8b6 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/09 03:55 upstream c2f2b01b74be d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/09 01:03 upstream c2f2b01b74be d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/08 23:49 upstream c2f2b01b74be d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/08 11:32 upstream ba65a4e7120a d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/08 05:36 upstream c2f2b01b74be d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2025/12/08 02:16 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/08 01:04 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/07 22:57 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/07 21:42 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/07 20:14 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/07 18:45 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/07 09:27 upstream cc3ee4ba57b7 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/06 22:01 upstream 416f99c3b16f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/06 19:07 upstream 416f99c3b16f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/06 17:58 upstream 416f99c3b16f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/06 17:56 upstream 416f99c3b16f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/06 06:26 upstream d1d36025a617 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/05 19:02 upstream 2061f18ad76e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/05 05:22 upstream 559e608c4655 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/05 02:11 upstream 559e608c4655 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/04 12:54 upstream 8f7aa3d3c732 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/04 09:33 upstream 8f7aa3d3c732 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/12/04 00:22 upstream 3f9f0252130e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/12/11 02:57 linux-next 5ce74bc1b7cb d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2025/11/24 16:00 linux-next 422f3140bbcb bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount