syzbot


INFO: task hung in expkey_flush

Status: upstream: reported on 2026/01/20 20:17
Subsystems: nfs
Reported-by: syzbot+b6a0f8e7fb5f9c959ddd@syzkaller.appspotmail.com
First crash: 479d, last: 1d02h
Discussions (1):
  Title: [syzbot] [nfs?] INFO: task hung in expkey_flush
  Replies (including bot): 0 (1)
  Last reply: 2026/01/20 20:17

Sample crash report:
INFO: task syz.1.2396:17278 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2396      state:D stack:26528 pid:17278 tgid:17277 ppid:15923  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 expkey_flush+0x18/0x90 fs/nfsd/export.c:261
 write_flush.isra.0+0x2af/0x3d0 net/sunrpc/cache.c:1567
 pde_write fs/proc/inode.c:330 [inline]
 proc_reg_write+0x240/0x330 fs/proc/inode.c:342
 do_loop_readv_writev fs/read_write.c:852 [inline]
 do_loop_readv_writev fs/read_write.c:837 [inline]
 vfs_writev+0x5ea/0xe10 fs/read_write.c:1061
 do_writev+0x13e/0x340 fs/read_write.c:1105
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f8c1bb9c819
RSP: 002b:00007f8c1ca46028 EFLAGS: 00000246 ORIG_RAX: 0000000000000014
RAX: ffffffffffffffda RBX: 00007f8c1be15fa0 RCX: 00007f8c1bb9c819
RDX: 000000000000000a RSI: 0000200000000240 RDI: 0000000000000003
RBP: 00007f8c1bc32c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f8c1be16038 R14: 00007f8c1be15fa0 R15: 00007ffea1d6b168
 </TASK>
INFO: task syz.1.2396:17281 blocked for more than 144 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2396      state:D stack:29000 pid:17281 tgid:17277 ppid:15923  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 fdget_pos+0x2aa/0x380 fs/file.c:1261
 class_fd_pos_constructor include/linux/file.h:85 [inline]
 ksys_read+0x71/0x250 fs/read_write.c:708
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f8c1bb9c819
RSP: 002b:00007f8c1ca04028 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 00007f8c1be16180 RCX: 00007f8c1bb9c819
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007f8c1bc32c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f8c1be16218 R14: 00007f8c1be16180 R15: 00007ffea1d6b168
 </TASK>
INFO: task syz.5.2415:17386 blocked for more than 144 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.5.2415      state:D stack:28984 pid:17386 tgid:17384 ppid:10355  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1364
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0x100/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x668/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0d24b9c819
RSP: 002b:00007f0d259f3028 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: fffffffffffffffe RBX: 00007f0d24e15fa0 RCX: 00007f0d24b9c819
RDX: 0000200000000180 RSI: 00002000000000c0 RDI: 0000000000000000
RBP: 00007f0d24c32c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0d24e16038 R14: 00007f0d24e15fa0 R15: 00007ffc476883d8
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/0:0/9:
 #0: ffff88813fe63148 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900000e7d08 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffff88807a66f240 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x1b8/0x63b0 drivers/net/netdevsim/fib.c:1490
1 lock held by khungtaskd/30:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
3 locks held by kworker/0:4/5878:
 #0: ffff88813fe63148 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90004497d08 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffff88803c837240 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x1b8/0x63b0 drivers/net/netdevsim/fib.c:1490
3 locks held by kworker/u11:34/11561:
 #0: ffff888032b17148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90004a6fd08 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x11f/0x1360 net/ipv6/addrconf.c:4198
2 locks held by syz.4.2040/15471:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
2 locks held by syz-executor/15746:
 #0: ffff88807cc120e0 (&type->s_umount_key#54){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88807cc120e0 (&type->s_umount_key#54){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88807cc120e0 (&type->s_umount_key#54){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88807cc120e0 (&type->s_umount_key#54){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/16530:
 #0: ffff888036e1c0e0 (&type->s_umount_key#54){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888036e1c0e0 (&type->s_umount_key#54){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888036e1c0e0 (&type->s_umount_key#54){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888036e1c0e0 (&type->s_umount_key#54){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
3 locks held by syz.1.2396/17278:
 #0: ffff888036ebe2b8 (&f->f_pos_lock){+.+.}-{4:4}, at: fdget_pos+0x2aa/0x380 fs/file.c:1261
 #1: ffff88807cf88420 (sb_writers#3){.+.+}-{0:0}, at: do_writev+0x13e/0x340 fs/read_write.c:1105
 #2: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: expkey_flush+0x18/0x90 fs/nfsd/export.c:261
1 lock held by syz.1.2396/17281:
 #0: ffff888036ebe2b8 (&f->f_pos_lock){+.+.}-{4:4}, at: fdget_pos+0x2aa/0x380 fs/file.c:1261
2 locks held by syz.5.2415/17386:
 #0: ffff8880342880e0 (&type->s_umount_key#54){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880342880e0 (&type->s_umount_key#54){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880342880e0 (&type->s_umount_key#54){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880342880e0 (&type->s_umount_key#54){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
3 locks held by kworker/u11:12/17494:
 #0: ffff88813fea4148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000399fd08 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0x51/0xc0 net/core/link_watch.c:313
2 locks held by syz.9.2544/18070:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_version_set_doit+0xc4/0x7a0 fs/nfsd/nfsctl.c:1753
2 locks held by getty/18077:
 #0: ffff8880378a30a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90005d152f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.0.2735/19367:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xd5/0x1a80 fs/nfsd/nfsctl.c:1903
2 locks held by syz.0.2735/19371:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xd5/0x1a80 fs/nfsd/nfsctl.c:1903
2 locks held by syz.8.2751/19442:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_version_set_doit+0xc4/0x7a0 fs/nfsd/nfsctl.c:1753
2 locks held by syz.8.2751/19450:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_version_set_doit+0xc4/0x7a0 fs/nfsd/nfsctl.c:1753
1 lock held by syz-executor/19505:
 #0: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
2 locks held by syz.4.2775/19564:
 #0: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
 #1: ffffffff8e7f32b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by syz.3.2777/19574:
 #0: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: cfg80211_pernet_exit+0x17/0x120 net/wireless/core.c:1701
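The lock listing above shows the chain behind the hang: pid 17278 holds &f->f_pos_lock and blocks acquiring nfsd_mutex in expkey_flush(), while other tasks (nfsd_shutdown_threads, nfsd_nl_*_doit) contend on the same nfsd_mutex; pid 17281 in turn blocks in fdget_pos() on the f_pos_lock that 17278 holds. A minimal, hypothetical Python model of that pattern (names borrowed from the report; this is not the kernel code):

```python
import threading
import time

# Hypothetical model of the hang pattern in the traces above: one thread
# holds the global "nfsd_mutex" across a slow operation while another,
# already holding its per-file "f_pos_lock", blocks waiting for it.
nfsd_mutex = threading.Lock()
f_pos_lock = threading.Lock()
done = []

def slow_holder():
    # Stands in for any long nfsd_mutex holder (e.g. an nfsd_nl_*_doit op).
    with nfsd_mutex:
        time.sleep(0.5)

def flusher():
    # Mirrors pid 17278: fdget_pos() takes f_pos_lock first, then
    # expkey_flush() blocks acquiring nfsd_mutex.
    with f_pos_lock:
        with nfsd_mutex:
            done.append(True)

t1 = threading.Thread(target=slow_holder)
t2 = threading.Thread(target=flusher)
t1.start()
time.sleep(0.1)
t2.start()
t2.join(timeout=0.2)
blocked_midway = not done   # flusher still waiting while nfsd_mutex is held
t1.join()
t2.join()
```

In this toy model the flusher eventually completes once the holder releases the mutex; in the report the wait exceeded the 143-second hung-task threshold, so any task then queuing on f_pos_lock (like pid 17281) hangs transitively.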

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (41):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/04/06 06:29 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/04/02 08:02 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/03/24 09:25 upstream c369299895a5 baf8bf12 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/03/23 03:23 upstream 8d8bd2a5aa98 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/03/11 21:17 upstream b29fb8829bff 2d88ab01 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/03/09 13:55 upstream 1f318b96cc84 176bead5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/02/28 02:21 upstream a75cb869a8cc 2cf092b8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/02/24 04:28 upstream 7dff99b35460 41d2fa6a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/02/23 07:20 upstream 189f164e573e 6e7b5511 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/01/26 22:00 upstream fcb70a56f4d8 efb3e894 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/01/25 10:30 upstream 5dbeeb268b63 40acda8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/01/16 20:06 upstream 983d014aafb1 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/01/07 15:29 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/01/07 14:52 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2026/01/01 03:25 upstream 349bd28a86f2 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/12/19 15:08 upstream dd9b004b7ff3 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/11/23 06:35 upstream 89edd36fd801 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/11/20 02:50 upstream 23cb64fb7625 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/11/18 10:07 upstream e7c375b18160 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/11/10 09:23 upstream e9a6fb0bcdd7 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/11/02 18:24 upstream 691d401c7e0e 2c50b6a9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/10/30 13:19 upstream e53642b87a4f fd2207e7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/10/19 13:21 upstream 1c64efcb083c 1c8c8cd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/10/14 14:27 upstream 3a8660878839 b6605ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/10/13 08:45 upstream 3a8660878839 ff1712fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/09/27 06:54 upstream 083fc6d7fa0d 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/09/17 11:34 upstream 5aca7966d2a7 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/09/09 03:31 upstream f777d1112ee5 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/07/27 06:19 upstream 302f88ff3584 fb8f743d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/05/18 09:45 upstream 5723cc3450bc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/04/26 07:13 upstream c3137514f1f1 c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/04/12 20:47 upstream 3bde70a2c827 0bd6db41 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/02/18 12:59 upstream 2408a807bfc3 c37c7249 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/02/12 01:00 upstream 09fbf3d50205 f2baddf5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/02/08 22:21 upstream 8f6629c004b1 ef44b750 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/02/08 06:54 upstream 7ee983c850b4 ef44b750 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/01/02 05:47 upstream 56e6a3499e14 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2025/01/01 08:56 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2024/12/21 03:43 upstream e9b8ffafd20a d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2024/12/20 23:57 upstream e9b8ffafd20a d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush
2024/12/13 23:21 upstream 243f750a2df0 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in expkey_flush