syzbot


INFO: task hung in rxrpc_destroy_all_connections (5)

Status: upstream: reported on 2026/02/25 07:45
Subsystems: kernel
Reported-by: syzbot+138f5aa6fa94d4802887@syzkaller.appspotmail.com
First crash: 296d, last: 55m
AI Jobs (1)
ID: 8d127d0d-aa75-47e9-bd7f-70576e4b5f67  Workflow: repro  Bug: INFO: task hung in rxrpc_destroy_all_connections (5)
Created: 2026/03/08 02:32  Started: 2026/03/08 02:32  Finished: 2026/03/08 02:42  Revision: 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (1)
Title: [syzbot] [kernel?] INFO: task hung in rxrpc_destroy_all_connections (5)  Replies (including bot): 0 (1)  Last reply: 2026/02/25 07:45
Similar bugs (5)
linux-4.19  INFO: task hung in rxrpc_destroy_all_connections                Rank 1  Count 1  Last 2401d  Reported 2401d  Patched 0/1   auto-closed as invalid on 2019/12/16 04:54
upstream    INFO: task hung in rxrpc_destroy_all_connections [net afs]      Rank 1  Count 1  Last 2386d  Reported 2386d  Patched 0/29  auto-closed as invalid on 2019/11/30 22:24
upstream    INFO: task hung in rxrpc_destroy_all_connections (2) [afs net]  Rank 1  Count 5  Last 1986d  Reported 2022d  Patched 0/29  auto-closed as invalid on 2021/01/03 19:58
upstream    INFO: task hung in rxrpc_destroy_all_connections (4) [afs net]  Rank 1  Count 1  Last 771d   Reported 764d   Patched 0/29  auto-obsoleted due to no activity on 2024/04/02 13:13
upstream    INFO: task hung in rxrpc_destroy_all_connections (3) [afs net]  Rank 1  Count 1  Last 1589d  Reported 1589d  Patched 0/29  auto-closed as invalid on 2022/02/05 03:08

Sample crash report:
INFO: task kworker/u8:16:7781 blocked for more than 163 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:16   state:D stack:25496 pid:7781  tgid:7781  ppid:2      task_flags:0x4208160 flags:0x00080000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __flush_workqueue+0x3f7/0x1200 kernel/workqueue.c:4084
 rxrpc_destroy_all_connections+0xf9/0x420 net/rxrpc/conn_object.c:477
 rxrpc_exit_net+0x7b/0xc0 net/rxrpc/net_ns.c:113
 ops_exit_list net/core/net_namespace.c:199 [inline]
 ops_undo_list+0x2ee/0xab0 net/core/net_namespace.c:252
 cleanup_net+0x499/0x920 net/core/net_namespace.c:704
 process_one_work+0xa23/0x19a0 kernel/workqueue.c:3276
 process_scheduled_works kernel/workqueue.c:3359 [inline]
 worker_thread+0x5ef/0xe50 kernel/workqueue.c:3440
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.0.730:9981 blocked for more than 164 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.730       state:D stack:27408 pid:9981  tgid:9980  ppid:5822   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __flush_workqueue+0x3f7/0x1200 kernel/workqueue.c:4084
 rxrpc_release_sock net/rxrpc/af_rxrpc.c:967 [inline]
 rxrpc_release+0x2a7/0x6a0 net/rxrpc/af_rxrpc.c:998
 __sock_release net/socket.c:662 [inline]
 sock_release+0x91/0x1c0 net/socket.c:690
 afs_open_socket+0x32f/0x3f0 fs/afs/rxrpc.c:117
 afs_net_init+0x825/0xb00 fs/afs/main.c:116
 ops_init+0x1e2/0x5f0 net/core/net_namespace.c:137
 setup_net+0x118/0x3a0 net/core/net_namespace.c:446
 copy_net_ns+0x46f/0x7c0 net/core/net_namespace.c:581
 create_new_namespaces+0x3ea/0xac0 kernel/nsproxy.c:130
 unshare_nsproxy_namespaces+0xc3/0x1f0 kernel/nsproxy.c:226
 ksys_unshare+0x473/0xad0 kernel/fork.c:3174
 __do_sys_unshare kernel/fork.c:3245 [inline]
 __se_sys_unshare kernel/fork.c:3243 [inline]
 __x64_sys_unshare+0x31/0x40 kernel/fork.c:3243
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f091eb9c799
RSP: 002b:00007f091f9a1028 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007f091ee15fa0 RCX: 00007f091eb9c799
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000080
RBP: 00007f091ec32c99 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f091ee16038 R14: 00007f091ee15fa0 R15: 00007fff80217888
 </TASK>
INFO: task syz.4.774:10252 blocked for more than 145 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.774       state:D stack:26736 pid:10252 tgid:10251 ppid:9284   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __flush_workqueue+0x3f7/0x1200 kernel/workqueue.c:4084
 rxrpc_release_sock net/rxrpc/af_rxrpc.c:967 [inline]
 rxrpc_release+0x2a7/0x6a0 net/rxrpc/af_rxrpc.c:998
 __sock_release net/socket.c:662 [inline]
 sock_release+0x91/0x1c0 net/socket.c:690
 afs_open_socket+0x32f/0x3f0 fs/afs/rxrpc.c:117
 afs_net_init+0x825/0xb00 fs/afs/main.c:116
 ops_init+0x1e2/0x5f0 net/core/net_namespace.c:137
 setup_net+0x118/0x3a0 net/core/net_namespace.c:446
 copy_net_ns+0x46f/0x7c0 net/core/net_namespace.c:581
 create_new_namespaces+0x3ea/0xac0 kernel/nsproxy.c:130
 unshare_nsproxy_namespaces+0xc3/0x1f0 kernel/nsproxy.c:226
 ksys_unshare+0x473/0xad0 kernel/fork.c:3174
 __do_sys_unshare kernel/fork.c:3245 [inline]
 __se_sys_unshare kernel/fork.c:3243 [inline]
 __x64_sys_unshare+0x31/0x40 kernel/fork.c:3243
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fcbce99c799
RSP: 002b:00007fcbcf8c0028 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007fcbcec15fa0 RCX: 00007fcbce99c799
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000080
RBP: 00007fcbcea32c99 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fcbcec16038 R14: 00007fcbcec15fa0 R15: 00007fff7d789b18
 </TASK>

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
1 lock held by khungtaskd/30:
 #0: ffffffff8e7e7420 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7420 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7420 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:16/7781:
 #0: ffff88801c6ae948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90003aa7d08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
1 lock held by syz.0.730/9981:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.4.774/10252:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
3 locks held by kworker/u11:6/10651:
 #0: ffff88813fea4148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000497fd08 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0x51/0xc0 net/core/link_watch.c:313
3 locks held by kworker/u11:12/10896:
 #0: ffff88803268b148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900047dfd08 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x11f/0x1360 net/ipv6/addrconf.c:4198
1 lock held by syz.2.903/10957:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.5.964/11257:
 #0: ffff8880523fe988 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #0: ffff8880523fe988 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: __sock_release+0x86/0x260 net/socket.c:661
1 lock held by syz.6.980/11341:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.3.988/11386:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.7.1062/11819:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.9.1108/12132:
1 lock held by syz.4.1215/12782:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.1.1226/12844:
 #0: ffffffff905fb850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz-executor/12896:
 #0: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
1 lock held by syz.8.1241/12920:
 #0: ffff88807d3f9ec8 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #0: ffff88807d3f9ec8 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: __sock_release+0x86/0x260 net/socket.c:661
1 lock held by syz-executor/12954:
 #0: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
2 locks held by syz.0.1249/12964:
 #0: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff906140a8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
 #1: ffffffff8e7f3038 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (37):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/03/15 06:40 upstream 69237f8c1f69 ee8d34d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/13 19:55 upstream 0257f64bdac7 351cb5cf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/11 06:03 upstream b4f0dd314b39 86914af9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/05 10:54 upstream ecc64d2dc9ff a9fe5c9e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/02 07:41 upstream 39c633261414 43249bac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/25 04:23 upstream 7dff99b35460 787dfb7c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/24 19:13 upstream 7dff99b35460 96b1aa46 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/21 07:41 upstream a95f71ad3e2e 6e7b5511 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/18 21:01 upstream 23b0f90ba871 77d4d919 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/16 21:08 upstream 0f2acd3148e0 84656fa6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/14 20:38 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/01 22:27 upstream 9f2693489ef8 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/01 00:46 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/24 18:53 upstream 62085877ae65 40acda8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/20 02:32 upstream 24d479d26b25 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/13 09:33 upstream b71e635feefc d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/07 13:07 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/05 11:01 upstream 3609fa95fb0f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/31 12:43 upstream c8ebd433459b d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/21 15:19 upstream 9094662f6707 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/05 17:06 upstream 2061f18ad76e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/26 14:34 upstream 30f09200cc4a c116feb4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/24 19:55 upstream ac3fd01e4c1e bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/19 03:04 upstream 5bebe8de1926 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/01 17:03 upstream ba36dd5ee6fd 2c50b6a9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/23 18:15 upstream 43e9ad0c55a3 c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/04 12:21 upstream 2ccb4d203fe4 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/03 23:27 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/09/16 17:25 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/27 09:55 upstream fab1beda7597 e12e5ba4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/23 01:45 upstream cf6fc5eefc5b bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/10 04:36 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/02 19:01 upstream a6923c06a3b2 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/07/24 09:40 upstream f9af7b5d9349 0c1d6ded .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/07/12 09:48 upstream 379f604cc3dc 3cda49cf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/06/21 12:59 upstream 11313e2f7812 d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/05/22 14:46 upstream d608703fcdd9 0919b50b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections