INFO: task hung in rxrpc_destroy_all_connections (5)

Status: upstream: reported on 2026/02/25 07:45
Subsystems: kernel
Reported-by: syzbot+138f5aa6fa94d4802887@syzkaller.appspotmail.com
First crash: 288d, last: 2d03h
Discussions (1)
  Title: [syzbot] [kernel?] INFO: task hung in rxrpc_destroy_all_connections (5)
  Replies (including bot): 0 (1)
  Last reply: 2026/02/25 07:45
Similar bugs (5)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: task hung in rxrpc_destroy_all_connections 1 1 2393d 2393d 0/1 auto-closed as invalid on 2019/12/16 04:54
upstream INFO: task hung in rxrpc_destroy_all_connections net afs 1 1 2378d 2378d 0/29 auto-closed as invalid on 2019/11/30 22:24
upstream INFO: task hung in rxrpc_destroy_all_connections (2) afs net 1 5 1978d 2014d 0/29 auto-closed as invalid on 2021/01/03 19:58
upstream INFO: task hung in rxrpc_destroy_all_connections (4) afs net 1 1 764d 756d 0/29 auto-obsoleted due to no activity on 2024/04/02 13:13
upstream INFO: task hung in rxrpc_destroy_all_connections (3) afs net 1 1 1581d 1581d 0/29 auto-closed as invalid on 2022/02/05 03:08

Sample crash report:
INFO: task kworker/u8:11:2984 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:11   state:D stack:25592 pid:2984  tgid:2984  ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __flush_workqueue+0x3f7/0x1200 kernel/workqueue.c:4083
 rxrpc_destroy_all_connections+0xf9/0x420 net/rxrpc/conn_object.c:477
 rxrpc_exit_net+0x7b/0xc0 net/rxrpc/net_ns.c:113
 ops_exit_list net/core/net_namespace.c:199 [inline]
 ops_undo_list+0x2ee/0xab0 net/core/net_namespace.c:252
 cleanup_net+0x499/0x920 net/core/net_namespace.c:704
 process_one_work+0x9d7/0x1920 kernel/workqueue.c:3275
 process_scheduled_works kernel/workqueue.c:3358 [inline]
 worker_thread+0x5da/0xe40 kernel/workqueue.c:3439
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
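
In the trace above, the netns cleanup worker reaches __flush_workqueue() through rxrpc's pernet ->exit handler (rxrpc_exit_net -> rxrpc_destroy_all_connections) and then parks in __wait_for_common(), i.e. it waits on a completion that only fires once every item queued on that workqueue has run. A single stuck work item therefore keeps the worker in D state indefinitely, which is what the hung-task watchdog reports after the configured timeout (143 seconds here). A minimal, self-contained sketch of that generic shape, using hypothetical names (demo_wq, demo_work_fn) rather than the actual net/rxrpc code:

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/delay.h>

/* Hypothetical demo module, not rxrpc: its exit path flushes a workqueue
 * that still has an unfinished work item, so flush_workqueue() sleeps in a
 * completion wait exactly like the trace above. */
static struct workqueue_struct *demo_wq;

static void demo_work_fn(struct work_struct *work)
{
	msleep(300 * 1000);	/* a work item that takes "too long" (5 minutes) */
}
static DECLARE_WORK(demo_work, demo_work_fn);

static int __init demo_init(void)
{
	demo_wq = alloc_workqueue("demo_wq", WQ_UNBOUND, 0);
	if (!demo_wq)
		return -ENOMEM;
	queue_work(demo_wq, &demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	/* Mirrors the ->exit path in the trace: block until every queued item
	 * has run; if one never finishes, neither does this call and the
	 * caller ends up in the hung-task report. */
	flush_workqueue(demo_wq);
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

Unloading this module blocks in demo_exit() for the remaining sleep time; with hung_task_timeout_secs at its default, the rmmod task would be reported in the same way as kworker/u8:11 above.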
INFO: task syz.4.1464:13645 blocked for more than 144 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.1464      state:D stack:25752 pid:13645 tgid:13643 ppid:8814   task_flags:0x400140 flags:0x00080006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __flush_workqueue+0x3f7/0x1200 kernel/workqueue.c:4083
 rxrpc_release_sock net/rxrpc/af_rxrpc.c:965 [inline]
 rxrpc_release+0x2a7/0x6a0 net/rxrpc/af_rxrpc.c:996
 __sock_release net/socket.c:662 [inline]
 sock_release+0x91/0x1c0 net/socket.c:690
 afs_open_socket+0x32f/0x3f0 fs/afs/rxrpc.c:117
 afs_net_init+0x825/0xb00 fs/afs/main.c:116
 ops_init+0x1e2/0x5f0 net/core/net_namespace.c:137
 setup_net+0x118/0x3a0 net/core/net_namespace.c:446
 copy_net_ns+0x46f/0x7c0 net/core/net_namespace.c:581
 create_new_namespaces+0x3ea/0xac0 kernel/nsproxy.c:130
 unshare_nsproxy_namespaces+0xc3/0x1f0 kernel/nsproxy.c:226
 ksys_unshare+0x473/0xad0 kernel/fork.c:3174
 __do_sys_unshare kernel/fork.c:3245 [inline]
 __se_sys_unshare kernel/fork.c:3243 [inline]
 __x64_sys_unshare+0x31/0x40 kernel/fork.c:3243
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f70b5f9c799
RSP: 002b:00007f70b6e57028 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007f70b6215fa0 RCX: 00007f70b5f9c799
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000080
RBP: 00007f70b6032bd9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f70b6216038 R14: 00007f70b6215fa0 R15: 00007ffd0a745428
 </TASK>
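
The second blocked task entered the kernel from userspace: ORIG_RAX 0x110 is __NR_unshare on x86_64, and RDI 0x40000080 has CLONE_NEWNET (0x40000000) set, so creating the new network namespace runs every registered pernet ->init op under pernet_ops_rwsem. afs_net_init() opens an rxrpc socket, and when afs_open_socket() releases it (apparently on an error path), rxrpc_release_sock() flushes the same rxrpc workqueue and parks in the same completion wait as the cleanup worker. No reproducer is attached to this report; the sketch below only illustrates the triggering syscall visible in the registers, not the bug itself:

/* unshare_netns.c -- minimal illustration of the syscall in the trace above.
 * Needs CAP_SYS_ADMIN (or a prior user-namespace unshare). This is NOT the
 * syzkaller reproducer; none is attached to this report. */
#define _GNU_SOURCE
#include <sched.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Forces copy_net_ns()/setup_net(), which runs every pernet ->init op
	 * (including afs_net_init) while holding pernet_ops_rwsem for read. */
	if (unshare(CLONE_NEWNET) != 0) {
		fprintf(stderr, "unshare(CLONE_NEWNET): %s\n", strerror(errno));
		return 1;
	}
	puts("new network namespace created");
	return 0;
}

If one of those ->init ops blocks, as in the trace, the unshare() call never returns and the caller joins the hung-task report in D state.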

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
 #0: ffffffff8e7f4e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:11/2984:
 #0: ffff88801c6ae948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc9000eb1fd08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
 #2: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
2 locks held by kworker/u10:2/13588:
 #0: ffff88801f7e7148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc90004c7fd08 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
1 lock held by syz.4.1464/13645:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
3 locks held by kworker/u10:20/13724:
 #0: ffff88813fea4148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc90004377d08 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
 #2: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0x51/0xc0 net/core/link_watch.c:313
1 lock held by syz.7.1491/13813:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.7.1491/13814:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.8.1497/13840:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.9.1545/14277:
 #0: ffff88804543ef48 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #0: ffff88804543ef48 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: __sock_release+0x86/0x260 net/socket.c:661
1 lock held by syz.6.1627/14973:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
2 locks held by getty/15560:
 #0: ffff8880346660a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900077832f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
1 lock held by syz.1.1722/15710:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
1 lock held by syz.3.1747/15857:
 #0: ffffffff905fb270 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
2 locks held by syz-executor/16021:
 #0: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
 #1: ffffffff8e7f4e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
4 locks held by syz-executor/16141:
 #0: ffff888078b84ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0xb0 net/bluetooth/hci_core.c:500
 #1: ffff888078b840c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x35c/0x1240 net/bluetooth/hci_sync.c:5346
 #2: ffffffff908abba8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2151 [inline]
 #2: ffffffff908abba8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x280 net/bluetooth/hci_conn.c:2644
 #3: ffff8880421cc2f8 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x80/0x770 net/bluetooth/l2cap_core.c:1755
1 lock held by syz.0.1903/16675:
 #0: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
1 lock held by syz.2.1905/16683:
 #0: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff90613ba8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
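
The lock dump matches that picture: pernet_ops_rwsem is a reader-writer semaphore taken for read on both the create (copy_net_ns) and destroy (cleanup_net) sides, which is why the cleanup worker and several unshare() callers appear as simultaneous holders; judging by the two traces above, they are most likely all waiting on the same workqueue flush. Anything that needs the semaphore for write (registering or unregistering pernet ops) would queue behind them until that flush returns. A simplified sketch of that locking shape, with illustrative demo_ names rather than the net/core source:

#include <linux/module.h>
#include <linux/rwsem.h>

static DECLARE_RWSEM(demo_pernet_ops_rwsem);

/* create side (copy_net_ns in the dump): read lock, many holders allowed */
static int demo_copy_net_ns(void)
{
	int err = down_read_killable(&demo_pernet_ops_rwsem);
	if (err)
		return err;
	/* ... run every pernet ->init op; the rwsem stays held if one blocks ... */
	up_read(&demo_pernet_ops_rwsem);
	return 0;
}

/* destroy side (cleanup_net in the dump): also only a read lock */
static void demo_cleanup_net(void)
{
	down_read(&demo_pernet_ops_rwsem);
	/* ... run every pernet ->exit op, rxrpc_exit_net among them ... */
	up_read(&demo_pernet_ops_rwsem);
}

static int __init demo_init(void)
{
	demo_cleanup_net();
	return demo_copy_net_ns();
}
module_init(demo_init);
MODULE_LICENSE("GPL");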

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (34):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/03/05 10:54 upstream ecc64d2dc9ff a9fe5c9e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/02 07:41 upstream 39c633261414 43249bac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/25 04:23 upstream 7dff99b35460 787dfb7c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/24 19:13 upstream 7dff99b35460 96b1aa46 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/21 07:41 upstream a95f71ad3e2e 6e7b5511 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/18 21:01 upstream 23b0f90ba871 77d4d919 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/16 21:08 upstream 0f2acd3148e0 84656fa6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/14 20:38 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/01 22:27 upstream 9f2693489ef8 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/01 00:46 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/24 18:53 upstream 62085877ae65 40acda8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/20 02:32 upstream 24d479d26b25 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/13 09:33 upstream b71e635feefc d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/07 13:07 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/05 11:01 upstream 3609fa95fb0f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/31 12:43 upstream c8ebd433459b d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/21 15:19 upstream 9094662f6707 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/05 17:06 upstream 2061f18ad76e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/26 14:34 upstream 30f09200cc4a c116feb4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/24 19:55 upstream ac3fd01e4c1e bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/19 03:04 upstream 5bebe8de1926 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/01 17:03 upstream ba36dd5ee6fd 2c50b6a9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/23 18:15 upstream 43e9ad0c55a3 c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/04 12:21 upstream 2ccb4d203fe4 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/03 23:27 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/09/16 17:25 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/27 09:55 upstream fab1beda7597 e12e5ba4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/23 01:45 upstream cf6fc5eefc5b bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/10 04:36 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/02 19:01 upstream a6923c06a3b2 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/07/24 09:40 upstream f9af7b5d9349 0c1d6ded .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/07/12 09:48 upstream 379f604cc3dc 3cda49cf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/06/21 12:59 upstream 11313e2f7812 d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/05/22 14:46 upstream d608703fcdd9 0919b50b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections