syzbot


INFO: rcu detected stall in wg_packet_handshake_send_worker (6)

Status: auto-obsoleted due to no activity on 2025/07/21 04:05
Subsystems: crypto
First crash: 97d, last: 97d
Similar bugs (10)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in wg_packet_handshake_send_worker (4) wireguard | 1 | | | | 1 | 741d | 741d | 0/29 | auto-obsoleted due to no activity on 2023/10/15 22:40
upstream | INFO: rcu detected stall in wg_packet_handshake_send_worker net | 1 | | | | 1 | 1915d | 1915d | 0/29 | auto-closed as invalid on 2020/07/28 10:33
linux-5.15 | INFO: rcu detected stall in wg_packet_handshake_send_worker | 1 | | | | 1 | 476d | 476d | 0/3 | auto-obsoleted due to no activity on 2024/07/16 21:11
upstream | INFO: rcu detected stall in wg_packet_handshake_send_worker (5) wireguard | 1 | | | | 1 | 604d | 604d | 0/29 | auto-obsoleted due to no activity on 2024/02/29 10:43
linux-5.15 | BUG: soft lockup in wg_packet_handshake_send_worker | 1 | | | | 2 | 230d | 254d | 0/3 | auto-obsoleted due to no activity on 2025/03/20 05:34
upstream | INFO: rcu detected stall in wg_packet_handshake_send_worker (3) kernel | 1 | | | | 1 | 950d | 950d | 0/29 | auto-obsoleted due to no activity on 2023/04/12 00:54
linux-6.1 | INFO: rcu detected stall in wg_packet_handshake_send_worker | 1 | | | | 1 | 422d | 422d | 0/3 | auto-obsoleted due to no activity on 2024/09/09 01:47
upstream | INFO: rcu detected stall in wg_packet_handshake_send_worker (2) net | 1 | | | | 1 | 1822d | 1822d | 0/29 | auto-closed as invalid on 2020/10/29 22:12
android-5-15 | BUG: soft lockup in wg_packet_handshake_send_worker | 1 | | | | 2 | 738d | 745d | 0/2 | auto-obsoleted due to no activity on 2023/10/18 09:06
android-5-10 | BUG: soft lockup in wg_packet_handshake_send_worker | 1 | | | | 6 | 711d | 751d | 0/2 | auto-obsoleted due to no activity on 2023/11/15 02:30

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	(detected by 1, t=10502 jiffies, g=133441, q=493 ncpus=2)
rcu: All QSes seen, last rcu_preempt kthread activity 10500 (4295017923-4295007423), jiffies_till_next_fqs=1, root ->qsmask 0x0
rcu: rcu_preempt kthread starved for 10500 jiffies! g133441 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27304 pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x116f/0x5de0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6860
 schedule_timeout+0x123/0x290 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1ea/0xb00 kernel/rcu/tree.c:2046
 rcu_gp_kthread+0x270/0x380 kernel/rcu/tree.c:2248
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 32388 Comm: kworker/u8:11 Not tainted 6.15.0-rc3-syzkaller-00008-ga33b5a08cbbd #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: wg-kex-wg0 wg_packet_handshake_send_worker
RIP: 0010:pv_queued_spin_unlock arch/x86/include/asm/paravirt.h:577 [inline]
RIP: 0010:queued_spin_unlock arch/x86/include/asm/qspinlock.h:57 [inline]
RIP: 0010:do_raw_spin_unlock+0x152/0x230 kernel/locking/spinlock_debug.c:142
Code: 83 e0 07 83 c0 03 38 d0 7c 08 84 d2 0f 85 95 00 00 00 48 c7 c0 10 1a 22 8e c7 43 08 ff ff ff ff 48 ba 00 00 00 00 00 fc ff df <48> c1 e8 03 80 3c 10 00 0f 85 ba 00 00 00 48 83 3d d8 9b 89 0c 00
RSP: 0018:ffffc90000007d18 EFLAGS: 00000046
RAX: ffffffff8e221a10 RBX: ffffffff9ade1278 RCX: ffffffff81987d23
RDX: dffffc0000000000 RSI: 0000000000000004 RDI: ffffffff9ade1278
RBP: ffffffff9ade1280 R08: 0000000000000000 R09: fffffbfff35bc24f
R10: ffffffff9ade127b R11: 0000000000000000 R12: ffffffff9ade1288
R13: dffffc0000000000 R14: ffff88807b6f6340 R15: 1ffff92000000fac
FS:  0000000000000000(0000) GS:ffff8881249b2000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2fa1bff8 CR3: 000000007a638000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:150 [inline]
 _raw_spin_unlock_irqrestore+0x22/0x80 kernel/locking/spinlock.c:194
 debug_object_deactivate+0x1ec/0x3a0 lib/debugobjects.c:888
 debug_hrtimer_deactivate kernel/time/hrtimer.c:450 [inline]
 debug_deactivate kernel/time/hrtimer.c:490 [inline]
 __run_hrtimer kernel/time/hrtimer.c:1729 [inline]
 __hrtimer_run_queues+0x46f/0xad0 kernel/time/hrtimer.c:1825
 hrtimer_interrupt+0x397/0x8e0 kernel/time/hrtimer.c:1887
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1038 [inline]
 __sysvec_apic_timer_interrupt+0x108/0x3f0 arch/x86/kernel/apic/apic.c:1055
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0x9f/0xc0 arch/x86/kernel/apic/apic.c:1049
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:fsqr+0x17d/0x1d0 arch/x86/crypto/curve25519-x86_64.c:672
Code: 38 f6 4e 08 c4 62 ab f6 6e 30 66 4c 0f 38 f6 d3 f3 4c 0f 38 f6 56 10 c4 e2 a3 f6 46 38 66 4d 0f 38 f6 dd f3 4c 0f 38 f6 5e 18 <66> 48 0f 38 f6 c1 f3 48 0f 38 f6 c1 48 0f af c2 49 01 c0 66 4c 0f
RSP: 0018:ffffc90010a4f440 EFLAGS: 00000a46
RAX: 0000000000000010 RBX: 000000000000000e RCX: 0000000000000000
RDX: 0000000000000026 RSI: ffffc90010a4f600 RDI: ffffc90010a4f4f8
RBP: ffffc90010a4f4f8 R08: bdc6ce07cd4f4f36 R09: a2b2b5bb802afe55
R10: 748851290b373041 R11: 482f20f50fb69620 R12: ffffc90010a4f600
R13: 0000000000000004 R14: 72119cbbe57817f4 R15: 0000000000000000
 fsquare_times arch/x86/crypto/curve25519-x86_64.c:1120 [inline]
 finv+0x19d/0x450 arch/x86/crypto/curve25519-x86_64.c:1141
 encode_point+0xbc/0x360 arch/x86/crypto/curve25519-x86_64.c:1217
 curve25519_ever64_base+0x6ad/0x770 arch/x86/crypto/curve25519-x86_64.c:1587
 curve25519_base_arch+0x23/0x50 arch/x86/crypto/curve25519-x86_64.c:1609
 curve25519_generate_public include/crypto/curve25519.h:55 [inline]
 wg_noise_handshake_create_initiation+0x27c/0x650 drivers/net/wireguard/noise.c:542
 wg_packet_send_handshake_initiation+0x19a/0x360 drivers/net/wireguard/send.c:34
 wg_packet_handshake_send_worker+0x1c/0x30 drivers/net/wireguard/send.c:51
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
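
For context on the code path in the trace above: the stalled kworker is running WireGuard's handshake-initiation work item, which was in the middle of a Curve25519 computation when the NMI backtrace was taken. Below is a rough sketch of that work item, reconstructed from the function names in the trace (cf. drivers/net/wireguard/send.c); the struct field name, includes, and comments are assumptions for illustration, not an excerpt of the kernel source.

#include <linux/workqueue.h>   /* struct work_struct, container_of() */

/* Hypothetical sketch of the work item seen in the trace. */
void wg_packet_handshake_send_worker(struct work_struct *work)
{
	/* Assumed embedding of the work item inside struct wg_peer. */
	struct wg_peer *peer = container_of(work, struct wg_peer,
					    transmit_handshake_work);

	/*
	 * Builds and sends a Noise handshake initiation for this peer;
	 * per the trace, this is where curve25519_ever64_base() was
	 * executing on CPU 0 when the RCU stall was reported.
	 */
	wg_packet_send_handshake_initiation(peer);
	wg_peer_put(peer);
}
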

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/04/22 04:03 | upstream | a33b5a08cbbd | 2a20f901 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | INFO: rcu detected stall in wg_packet_handshake_send_worker