syzbot


INFO: rcu detected stall in clone3

Status: upstream: reported on 2025/07/02 10:02
Reported-by: syzbot+5264e0b6dcde93e505c6@syzkaller.appspotmail.com
First crash: 1d16h, last: 1d16h
Similar bugs (14)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in clone3 (5) [mm] | - | - | - | 7 | 185d | 395d | 0/29 | auto-obsoleted due to no activity on 2025/03/30 08:38
upstream | INFO: rcu detected stall in clone3 [mm] | - | - | - | 2 | 1269d | 1295d | 0/29 | closed as invalid on 2022/02/08 10:10
upstream | INFO: rcu detected stall in clone3 (2) [kernfs] | - | - | - | 1 | 1117d | 1117d | 0/29 | auto-closed as invalid on 2022/09/10 23:49
upstream | INFO: rcu detected stall in clone3 (3) [kernel] | - | - | - | 1 | 866d | 866d | 0/29 | auto-obsoleted due to no activity on 2023/05/19 23:49
upstream | INFO: rcu detected stall in clone3 (4) [net mm] | - | - | - | 2 | 679d | 759d | 0/29 | auto-obsoleted due to no activity on 2023/11/22 09:08
upstream | INFO: rcu detected stall in sys_clone3 (3) [mm] | - | - | - | 26 | 109d | 304d | 0/29 | auto-obsoleted due to no activity on 2025/06/14 09:25
linux-6.1 | INFO: rcu detected stall in sys_clone3 (3) | - | - | - | 3 | 223d | 282d | 0/3 | auto-obsoleted due to no activity on 2025/03/02 13:35
upstream | INFO: rcu detected stall in sys_clone3 [kernfs] | - | - | - | 1 | 1138d | 1138d | 0/29 | auto-closed as invalid on 2022/08/20 13:01
linux-5.15 | INFO: rcu detected stall in sys_clone3 | - | - | - | 1 | 272d | 272d | 0/3 | auto-obsoleted due to no activity on 2025/01/12 11:21
upstream | INFO: rcu detected stall in sys_clone3 (2) [cgroups mm] | - | - | - | 3 | 403d | 506d | 0/29 | auto-obsoleted due to no activity on 2024/08/25 02:20
linux-6.1 | INFO: rcu detected stall in sys_clone3 | - | - | - | 1 | 544d | 541d | 0/3 | auto-obsoleted due to no activity on 2024/04/15 06:45
linux-6.1 | INFO: rcu detected stall in sys_clone3 (4) | - | - | - | 1 | 85d | 85d | 0/3 | upstream: reported on 2025/04/09 19:29
linux-5.15 | INFO: rcu detected stall in sys_clone3 (2) | - | - | - | 1 | 139d | 139d | 0/3 | auto-obsoleted due to no activity on 2025/05/25 09:12
linux-6.1 | INFO: rcu detected stall in sys_clone3 (2) | - | - | - | 1 | 405d | 405d | 0/3 | auto-obsoleted due to no activity on 2024/09/01 21:33

Sample crash report:
rcu: INFO: rcu_preempt self-detected stall on CPU
rcu: 	1-...!: (10499 ticks this GP) idle=afd/1/0x4000000000000000 softirq=28810/28810 fqs=0 
	(t=10500 jiffies g=39489 q=222)
rcu: rcu_preempt kthread timer wakeup didn't happen for 10499 jiffies! g39489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
rcu: 	Possible timer handling issue on cpu=0 timer-softirq=28568
rcu: rcu_preempt kthread starved for 10500 jiffies! g39489 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:I stack:27808 pid:   15 ppid:     2 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5030 [inline]
 __schedule+0x11b8/0x43b0 kernel/sched/core.c:6376
 schedule+0x11b/0x1e0 kernel/sched/core.c:6459
 schedule_timeout+0x15c/0x280 kernel/time/timer.c:1914
 rcu_gp_fqs_loop+0x29e/0x11b0 kernel/rcu/tree.c:1972
 rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 9744 Comm: syz.5.1021 Not tainted 5.15.186-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:__lock_acquire+0xdc9/0x7c60 kernel/locking/lockdep.c:4982
Code: 48 89 cb 48 8d 3c c5 c0 b0 f9 8f be 08 00 00 00 e8 3c 45 61 00 49 b8 00 00 00 00 00 fc ff df 48 63 c3 48 0f a3 05 47 b8 9d 0e <0f> 83 23 4f 00 00 49 8d 9d e0 0a 00 00 49 89 de 49 c1 ee 03 43 80
RSP: 0018:ffffc900000078c0 EFLAGS: 00000057
RAX: 0000000000000015 RBX: 0000000000000015 RCX: ffffffff815bf864
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8ff9b0c0
RBP: ffffc90000007b10 R08: dffffc0000000000 R09: fffffbfff1ff3619
R10: fffffbfff1ff3619 R11: 1ffffffff1ff3618 R12: ffff888028c0a8d0
R13: ffff888028c09dc0 R14: 0000000000000000 R15: ffff888028c0a8b0
FS:  00005555755fb500(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b3031aff8 CR3: 0000000068690000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
 rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:312
 rcu_read_lock include/linux/rcupdate.h:739 [inline]
 advance_sched+0x6ca/0x940 net/sched/sch_taprio.c:769
 __run_hrtimer kernel/time/hrtimer.c:1690 [inline]
 __hrtimer_run_queues+0x53d/0xc40 kernel/time/hrtimer.c:1754
 hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1816
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
 __sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
 sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:unwind_next_frame+0x673/0x1d90 arch/x86/kernel/unwind_orc.c:475
Code: 0f 85 7f 13 00 00 4c 89 f8 48 c1 e8 03 42 0f b6 04 28 84 c0 0f 85 8f 13 00 00 48 0f bf 02 48 01 c5 e9 0e 01 00 00 4c 8d 7b 40 <4c> 89 f8 48 c1 e8 03 42 80 3c 28 00 74 0c 4c 89 ff e8 27 cd 88 00
RSP: 0018:ffffc90003b9f248 EFLAGS: 00000293
RAX: 0000000000000014 RBX: ffffc90003b9f308 RCX: ffffffff8d77273c
RDX: ffffffff8de27c3a RSI: ffffffff8de27c3a RDI: ffffffff8134690c
RBP: ffffffff81469f84 R08: 0000000000000003 R09: 0000000000000006
R10: fffff52000773e6d R11: 1ffff92000773e6b R12: 1ffffffff1bc4f87
R13: dffffc0000000000 R14: ffffffff8de27c3e R15: ffffc90003b9f348
 arch_stack_walk+0x10c/0x140 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x98/0xe0 kernel/stacktrace.c:122
 save_stack+0xf3/0x1e0 mm/page_owner.c:119
 __set_page_owner+0x41/0x2d0 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2426 [inline]
 get_page_from_freelist+0x1b77/0x1c60 mm/page_alloc.c:4192
 __alloc_pages+0x1e1/0x470 mm/page_alloc.c:5474
 vm_area_alloc_pages mm/vmalloc.c:2869 [inline]
 __vmalloc_area_node mm/vmalloc.c:2925 [inline]
 __vmalloc_node_range+0x4b2/0x8b0 mm/vmalloc.c:3030
 alloc_thread_stack_node kernel/fork.c:246 [inline]
 dup_task_struct+0x3f5/0xb30 kernel/fork.c:899
 copy_process+0x5b3/0x3e00 kernel/fork.c:2121
 kernel_clone+0x219/0x930 kernel/fork.c:2679
 __do_sys_clone3 kernel/fork.c:2954 [inline]
 __se_sys_clone3+0x2d5/0x360 kernel/fork.c:2938
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7ff21fb5b189
Code: c2 08 00 48 8d 3d 5c c2 08 00 e8 02 29 f6 ff 66 90 b8 ea ff ff ff 48 85 ff 74 2c 48 85 d2 74 27 49 89 c8 b8 b3 01 00 00 0f 05 <48> 85 c0 7c 18 74 01 c3 31 ed 48 83 e4 f0 4c 89 c7 ff d2 48 89 c7
RSP: 002b:00007ffeae1c91e8 EFLAGS: 00000206 ORIG_RAX: 00000000000001b3
RAX: ffffffffffffffda RBX: 00007ff21fadd590 RCX: 00007ff21fb5b189
RDX: 00007ff21fadd590 RSI: 0000000000000058 RDI: 00007ffeae1c9230
RBP: 00007ff21d98e6c0 R08: 00007ff21d98e6c0 R09: 00007ffeae1c9317
R10: 0000000000000008 R11: 0000000000000206 R12: ffffffffffffffa8
R13: 000000000000000b R14: 00007ffeae1c9230 R15: 00007ffeae1c9318
 </TASK>
NMI backtrace for cpu 1
CPU: 1 PID: 144 Comm: kworker/u4:1 Not tainted 5.15.186-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: netns cleanup_net
Call Trace:
 <IRQ>
 dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
 trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
 rcu_dump_cpu_stacks+0x22f/0x380 kernel/rcu/tree_stall.h:349
 print_cpu_stall+0x31d/0x5f0 kernel/rcu/tree_stall.h:633
 check_cpu_stall kernel/rcu/tree_stall.h:727 [inline]
 rcu_pending kernel/rcu/tree.c:3932 [inline]
 rcu_sched_clock_irq+0x6d8/0x1110 kernel/rcu/tree.c:2619
 update_process_times+0x193/0x200 kernel/time/timer.c:1818
 tick_sched_handle kernel/time/tick-sched.c:254 [inline]
 tick_sched_timer+0x37d/0x560 kernel/time/tick-sched.c:1473
 __run_hrtimer kernel/time/hrtimer.c:1690 [inline]
 __hrtimer_run_queues+0x4fe/0xc40 kernel/time/hrtimer.c:1754
 hrtimer_interrupt+0x3bb/0x8d0 kernel/time/hrtimer.c:1816
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1097 [inline]
 __sysvec_apic_timer_interrupt+0x137/0x4a0 arch/x86/kernel/apic/apic.c:1114
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1108 [inline]
 sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1108
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:676
RIP: 0010:csd_lock_wait kernel/smp.c:440 [inline]
RIP: 0010:smp_call_function_single+0x212/0x490 kernel/smp.c:758
Code: 48 44 89 f6 83 e6 01 31 ff e8 0a 6b 0b 00 41 83 e6 01 49 bc 00 00 00 00 00 fc ff df 75 0a e8 95 67 0b 00 e9 a3 00 00 00 f3 90 <f7> 44 24 48 01 00 00 00 0f 84 8e 00 00 00 e8 7b 67 0b 00 eb e9 e8
RSP: 0018:ffffc9000127f8e0 EFLAGS: 00000293
RAX: ffffffff816c5b95 RBX: 0000000000000000 RCX: ffff8880185a1dc0
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffc9000127f9d0 R08: dffffc0000000000 R09: ffffed1017207679
R10: ffffed1017207679 R11: 1ffff11017207678 R12: dffffc0000000000
R13: 0000000000000001 R14: 0000000000000001 R15: 1ffff9200024ff20
 rcu_barrier+0x25d/0x4b0 kernel/rcu/tree.c:4078
 l2tp_exit_net+0x207/0x2b0 net/l2tp/l2tp_core.c:1685
 ops_exit_list net/core/net_namespace.c:172 [inline]
 cleanup_net+0x6f0/0xb80 net/core/net_namespace.c:635
 process_one_work+0x863/0x1000 kernel/workqueue.c:2310
 worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/07/02 10:01 | linux-5.15.y | 3dea0e7f549e | bc80e4f0 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci2-linux-5-15-kasan | INFO: rcu detected stall in clone3