syzbot


INFO: rcu detected stall in call_usermodehelper_exec_work (4)

Status: auto-obsoleted due to no activity on 2026/03/16 13:29
Subsystems: mm
First crash: 258d, last: 92d
AI Jobs (1)
ID:       39c99af9-e7c5-4552-8ac6-0c43c2ad0268
Workflow: repro
Bug:      INFO: rcu detected stall in call_usermodehelper_exec_work (4)
Created:  2026/03/06 12:41    Started: 2026/03/06 12:41    Finished: 2026/03/06 12:58
Revision: 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Similar bugs (3)
Kernel    Title                                                          Labels      Rank  Count  Last   Reported  Patched  Status
upstream  INFO: rcu detected stall in call_usermodehelper_exec_work      cgroups mm  1     1      2261d  2261d     0/29     closed as invalid on 2020/01/09 08:13
upstream  INFO: rcu detected stall in call_usermodehelper_exec_work (2)  kernel      1     2      1571d  1580d     0/29     closed as invalid on 2022/02/08 10:00
upstream  INFO: rcu detected stall in call_usermodehelper_exec_work (3)  mm          1     2      571d   602d      0/29     auto-obsoleted due to no activity on 2024/11/22 13:58

Sample crash report:
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P50/1:b..l
rcu: 	(detected by 0, t=10502 jiffies, g=153541, q=1128 ncpus=1)
task:kworker/u8:3    state:R  running task     stack:23816 pid:50    tgid:50    ppid:2      task_flags:0x4208160 flags:0x00080000
Workqueue: events_unbound call_usermodehelper_exec_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 preempt_schedule_irq+0x51/0x90 kernel/sched/core.c:7190
 irqentry_exit+0x1d8/0x8c0 kernel/entry/common.c:216
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
RIP: 0010:__update_page_owner_free_handle.constprop.0+0x238/0x4a0 mm/page_owner.c:292
Code: 09 80 fa 03 0f 8e 3f 02 00 00 49 8d 7e 38 45 8b bf 18 06 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 f9 48 c1 e9 03 0f b6 04 01 <84> c0 74 08 3c 03 0f 8e 09 02 00 00 48 8b 14 24 45 89 7e 38 48 b8
RSP: 0018:ffffc90000bb7540 EFLAGS: 00000a06
RAX: 0000000000000000 RBX: ffff88801e06fbd0 RCX: 1ffff11003c0df82
RDX: 0000000000000000 RSI: ffffffff8231cb1f RDI: ffff88801e06fc10
RBP: 0000000000000001 R08: 0000000000000001 R09: ffffed1003c0df7a
R10: ffff88801e06fbd7 R11: ffff8880202a4830 R12: 0000000000140442
R13: 0000000000000008 R14: ffff88801e06fbd8 R15: 0000000000000032
 __reset_page_owner+0x93/0x1a0 mm/page_owner.c:321
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1395 [inline]
 __free_frozen_pages+0x7df/0x1170 mm/page_alloc.c:2943
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x4c/0xf0 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x195/0x1e0 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x69/0x90 mm/kasan/common.c:349
 kasan_slab_alloc include/linux/kasan.h:252 [inline]
 slab_post_alloc_hook mm/slub.c:4953 [inline]
 slab_alloc_node mm/slub.c:5263 [inline]
 kmem_cache_alloc_noprof+0x25e/0x770 mm/slub.c:5270
 alloc_pid+0xd8/0x13c0 kernel/pid.c:183
 copy_process+0x4027/0x7430 kernel/fork.c:2237
 kernel_clone+0xfc/0x910 kernel/fork.c:2651
 user_mode_thread+0xc8/0x110 kernel/fork.c:2727
 call_usermodehelper_exec_work kernel/umh.c:171 [inline]
 call_usermodehelper_exec_work+0xcb/0x170 kernel/umh.c:157
 process_one_work+0x9ba/0x1b20 kernel/workqueue.c:3257
 process_scheduled_works kernel/workqueue.c:3340 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3421
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
rcu: rcu_preempt kthread starved for 8170 jiffies! g153541 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28488 pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_timeout+0x123/0x290 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1ea/0xaf0 kernel/rcu/tree.c:2083
 rcu_gp_kthread+0x26d/0x380 kernel/rcu/tree.c:2285
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 UID: 0 PID: 64 Comm: kworker/u8:4 Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: wg-kex-wg0 wg_packet_handshake_send_worker
RIP: 0010:debug_lockdep_rcu_enabled+0x2e/0x40 kernel/rcu/update.c:322
Code: 8b 05 56 8c 13 05 85 c0 74 20 8b 05 e0 bb 13 05 85 c0 74 16 65 48 8b 05 98 d8 3c 08 8b 80 2c 0b 00 00 85 c0 0f 94 c0 0f b6 c0 <e9> 0d 2c 03 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 90 90 90 90
RSP: 0018:ffffc900000068d8 EFLAGS: 00000246
RAX: 0000000000000001 RBX: 0000000000000003 RCX: ffffffff8a1e0c1a
RDX: ffff88801cb53d00 RSI: ffffffff8a1e0c2e RDI: 0000000000000001
RBP: ffffc900000069f0 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: ffff88801cb54830 R12: ffff888030338cdf
R13: ffff888030338cc8 R14: 1ffff92000000d2c R15: ffff88807ce61000
FS:  0000000000000000(0000) GS:ffff8881248fd000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffcaebec84c CR3: 00000000783d6000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 rcu_read_lock_held_common kernel/rcu/update.c:105 [inline]
 rcu_read_lock_held+0x9/0x50 kernel/rcu/update.c:349
 __in6_dev_get include/net/addrconf.h:347 [inline]
 ip6_ignore_linkdown include/net/addrconf.h:448 [inline]
 find_match+0x373/0x15d0 net/ipv6/route.c:780
 __find_rr_leaf+0x140/0xe00 net/ipv6/route.c:868
 find_rr_leaf net/ipv6/route.c:889 [inline]
 rt6_select net/ipv6/route.c:933 [inline]
 fib6_table_lookup+0x57c/0xa30 net/ipv6/route.c:2233
 ip6_pol_route+0x1cc/0x1230 net/ipv6/route.c:2269
 pol_lookup_func include/net/ip6_fib.h:617 [inline]
 fib6_rule_lookup+0x536/0x720 net/ipv6/fib6_rules.c:120
 ip6_route_input_lookup net/ipv6/route.c:2338 [inline]
 ip6_route_input+0x662/0xc70 net/ipv6/route.c:2641
 ip6_rcv_finish_core.constprop.0+0x1a0/0x5d0 net/ipv6/ip6_input.c:66
 ip6_rcv_finish+0x130/0x580 net/ipv6/ip6_input.c:77
 ip_sabotage_in+0x21e/0x290 net/bridge/br_netfilter_hooks.c:990
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xbe/0x200 net/netfilter/core.c:623
 nf_hook.constprop.0+0x424/0x750 include/linux/netfilter.h:273
 NF_HOOK include/linux/netfilter.h:316 [inline]
 ipv6_rcv+0xa4/0x650 net/ipv6/ip6_input.c:311
 __netif_receive_skb_one_core+0x12d/0x1e0 net/core/dev.c:6137
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6250
 netif_receive_skb_internal net/core/dev.c:6336 [inline]
 netif_receive_skb+0x137/0x760 net/core/dev.c:6395
 NF_HOOK include/linux/netfilter.h:318 [inline]
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_pass_frame_up+0x346/0x490 net/bridge/br_input.c:70
 br_handle_frame_finish+0x10e8/0x1f00 net/bridge/br_input.c:235
 br_nf_hook_thresh+0x307/0x410 net/bridge/br_netfilter_hooks.c:1167
 br_nf_pre_routing_finish_ipv6+0x76a/0xfc0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0xb28/0x14e0 net/bridge/br_input.c:442
 __netif_receive_skb_core.constprop.0+0x6b3/0x35b0 net/core/dev.c:6024
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:6135
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6250
 process_backlog+0x4a2/0x1650 net/core/dev.c:6602
 __napi_poll.constprop.0+0xb3/0x540 net/core/dev.c:7666
 napi_poll net/core/dev.c:7729 [inline]
 net_rx_action+0x9f9/0xfa0 net/core/dev.c:7881
 handle_softirqs+0x219/0x950 kernel/softirq.c:622
 do_softirq kernel/softirq.c:523 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:510
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:450
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 fpregs_unlock arch/x86/include/asm/fpu/api.h:77 [inline]
 kernel_fpu_end arch/x86/kernel/fpu/core.c:480 [inline]
 kernel_fpu_end+0x5e/0x70 arch/x86/kernel/fpu/core.c:473
 blake2s_compress+0x77/0xe0 lib/crypto/x86/blake2s.h:42
 blake2s_final+0xc9/0x160 lib/crypto/blake2s.c:142
 hmac.constprop.0+0x335/0x420 drivers/net/wireguard/noise.c:333
 kdf.constprop.0+0x14b/0x280 drivers/net/wireguard/noise.c:367
 message_ephemeral+0x5f/0x70 drivers/net/wireguard/noise.c:493
 wg_noise_handshake_create_initiation+0x322/0x610 drivers/net/wireguard/noise.c:545
 wg_packet_send_handshake_initiation+0x19a/0x360 drivers/net/wireguard/send.c:34
 wg_packet_handshake_send_worker+0x1c/0x30 drivers/net/wireguard/send.c:51
 process_one_work+0x9ba/0x1b20 kernel/workqueue.c:3257
 process_scheduled_works kernel/workqueue.c:3340 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3421
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
net_ratelimit: 6043 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
net_ratelimit: 9077 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:6e:9e:6d:d7:e4:98, vlan:0)

Crashes (4):
Time              Kernel    Commit        Syzkaller  Manager                    Title
2025/12/16 13:19  upstream  40fbbd64bba6  d1b870e1   ci-qemu-gce-upstream-auto  INFO: rcu detected stall in call_usermodehelper_exec_work
2025/09/21 13:19  upstream  3b08f56fbbb9  67c37560   ci-qemu-gce-upstream-auto  INFO: rcu detected stall in call_usermodehelper_exec_work
2025/09/06 14:42  upstream  d1d10cea0895  d291dd2d   ci-qemu-gce-upstream-auto  INFO: rcu detected stall in call_usermodehelper_exec_work
2025/07/02 23:18  upstream  b4911fb0b060  0cd59a8f   ci-qemu-gce-upstream-auto  INFO: rcu detected stall in call_usermodehelper_exec_work