syzbot


INFO: rcu detected stall in sys_inotify_add_watch (3)

Status: auto-obsoleted due to no activity on 2025/05/12 13:24
Subsystems: fs
First crash: 214d, last: 142d
Similar bugs (3)
Kernel     | Title                                                     | Count | Last  | Reported | Patched | Status
linux-5.15 | INFO: rcu detected stall in sys_inotify_add_watch         | 1     | 158d  | 158d     | 0/3     | auto-obsoleted due to no activity on 2025/05/05 23:52
upstream   | INFO: rcu detected stall in sys_inotify_add_watch (2) [lsm] | 1   | 330d  | 330d     | 0/29    | auto-obsoleted due to no activity on 2024/11/04 18:18
upstream   | INFO: rcu detected stall in sys_inotify_add_watch [fs]    | 1     | 1385d | 1385d    | 0/29    | auto-closed as invalid on 2021/12/16 04:51

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P10916/1:b..l
rcu: 	(detected by 0, t=10502 jiffies, g=48513, q=207042 ncpus=2)
task:udevd           state:R  running task     stack:27200 pid:10916 tgid:10916 ppid:5193   task_flags:0x400140 flags:0x00000002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0xf43/0x5890 kernel/sched/core.c:6764
 preempt_schedule_irq+0x51/0x90 kernel/sched/core.c:7086
 irqentry_exit+0x36/0x90 kernel/entry/common.c:354
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:leave_rcu fs/namei.c:741 [inline]
RIP: 0010:try_to_unlazy+0x192/0x660 fs/namei.c:851
Code: d2 0f 85 3c 04 00 00 44 8b 7b 3c 31 ff 44 89 f8 83 e0 01 89 c6 89 44 24 04 e8 da 92 87 ff 8b 44 24 04 85 c0 0f 84 24 01 00 00 <e8> c9 97 87 ff 4c 89 e2 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03
RSP: 0018:ffffc9000cb37b78 EFLAGS: 00000246
RAX: dffffc0000000000 RBX: ffffc9000cb37ca0 RCX: ffffffff82323b83
RDX: 1ffff92001966f98 RSI: ffffffff82323b91 RDI: 0000000000000001
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: ffffc9000cb37cd8
R13: ffffc9000cb37ce0 R14: ffff888023a031a0 R15: dffffc0000000000
 complete_walk+0x10b/0x330 fs/namei.c:957
 path_lookupat+0x28c/0x770 fs/namei.c:2643
 filename_lookup+0x221/0x5f0 fs/namei.c:2665
 user_path_at+0x3a/0x60 fs/namei.c:3072
 inotify_find_inode+0x2e/0x160 fs/notify/inotify/inotify_user.c:377
 __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:771 [inline]
 __se_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:729 [inline]
 __x64_sys_inotify_add_watch+0x20d/0x360 fs/notify/inotify/inotify_user.c:729
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd89cd23ee7
RSP: 002b:00007ffcf22d5628 EFLAGS: 00000202 ORIG_RAX: 00000000000000fe
RAX: ffffffffffffffda RBX: 000055b7d00600c0 RCX: 00007fd89cd23ee7
RDX: 0000000000000008 RSI: 000055b7d0046080 RDI: 0000000000000007
RBP: 000055b7d00600c0 R08: 0000000000000001 R09: 3c07f85065e45093
R10: 00000000000001b6 R11: 0000000000000202 R12: 000055b7d00e4110
R13: 000055b7d0067b70 R14: 0000000000000008 R15: 000055b7d002d2c0
 </TASK>
rcu: rcu_preempt kthread starved for 10467 jiffies! g48513 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27072 pid:17    tgid:17    ppid:2      task_flags:0x208040 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0xf43/0x5890 kernel/sched/core.c:6764
 __schedule_loop kernel/sched/core.c:6841 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6856
 schedule_timeout+0x124/0x280 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1eb/0xb00 kernel/rcu/tree.c:2024
 rcu_gp_kthread+0x271/0x380 kernel/rcu/tree.c:2226
 kthread+0x3af/0x750 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 8630 Comm: kworker/u8:27 Not tainted 6.14.0-rc2-syzkaller-00034-gfebbc555cf0f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue: events_unbound cfg80211_wiphy_work
RIP: 0010:__lock_acquire+0x1c/0x3c40 kernel/locking/lockdep.c:5079
Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 41 57 41 56 41 89 f6 41 55 41 54 49 89 fc 55 89 d5 53 44 89 cb 48 81 ec f0 00 00 00 <48> 8b 84 24 28 01 00 00 44 89 04 24 48 c7 84 24 90 00 00 00 b3 8a
RSP: 0018:ffffc90000a178a8 EFLAGS: 00000082
RAX: 0000000000000001 RBX: 0000000000000001 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffffffff9aa22ea0
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000001
R10: ffffffff90623617 R11: 0000000000000007 R12: ffffffff9aa22ea0
R13: ffffffff9aa22ea0 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fab1cf44f98 CR3: 000000004881e000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000008 DR2: 0002000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5851
 __raw_spin_trylock include/linux/spinlock_api_smp.h:90 [inline]
 _raw_spin_trylock+0x63/0x80 kernel/locking/spinlock.c:138
 avc_reclaim_node security/selinux/avc.c:473 [inline]
 avc_alloc_node+0x1f1/0x5a0 security/selinux/avc.c:507
 avc_insert security/selinux/avc.c:618 [inline]
 avc_compute_av+0xfd/0x5c0 security/selinux/avc.c:993
 avc_perm_nonode+0xaa/0x180 security/selinux/avc.c:1117
 avc_has_perm_noaudit+0x2d2/0x3a0 security/selinux/avc.c:1160
 avc_has_perm+0xc1/0x1c0 security/selinux/avc.c:1195
 selinux_inet_sys_rcv_skb+0x11f/0x190 security/selinux/hooks.c:5082
 selinux_ip_forward+0x4d3/0x550 security/selinux/hooks.c:5706
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:626
 nf_hook+0x486/0x810 include/linux/netfilter.h:269
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_nf_forward_ip.part.0+0x5e5/0x820 net/bridge/br_netfilter_hooks.c:719
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:679 [inline]
 br_nf_forward+0xf11/0x1bd0 net/bridge/br_netfilter_hooks.c:776
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:626
 nf_hook+0x474/0x7d0 include/linux/netfilter.h:269
 NF_HOOK include/linux/netfilter.h:312 [inline]
 __br_forward+0x1be/0x5b0 net/bridge/br_forward.c:115
 deliver_clone+0x5b/0xa0 net/bridge/br_forward.c:131
 maybe_deliver+0xa7/0x120 net/bridge/br_forward.c:190
 br_flood+0x17b/0x5e0 net/bridge/br_forward.c:237
 br_handle_frame_finish+0xea2/0x1c90 net/bridge/br_input.c:220
 br_nf_hook_thresh+0x303/0x410 net/bridge/br_netfilter_hooks.c:1170
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x3ce/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:282 [inline]
 br_handle_frame+0xad7/0x14a0 net/bridge/br_input.c:433
 __netif_receive_skb_core.constprop.0+0xa20/0x4330 net/core/dev.c:5722
 __netif_receive_skb_one_core+0xb1/0x1e0 net/core/dev.c:5826
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:5941
 process_backlog+0x443/0x15f0 net/core/dev.c:6289
 __napi_poll.constprop.0+0xb7/0x550 net/core/dev.c:7106
 napi_poll net/core/dev.c:7175 [inline]
 net_rx_action+0xa94/0x1010 net/core/dev.c:7297
 handle_softirqs+0x213/0x8f0 kernel/softirq.c:561
 do_softirq kernel/softirq.c:462 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:449
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:389
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 cfg80211_inform_single_bss_data+0x9b9/0x1df0 net/wireless/scan.c:2383
 cfg80211_inform_bss_data+0x205/0x3ba0 net/wireless/scan.c:3222
 cfg80211_inform_bss_frame_data+0x272/0x7a0 net/wireless/scan.c:3317
 ieee80211_bss_info_update+0x311/0xab0 net/mac80211/scan.c:226
 ieee80211_rx_bss_info net/mac80211/ibss.c:1102 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1581 [inline]
 ieee80211_ibss_rx_queued_mgmt+0x189c/0x2f50 net/mac80211/ibss.c:1608
 ieee80211_iface_process_skb net/mac80211/iface.c:1611 [inline]
 ieee80211_iface_work+0xc15/0xf50 net/mac80211/iface.c:1665
 cfg80211_wiphy_work+0x3ed/0x570 net/wireless/core.c:435
 process_one_work+0x9c5/0x1ba0 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3317 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3398
 kthread+0x3af/0x750 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
net_ratelimit: 777 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:33:18:38:ed:40, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:33:18:38:ed:40, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
audit_log_start: 12789 callbacks suppressed
audit: audit_backlog=65 > audit_backlog_limit=64
audit: audit_backlog=65 > audit_backlog_limit=64
audit: audit_lost=110197 audit_rate_limit=0 audit_backlog_limit=64
audit: audit_lost=110198 audit_rate_limit=0 audit_backlog_limit=64
audit: backlog limit exceeded
audit: audit_backlog=65 > audit_backlog_limit=64
audit: backlog limit exceeded
audit: audit_backlog=65 > audit_backlog_limit=64
audit: audit_lost=110199 audit_rate_limit=0 audit_backlog_limit=64
audit: audit_lost=110200 audit_rate_limit=0 audit_backlog_limit=64
net_ratelimit: 818 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:33:18:38:ed:40, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:33:18:38:ed:40, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)

Crashes (2):
Time             | Kernel   | Commit       | Syzkaller | Config  | Log / Report       | Assets                                        | Manager                            | Title
2025/02/11 13:23 | upstream | febbc555cf0f | 43f51a00  | .config | console log, report info | [disk image] [vmlinux] [kernel image]    | ci-upstream-kasan-gce-selinux-root | INFO: rcu detected stall in sys_inotify_add_watch
2024/11/30 15:36 | upstream | 2ba9f676d0a2 | 68914665  | .config | console log, report info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu2-arm64-mte        | INFO: rcu detected stall in sys_inotify_add_watch