syzbot


possible deadlock in __queue_map_get

Status: upstream: reported C repro on 2025/06/23 19:42
Bug presence: origin:lts-only
Reported-by: syzbot+9a27d82855e38d482549@syzkaller.appspotmail.com
First crash: 68d, last: 29d
Bug presence (2)
Date       | Name              | Commit       | Repro | Result
2025/06/28 | linux-6.6.y (ToT) | 3f5b4c104b7d | C     | [report] possible deadlock in __queue_map_get
2025/06/28 | upstream (ToT)    | 35e261cd95dd | C     | Didn't crash
Similar bugs (2)
Kernel    | Title                                                                    | Rank | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-6.1 | possible deadlock in __queue_map_get (origin:upstream, missing-backport) | 4    | C     |              |            | 30    | 6d22h | 510d     | 0/3     | upstream: reported C repro on 2024/04/08 00:56
upstream  | possible deadlock in __queue_map_get (bpf)                               | 4    | C     | error        |            | 180   | 146d  | 505d     | 28/29   | fixed on 2025/06/10 16:19

Sample crash report:
============================================
WARNING: possible recursive locking detected
6.6.101-syzkaller #0 Not tainted
--------------------------------------------
syz.1.561/6562 is trying to acquire lock:
ffff888030130218 (&qs->lock){-.-.}-{2:2}, at: __queue_map_get+0x11c/0x4b0 kernel/bpf/queue_stack_maps.c:105

but task is already holding lock:
ffff88807c242218 (&qs->lock){-.-.}-{2:2}, at: __queue_map_get+0x11c/0x4b0 kernel/bpf/queue_stack_maps.c:105

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&qs->lock);
  lock(&qs->lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by syz.1.561/6562:
 #0: ffffffff8cd78688 (tracepoints_mutex){+.+.}-{3:3}, at: tracepoint_probe_unregister+0x30/0x930 kernel/tracepoint.c:548
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361
 #2: ffff88807c242218 (&qs->lock){-.-.}-{2:2}, at: __queue_map_get+0x11c/0x4b0 kernel/bpf/queue_stack_maps.c:105
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361

stack backtrace:
CPU: 1 PID: 6562 Comm: syz.1.561 Not tainted 6.6.101-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 check_deadlock kernel/locking/lockdep.c:3062 [inline]
 validate_chain kernel/locking/lockdep.c:3856 [inline]
 __lock_acquire+0x5d40/0x7c80 kernel/locking/lockdep.c:5137
 lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
 __queue_map_get+0x11c/0x4b0 kernel/bpf/queue_stack_maps.c:105
 bpf_prog_00798911c748094f+0x42/0x46
 bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
 __bpf_prog_run include/linux/filter.h:612 [inline]
 bpf_prog_run include/linux/filter.h:619 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
 bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
 __bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
 __traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
 trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
 __pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
 queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
 __queue_map_get+0x11c/0x4b0 kernel/bpf/queue_stack_maps.c:105
 bpf_prog_00798911c748094f+0x42/0x46
 bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
 __bpf_prog_run include/linux/filter.h:612 [inline]
 bpf_prog_run include/linux/filter.h:619 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
 bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
 __bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
 __traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
 trace_contention_end+0xc5/0xe0 include/trace/events/lock.h:122
 __mutex_lock_common kernel/locking/mutex.c:612 [inline]
 __mutex_lock+0x2fa/0xcc0 kernel/locking/mutex.c:747
 tracepoint_probe_unregister+0x30/0x930 kernel/tracepoint.c:548
 bpf_raw_tp_link_release+0x63/0x90 kernel/bpf/syscall.c:3369
 bpf_link_free+0x131/0x310 kernel/bpf/syscall.c:2912
 bpf_link_put_direct kernel/bpf/syscall.c:2952 [inline]
 bpf_link_release+0x6e/0x80 kernel/bpf/syscall.c:2959
 __fput+0x234/0x970 fs/file_table.c:384
 task_work_run+0x1ce/0x250 kernel/task_work.c:239
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
 syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
 do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f382938eb69
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffc40fc1348 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 000000000001dc65 RCX: 00007f382938eb69
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000440fc163f
R10: 00007f3829000000 R11: 0000000000000246 R12: 00007f38295b5fac
R13: 00007f38295b5fa0 R14: ffffffffffffffff R15: 0000000000000003
 </TASK>

Crashes (4):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                   | Manager                  | Title
2025/08/01 23:33 | linux-6.6.y | 3a8ababb8b6a | 40127d41  | .config | console log | report | syz / log | C       |         | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-6-kasan-perf | possible deadlock in __queue_map_get
2025/06/23 19:42 | linux-6.6.y | 6282921b6825 | d6cdfb8a  | .config | console log | report | syz / log | C       |         | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-6-kasan-perf | possible deadlock in __queue_map_get
2025/07/22 21:43 | linux-6.6.y | d96eb99e2f0e | 8e9d1dc1  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-6-kasan-perf | possible deadlock in __queue_map_get
2025/07/22 20:18 | linux-6.6.y | d96eb99e2f0e | 8e9d1dc1  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-6-kasan-perf | possible deadlock in __queue_map_get
* Struck through repros no longer work on HEAD.