syzbot


possible deadlock in get_partial_node (3)

Status: upstream: reported on 2025/10/20 06:32
Reported-by: syzbot+36547da81cd9e30aff85@syzkaller.appspotmail.com
First crash: 54d, last: 54d
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | possible deadlock in get_partial_node (2) | 4 | — | — | — | 3 | 261d | 359d | 0/3 | auto-obsoleted due to no activity on 2025/07/04 20:47
linux-6.1 | possible deadlock in get_partial_node | 4 | — | — | — | 1 | 542d | 542d | 0/3 | auto-obsoleted due to no activity on 2024/09/26 19:17
linux-6.6 | possible deadlock in get_partial_node | 4 | — | — | — | 1 | 39d | 39d | 0/2 | upstream: reported on 2025/11/04 05:18
upstream | possible deadlock in get_partial_node (3) [serial] | 4 | — | — | — | 4 | 163d | 159d | 0/29 | auto-obsoleted due to no activity on 2025/09/01 07:10
upstream | possible deadlock in get_partial_node [bpf] | 4 | — | — | — | 19 | 423d | 570d | 0/29 | auto-obsoleted due to no activity on 2025/01/24 02:29
upstream | possible deadlock in get_partial_node (2) [bcachefs] | 4 | C | done | done | 4 | 277d | 299d | 28/29 | fixed on 2025/06/10 16:19

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.7.347/5606 is trying to acquire lock:
ffff8880174433d8 (&n->list_lock){-.-.}-{2:2}, at: get_partial_node+0x36/0x470 mm/slub.c:2215

but task is already holding lock:
ffff8880591ec238 (&trie->lock){..-.}-{2:2}, at: trie_update_elem+0xc4/0xe90 kernel/bpf/lpm_trie.c:335

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&trie->lock){..-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
       trie_delete_elem+0x90/0x690 kernel/bpf/lpm_trie.c:467
       bpf_prog_12666aae6518557f+0x26/0x3e
       bpf_dispatcher_nop_func include/linux/bpf.h:1012 [inline]
       __bpf_prog_run include/linux/filter.h:603 [inline]
       bpf_prog_run include/linux/filter.h:610 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:2285 [inline]
       bpf_trace_run2+0x1cd/0x3b0 kernel/trace/bpf_trace.c:2324
       trace_contention_end+0x13f/0x190 include/trace/events/lock.h:122
       __pv_queued_spin_lock_slowpath+0x7e8/0x9c0 kernel/locking/qspinlock.c:560
       pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
       queued_spin_lock_slowpath+0x43/0x50 arch/x86/include/asm/qspinlock.h:51
       queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
       do_raw_spin_lock+0x217/0x280 kernel/locking/spinlock_debug.c:115
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
       _raw_spin_lock_irqsave+0xb0/0xf0 kernel/locking/spinlock.c:162
       __unfreeze_partials+0x71/0x200 mm/slub.c:2555
       put_cpu_partial+0x17c/0x250 mm/slub.c:2667
       qlink_free mm/kasan/quarantine.c:168 [inline]
       qlist_free_all+0x76/0xe0 mm/kasan/quarantine.c:187
       kasan_quarantine_reduce+0x144/0x160 mm/kasan/quarantine.c:294
       __kasan_slab_alloc+0x1e/0x80 mm/kasan/common.c:305
       kasan_slab_alloc include/linux/kasan.h:201 [inline]
       slab_post_alloc_hook+0x4b/0x480 mm/slab.h:737
       slab_alloc_node mm/slub.c:3359 [inline]
       slab_alloc mm/slub.c:3367 [inline]
       __kmem_cache_alloc_lru mm/slub.c:3374 [inline]
       kmem_cache_alloc+0x123/0x2f0 mm/slub.c:3383
       mt_alloc_one lib/maple_tree.c:152 [inline]
       mas_alloc_nodes+0x2ec/0x890 lib/maple_tree.c:1277
       mas_node_count_gfp lib/maple_tree.c:1359 [inline]
       mas_preallocate+0x161/0x3c0 lib/maple_tree.c:5807
       __mmap_region mm/mmap.c:2765 [inline]
       mmap_region+0xd8a/0x1c70 mm/mmap.c:2916
       do_mmap+0x958/0xfd0 mm/mmap.c:1436
       vm_mmap_pgoff+0x1b2/0x2b0 mm/util.c:520
       elf_map+0x19e/0x2f0 fs/binfmt_elf.c:375
       load_elf_interp+0x46d/0xd40 fs/binfmt_elf.c:664
       load_elf_binary+0x19cd/0x26d0 fs/binfmt_elf.c:1272
       search_binary_handler fs/exec.c:1764 [inline]
       exec_binprm fs/exec.c:1805 [inline]
       bprm_execve+0xb10/0x18a0 fs/exec.c:1874
       kernel_execve+0x8b9/0x9c0 fs/exec.c:2039
       call_usermodehelper_exec_async+0x207/0x350 kernel/umh.c:113
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #0 (&n->list_lock){-.-.}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain kernel/locking/lockdep.c:3825 [inline]
       __lock_acquire+0x2cf8/0x7c50 kernel/locking/lockdep.c:5049
       lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
       get_partial_node+0x36/0x470 mm/slub.c:2215
       get_partial mm/slub.c:2330 [inline]
       ___slab_alloc+0x968/0x1230 mm/slub.c:3131
       __slab_alloc mm/slub.c:3240 [inline]
       slab_alloc_node mm/slub.c:3325 [inline]
       __kmem_cache_alloc_node+0x1a0/0x260 mm/slub.c:3398
       __do_kmalloc_node mm/slab_common.c:935 [inline]
       __kmalloc_node+0xa0/0x240 mm/slab_common.c:943
       kmalloc_node include/linux/slab.h:589 [inline]
       bpf_map_kmalloc_node+0xb8/0x1a0 kernel/bpf/syscall.c:454
       lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
       trie_update_elem+0x160/0xe90 kernel/bpf/lpm_trie.c:338
       bpf_map_update_value+0x59e/0x670 kernel/bpf/syscall.c:228
       generic_map_update_batch+0x569/0x850 kernel/bpf/syscall.c:1709
       bpf_map_do_batch+0x466/0x600 kernel/bpf/syscall.c:-1
       __sys_bpf+0x65f/0x6d0 kernel/bpf/syscall.c:-1
       __do_sys_bpf kernel/bpf/syscall.c:5131 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:5129 [inline]
       __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5129
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

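Reading the two chains together closes the cycle: chain #1 records the n->list_lock -> trie->lock order (while __unfreeze_partials() holds &n->list_lock, the contended-spinlock slowpath fires trace_contention_end, and a BPF program attached there calls trie_delete_elem(), which takes &trie->lock), and chain #0 records the reverse order (trie_update_elem() holds &trie->lock when bpf_map_kmalloc_node() misses the per-cpu caches and get_partial_node() takes &n->list_lock). No reproducer is attached to this report, so what follows is only a sketch of a tracepoint program with the shape chain #1 implies, assuming libbpf; the map layout, key type, and section name are illustrative guesses, not the actual fuzzer input.

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch of a program shaped like chain #1; not the syzkaller input. */
#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

/* Illustrative LPM-trie key: prefix length plus 4 bytes of data. */
struct trie_key {
	__u32 prefixlen;
	__u8  data[4];
};

struct {
	__uint(type, BPF_MAP_TYPE_LPM_TRIE);
	__uint(map_flags, BPF_F_NO_PREALLOC);	/* LPM tries require this flag */
	__uint(max_entries, 16);
	__type(key, struct trie_key);
	__type(value, __u32);
} trie SEC(".maps");

/* Runs whenever trace_contention_end fires -- including from the
 * queued-spinlock slowpath, where &n->list_lock is still held. */
SEC("raw_tracepoint/contention_end")
int on_contention_end(struct bpf_raw_tracepoint_args *ctx)
{
	struct trie_key k = { .prefixlen = 32 };

	/* trie_delete_elem() takes &trie->lock, recording the
	 * n->list_lock -> trie->lock dependency seen in chain #1. */
	bpf_map_delete_elem(&trie, &k);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
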
other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&trie->lock);
                               lock(&n->list_lock);
                               lock(&trie->lock);
  lock(&n->list_lock);

 *** DEADLOCK ***

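The scenario above is a plain AB-BA inversion. Below is a minimal userspace illustration, with pthread mutexes standing in for the two spinlocks (lock_a plays &trie->lock, lock_b plays &n->list_lock); the program is deliberately written so that overlapping sleeps make both threads block forever.

/* Hedged illustration of the CPU0/CPU1 scenario; names are illustrative.
 * Build with: cc -pthread demo.c -o demo (expect it to hang: that is the point). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* ~ &trie->lock   */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* ~ &n->list_lock */

static void *cpu0(void *arg)	/* trie_update_elem -> slab allocation path */
{
	pthread_mutex_lock(&lock_a);	/* lock(&trie->lock)   */
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&lock_b);	/* lock(&n->list_lock) -- blocks forever */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *cpu1(void *arg)	/* __unfreeze_partials -> tracepoint BPF path */
{
	pthread_mutex_lock(&lock_b);	/* lock(&n->list_lock) */
	sleep(1);
	pthread_mutex_lock(&lock_a);	/* lock(&trie->lock)   -- blocks forever */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);		/* never returns once the sleeps overlap */
	pthread_join(t1, NULL);
	puts("no deadlock this run");
	return 0;
}

Note that lockdep flags the kernel version of this pattern as soon as it has observed both lock orders, without an actual hang having to occur -- hence "possible deadlock".
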
2 locks held by syz.7.347/5606:
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x375/0x670 kernel/bpf/syscall.c:227
 #1: ffff8880591ec238 (&trie->lock){..-.}-{2:2}, at: trie_update_elem+0xc4/0xe90 kernel/bpf/lpm_trie.c:335

stack backtrace:
CPU: 0 PID: 5606 Comm: syz.7.347 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain kernel/locking/lockdep.c:3825 [inline]
 __lock_acquire+0x2cf8/0x7c50 kernel/locking/lockdep.c:5049
 lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
 get_partial_node+0x36/0x470 mm/slub.c:2215
 get_partial mm/slub.c:2330 [inline]
 ___slab_alloc+0x968/0x1230 mm/slub.c:3131
 __slab_alloc mm/slub.c:3240 [inline]
 slab_alloc_node mm/slub.c:3325 [inline]
 __kmem_cache_alloc_node+0x1a0/0x260 mm/slub.c:3398
 __do_kmalloc_node mm/slab_common.c:935 [inline]
 __kmalloc_node+0xa0/0x240 mm/slab_common.c:943
 kmalloc_node include/linux/slab.h:589 [inline]
 bpf_map_kmalloc_node+0xb8/0x1a0 kernel/bpf/syscall.c:454
 lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
 trie_update_elem+0x160/0xe90 kernel/bpf/lpm_trie.c:338
 bpf_map_update_value+0x59e/0x670 kernel/bpf/syscall.c:228
 generic_map_update_batch+0x569/0x850 kernel/bpf/syscall.c:1709
 bpf_map_do_batch+0x466/0x600 kernel/bpf/syscall.c:-1
 __sys_bpf+0x65f/0x6d0 kernel/bpf/syscall.c:-1
 __do_sys_bpf kernel/bpf/syscall.c:5131 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5129 [inline]
 __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5129
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fd7b878efc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd7b9543038 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fd7b89e6090 RCX: 00007fd7b878efc9
RDX: 0000000000000038 RSI: 0000200000000240 RDI: 000000000000001a
RBP: 00007fd7b8811f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd7b89e6128 R14: 00007fd7b89e6090 R15: 00007ffc68af33d8
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/10/20 06:32 | linux-6.1.y | 8e6e2188d949 | 1c8c8cd8 | .config | console log | report | — | — | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-perf | possible deadlock in get_partial_node