| Title | Rank 🛈 | Repro | Cause bisect | Fix bisect | Count | Last | Reported |
|---|---|---|---|---|---|---|---|
| possible deadlock in wq_worker_tick net bpf | 4 | C | | | 10 | 602d | 610d |
| Title | Replies (including bot) | Last reply |
|---|---|---|
| [syzbot] [bpf?] [net?] possible deadlock in wq_worker_tick | 2 (4) | 2024/04/02 09:06 |
| Created | Duration | User | Patch | Repo | Result |
|---|---|---|---|---|---|
| 2024/03/20 23:47 | 5h04m | hdanton@sina.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git ea80e3ed09ab | OK log |
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.8.0-syzkaller-05271-gf99c5f563c17 #0 Not tainted
-----------------------------------------------------
kworker/1:2/783 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffffc9000acec3e0 (&htab->buckets[i].lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffffc9000acec3e0 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xb0/0x300 net/core/sock_map.c:939
and this task is already holding:
ffff888014ca0018 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x6ec/0xec0
which would create a new lock dependency:
(&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}
but this new dependency connects a HARDIRQ-irq-safe lock:
(&pool->lock){-.-.}-{2:2}
... which became HARDIRQ-irq-safe at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
wq_worker_tick+0x207/0x440 kernel/workqueue.c:1501
scheduler_tick+0x375/0x6e0 kernel/sched/core.c:5699
update_process_times+0x202/0x230 kernel/time/timer.c:2481
tick_periodic+0x190/0x220 kernel/time/tick-common.c:100
tick_handle_periodic+0x4a/0x160 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1032 [inline]
__sysvec_apic_timer_interrupt+0x107/0x3a0 arch/x86/kernel/apic/apic.c:1049
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
sysvec_apic_timer_interrupt+0xa1/0xc0 arch/x86/kernel/apic/apic.c:1043
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
_raw_spin_unlock_irqrestore+0xd8/0x140 kernel/locking/spinlock.c:194
spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
rmqueue_bulk mm/page_alloc.c:2154 [inline]
__rmqueue_pcplist+0x20a5/0x2560 mm/page_alloc.c:2820
rmqueue_pcplist mm/page_alloc.c:2862 [inline]
rmqueue mm/page_alloc.c:2899 [inline]
get_page_from_freelist+0x896/0x3580 mm/page_alloc.c:3308
__alloc_pages+0x256/0x680 mm/page_alloc.c:4569
alloc_pages_mpol+0x3de/0x650 mm/mempolicy.c:2133
vm_area_alloc_pages mm/vmalloc.c:3135 [inline]
__vmalloc_area_node mm/vmalloc.c:3211 [inline]
__vmalloc_node_range+0x9a4/0x14a0 mm/vmalloc.c:3392
__vmalloc_node mm/vmalloc.c:3457 [inline]
__vmalloc+0x79/0x90 mm/vmalloc.c:3471
pcpu_mem_zalloc mm/percpu.c:512 [inline]
pcpu_alloc_chunk mm/percpu.c:1469 [inline]
pcpu_create_chunk+0x31e/0xbc0 mm/percpu-vm.c:338
pcpu_balance_populated mm/percpu.c:2101 [inline]
pcpu_balance_workfn+0xc4d/0xd40 mm/percpu.c:2238
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
to a HARDIRQ-irq-unsafe lock:
(&htab->buckets[i].lock){+...}-{2:2}
... which became HARDIRQ-irq-unsafe at:
...
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_hash_free+0x164/0x820 net/core/sock_map.c:1154
bpf_map_free_deferred+0xe6/0x110 kernel/bpf/syscall.c:734
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);
*** DEADLOCK ***
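The two-CPU scenario above can be modeled outside the kernel. The following is an illustrative Python sketch of the lockdep rule this report trips (the `LockdepSketch` class and its method names are invented for illustration, not kernel or lockdep APIs): a lock ever taken in hardirq context ("HARDIRQ-safe", here `pool->lock` via `wq_worker_tick()`) must never be held while acquiring a lock taken with IRQs enabled ("HARDIRQ-unsafe", here the `spin_lock_bh`-protected `htab->buckets[i].lock`).

```python
class LockdepSketch:
    """Toy model of lockdep's HARDIRQ-safe -> HARDIRQ-unsafe check."""

    def __init__(self):
        self.hardirq_safe = set()    # lock classes seen acquired in hardirq context
        self.hardirq_unsafe = set()  # lock classes seen acquired with IRQs enabled
        self.held = []               # current holding stack
        self.bad_deps = []           # (safe_lock, unsafe_lock) pairs detected

    def acquire(self, name, in_hardirq=False, irqs_enabled=True):
        if in_hardirq:
            self.hardirq_safe.add(name)
        elif irqs_enabled:
            self.hardirq_unsafe.add(name)
        # Every currently held lock gains a dependency on `name`;
        # flag it if a HARDIRQ-safe lock now depends on a HARDIRQ-unsafe one.
        for holder in self.held:
            if holder in self.hardirq_safe and name in self.hardirq_unsafe:
                self.bad_deps.append((holder, name))
        self.held.append(name)

    def release(self, name):
        self.held.remove(name)

dep = LockdepSketch()
# wq_worker_tick() takes pool->lock from the timer interrupt -> HARDIRQ-safe:
dep.acquire("pool->lock", in_hardirq=True); dep.release("pool->lock")
# sock_hash_free() takes the bucket lock with spin_lock_bh, IRQs on -> unsafe:
dep.acquire("htab->buckets[i].lock"); dep.release("htab->buckets[i].lock")
# __queue_work() holds pool->lock (IRQs off) while the workqueue tracepoint's
# BPF program calls sock_hash_delete_elem(), taking the bucket lock:
dep.acquire("pool->lock", irqs_enabled=False)
dep.acquire("htab->buckets[i].lock", irqs_enabled=False)
print(dep.bad_deps)  # -> [('pool->lock', 'htab->buckets[i].lock')]
```

The detected pair is exactly the new dependency lockdep complains about: if the timer interrupt fires on another CPU while the bucket lock is held, it spins on `pool->lock` forever.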
5 locks held by kworker/1:2/783:
#0: ffff888014c78948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3229 [inline]
#0: ffff888014c78948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x1770 kernel/workqueue.c:3335
#1: ffffc90003eafd00 ((work_completion)(&aux->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3230 [inline]
#1: ffffc90003eafd00 ((work_completion)(&aux->work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x1770 kernel/workqueue.c:3335
#2: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#2: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#2: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: __queue_work+0x198/0xec0 kernel/workqueue.c:2324
#3: ffff888014ca0018 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x6ec/0xec0
#4: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
#4: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
#4: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
#4: ffffffff8e131920 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run1+0xf0/0x3f0 kernel/trace/bpf_trace.c:2419
the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
IN-HARDIRQ-W at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
wq_worker_tick+0x207/0x440 kernel/workqueue.c:1501
scheduler_tick+0x375/0x6e0 kernel/sched/core.c:5699
update_process_times+0x202/0x230 kernel/time/timer.c:2481
tick_periodic+0x190/0x220 kernel/time/tick-common.c:100
tick_handle_periodic+0x4a/0x160 kernel/time/tick-common.c:112
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1032 [inline]
__sysvec_apic_timer_interrupt+0x107/0x3a0 arch/x86/kernel/apic/apic.c:1049
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
sysvec_apic_timer_interrupt+0xa1/0xc0 arch/x86/kernel/apic/apic.c:1043
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
_raw_spin_unlock_irqrestore+0xd8/0x140 kernel/locking/spinlock.c:194
spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
rmqueue_bulk mm/page_alloc.c:2154 [inline]
__rmqueue_pcplist+0x20a5/0x2560 mm/page_alloc.c:2820
rmqueue_pcplist mm/page_alloc.c:2862 [inline]
rmqueue mm/page_alloc.c:2899 [inline]
get_page_from_freelist+0x896/0x3580 mm/page_alloc.c:3308
__alloc_pages+0x256/0x680 mm/page_alloc.c:4569
alloc_pages_mpol+0x3de/0x650 mm/mempolicy.c:2133
vm_area_alloc_pages mm/vmalloc.c:3135 [inline]
__vmalloc_area_node mm/vmalloc.c:3211 [inline]
__vmalloc_node_range+0x9a4/0x14a0 mm/vmalloc.c:3392
__vmalloc_node mm/vmalloc.c:3457 [inline]
__vmalloc+0x79/0x90 mm/vmalloc.c:3471
pcpu_mem_zalloc mm/percpu.c:512 [inline]
pcpu_alloc_chunk mm/percpu.c:1469 [inline]
pcpu_create_chunk+0x31e/0xbc0 mm/percpu-vm.c:338
pcpu_balance_populated mm/percpu.c:2101 [inline]
pcpu_balance_workfn+0xc4d/0xd40 mm/percpu.c:2238
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
IN-SOFTIRQ-W at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
__queue_work+0x6ec/0xec0
call_timer_fn+0x17e/0x600 kernel/time/timer.c:1792
expire_timers kernel/time/timer.c:1838 [inline]
__run_timers kernel/time/timer.c:2408 [inline]
__run_timer_base+0x695/0x8e0 kernel/time/timer.c:2419
run_timer_base kernel/time/timer.c:2428 [inline]
run_timer_softirq+0xb7/0x170 kernel/time/timer.c:2438
__do_softirq+0x2bc/0x943 kernel/softirq.c:554
invoke_softirq kernel/softirq.c:428 [inline]
__irq_exit_rcu+0xf2/0x1c0 kernel/softirq.c:633
irq_exit_rcu+0x9/0x30 kernel/softirq.c:645
instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1043
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
default_idle+0x13/0x20 arch/x86/kernel/process.c:742
default_idle_call+0x74/0xb0 kernel/sched/idle.c:117
cpuidle_idle_call kernel/sched/idle.c:191 [inline]
do_idle+0x22f/0x5d0 kernel/sched/idle.c:332
cpu_startup_entry+0x42/0x60 kernel/sched/idle.c:430
rest_init+0x2e0/0x300 init/main.c:730
arch_call_rest_init+0xe/0x10 init/main.c:831
start_kernel+0x47a/0x500 init/main.c:1077
x86_64_start_reservations+0x2a/0x30 arch/x86/kernel/head64.c:509
x86_64_start_kernel+0x99/0xa0 arch/x86/kernel/head64.c:490
common_startup_64+0x13e/0x147
INITIAL USE at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
__queue_work+0x6ec/0xec0
queue_work_on+0x14f/0x250 kernel/workqueue.c:2435
queue_work include/linux/workqueue.h:605 [inline]
start_poll_synchronize_rcu_expedited+0xf7/0x150 kernel/rcu/tree_exp.h:1017
rcu_init+0xea/0x140 kernel/rcu/tree.c:5240
start_kernel+0x1f7/0x500 init/main.c:969
x86_64_start_reservations+0x2a/0x30 arch/x86/kernel/head64.c:509
x86_64_start_kernel+0x99/0xa0 arch/x86/kernel/head64.c:490
common_startup_64+0x13e/0x147
}
... key at: [<ffffffff926c0e60>] init_worker_pool.__key+0x0/0x20
the dependencies between the lock to be acquired
and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
HARDIRQ-ON-W at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_hash_free+0x164/0x820 net/core/sock_map.c:1154
bpf_map_free_deferred+0xe6/0x110 kernel/bpf/syscall.c:734
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
INITIAL USE at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_hash_free+0x164/0x820 net/core/sock_map.c:1154
bpf_map_free_deferred+0xe6/0x110 kernel/bpf/syscall.c:734
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
}
... key at: [<ffffffff94882300>] sock_hash_alloc.__key+0x0/0x20
... acquired at:
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_hash_delete_elem+0xb0/0x300 net/core/sock_map.c:939
bpf_prog_2c29ac5cdc6b1842+0x42/0x46
bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
__bpf_prog_run include/linux/filter.h:657 [inline]
bpf_prog_run include/linux/filter.h:664 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
bpf_trace_run1+0x1e0/0x3f0 kernel/trace/bpf_trace.c:2419
trace_workqueue_activate_work+0x161/0x1d0 include/trace/events/workqueue.h:59
__queue_work+0xc04/0xec0 kernel/workqueue.c:2399
queue_work_on+0x14f/0x250 kernel/workqueue.c:2435
__bpf_free_used_maps kernel/bpf/core.c:2716 [inline]
bpf_free_used_maps kernel/bpf/core.c:2722 [inline]
bpf_prog_free_deferred+0x21d/0x710 kernel/bpf/core.c:2761
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
stack backtrace:
CPU: 1 PID: 783 Comm: kworker/1:2 Not tainted 6.8.0-syzkaller-05271-gf99c5f563c17 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: events bpf_prog_free_deferred
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
print_bad_irq_dependency kernel/locking/lockdep.c:2626 [inline]
check_irq_usage kernel/locking/lockdep.c:2865 [inline]
check_prev_add kernel/locking/lockdep.c:3138 [inline]
check_prevs_add kernel/locking/lockdep.c:3253 [inline]
validate_chain+0x4dc7/0x58e0 kernel/locking/lockdep.c:3869
__lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
sock_hash_delete_elem+0xb0/0x300 net/core/sock_map.c:939
bpf_prog_2c29ac5cdc6b1842+0x42/0x46
bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
__bpf_prog_run include/linux/filter.h:657 [inline]
bpf_prog_run include/linux/filter.h:664 [inline]
__bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
bpf_trace_run1+0x1e0/0x3f0 kernel/trace/bpf_trace.c:2419
trace_workqueue_activate_work+0x161/0x1d0 include/trace/events/workqueue.h:59
__queue_work+0xc04/0xec0 kernel/workqueue.c:2399
queue_work_on+0x14f/0x250 kernel/workqueue.c:2435
__bpf_free_used_maps kernel/bpf/core.c:2716 [inline]
bpf_free_used_maps kernel/bpf/core.c:2722 [inline]
bpf_prog_free_deferred+0x21d/0x710 kernel/bpf/core.c:2761
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
</TASK>
------------[ cut here ]------------
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 783 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x29/0x40 kernel/locking/irqflag-debug.c:10
Modules linked in:
CPU: 1 PID: 783 Comm: kworker/1:2 Not tainted 6.8.0-syzkaller-05271-gf99c5f563c17 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: events bpf_prog_free_deferred
RIP: 0010:warn_bogus_irq_restore+0x29/0x40 kernel/locking/irqflag-debug.c:10
Code: 90 f3 0f 1e fa 90 80 3d de 69 01 04 00 74 06 90 c3 cc cc cc cc c6 05 cf 69 01 04 01 90 48 c7 c7 20 ba aa 8b e8 f8 e5 e7 f5 90 <0f> 0b 90 90 90 c3 cc cc cc cc 66 2e 0f 1f 84 00 00 00 00 00 0f 1f
RSP: 0018:ffffc90003eafa58 EFLAGS: 00010246
RAX: 2d0b12bfc3c13400 RBX: 0000000000000200 RCX: ffff88801f9abc00
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffc90003eafb38 R08: ffffffff8157cc12 R09: 1ffff920007d5ea0
R10: dffffc0000000000 R11: fffff520007d5ea1 R12: 0000000000000200
R13: 0000000000000000 R14: 0000000000000246 R15: 1ffff920007d5f50
FS: 0000000000000000(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd0adbd248 CR3: 000000002d53a000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
queue_work_on+0x1ea/0x250 kernel/workqueue.c:2439
__bpf_free_used_maps kernel/bpf/core.c:2716 [inline]
bpf_free_used_maps kernel/bpf/core.c:2722 [inline]
bpf_prog_free_deferred+0x21d/0x710 kernel/bpf/core.c:2761
process_one_work kernel/workqueue.c:3254 [inline]
process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
</TASK>
| Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets (help?) | Manager | Title |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2024/03/30 04:00 | net | f99c5f563c17 | c52bcb23 | .config | console log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/03/17 02:41 | net | ea80e3ed09ab | d615901c | .config | console log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/03/31 21:57 | bpf-next | 14bb1e8c8d4a | 6baf5069 | .config | console log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-next-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/26 03:29 | bpf | 443574b03387 | 8bdc0f22 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/25 22:48 | bpf | 443574b03387 | 8bdc0f22 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/25 20:59 | bpf | 443574b03387 | 8bdc0f22 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/25 00:47 | bpf | 443574b03387 | 8bdc0f22 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/24 13:21 | bpf | 443574b03387 | 21339d7b | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/23 09:55 | bpf | 443574b03387 | 21339d7b | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/23 05:03 | bpf | 443574b03387 | 21339d7b | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/22 11:34 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/22 10:11 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/22 04:52 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/22 02:52 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/21 23:46 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/21 20:15 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/21 17:31 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/20 23:41 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/20 19:09 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/20 15:37 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/20 00:25 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/19 19:54 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/19 17:41 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/19 14:49 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/19 05:26 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/19 02:50 | bpf | 443574b03387 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/18 22:32 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/18 20:29 | bpf | 443574b03387 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/18 16:19 | net | f99c5f563c17 | af24b050 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/18 04:16 | net | f99c5f563c17 | bd38b692 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/18 00:40 | net | f99c5f563c17 | bd38b692 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 20:32 | net | f99c5f563c17 | bd38b692 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 18:57 | net | f99c5f563c17 | bd38b692 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 17:53 | net | f99c5f563c17 | bd38b692 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 14:41 | net | f99c5f563c17 | 18f6e127 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 11:50 | net | f99c5f563c17 | 18f6e127 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 10:02 | net | f99c5f563c17 | 18f6e127 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 07:48 | net | f99c5f563c17 | 18f6e127 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 05:45 | net | f99c5f563c17 | 18f6e127 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/17 01:11 | bpf | 443574b03387 | 18f6e127 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/16 19:12 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/16 16:44 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/16 15:15 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/16 13:26 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/16 07:12 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/16 02:24 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/15 23:31 | net | f99c5f563c17 | 0d592ce4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/15 17:34 | net | f99c5f563c17 | c8349e48 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/15 16:14 | net | f99c5f563c17 | c8349e48 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/15 15:03 | net | f99c5f563c17 | c8349e48 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/04/06 01:01 | bpf-next | 14bb1e8c8d4a | 18ea8213 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-bpf-next-kasan-gce | possible deadlock in wq_worker_tick |
| 2024/03/21 04:20 | net-next | 237bb5f7f7f5 | 5b7d42ae | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-kasan-gce | possible deadlock in wq_worker_tick |