syzbot


INFO: task hung in remove_one

Status: upstream: reported syz repro on 2025/01/06 11:11
Subsystems: kernel
Reported-by: syzbot+3147c5de186107ffc7a1@syzkaller.appspotmail.com
First crash: 432d, last: 16h06m
Discussions (1)
Title: [syzbot] [kernel?] INFO: task hung in remove_one
Replies (including bot): 0 (1)
Last reply: 2025/01/06 11:11
Last patch testing requests (10)
Created Duration User Patch Repo Result
2026/01/29 02:30 18m retest repro upstream report log
2026/01/29 02:30 19m retest repro upstream report log
2026/01/29 02:30 19m retest repro upstream report log
2026/01/29 02:30 18m retest repro upstream report log
2026/01/29 02:30 19m retest repro upstream report log
2025/11/01 12:43 19m retest repro upstream report log
2025/11/01 12:43 18m retest repro upstream report log
2025/11/01 12:43 18m retest repro upstream report log
2025/11/01 12:43 19m retest repro upstream report log
2025/11/01 12:43 19m retest repro upstream report log

Sample crash report:
INFO: task kworker/u8:3:49 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:3    state:D stack:24936 pid:49    tgid:49    ppid:2      task_flags:0x4208160 flags:0x00080000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __debugfs_file_removed fs/debugfs/inode.c:751 [inline]
 remove_one+0x312/0x420 fs/debugfs/inode.c:758
 __simple_recursive_removal+0x148/0x5c0 fs/libfs.c:625
 debugfs_remove+0x5d/0x80 fs/debugfs/inode.c:781
 nsim_dev_health_exit+0x3b/0xe0 drivers/net/netdevsim/health.c:227
 nsim_dev_reload_destroy+0x144/0x4a0 drivers/net/netdevsim/dev.c:1768
 nsim_dev_reload_down+0x66/0xd0 drivers/net/netdevsim/dev.c:1039
 devlink_reload+0x173/0x790 net/devlink/dev.c:461
 devlink_pernet_pre_exit+0x222/0x330 net/devlink/core.c:507
 ops_pre_exit_list net/core/net_namespace.c:161 [inline]
 ops_undo_list+0x187/0xab0 net/core/net_namespace.c:234
 cleanup_net+0x499/0x920 net/core/net_namespace.c:704
 process_one_work+0x9d7/0x1920 kernel/workqueue.c:3275
 process_scheduled_works kernel/workqueue.c:3358 [inline]
 worker_thread+0x5da/0xe40 kernel/workqueue.c:3439
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:12100 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24504 pid:12100 tgid:12100 ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 device_lock include/linux/device.h:895 [inline]
 device_del+0xa0/0x9b0 drivers/base/core.c:3840
 device_unregister+0x1d/0xe0 drivers/base/core.c:3919
 nsim_bus_dev_del drivers/net/netdevsim/bus.c:491 [inline]
 del_device_store+0x346/0x480 drivers/net/netdevsim/bus.c:244
 bus_attr_store+0x74/0xb0 drivers/base/bus.c:172
 sysfs_kf_write+0xf2/0x150 fs/sysfs/file.c:142
 kernfs_fop_write_iter+0x3e0/0x5f0 fs/kernfs/file.c:352
 new_sync_write fs/read_write.c:595 [inline]
 vfs_write+0x6ac/0x1070 fs/read_write.c:688
 ksys_write+0x12a/0x250 fs/read_write.c:740
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4006d5cece
RSP: 002b:00007ffe893cfc88 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 000055558a513500 RCX: 00007f4006d5cece
RDX: 0000000000000001 RSI: 00007ffe893cfd10 RDI: 0000000000000005
RBP: 00007f4006e3343f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
R13: 00007ffe893cfd10 R14: 00007f4007b44620 R15: 0000000000000003
 </TASK>
INFO: task syz.3.2945:12124 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2945      state:D stack:28440 pid:12124 tgid:12124 ppid:10777  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 devlink_health_report+0x681/0xb50 net/devlink/health.c:680
 nsim_dev_health_break_write+0x166/0x210 drivers/net/netdevsim/health.c:162
 full_proxy_write+0x135/0x1a0 fs/debugfs/file.c:388
 vfs_write+0x2aa/0x1070 fs/read_write.c:686
 ksys_write+0x12a/0x250 fs/read_write.c:740
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7efd2bf9c629
RSP: 002b:00007ffec260f438 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007efd2c215fa0 RCX: 00007efd2bf9c629
RDX: 00000000000001e1 RSI: 0000200000000080 RDI: 0000000000000003
RBP: 00007efd2c032b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007efd2c215fac R14: 00007efd2c215fa0 R15: 00007efd2c215fa0
 </TASK>
INFO: task syz.0.2957:12153 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.2957      state:D stack:28440 pid:12153 tgid:12153 ppid:11460  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 rwsem_down_read_slowpath+0x5dc/0xb30 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xed/0x460 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1043 [inline]
 open_last_lookups fs/namei.c:4582 [inline]
 path_openat+0xa16/0x31a0 fs/namei.c:4827
 do_file_open+0x20e/0x430 fs/namei.c:4859
 do_sys_openat2+0x10d/0x1e0 fs/open.c:1366
 do_sys_open fs/open.c:1372 [inline]
 __do_sys_openat fs/open.c:1388 [inline]
 __se_sys_openat fs/open.c:1383 [inline]
 __x64_sys_openat+0x12d/0x210 fs/open.c:1383
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f8a1739c629
RSP: 002b:00007ffe20cb27d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f8a17615fa0 RCX: 00007f8a1739c629
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f8a17432b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f8a17615fac R14: 00007f8a17615fa0 R15: 00007f8a17615fa0
 </TASK>
INFO: task syz.1.2959:12154 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2959      state:D stack:28440 pid:12154 tgid:12154 ppid:5961   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 rwsem_down_read_slowpath+0x5dc/0xb30 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xed/0x460 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1043 [inline]
 open_last_lookups fs/namei.c:4582 [inline]
 path_openat+0xa16/0x31a0 fs/namei.c:4827
 do_file_open+0x20e/0x430 fs/namei.c:4859
 do_sys_openat2+0x10d/0x1e0 fs/open.c:1366
 do_sys_open fs/open.c:1372 [inline]
 __do_sys_openat fs/open.c:1388 [inline]
 __se_sys_openat fs/open.c:1383 [inline]
 __x64_sys_openat+0x12d/0x210 fs/open.c:1383
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f84ce39c629
RSP: 002b:00007ffd38dd0b68 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f84ce615fa0 RCX: 00007f84ce39c629
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f84ce432b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f84ce615fac R14: 00007f84ce615fa0 R15: 00007f84ce615fa0
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/u8:0/12:
 #0: ffff8880b843b2e0 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2c/0x140 kernel/sched/core.c:647
 #1: ffff8880b8424648 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:225 [inline]
 #1: ffff8880b8424648 (psi_seq){-.-.}-{0:0}, at: __schedule+0x2c11/0x60e0 kernel/sched/core.c:6901
 #2: ffff888076810788 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6441 [inline]
 #2: ffff888076810788 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: cfg80211_wiphy_work+0x92/0x5c0 net/wireless/core.c:426
 #3: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #3: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #3: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: ieee80211_sta_active_ibss+0xdc/0x420 net/mac80211/ibss.c:635
1 lock held by khungtaskd/30:
 #0: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e9220 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
5 locks held by kworker/u8:2/35:
 #0: ffff8880b843b2e0 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2c/0x140 kernel/sched/core.c:647
 #1: ffff8880b8424648 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:225 [inline]
 #1: ffff8880b8424648 (psi_seq){-.-.}-{0:0}, at: __schedule+0x2c11/0x60e0 kernel/sched/core.c:6901
 #2: ffff8880b8426358 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x124/0x1d0 kernel/time/timer.c:1004
 #3: ffffffff9b3b0a40 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x144/0x490 lib/debugobjects.c:818
 #4: ffff8880b8426358 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x124/0x1d0 kernel/time/timer.c:1004
6 locks held by kworker/u8:3/49:
 #0: ffff88801c6ae948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc90000b97d08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
 #2: ffffffff905f8eb0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
 #3: ffff88803623a0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:895 [inline]
 #3: ffff88803623a0e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff88803623a0e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x185/0x330 net/devlink/core.c:504
 #4: ffff888036239250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:274 [inline]
 #4: ffff888036239250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888036239250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x18f/0x330 net/devlink/core.c:504
 #5: ffff88805daae480 (&sb->s_type->i_mutex_key#10/2){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #5: ffff88805daae480 (&sb->s_type->i_mutex_key#10/2){+.+.}-{4:4}, at: __simple_recursive_removal+0xe0/0x5c0 fs/libfs.c:621
2 locks held by getty/5582:
 #0: ffff888037c130a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
5 locks held by syz-executor/12100:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88807543f888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
 #4: ffff88803623a0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:895 [inline]
 #4: ffff88803623a0e8 (&dev->mutex){....}-{4:4}, at: device_del+0xa0/0x9b0 drivers/base/core.c:3840
2 locks held by syz.3.2945/12124:
 #0: ffff88802028a420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff888036239250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devlink_health_report+0x681/0xb50 net/devlink/health.c:680
2 locks held by syz.0.2957/12153:
 #0: ffff88802028a420 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:4572 [inline]
 #0: ffff88802028a420 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x9b1/0x31a0 fs/namei.c:4827
 #1: ffff88805daae480 (&sb->s_type->i_mutex_key#18){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #1: ffff88805daae480 (&sb->s_type->i_mutex_key#18){++++}-{4:4}, at: open_last_lookups fs/namei.c:4582 [inline]
 #1: ffff88805daae480 (&sb->s_type->i_mutex_key#18){++++}-{4:4}, at: path_openat+0xa16/0x31a0 fs/namei.c:4827
2 locks held by syz.1.2959/12154:
 #0: ffff88802028a420 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:4572 [inline]
 #0: ffff88802028a420 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x9b1/0x31a0 fs/namei.c:4827
 #1: ffff88805daae480 (&sb->s_type->i_mutex_key#18){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #1: ffff88805daae480 (&sb->s_type->i_mutex_key#18){++++}-{4:4}, at: open_last_lookups fs/namei.c:4582 [inline]
 #1: ffff88805daae480 (&sb->s_type->i_mutex_key#18){++++}-{4:4}, at: path_openat+0xa16/0x31a0 fs/namei.c:4827
4 locks held by syz-executor/12161:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88807afa8488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12163:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff888061fa6888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12165:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88805bab5488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12196:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff888038b1ac88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12209:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88805d323088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12210:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88805d585888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12212:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff888037d9c888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12245:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff8880758eb888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12257:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88802cc97088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12259:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff888035ccd088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/12261:
 #0: ffff888036dcc420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:740
 #1: ffff88805cd8f488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x2c2/0x5f0 fs/kernfs/file.c:343
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88802a4bcf08 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x332/0x5f0 fs/kernfs/file.c:344
 #3: ffffffff8fb6a0a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x480 drivers/net/netdevsim/bus.c:234
2 locks held by dhcpcd/12291:
 #0: ffff888026934260 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1709 [inline]
 #0: ffff888026934260 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2c/0xf50 net/packet/af_packet.c:3198
 #1: ffffffff8e7f4e38 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x27f/0x3c0 kernel/rcu/tree_exp.h:311

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 35 Comm: kworker/u8:2 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: events_unbound cfg80211_wiphy_work
RIP: 0010:__kasan_check_byte+0x15/0x50 mm/kasan/common.c:573
Code: 80 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 0f 1f 40 d6 41 54 49 89 f4 55 48 89 fd 53 e8 dd 20 00 00 89 c3 <84> c0 74 0b 89 d8 5b 5d 41 5c c3 cc cc cc cc 4c 89 e1 48 89 ef 31
RSP: 0018:ffffc90000ab6d60 EFLAGS: 00000297
RAX: 0000000000000001 RBX: 0000000000000001 RCX: 0000000000000002
RDX: 0000000000000000 RSI: ffffffff81b7aaf1 RDI: fffffbfff1cfd244
RBP: ffffffff8e7e9220 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000200 R11: 00000000000a0169 R12: ffffffff81b7aaf1
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888124451000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055ac29b3d168 CR3: 000000000e598000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 kasan_check_byte include/linux/kasan.h:402 [inline]
 lock_acquire kernel/locking/lockdep.c:5842 [inline]
 lock_acquire+0x148/0x380 kernel/locking/lockdep.c:5825
 rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 rcu_read_lock include/linux/rcupdate.h:850 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1193 [inline]
 unwind_next_frame+0xd1/0x1ea0 arch/x86/kernel/unwind_orc.c:495
 arch_stack_walk+0x94/0xf0 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 kasan_save_stack+0x30/0x50 mm/kasan/common.c:57
 kasan_save_track+0x14/0x30 mm/kasan/common.c:78
 poison_kmalloc_redzone mm/kasan/common.c:398 [inline]
 __kasan_kmalloc+0xaa/0xb0 mm/kasan/common.c:415
 kasan_kmalloc include/linux/kasan.h:263 [inline]
 __do_kmalloc_node mm/slub.c:5219 [inline]
 __kmalloc_noprof+0x301/0x850 mm/slub.c:5231
 kmalloc_noprof include/linux/slab.h:966 [inline]
 kzalloc_noprof include/linux/slab.h:1204 [inline]
 cfg80211_inform_single_bss_data+0x557/0x1e20 net/wireless/scan.c:2345
 cfg80211_inform_bss_data+0x237/0x3a00 net/wireless/scan.c:3228
 cfg80211_inform_bss_frame_data+0x247/0x790 net/wireless/scan.c:3319
 ieee80211_bss_info_update+0x310/0xab0 net/mac80211/scan.c:230
 ieee80211_rx_bss_info net/mac80211/ibss.c:1094 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1575 [inline]
 ieee80211_ibss_rx_queued_mgmt+0x1919/0x2f80 net/mac80211/ibss.c:1602
 ieee80211_iface_process_skb net/mac80211/iface.c:1748 [inline]
 ieee80211_iface_work+0xbff/0x13d0 net/mac80211/iface.c:1802
 cfg80211_wiphy_work+0x446/0x5c0 net/wireless/core.c:440
 process_one_work+0x9d7/0x1920 kernel/workqueue.c:3275
 process_scheduled_works kernel/workqueue.c:3358 [inline]
 worker_thread+0x5da/0xe40 kernel/workqueue.c:3439
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
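The traces above form a wait cycle: kworker/u8:3 takes the devlink instance lock (lock #4, &devlink->lock_key#2) in devlink_pernet_pre_exit and then blocks in remove_one waiting for in-flight debugfs file users to drain, while syz.3.2945 is already inside a debugfs write (full_proxy_write -> nsim_dev_health_break_write) and blocks on that same devlink lock in devlink_health_report. A minimal userspace model of this inversion, with purely illustrative names (these are not kernel APIs), is:

```python
import threading
import time

devlink_lock = threading.Lock()   # models &devlink->lock_key#2
write_done = threading.Event()    # models the debugfs removal completion
res = {}

def cleanup_net_task():
    # Like cleanup_net -> devlink_pernet_pre_exit -> debugfs_remove:
    # takes the devlink lock, then waits for the in-flight debugfs
    # writer to finish. The wait can never be satisfied.
    with devlink_lock:
        res['removal_completed'] = write_done.wait(timeout=1.0)

def debugfs_writer_task():
    # Like nsim_dev_health_break_write -> devlink_health_report:
    # already inside a debugfs write, now needs the devlink lock.
    res['writer_got_lock'] = devlink_lock.acquire(timeout=0.3)
    if res['writer_got_lock']:
        devlink_lock.release()
        write_done.set()
    # If the lock is never granted, the write never returns, so the
    # completion the cleanup side is waiting on is never signalled.

t1 = threading.Thread(target=cleanup_net_task)
t2 = threading.Thread(target=debugfs_writer_task)
t1.start()
time.sleep(0.1)   # ensure the cleanup side holds the lock first
t2.start()
t1.join()
t2.join()
# Both waits time out: neither side can make progress.
```

The timeouts stand in for the hung-task watchdog; in the kernel both sides block in state D indefinitely, which is what triggers the report after 143 seconds.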

Crashes (154):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2026/02/19 23:33 upstream 2b7a25df823d 73a252ac .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/04 18:23 upstream 5fd0a1df5d05 ea10c935 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/15 02:29 upstream 944aacb68baf d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/11 21:58 upstream 755bc1335e3b d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/04 16:06 upstream aacb0a6d604a d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/04 12:16 upstream aacb0a6d604a d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/04 08:03 upstream aacb0a6d604a d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/04 04:10 upstream aacb0a6d604a d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/01 01:55 upstream 349bd28a86f2 d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/25 04:13 upstream ccd1cdca5cd4 d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/14 18:56 upstream 8f0b4cce4481 d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/10 20:34 upstream 0048fbb4011e d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/04 17:05 upstream 8f7aa3d3c732 d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/27 22:43 upstream 765e56e41a5a e8331348 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/26 00:29 upstream 8a2bcda5e139 64219f15 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/16 02:21 upstream f824272b6e3f f7988ea4 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/11 02:37 upstream 4ea7c1717f3f 4e1406b4 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/08 17:22 upstream e811c33b1f13 4e1406b4 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/11 08:21 upstream 917167ed1211 ff1712fe .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/02 09:12 upstream d3479214c05d 267f56c6 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/23 23:27 upstream cec1e6e5d1ab e667a34f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/22 18:41 upstream 07e27ad16399 770ff59f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/11 13:37 upstream 7aac71907bde e2beed91 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/03 01:35 upstream e6b9dce0aeeb 96a211bc .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 09:56 upstream dfd4b508c8c6 1804e95e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 20:20 upstream c30a13538d9f 32a0e5ed .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/02 13:03 upstream a6923c06a3b2 7368264b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/16 00:32 upstream 155a3c003e55 03fcfc4b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 10:43 upstream 733923397fd9 f4e5e155 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 23:37 upstream ee88bddf7f2f 1ae8177e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 21:11 upstream 24770983ccfe ed3e87f7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 19:14 upstream e04c78d86a96 d1716036 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/14 01:31 upstream 27605c8c0f69 0e8da31f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 04:42 upstream 7f9039c524a3 a30356b7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/06 06:41 upstream 01f95500a162 ae98e6b9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 17:07 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 13:30 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 09:08 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 11:55 upstream 7eb172143d55 c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 06:01 upstream 5cf80612d3f7 d34966d1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 19:23 upstream 6537cfb395f3 cbd8edab .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/17 00:28 upstream ba643b6d8440 40a34ec9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 22:20 upstream 128c8f96eb86 fe17639f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/02 11:04 upstream 56e6a3499e14 d3ccff63 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/24 12:23 upstream 7dff99b35460 96b1aa46 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/24 05:34 upstream 7dff99b35460 41d2fa6a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/23 20:48 upstream 6de23f81a5e0 7c9658af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/23 16:37 upstream 6de23f81a5e0 7c9658af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/19 19:21 upstream 2b7a25df823d 73a252ac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/18 07:23 upstream 2961f841b025 39751c21 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/15 08:08 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/15 00:33 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/12 04:01 upstream 1e83ccd5921a 76a109e2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/08 21:47 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/07 16:50 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/04 15:32 upstream 5fd0a1df5d05 ea10c935 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/03 05:19 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/02 03:33 upstream 9f2693489ef8 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/02/01 19:35 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/14 22:34 upstream 944aacb68baf d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/11 19:36 upstream 755bc1335e3b d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/09 05:13 upstream 79b95d74470d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/06 13:30 upstream 7f98ab9da046 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/03 14:58 upstream 805f9a061372 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2026/01/01 14:10 upstream b69053dd3ffb d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/31 22:01 upstream 349bd28a86f2 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/25 03:53 upstream ccd1cdca5cd4 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/25 01:05 upstream ccd1cdca5cd4 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/11 17:01 upstream d358e5254674 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/11 04:37 upstream 5c179cac0519 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/10 16:40 upstream 0048fbb4011e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/04 13:52 upstream 8f7aa3d3c732 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/03 01:29 upstream 4a26e7032d7d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/28 23:00 upstream e538109ac71d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/27 19:00 upstream 765e56e41a5a e8331348 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/25 20:27 upstream 8a2bcda5e139 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/24 10:20 upstream ac3fd01e4c1e bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/24 01:18 upstream d0e88704d96c 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/15 23:48 upstream f824272b6e3f f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/15 10:50 upstream 7a0892d2836e f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/15 06:03 upstream 7a0892d2836e f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/14 15:56 upstream 6da43bbeb691 6d98c1c8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/12 22:48 upstream 24172e0d7990 07e030de .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/11 23:38 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/10 23:39 upstream 4ea7c1717f3f 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/09 17:07 upstream 439fc29dfd3b 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/08 14:35 upstream e811c33b1f13 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/03 22:03 upstream 6146a0f1dfae e6c64ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 05:17 upstream b5c6891b2c5b 2a20f901 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/19 21:16 upstream eabcdba3ad40 1d58202c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
* Struck through repros no longer work on HEAD.
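For context on the call trace in the sample report: debugfs file removal blocks until every outstanding reference to the file is dropped (the `__debugfs_file_removed` path waits on a completion), so a reference that is never released leaves the remover stuck in `remove_one`, which is what the hung-task detector flags here. Below is a minimal model of that wait-for-completion removal pattern, sketched in Python; the class and method names are illustrative analogues, not kernel identifiers.

```python
import threading

class FileRef:
    """Models debugfs' removal protocol: the remover must wait until
    all active users of the file have dropped their references."""

    def __init__(self):
        self.refs = 0
        self.lock = threading.Lock()
        self.released = threading.Event()
        self.released.set()  # no users yet, removal would not block

    def get(self):
        # Analogue of a user opening the file: take a reference.
        with self.lock:
            self.refs += 1
            self.released.clear()

    def put(self):
        # Analogue of releasing the file: drop the reference and
        # signal the remover once the last one is gone.
        with self.lock:
            self.refs -= 1
            if self.refs == 0:
                self.released.set()

    def remove(self, timeout=None):
        # Analogue of __debugfs_file_removed(): block until the last
        # reference is dropped. A user that never calls put() leaves
        # this wait pending forever -- the "task hung" condition.
        return self.released.wait(timeout)

f = FileRef()
f.get()                    # a user holds the file open
assert not f.remove(0.1)   # remover blocks: reference still held
f.put()                    # user releases the reference
assert f.remove(0.1)       # removal can now complete
```

In the kernel, the real wait has no timeout; the hung-task watchdog only reports the stall (after `hung_task_timeout_secs`, 143 seconds in the sample report) rather than breaking it.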