syzbot


INFO: task hung in remove_one

Status: upstream: reported syz repro on 2025/01/06 11:11
Subsystems: kernel
Reported-by: syzbot+3147c5de186107ffc7a1@syzkaller.appspotmail.com
First crash: 196d, last: 4d08h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [kernel?] INFO: task hung in remove_one | 0 (1) | 2025/01/06 11:11
Last patch testing requests (10)
Created Duration User Patch Repo Result
2025/05/22 11:16 21m retest repro upstream report log
2025/05/22 11:16 21m retest repro upstream OK log
2025/04/09 01:34 28m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 20m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
2025/04/09 00:08 15m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log

Sample crash report:
INFO: task kworker/u8:4:60 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:4    state:D stack:22744 pid:60    tgid:60    ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common+0x2ff/0x4e0 kernel/sched/completion.c:116
 __debugfs_file_removed fs/debugfs/inode.c:775 [inline]
 remove_one+0x312/0x420 fs/debugfs/inode.c:782
 simple_recursive_removal+0x21e/0x690 fs/libfs.c:636
 debugfs_remove+0x5d/0x80 fs/debugfs/inode.c:805
 nsim_dev_health_exit+0x3b/0xe0 drivers/net/netdevsim/health.c:227
 nsim_dev_reload_destroy+0x144/0x4d0 drivers/net/netdevsim/dev.c:1664
 nsim_dev_reload_down+0x6e/0xd0 drivers/net/netdevsim/dev.c:968
 devlink_reload+0x19e/0x7c0 net/devlink/dev.c:461
 devlink_pernet_pre_exit+0x1a0/0x2b0 net/devlink/core.c:509
 ops_pre_exit_list net/core/net_namespace.c:162 [inline]
 ops_undo_list+0x187/0xab0 net/core/net_namespace.c:235
 cleanup_net+0x408/0x890 net/core/net_namespace.c:686
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c5/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:10317 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24336 pid:10317 tgid:10317 ppid:1      task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:747
 device_lock include/linux/device.h:884 [inline]
 device_del+0xa0/0x9f0 drivers/base/core.c:3843
 device_unregister+0x1d/0xc0 drivers/base/core.c:3922
 nsim_bus_dev_del drivers/net/netdevsim/bus.c:462 [inline]
 del_device_store+0x355/0x4a0 drivers/net/netdevsim/bus.c:226
 bus_attr_store+0x71/0xb0 drivers/base/bus.c:172
 sysfs_kf_write+0xef/0x150 fs/sysfs/file.c:145
 kernfs_fop_write_iter+0x351/0x510 fs/kernfs/file.c:334
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x6c7/0x1150 fs/read_write.c:686
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f191b58d3df
RSP: 002b:00007ffd650cbc20 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f191b58d3df
RDX: 0000000000000001 RSI: 00007ffd650cbc70 RDI: 0000000000000005
RBP: 00007f191b611d8d R08: 0000000000000000 R09: 00007ffd650cba77
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007ffd650cbc70 R14: 00007f191c2e4620 R15: 0000000000000003
 </TASK>
INFO: task syz.3.4152:10410 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.4152      state:D stack:27512 pid:10410 tgid:10410 ppid:8449   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:747
 devlink_health_report+0x3ba/0x9c0 net/devlink/health.c:627
 nsim_dev_health_break_write+0x166/0x210 drivers/net/netdevsim/health.c:162
 full_proxy_write+0x13c/0x200 fs/debugfs/file.c:398
 vfs_write+0x29d/0x1150 fs/read_write.c:684
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6c76b8e929
RSP: 002b:00007fff8ca69808 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f6c76db5fa0 RCX: 00007f6c76b8e929
RDX: 00000000000001ff RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007f6c76c10b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6c76db5fa0 R14: 00007f6c76db5fa0 R15: 0000000000000003
 </TASK>
INFO: task syz.1.4182:10440 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.4182      state:D stack:27848 pid:10440 tgid:10440 ppid:9634   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_read_slowpath+0x62f/0xb60 kernel/locking/rwsem.c:1084
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:884 [inline]
 open_last_lookups fs/namei.c:3815 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4052
 do_filp_open+0x20b/0x470 fs/namei.c:4082
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2e9118e929
RSP: 002b:00007ffdc572b348 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f2e913b5fa0 RCX: 00007f2e9118e929
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f2e91210b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2e913b5fa0 R14: 00007f2e913b5fa0 R15: 0000000000000004
 </TASK>
INFO: task syz.0.4181:10441 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.4181      state:D stack:28216 pid:10441 tgid:10441 ppid:9001   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_read_slowpath+0x62f/0xb60 kernel/locking/rwsem.c:1084
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:884 [inline]
 open_last_lookups fs/namei.c:3815 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4052
 do_filp_open+0x20b/0x470 fs/namei.c:4082
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff050b8e929
RSP: 002b:00007ffec070ebe8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007ff050db5fa0 RCX: 00007ff050b8e929
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007ff050c10b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff050db5fa0 R14: 00007ff050db5fa0 R15: 0000000000000004
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e5c47c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e5c47c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e5c47c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6770
6 locks held by kworker/u8:4/60:
 #0: ffff88801c6fe148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000211fd10 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff90338250 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x890 net/core/net_namespace.c:662
 #3: ffff888076c280e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #3: ffff888076c280e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff888076c280e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x12c/0x2b0 net/devlink/core.c:506
 #4: ffff888076c29250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff888076c29250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888076c29250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x136/0x2b0 net/devlink/core.c:506
 #5: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #5: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: simple_recursive_removal+0x2c5/0x690 fs/libfs.c:628
2 locks held by kworker/u8:8/3021:
2 locks held by getty/5591:
 #0: ffff8880317150a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900036cb2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
5 locks held by syz-executor/10317:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805037e888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
 #4: ffff888076c280e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #4: ffff888076c280e8 (&dev->mutex){....}-{4:4}, at: device_del+0xa0/0x9f0 drivers/base/core.c:3843
2 locks held by syz.3.4152/10410:
 #0: ffff8881412fe428 (sb_writers#9){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888076c29250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devlink_health_report+0x3ba/0x9c0 net/devlink/health.c:627
2 locks held by syz.1.4182/10440:
 #0: ffff8881412fe428 (sb_writers#9){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3805 [inline]
 #0: ffff8881412fe428 (sb_writers#9){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4052
 #1: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #1: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3815 [inline]
 #1: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4052
2 locks held by syz.0.4181/10441:
 #0: ffff8881412fe428 (sb_writers#9){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3805 [inline]
 #0: ffff8881412fe428 (sb_writers#9){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4052
 #1: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #1: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3815 [inline]
 #1: ffff88805c9b8148 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4052
4 locks held by syz-executor/10469:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888076f24488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10480:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888022396488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10482:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888026a2a488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10504:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805cea0c88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10514:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888066f6f488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10525:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880762ca488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10527:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888030165888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10550:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880603e1c88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10562:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88804fe15c88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10573:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88807881b488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/10575:
 #0: ffff88807d386428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88802524e088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff8880280230f8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8eafe8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xf70/0x12c0 kernel/hung_task.c:470
 kthread+0x3c5/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 49 Comm: kworker/u8:3 Not tainted 6.16.0-rc3-syzkaller-00072-gee88bddf7f2f #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: bat_events batadv_nc_worker
RIP: 0010:trace_hardirqs_on+0x36/0x40 kernel/trace/trace_preemptirq.c:79
Code: c0 75 0e e8 3c 82 d4 ff 48 8b 3c 24 e9 33 4f b9 09 48 8b 3c 24 e8 aa fe ff ff 65 c7 05 d3 af 09 12 00 00 00 00 e8 1a 82 d4 ff <48> 8b 3c 24 e9 11 4f b9 09 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0018:ffffc90000b97a90 EFLAGS: 00000046
RAX: 0000000000000001 RBX: 00000000fffffe00 RCX: 0000000000000006
RDX: 0000000000000000 RSI: ffffffff8ddee470 RDI: ffffffff8c1565a0
RBP: ffffffff8b49cf44 R08: 0000000000000000 R09: 0000000000000001
R10: ffffffff90a82d57 R11: 0000000000000001 R12: ffffffff8b49d2f0
R13: ffff888027cf90c0 R14: ffff88807f174d80 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888124760000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a6881bc240 CR3: 000000000e382000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __local_bh_enable_ip+0xa4/0x120 kernel/softirq.c:412
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_nc_purge_paths+0x1d4/0x3a0 net/batman-adv/network-coding.c:471
 batadv_nc_worker+0x921/0x1030 net/batman-adv/network-coding.c:720
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c5/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
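The traces above fit an ABBA pattern: the netns cleanup worker holds `&devlink->lock_key` and waits in `__debugfs_file_removed` for the last file user to drain, while `syz.3.4152` sits inside `full_proxy_write` -> `nsim_dev_health_break_write` -> `devlink_health_report`, holding an active debugfs file reference and blocking on that same devlink lock. A minimal sketch of the user-space interaction that exercises this path is below; the `/sys/bus/netdevsim` `new_device`/`del_device` knobs are the documented netdevsim bus interface, but the exact debugfs path to the health "break" file is an assumption, and the script requires root plus `CONFIG_NETDEVSIM`, so it skips itself when the bus is absent.

```shell
#!/bin/sh
# Hedged sketch of the race seen in the traces, not a guaranteed reproducer.
# Assumptions: CONFIG_NETDEVSIM is enabled, debugfs is mounted at
# /sys/kernel/debug, and the health knob lives at the path below.
NSIM_BUS=/sys/bus/netdevsim
if [ -w "$NSIM_BUS/new_device" ]; then
    # Create netdevsim device id 1 with one port.
    echo "1 1" > "$NSIM_BUS/new_device"
    # Writer A: trigger a devlink health report through the debugfs file,
    # which takes devlink->lock while holding the file's active reference.
    echo 1 > /sys/kernel/debug/netdevsim/netdevsim1/health/break_health &
    # Writer B: concurrently delete the device; the teardown path
    # (devlink_reload -> nsim_dev_health_exit -> debugfs_remove) waits for
    # writer A's file reference to drain while holding devlink->lock.
    echo 1 > "$NSIM_BUS/del_device"
    wait
else
    echo "netdevsim bus not available; skipping"
fi
```

On a vulnerable kernel both writers can block each other indefinitely, producing exactly the hung-task reports shown above.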

Crashes (46):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/06/26 23:37 upstream ee88bddf7f2f 1ae8177e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 21:11 upstream 24770983ccfe ed3e87f7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 19:14 upstream e04c78d86a96 d1716036 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/14 01:31 upstream 27605c8c0f69 0e8da31f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 04:42 upstream 7f9039c524a3 a30356b7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/06 06:41 upstream 01f95500a162 ae98e6b9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 17:07 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 13:30 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 09:08 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 11:55 upstream 7eb172143d55 c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 06:01 upstream 5cf80612d3f7 d34966d1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 19:23 upstream 6537cfb395f3 cbd8edab .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/17 00:28 upstream ba643b6d8440 40a34ec9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 22:20 upstream 128c8f96eb86 fe17639f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/02 11:04 upstream 56e6a3499e14 d3ccff63 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/30 05:07 upstream afa9a6f4f574 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 13:46 upstream ee88bddf7f2f 1ae8177e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/23 14:54 upstream 86731a2a651e d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 17:34 upstream 24770983ccfe ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 16:08 upstream e04c78d86a96 d1716036 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/13 22:55 upstream 27605c8c0f69 0e8da31f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/07 08:34 upstream c0c9379f235d 4826c28e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 01:33 upstream 7f9039c524a3 a30356b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/08 11:15 upstream d76bb1ebb558 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/04 18:16 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/01 22:55 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/28 08:28 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/19 16:36 upstream 8560697b23dc 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 05:17 upstream b5c6891b2c5b 2a20f901 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 00:04 upstream b5c6891b2c5b 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/10 15:19 upstream 2eb959eeecc6 1ef3ab4d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/26 00:06 upstream 2df0c02dab82 89d30d73 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/21 02:08 upstream 5fc319360819 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/16 22:28 upstream cb82ca153949 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/12 20:56 upstream 0fed89a961ea 1a5d9317 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 10:19 upstream 7eb172143d55 c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/01 11:57 upstream 276f98efb64a 67cf5345 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/26 01:08 upstream 2a1944bff549 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 02:01 upstream 5cf80612d3f7 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 15:35 upstream 6537cfb395f3 cbd8edab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 20:19 upstream 128c8f96eb86 fe17639f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/31 12:49 upstream 2a9f04bde07a 4c6ac32f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/16 20:46 upstream ce69b4019001 f9e07a6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/23 02:17 upstream bcde95ce32b6 b4fbdbd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/22 04:11 upstream 48f506ad0b68 d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/19 21:16 upstream eabcdba3ad40 1d58202c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
* Struck through repros no longer work on HEAD.