syzbot


INFO: task hung in do_unlinkat (5)

Status: upstream: reported C repro on 2024/06/02 14:09
Subsystems: jfs
Reported-by: syzbot+08b113332e19a9378dd5@syzkaller.appspotmail.com
First crash: 681d, last: 4d00h
Cause bisection: failed (error log, bisect log)
  
Discussions (4)
Title Replies (including bot) Last reply
[syzbot] Monthly ntfs3 report (Dec 2025) 0 (1) 2025/12/29 08:11
[syzbot] Monthly ntfs3 report (Nov 2025) 0 (1) 2025/11/27 07:44
[syzbot] Monthly kernfs report (Jan 2025) 0 (1) 2025/01/16 10:12
[syzbot] [kernfs?] [bcachefs?] [exfat?] INFO: task hung in do_unlinkat (5) 0 (2) 2024/11/26 01:26
Similar bugs (9)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: task hung in do_unlinkat (2) 1 1 1174d 1174d 0/1 auto-obsoleted due to no activity on 2023/03/27 07:50
android-49 INFO: task hung in do_unlinkat 1 5 2726d 2837d 0/3 auto-closed as invalid on 2019/02/24 11:49
upstream INFO: task hung in do_unlinkat (2) fs 1 4 1832d 1832d 0/29 auto-closed as invalid on 2021/05/17 08:41
linux-6.1 INFO: task hung in do_unlinkat origin:upstream missing-backport 1 C inconclusive 12 283d 410d 0/3 upstream: reported C repro on 2024/12/30 08:52
upstream INFO: task hung in do_unlinkat exfat 1 34 2586d 2823d 0/29 closed as dup on 2018/10/27 13:26
upstream INFO: task hung in do_unlinkat (3) fs 1 2 1550d 1593d 0/29 closed as invalid on 2022/02/07 19:19
linux-4.19 INFO: task hung in do_unlinkat 1 1 1304d 1304d 0/1 auto-obsoleted due to no activity on 2022/11/17 10:56
upstream INFO: task hung in do_unlinkat (4) exfat 1 4 1142d 1249d 0/29 auto-obsoleted due to no activity on 2023/04/08 02:53
linux-5.15 INFO: task hung in do_unlinkat 1 3 494d 642d 0/3 auto-obsoleted due to no activity on 2025/01/15 03:04
Last patch testing requests (8)
Created Duration User Patch Repo Result
2025/12/31 13:30 26m retest repro linux-next OK log
2025/12/31 13:02 27m retest repro linux-next OK log
2025/06/15 06:54 23m retest repro upstream OK log
2025/06/15 06:54 23m retest repro upstream OK log
2025/04/05 13:56 17m retest repro upstream report log
2025/04/05 13:56 16m retest repro upstream report log
2024/12/21 18:21 17m retest repro upstream report log
2024/12/21 18:21 19m retest repro upstream report log

Sample crash report:
INFO: task syz.8.255:8028 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.255       state:D stack:28984 pid:8028  tgid:8021  ppid:6612   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x1498/0x5140 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1072 [inline]
 __start_dirop fs/namei.c:2873 [inline]
 start_dirop fs/namei.c:2884 [inline]
 do_unlinkat+0x1c3/0x590 fs/namei.c:5429
 __do_sys_unlink fs/namei.c:5483 [inline]
 __se_sys_unlink fs/namei.c:5481 [inline]
 __x64_sys_unlink+0x47/0x50 fs/namei.c:5481
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3d0e79af79
RSP: 002b:00007f3d0c9cd028 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007f3d0ea16090 RCX: 00007f3d0e79af79
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000000
RBP: 00007f3d0e8316e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f3d0ea16128 R14: 00007f3d0ea16090 R15: 00007fff4ce79708
 </TASK>
INFO: task syz.8.255:8030 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.255       state:D stack:28984 pid:8030  tgid:8021  ppid:6612   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x1498/0x5140 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 __rwbase_read_lock+0xc3/0x180 kernel/locking/rwbase_rt.c:114
 rwbase_read_lock kernel/locking/rwbase_rt.c:147 [inline]
 __down_read kernel/locking/rwsem.c:1466 [inline]
 down_read+0x132/0x200 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1042 [inline]
 lookup_slow+0x46/0x70 fs/namei.c:1882
 walk_component fs/namei.c:2229 [inline]
 lookup_last fs/namei.c:2730 [inline]
 path_lookupat+0x3f5/0x8c0 fs/namei.c:2754
 filename_lookup+0x256/0x5d0 fs/namei.c:2783
 user_path_at+0x3a/0x60 fs/namei.c:3576
 do_fchmodat+0xce/0x1d0 fs/open.c:692
 __do_sys_chmod fs/open.c:718 [inline]
 __se_sys_chmod fs/open.c:716 [inline]
 __x64_sys_chmod+0x62/0x70 fs/open.c:716
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3d0e79af79
RSP: 002b:00007f3d0c9ac028 EFLAGS: 00000246 ORIG_RAX: 000000000000005a
RAX: ffffffffffffffda RBX: 00007f3d0ea16180 RCX: 00007f3d0e79af79
RDX: 0000000000000000 RSI: 0000000000000020 RDI: 0000200000000000
RBP: 00007f3d0e8316e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f3d0ea16218 R14: 00007f3d0ea16180 R15: 00007fff4ce79708
 </TASK>
INFO: task syz.8.255:8031 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.255       state:D stack:28328 pid:8031  tgid:8021  ppid:6612   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x1498/0x5140 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:1027 [inline]
 open_last_lookups fs/namei.c:4546 [inline]
 path_openat+0xb65/0x3e70 fs/namei.c:4793
 do_filp_open+0x22d/0x490 fs/namei.c:4823
 do_sys_openat2+0x12f/0x220 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_openat fs/open.c:1452 [inline]
 __se_sys_openat fs/open.c:1447 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1447
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3d0e79af79
RSP: 002b:00007f3d0c589028 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f3d0ea16270 RCX: 00007f3d0e79af79
RDX: 000000000000275a RSI: 0000200000000080 RDI: ffffffffffffff9c
RBP: 00007f3d0e8316e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f3d0ea16308 R14: 00007f3d0ea16270 R15: 00007fff4ce79708
 </TASK>
INFO: task syz.8.255:8032 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.255       state:D stack:28984 pid:8032  tgid:8021  ppid:6612   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x1498/0x5140 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 __rwbase_read_lock+0xc3/0x180 kernel/locking/rwbase_rt.c:114
 rwbase_read_lock kernel/locking/rwbase_rt.c:147 [inline]
 __down_read kernel/locking/rwsem.c:1466 [inline]
 down_read+0x132/0x200 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1042 [inline]
 lookup_slow+0x46/0x70 fs/namei.c:1882
 walk_component fs/namei.c:2229 [inline]
 lookup_last fs/namei.c:2730 [inline]
 path_lookupat+0x3f5/0x8c0 fs/namei.c:2754
 filename_lookup+0x256/0x5d0 fs/namei.c:2783
 user_path_at+0x3a/0x60 fs/namei.c:3576
 do_sys_truncate+0xb6/0x1c0 fs/open.c:139
 __do_sys_truncate fs/open.c:153 [inline]
 __se_sys_truncate fs/open.c:151 [inline]
 __x64_sys_truncate+0x5b/0x70 fs/open.c:151
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3d0e79af79
RSP: 002b:00007f3d0c166028 EFLAGS: 00000246 ORIG_RAX: 000000000000004c
RAX: ffffffffffffffda RBX: 00007f3d0ea16360 RCX: 00007f3d0e79af79
RDX: 0000000000000000 RSI: 0000000000001bf8 RDI: 0000200000000280
RBP: 00007f3d0e8316e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f3d0ea163f8 R14: 00007f3d0ea16360 R15: 00007fff4ce79708
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/38:
 #0: ffffffff8dbc77c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8dbc77c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8dbc77c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
4 locks held by kworker/u8:2/42:
 #0: ffff88801aad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801aad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc90000b47bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000b47bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffffffff8ef2d000 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xfe/0x7b0 net/core/net_namespace.c:670
 #3: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xe5/0xa00 net/core/dev.c:13037
3 locks held by kworker/u8:3/44:
 #0: ffff88814d8ba138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88814d8ba138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc90000b67bc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000b67bc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x124/0x1680 net/ipv6/addrconf.c:4194
3 locks held by kworker/u8:4/88:
 #0: ffff88813fe69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813fe69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc9000159fbc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000159fbc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:313
4 locks held by kworker/u8:5/152:
 #0: ffff88801deab138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801deab138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc900039ffbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc900039ffbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffff88805d2080d0 (&type->s_umount_key#66){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
 #3: ffff888060f8bc68 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:108
1 lock held by dhcpcd/5458:
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
2 locks held by getty/5553:
 #0: ffff88814ede00a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e762e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
3 locks held by kworker/0:4/5875:
 #0: ffff88813fe55138 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813fe55138 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc9000575fbc0 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000575fbc0 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
3 locks held by kworker/1:5/5916:
4 locks held by syz.8.255/8022:
2 locks held by syz.8.255/8028:
 #0: ffff88805d208480 (sb_writers#21){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2873 [inline]
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2884 [inline]
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18/1){+.+.}-{4:4}, at: do_unlinkat+0x1c3/0x590 fs/namei.c:5429
1 lock held by syz.8.255/8030:
 #0: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1042 [inline]
 #0: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: lookup_slow+0x46/0x70 fs/namei.c:1882
2 locks held by syz.8.255/8031:
 #0: ffff88805d208480 (sb_writers#21){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: open_last_lookups fs/namei.c:4546 [inline]
 #1: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: path_openat+0xb65/0x3e70 fs/namei.c:4793
1 lock held by syz.8.255/8032:
 #0: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1042 [inline]
 #0: ffff888060f8c038 (&type->i_mutex_dir_key#18){++++}-{4:4}, at: lookup_slow+0x46/0x70 fs/namei.c:1882
1 lock held by syz-executor/8523:
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
1 lock held by syz-executor/8539:
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_newaddr+0x65f/0xe30 net/ipv6/addrconf.c:5027
1 lock held by syz-executor/8546:
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_newaddr+0x65f/0xe30 net/ipv6/addrconf.c:5027
2 locks held by syz-executor/8592:
 #0: ffffffff8f480908 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f480908 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8f480908 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
2 locks held by syz-executor/8659:
 #0: ffffffff8e698d68 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e698d68 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e698d68 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8ef3b938 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
3 locks held by syz-executor/8785:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xf90/0xfe0 kernel/hung_task.c:515
 kthread+0x726/0x8b0 kernel/kthread.c:463
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 17 Comm: pr/legacy Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
RIP: 0010:io_serial_in+0x77/0xc0 drivers/tty/serial/8250/8250_port.c:400
Code: e8 ae 9f a1 fc 44 89 f9 d3 e3 49 83 ee 80 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 5f 07 06 fd 41 03 1e 89 da ec <0f> b6 c0 5b 41 5c 41 5e 41 5f e9 9a a6 ef 05 cc 44 89 f9 80 e1 07
RSP: 0018:ffffc90000167950 EFLAGS: 00000202
RAX: 1ffffffff32d8500 RBX: 00000000000003fd RCX: 0000000000000000
RDX: 00000000000003fd RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffffff996c2ff0 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffffff85219740 R12: dffffc0000000000
R13: 0000000000000000 R14: ffffffff996c2d60 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881265c9000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffc5e304000 CR3: 00000000324fc000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 serial_in drivers/tty/serial/8250/8250.h:128 [inline]
 serial_lsr_in drivers/tty/serial/8250/8250.h:150 [inline]
 wait_for_lsr+0x1aa/0x2f0 drivers/tty/serial/8250/8250_port.c:1961
 fifo_wait_for_lsr drivers/tty/serial/8250/8250_port.c:3234 [inline]
 serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:3257 [inline]
 serial8250_console_write+0x120d/0x1b90 drivers/tty/serial/8250/8250_port.c:3342
 console_emit_next_record kernel/printk/printk.c:3109 [inline]
 console_flush_one_record+0x68b/0xb90 kernel/printk/printk.c:3215
 legacy_kthread_func+0x1b6/0x250 kernel/printk/printk.c:3674
 kthread+0x726/0x8b0 kernel/kthread.c:463
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
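
The backtraces above show four threads of the same process (syz.8.255) queued on the same directory inode lock (&type->i_mutex_dir_key#18 at ffff888060f8c038): the unlink and open paths wait for the write side, the chmod and truncate lookups wait for the read side, while a writeback worker (kworker/u8:5) holds jfs_ip->commit_mutex in jfs_commit_inode on a jfs inode. Below is a minimal sketch of that racing syscall mix, assuming a jfs image is already mounted at a hypothetical ./mnt; it is an illustration only, not the syzkaller reproducer linked from the crash table below.

/*
 * Hypothetical illustration of the syscall mix seen in the traces; it is
 * NOT the syzkaller reproducer (that one also mounts a crafted jfs image
 * and is linked from the crash table below). DIR_PATH is assumed to be a
 * directory on an already-mounted jfs filesystem.
 * Build with: gcc -pthread race.c
 */
#include <fcntl.h>
#include <pthread.h>
#include <sys/stat.h>
#include <unistd.h>

#define DIR_PATH  "./mnt"            /* hypothetical jfs mount point */
#define FILE_PATH DIR_PATH "/file0"  /* hypothetical target entry    */

static void *unlink_loop(void *arg)
{
	(void)arg;
	for (;;)
		unlink(FILE_PATH);       /* do_unlinkat: takes the dir lock exclusive */
	return NULL;
}

static void *chmod_loop(void *arg)
{
	(void)arg;
	for (;;)
		chmod(FILE_PATH, 0);     /* lookup_slow: takes the dir lock shared */
	return NULL;
}

static void *open_loop(void *arg)
{
	(void)arg;
	for (;;) {
		/* open_last_lookups: takes the dir lock exclusive when creating */
		int fd = open(FILE_PATH, O_CREAT | O_RDWR, 0600);
		if (fd >= 0)
			close(fd);
	}
	return NULL;
}

static void *truncate_loop(void *arg)
{
	(void)arg;
	for (;;)
		truncate(FILE_PATH, 0x1bf8); /* lookup_slow: shared; length as in the trace */
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	pthread_create(&t[0], NULL, unlink_loop, NULL);
	pthread_create(&t[1], NULL, chmod_loop, NULL);
	pthread_create(&t[2], NULL, open_loop, NULL);
	pthread_create(&t[3], NULL, truncate_loop, NULL);

	pause();                         /* an actual hang is reported by khungtaskd */
	return 0;
}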

Crashes (133):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/09 18:12 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2026/02/01 16:18 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2026/01/22 18:44 upstream a66191c590b3 82c9c083 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2026/01/04 09:28 upstream aacb0a6d604a d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/12/17 12:31 upstream ea1013c15392 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/12/10 01:27 upstream cb015814f8b6 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/12/05 15:47 upstream 2061f18ad76e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/11/12 12:14 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/11/12 05:45 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/10/25 08:19 upstream 2e590d67c2d8 c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in do_unlinkat
2025/10/13 06:13 upstream 3a8660878839 ff1712fe .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/10/09 09:35 upstream cd5a0afbdf80 7e2882b3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/10/03 18:17 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/09/29 15:55 upstream e5f0a698b34e 86341da6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/09/03 01:07 upstream e6b9dce0aeeb 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/09/02 15:43 upstream b320789d6883 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/08/18 22:51 upstream c17b750b3ad9 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/08/12 04:22 upstream 8f5ae30d69d7 c06e8995 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/08/05 03:42 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/07/24 13:44 upstream 25fae0b93d1d 65d60d73 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/07/10 02:02 upstream 8c2e52ebbe88 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/07/09 11:54 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/06/28 21:58 upstream aaf724ed6926 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/06/19 06:54 upstream fb4d33ab452e ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/25 08:26 upstream d0c22de9995b ed351ea7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/16 22:59 upstream 3c21441eeffc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/09 18:03 upstream 9c69f8884904 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/08 22:39 upstream 2c89c1b655c0 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/08 01:54 upstream 707df3375124 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/04 15:36 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/04 14:08 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/03 10:34 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/03 06:17 upstream 2bfcee565c3a b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/01 19:13 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/28 04:42 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/27 19:45 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/27 11:50 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/22 10:21 upstream a33b5a08cbbd 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/21 22:43 upstream 9d7a0577c9db 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/21 04:56 upstream 6fea5fabd332 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/19 14:43 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/19 07:19 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/30 11:50 upstream 509f806f7f70 68914665 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/26 01:25 upstream 9f16d5e6f220 11dbc254 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/06/23 12:19 upstream 5f583a3162ff edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in do_unlinkat
2024/05/22 04:48 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/05/16 14:46 upstream 3c999d1ae3c7 ef5d53ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in do_unlinkat
2024/05/10 10:19 upstream 448b3fe5a0ea de979bc2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/05/03 08:20 upstream 49a73b1652c5 ddfc15a1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/04/28 23:56 upstream e67572cd2204 07b455f9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/11/23 04:33 linux-next d724c6f85e80 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/22 02:58 linux-next d724c6f85e80 c31c1b0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/22 00:30 linux-next d724c6f85e80 c31c1b0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/10 23:41 linux-next ab40c92c74c6 4e1406b4 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/10 21:36 linux-next ab40c92c74c6 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
* Struck through repros no longer work on HEAD.