INFO: task hung in __start_renaming

Status: upstream: reported C repro on 2025/11/23 22:44
Subsystems: jfs
Reported-by: syzbot+2fefb910d2c20c0698d8@syzkaller.appspotmail.com
First crash: 68d, last: 3d23h
Cause bisection: introduced by (bisect log):
commit 1e3c3784221ac86401aea72e2bae36057062fc9c
Author: Mateusz Guzik <mjguzik@gmail.com>
Date: Fri Oct 10 22:17:36 2025 +0000

  fs: rework I_NEW handling to operate without fences
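
For context: the bisected commit changes how the VFS publishes newly created
inodes via the I_NEW flag. A minimal sketch of the long-standing pattern it
reworks, assuming the usual iget_locked()/unlock_new_inode() helpers (the
function name example_iget is hypothetical, not from the commit):

    struct inode *example_iget(struct super_block *sb, unsigned long ino)
    {
            struct inode *inode = iget_locked(sb, ino);

            if (!inode)
                    return ERR_PTR(-ENOMEM);
            if (!(inode->i_state & I_NEW))
                    return inode;           /* already cached and initialized */

            /* ... read the on-disk inode and fill in fields here ... */

            unlock_new_inode(inode);        /* clears I_NEW, wakes waiters */
            return inode;
    }

Concurrent lookups that find the inode with I_NEW still set block until it
clears; per its subject line, the commit reworks that handshake to operate
without memory fences.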

Crash: INFO: task hung in do_renameat2 (log)
Repro: C syz .config
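
The real reproducer is available via the links above. Purely as a hypothetical
illustration of the syscall mix visible in the traces below (rename(2) and
creat(2) racing on one directory of a mounted jfs image, with a lookup from
mount(2) also queueing), a skeleton could look like the following; every name
and path here is made up, only the linked C repro is authoritative:

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *renamer(void *arg)
    {
            (void)arg;
            for (;;) {                              /* -> __start_renaming() */
                    rename("mnt/dir/a", "mnt/dir/b");
                    rename("mnt/dir/b", "mnt/dir/a");
            }
            return NULL;
    }

    static void *creator(void *arg)
    {
            (void)arg;
            for (;;) {                              /* -> open_last_lookups() */
                    int fd = creat("mnt/dir/a", 0600);
                    if (fd >= 0)
                            close(fd);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t t1, t2;

            /* assumes "mnt" is a freshly mounted jfs image, as in the repro */
            pthread_create(&t1, NULL, renamer, NULL);
            pthread_create(&t2, NULL, creator, NULL);
            pause();
            return 0;
    }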
  
Discussions (1)
Title                                                  Replies (including bot)  Last reply
[syzbot] [ntfs3?] INFO: task hung in __start_renaming  11 (17)                  2025/11/25 09:35
Last patch testing requests (8)
Created Duration User Patch Repo Result
2026/01/01 01:28 26m retest repro linux-next OK log
2026/01/01 01:28 27m retest repro linux-next OK log
2026/01/01 01:28 36m retest repro linux-next OK log
2025/11/24 08:08 28m mjguzik@gmail.com git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs-6.19.directory.locking OK log
2025/11/24 08:07 29m mjguzik@gmail.com git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs-6.19.inode report log
2025/11/24 06:29 26m mjguzik@gmail.com patch linux-next report log
2025/11/24 03:29 59m mjguzik@gmail.com patch linux-next error
2025/11/24 00:28 28m mjguzik@gmail.com patch linux-next OK log

Sample crash report:
INFO: task syz.0.586:11114 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.586       state:D stack:22776 pid:11114 tgid:11113 ppid:10153  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x147b/0x4f50 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1072 [inline]
 lock_rename fs/namei.c:3721 [inline]
 __start_renaming+0x148/0x410 fs/namei.c:3817
 do_renameat2+0x3c9/0x910 fs/namei.c:6031
 __do_sys_rename fs/namei.c:6099 [inline]
 __se_sys_rename fs/namei.c:6097 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:6097
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fad7440acb9
RSP: 002b:00007fad72666028 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007fad74685fa0 RCX: 00007fad7440acb9
RDX: 0000000000000000 RSI: 0000200000000f40 RDI: 0000200000000600
RBP: 00007fad74478bf7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fad74686038 R14: 00007fad74685fa0 R15: 00007ffdf1d413c8
 </TASK>
INFO: task syz.0.586:11122 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.586       state:D stack:27552 pid:11122 tgid:11113 ppid:10153  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x147b/0x4f50 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:1027 [inline]
 open_last_lookups fs/namei.c:4546 [inline]
 path_openat+0xb65/0x3e70 fs/namei.c:4793
 do_filp_open+0x22d/0x490 fs/namei.c:4823
 do_sys_openat2+0x12f/0x220 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_creat fs/open.c:1514 [inline]
 __se_sys_creat fs/open.c:1508 [inline]
 __x64_sys_creat+0x8f/0xc0 fs/open.c:1508
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fad7440acb9
RSP: 002b:00007fad72624028 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007fad74686180 RCX: 00007fad7440acb9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000e00
RBP: 00007fad74478bf7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fad74686218 R14: 00007fad74686180 R15: 00007ffdf1d413c8
 </TASK>
INFO: task syz.0.586:11140 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.586       state:D stack:27680 pid:11140 tgid:11113 ppid:10153  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0x147b/0x4f50 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7245
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 __rwbase_read_lock+0xc3/0x180 kernel/locking/rwbase_rt.c:114
 rwbase_read_lock kernel/locking/rwbase_rt.c:147 [inline]
 __down_read kernel/locking/rwsem.c:1466 [inline]
 down_read+0x132/0x200 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1042 [inline]
 lookup_slow+0x46/0x70 fs/namei.c:1882
 walk_component fs/namei.c:2229 [inline]
 lookup_last fs/namei.c:2730 [inline]
 path_lookupat+0x3f5/0x8c0 fs/namei.c:2754
 filename_lookup+0x256/0x5d0 fs/namei.c:2783
 user_path_at+0x3a/0x60 fs/namei.c:3576
 do_mount fs/namespace.c:4032 [inline]
 __do_sys_mount fs/namespace.c:4224 [inline]
 __se_sys_mount+0x2dc/0x420 fs/namespace.c:4201
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fad7440acb9
RSP: 002b:00007fad72201028 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007fad74686270 RCX: 00007fad7440acb9
RDX: 0000200000000100 RSI: 00002000000000c0 RDI: 0000200000000000
RBP: 00007fad74478bf7 R08: 0000200000000180 R09: 0000000000000000
R10: 0000000000090081 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fad74686308 R14: 00007fad74686270 R15: 00007ffdf1d413c8
 </TASK>
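
Reading the three traces together: pid 11114 blocks taking a directory lock in
lock_rename(), pid 11122 blocks in open_last_lookups(), and pid 11140 blocks
in lookup_slow(); the lock listing below shows the same directory inode
(ffff88803a1853b8) in all three tasks' lock stacks. For reference, the
wrappers they funnel through, paraphrased from include/linux/fs.h (exact line
numbers vary by tree):

    static inline void inode_lock(struct inode *inode)
    {
            down_write(&inode->i_rwsem);            /* creat() path above */
    }

    static inline void inode_lock_shared(struct inode *inode)
    {
            down_read(&inode->i_rwsem);             /* lookup_slow() path */
    }

    static inline void inode_lock_nested(struct inode *inode, unsigned subclass)
    {
            down_write_nested(&inode->i_rwsem, subclass); /* lock_rename() path */
    }

On this PREEMPT_RT config the rwsem is rt_mutex-based, which is why the traces
show rwbase_write_lock()/__rwbase_read_lock() rather than the ordinary rwsem
slowpaths.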

Showing all locks held in the system:
3 locks held by kworker/u8:0/12:
 #0: ffff88813fe29938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813fe29938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc90000117bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000117bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
1 lock held by khungtaskd/38:
 #0: ffffffff8d9c7780 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d9c7780 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d9c7780 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
4 locks held by kworker/u8:6/1018:
 #0: ffff8881416fb938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff8881416fb938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc9000453fbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000453fbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffff8881437180d0 (&type->s_umount_key#75){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
 #3: ffff88803a184fe8 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:108
3 locks held by kworker/u8:9/1299:
 #0: ffff88814d92e138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88814d92e138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc90004e5fbc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90004e5fbc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
 #2: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x124/0x1680 net/ipv6/addrconf.c:4194
7 locks held by kworker/u8:13/2229:
2 locks held by getty/5558:
 #0: ffff8880269330a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
2 locks held by kworker/u8:17/6099:
 #0: ffff88813fe29938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813fe29938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc900063c7bc0 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc900063c7bc0 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
2 locks held by kworker/u8:23/9873:
 #0: ffff88813fe29938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813fe29938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3340
 #1: ffffc9000598fbc0 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000598fbc0 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3340
2 locks held by syz.0.586/11114:
 #0: ffff888143718480 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88803a1853b8 (&type->i_mutex_dir_key#22/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88803a1853b8 (&type->i_mutex_dir_key#22/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3721 [inline]
 #1: ffff88803a1853b8 (&type->i_mutex_dir_key#22/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3817
4 locks held by syz.0.586/11121:
2 locks held by syz.0.586/11122:
 #0: ffff888143718480 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88803a1853b8 (&type->i_mutex_dir_key#23){++++}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #1: ffff88803a1853b8 (&type->i_mutex_dir_key#23){++++}-{4:4}, at: open_last_lookups fs/namei.c:4546 [inline]
 #1: ffff88803a1853b8 (&type->i_mutex_dir_key#23){++++}-{4:4}, at: path_openat+0xb65/0x3e70 fs/namei.c:4793
1 lock held by syz.0.586/11140:
 #0: ffff88803a1853b8 (&type->i_mutex_dir_key#23){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1042 [inline]
 #0: ffff88803a1853b8 (&type->i_mutex_dir_key#23){++++}-{4:4}, at: lookup_slow+0x46/0x70 fs/namei.c:1882
2 locks held by syz-executor/11831:
 #0: ffffffff8e4687c8 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e4687c8 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e4687c8 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
2 locks held by syz-executor/11851:
 #0: ffffffff8e497548 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e497548 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e497548 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
2 locks held by syz-executor/11963:
 #0: ffffffff8f2949b0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f2949b0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8f2949b0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
2 locks held by syz-executor/12036:
 #0: ffffffff8f2949b0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f2949b0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8f2949b0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8ed37f78 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8a1/0x1be0 net/core/rtnetlink.c:4071
2 locks held by rm/12091:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/13/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xf90/0xfe0 kernel/hung_task.c:515
 kthread+0x726/0x8b0 kernel/kthread.c:463
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 12093 Comm: syz.6.676 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/13/2026
RIP: 0010:__sanitizer_cov_trace_const_cmp2+0x0/0xa0 kernel/kcov.c:306
Code: 00 48 89 7c 0a 10 48 89 74 0a 18 48 89 44 0a 20 c3 cc cc cc cc cc 0f 1f 40 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <f3> 0f 1e fa 48 8b 04 24 65 48 8b 0d b8 dc 40 10 65 44 8b 05 d8 dc
RSP: 0000:ffffc90004f9fc98 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000040
RDX: 0000000000000001 RSI: 0000000000000020 RDI: 0000000000000000
RBP: ffffc90004f9fe58 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: fffffbfff1e4f46f R12: 1ffff11005c6a84a
R13: ffff88809f784510 R14: ffff88802e354250 R15: 0000000000000020
FS:  00007fdcfe4a66c0(0000) GS:ffff8881267fc000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fdcf6a7e000 CR3: 000000005dc92000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 mem_cgroup_exit_user_fault include/linux/memcontrol.h:1881 [inline]
 handle_mm_fault+0x1118/0x13c0 mm/memory.c:6598
 do_user_addr_fault+0xa73/0x1360 arch/x86/mm/fault.c:1334
 handle_page_fault arch/x86/mm/fault.c:1474 [inline]
 exc_page_fault+0x6a/0xc0 arch/x86/mm/fault.c:1527
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
RIP: 0033:0x7fdd0010288e
Code: c1 49 39 4f 08 72 54 8d 4d ff 85 ed 74 3b 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 39 f0 72 1b 4d 8b 07 49 89 c1 49 29 f1 <47> 0f b6 0c 08 45 84 c9 74 08 45 88 0c 00 49 8b 47 10 48 83 c0 01
RSP: 002b:00007fdcfe4a5470 EFLAGS: 00010206
RAX: 00000000009f8001 RBX: 00007fdcfe4a5530 RCX: 0000000000000012
RDX: 0000000000000015 RSI: 0000000000000001 RDI: 00007fdcfe4a55d0
RBP: 0000000000000102 R08: 00007fdcf6086000 R09: 00000000009f8000
R10: 0000000000000000 R11: 00007fdcfe4a5540 R12: 0000000000000001
R13: 00007fdd002f7880 R14: 0000000000000000 R15: 00007fdcfe4a55d0
 </TASK>

Crashes (21):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/01/23 19:43 upstream c072629f05d7 e2b1b6e6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/19 14:09 upstream 24d479d26b25 a9fc5226 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/14 20:49 upstream c537e12daeec d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/13 05:48 upstream b71e635feefc d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/12 20:05 upstream 0f61b1860cc3 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/07 15:43 upstream f0b9d8eb98df d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/05 16:31 upstream 3609fa95fb0f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2025/12/17 23:42 upstream ea1013c15392 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2025/12/04 11:46 upstream 559e608c4655 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __start_renaming
2025/11/26 00:01 linux-next 92fd6e84175b 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/25 16:52 linux-next 92fd6e84175b 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/23 14:39 linux-next d724c6f85e80 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/23 06:29 linux-next d724c6f85e80 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/22 02:05 linux-next d724c6f85e80 c31c1b0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/21 06:10 linux-next 88cbd8ac379c 280ea308 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 06:47 linux-next fe4d0dea039f 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 02:02 linux-next fe4d0dea039f 26ee5237 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 01:40 linux-next fe4d0dea039f 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 00:04 linux-next fe4d0dea039f 26ee5237 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/19 22:29 linux-next fe4d0dea039f 26ee5237 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/19 19:39 linux-next fe4d0dea039f 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming