syzbot


INFO: task hung in vfs_unlink (5)

Status: upstream: reported C repro on 2024/12/02 13:10
Subsystems: bcachefs
Reported-by: syzbot+6983c03a6a28616e362f@syzkaller.appspotmail.com
First crash: 454d, last: 9d22h
Cause bisection: introduced by (bisect log):
commit f55c096f62f100aa9f5f48d86e1b6846ecbd67e7
Author: Yuezhang Mo <Yuezhang.Mo@sony.com>
Date: Tue May 30 09:35:00 2023 +0000

  exfat: do not zero the extended part

Crash: INFO: rcu detected stall in corrupted (log)
Repro: C syz .config
  
Fix bisection: fixed by (bisect log):
commit b0522303f67255926b946aa66885a0104d1b2980
Author: Yuezhang Mo <Yuezhang.Mo@sony.com>
Date: Mon Mar 17 02:53:10 2025 +0000

  exfat: fix the infinite loop in exfat_find_last_cluster()

  
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [bcachefs?] INFO: task hung in vfs_unlink (5) 0 (4) 2025/04/20 21:12
Similar bugs (14)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.1 INFO: task hung in vfs_unlink (3) 2 412d 423d 0/3 auto-obsoleted due to no activity on 2024/07/08 21:39
linux-6.1 INFO: task hung in vfs_unlink (2) 1 540d 540d 0/3 auto-obsoleted due to no activity on 2024/03/02 11:45
linux-4.14 INFO: task hung in vfs_unlink (2) 1 1488d 1488d 0/1 auto-closed as invalid on 2021/08/17 05:41
linux-4.14 INFO: task hung in vfs_unlink 8 1734d 1975d 0/1 auto-closed as invalid on 2020/12/15 00:37
linux-4.19 INFO: task hung in vfs_unlink (4) 1 920d 920d 0/1 auto-obsoleted due to no activity on 2023/03/09 01:26
linux-4.19 INFO: task hung in vfs_unlink (2) 2 1743d 1805d 0/1 auto-closed as invalid on 2020/12/05 18:54
upstream INFO: task hung in vfs_unlink (3) ext4 1 838d 838d 0/28 auto-obsoleted due to no activity on 2023/04/30 04:19
linux-5.15 INFO: task hung in vfs_unlink 29 425d 782d 0/3 auto-obsoleted due to no activity on 2024/06/26 00:49
upstream INFO: task hung in vfs_unlink ext4 32 1756d 2032d 0/28 auto-closed as invalid on 2020/11/23 01:14
linux-6.1 INFO: task hung in vfs_unlink 2 730d 741d 0/3 auto-obsoleted due to no activity on 2023/08/26 02:49
linux-4.19 INFO: task hung in vfs_unlink 6 1934d 2059d 0/1 auto-closed as invalid on 2020/05/28 17:30
linux-4.19 INFO: task hung in vfs_unlink (3) 1 1382d 1382d 0/1 auto-closed as invalid on 2021/12/01 23:35
upstream INFO: task hung in vfs_unlink (4) fs 6 576d 736d 0/28 auto-obsoleted due to no activity on 2024/01/16 15:08
upstream INFO: task hung in vfs_unlink (2) fs 1 1014d 1014d 0/28 auto-closed as invalid on 2022/10/05 08:13
Last patch testing requests (1)
Created Duration User Patch Repo Result
2025/04/01 13:31 15m retest repro upstream report log

Sample crash report:
INFO: task syz.9.184:7710 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.184       state:D stack:24888 pid:7710  tgid:7644  ppid:7158   task_flags:0x400140 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:867 [inline]
 vfs_unlink+0xf2/0x650 fs/namei.c:4568
 do_unlinkat+0x350/0x560 fs/namei.c:4643
 __do_sys_unlinkat fs/namei.c:4684 [inline]
 __se_sys_unlinkat fs/namei.c:4677 [inline]
 __x64_sys_unlinkat+0xd3/0xf0 fs/namei.c:4677
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6b86d8e969
RSP: 002b:00007f6b87bcc038 EFLAGS: 00000246 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 00007f6b86fb6080 RCX: 00007f6b86d8e969
RDX: 0000000000000000 RSI: 0000200000000c40 RDI: ffffffffffffff9c
RBP: 00007f6b86e10ab1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f6b86fb6080 R15: 00007ffd74591988
 </TASK>
INFO: task syz.9.184:7714 blocked for more than 144 seconds.
      Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.184       state:D stack:28952 pid:7714  tgid:7644  ppid:7158   task_flags:0x400040 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write_nested+0x1b5/0x200 kernel/locking/rwsem.c:1694
 inode_lock_nested include/linux/fs.h:902 [inline]
 lock_rename fs/namei.c:3265 [inline]
 do_renameat2+0x3dd/0xc50 fs/namei.c:5216
 __do_sys_rename fs/namei.c:5317 [inline]
 __se_sys_rename fs/namei.c:5315 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:5315
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6b86d8e969
RSP: 002b:00007f6b87bab038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f6b86fb6160 RCX: 00007f6b86d8e969
RDX: 0000000000000000 RSI: 0000200000000100 RDI: 00002000000000c0
RBP: 00007f6b86e10ab1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f6b86fb6160 R15: 00007ffd74591988
 </TASK>
INFO: task syz.4.205:7805 blocked for more than 145 seconds.
      Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.205       state:D stack:25832 pid:7805  tgid:7798  ppid:5825   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:867 [inline]
 open_last_lookups fs/namei.c:3797 [inline]
 path_openat+0x8da/0x3830 fs/namei.c:4036
 do_filp_open+0x1fa/0x410 fs/namei.c:4066
 do_sys_openat2+0x121/0x1c0 fs/open.c:1429
 do_sys_open fs/open.c:1444 [inline]
 __do_sys_openat fs/open.c:1460 [inline]
 __se_sys_openat fs/open.c:1455 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1455
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f896bb8e969
RSP: 002b:00007f896c9b5038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f896bdb6160 RCX: 00007f896bb8e969
RDX: 0000000000101042 RSI: 0000200000006180 RDI: ffffffffffffff9c
RBP: 00007f896bc10ab1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000052 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f896bdb6160 R15: 00007ffe43ac6378
 </TASK>
INFO: task syz.4.205:7806 blocked for more than 146 seconds.
      Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.205       state:D stack:25272 pid:7806  tgid:7798  ppid:5825   task_flags:0x400040 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 rwsem_down_read_slowpath+0x552/0x880 kernel/locking/rwsem.c:1084
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0x98/0x2e0 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:877 [inline]
 lookup_slow+0x46/0x70 fs/namei.c:1833
 walk_component+0x2d2/0x400 fs/namei.c:2138
 lookup_last fs/namei.c:2636 [inline]
 path_lookupat+0x163/0x430 fs/namei.c:2660
 filename_lookup+0x212/0x570 fs/namei.c:2689
 kern_path+0x35/0x50 fs/namei.c:2822
 tomoyo_mount_acl security/tomoyo/mount.c:136 [inline]
 tomoyo_mount_permission+0x776/0x970 security/tomoyo/mount.c:237
 security_sb_mount+0xec/0x350 security/security.c:1570
 path_mount+0xbc/0xfe0 fs/namespace.c:4153
 do_mount fs/namespace.c:4224 [inline]
 __do_sys_mount fs/namespace.c:4435 [inline]
 __se_sys_mount+0x317/0x410 fs/namespace.c:4412
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f896bb8e969
RSP: 002b:00007f896c994038 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f896bdb6240 RCX: 00007f896bb8e969
RDX: 0000000000000000 RSI: 0000200000000240 RDI: 00002000000001c0
RBP: 00007f896bc10ab1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000245818 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f896bdb6240 R15: 00007ffe43ac6378
 </TASK>
INFO: task syz.4.205:7807 blocked for more than 147 seconds.
      Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.205       state:D stack:26536 pid:7807  tgid:7798  ppid:5825   task_flags:0x400040 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 rwsem_down_read_slowpath+0x552/0x880 kernel/locking/rwsem.c:1084
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0x98/0x2e0 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:877 [inline]
 open_last_lookups fs/namei.c:3799 [inline]
 path_openat+0x8cb/0x3830 fs/namei.c:4036
 do_filp_open+0x1fa/0x410 fs/namei.c:4066
 do_sys_openat2+0x121/0x1c0 fs/open.c:1429
 do_sys_open fs/open.c:1444 [inline]
 __do_sys_openat fs/open.c:1460 [inline]
 __se_sys_openat fs/open.c:1455 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1455
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f896bb8e969
RSP: 002b:00007f896c973038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f896bdb6320 RCX: 00007f896bb8e969
RDX: 0000000000000000 RSI: 0000200000000100 RDI: ffffffffffffff9c
RBP: 00007f896bc10ab1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f896bdb6320 R15: 00007ffe43ac6378
 </TASK>

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
 #0: ffffffff8df41338 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:304 [inline]
 #0: ffffffff8df41338 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f4/0x730 kernel/rcu/tree_exp.h:998
1 lock held by khungtaskd/31:
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6764
4 locks held by kworker/u8:2/36:
 #0: ffff88801f69b148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801f69b148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90000ad7c60 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000ad7c60 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffff8880594f20e0 (&type->s_umount_key#89){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:562
 #3: ffff8880787f54c0 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:102
5 locks held by kworker/u8:6/1156:
 #0: ffff88801aef3948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801aef3948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90003d0fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003d0fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2d5a10 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x145/0xbd0 net/core/net_namespace.c:608
 #3: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: cleanup_net+0x611/0xbd0 net/core/net_namespace.c:644
 #4: ffffffff8df41338 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #4: ffffffff8df41338 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b7/0x730 kernel/rcu/tree_exp.h:998
3 locks held by kworker/u8:7/3503:
 #0: ffff88801a089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000c137c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000c137c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by getty/5573:
 #0: ffff88803489a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002ffe2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
3 locks held by kworker/1:4/5869:
 #0: ffff88801a080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90004d6fc60 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90004d6fc60 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by syz.9.184/7645:
 #0: ffff88806c4d4420 (sb_writers#20){.+.+}-{0:0}, at: direct_splice_actor+0x49/0x160 fs/splice.c:1157
 #1: ffff88805addb670 (&sb->s_type->i_mutex_key#26){++++}-{4:4}, at: inode_lock include/linux/fs.h:867 [inline]
 #1: ffff88805addb670 (&sb->s_type->i_mutex_key#26){++++}-{4:4}, at: bch2_direct_write+0x267/0x2d50 fs/bcachefs/fs-io-direct.c:612
3 locks held by syz.9.184/7710:
 #0: ffff88806c4d4420 (sb_writers#20){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:556
 #1: ffff88805addaed8 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:902 [inline]
 #1: ffff88805addaed8 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: do_unlinkat+0x1bf/0x560 fs/namei.c:4630
 #2: ffff88805addb670 (&sb->s_type->i_mutex_key#26){++++}-{4:4}, at: inode_lock include/linux/fs.h:867 [inline]
 #2: ffff88805addb670 (&sb->s_type->i_mutex_key#26){++++}-{4:4}, at: vfs_unlink+0xf2/0x650 fs/namei.c:4568
2 locks held by syz.9.184/7714:
 #0: ffff88806c4d4420 (sb_writers#20){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:556
 #1: ffff88805addaed8 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:902 [inline]
 #1: ffff88805addaed8 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3265 [inline]
 #1: ffff88805addaed8 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: do_renameat2+0x3dd/0xc50 fs/namei.c:5216
2 locks held by bch-copygc/loop/7702:
4 locks held by syz.4.205/7799:
2 locks held by syz.4.205/7805:
 #0: ffff8880594f2420 (sb_writers#22){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:556
 #1: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: inode_lock include/linux/fs.h:867 [inline]
 #1: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: open_last_lookups fs/namei.c:3797 [inline]
 #1: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: path_openat+0x8da/0x3830 fs/namei.c:4036
2 locks held by syz.4.205/7806:
 #0: ffffffff8e638f90 (tomoyo_ss){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #0: ffffffff8e638f90 (tomoyo_ss){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #0: ffffffff8e638f90 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_read_lock security/tomoyo/common.h:1108 [inline]
 #0: ffffffff8e638f90 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_mount_permission+0x27a/0x970 security/tomoyo/mount.c:236
 #1: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:877 [inline]
 #1: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: lookup_slow+0x46/0x70 fs/namei.c:1833
1 lock held by syz.4.205/7807:
 #0: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:877 [inline]
 #0: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: open_last_lookups fs/namei.c:3799 [inline]
 #0: ffff8880787f5870 (&type->i_mutex_dir_key#13){++++}-{4:4}, at: path_openat+0x8cb/0x3830 fs/namei.c:4036
2 locks held by syz.1.288/8845:
 #0: ffff8880594f20e0 (&type->s_umount_key#89){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880594f20e0 (&type->s_umount_key#89){++++}-{4:4}, at: super_lock+0x2a9/0x3b0 fs/super.c:120
 #1: ffff8881437b07d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:387 [inline]
 #1: ffff8881437b07d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x19f/0xa10 fs/fs-writeback.c:2831
2 locks held by syz-executor/10006:
 #0: ffffffff8f7de7b0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f7de7b0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8f7de7b0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4064
2 locks held by syz-executor/10018:
 #0: ffffffff8f2d5a10 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x317/0x590 net/core/net_namespace.c:514
 #1: ffffffff8f2e2548 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x80/0x210 net/ipv4/nexthop.c:3918
4 locks held by dhcpcd-run-hook/10114:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/29/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:437
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 7702 Comm: bch-copygc/loop Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/29/2025
RIP: 0010:match_held_lock+0x0/0xc0 kernel/locking/lockdep.c:5303
Code: 41 5e 41 5f c3 cc cc cc cc cc e8 fb f8 ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <41> 56 53 bb 01 00 00 00 48 39 77 10 74 6a 81 7f 20 00 00 20 00 72
RSP: 0018:ffffc90003207610 EFLAGS: 00000087
RAX: 0000000000000003 RBX: ffff888025620b40 RCX: 30a0a4bb31e2dc00
RDX: ffff888025620000 RSI: ffffffff8df3b860 RDI: ffff888025620b40
RBP: 00000000ffffffff R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: ffffffff843f1210 R12: 0000000000000246
R13: ffff888025620000 R14: ffffffff8df3b860 R15: 0000000000000002
FS:  0000000000000000(0000) GS:ffff8881261fd000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7a65124440 CR3: 000000007e514000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __lock_is_held kernel/locking/lockdep.c:5599 [inline]
 lock_is_held_type+0xa8/0x190 kernel/locking/lockdep.c:5938
 __rhashtable_insert_fast include/linux/rhashtable.h:724 [inline]
 rhashtable_lookup_insert_fast include/linux/rhashtable.h:914 [inline]
 move_bucket_in_flight_add fs/bcachefs/movinggc.c:55 [inline]
 bch2_copygc+0x23e5/0x3cf0 fs/bcachefs/movinggc.c:229
 bch2_copygc_thread+0x8c9/0xd40 fs/bcachefs/movinggc.c:405
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (58):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/05/07 06:31 upstream 0d8d44db295c 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/04/30 07:31 upstream ca91b9500108 85a5a23f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/03/17 16:53 upstream 4701f33a1070 948c34e4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/03/06 09:20 upstream bb2281fb05e5 831e3629 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/03/06 06:39 upstream bb2281fb05e5 831e3629 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/01/15 21:22 upstream 619f0b6fad52 968edaf4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/01/05 03:39 upstream ab75170520d4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2025/01/04 02:12 upstream 63676eefb7a0 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/31 06:40 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/30 11:43 upstream fc033cf25e61 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/12/30 01:22 upstream 4099a71718b0 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/12/24 17:10 upstream f07044dd0df0 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/12/24 07:45 upstream f07044dd0df0 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/12/20 00:53 upstream baaa2567a712 5905cb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/12/19 09:05 upstream eabcdba3ad40 1432fc84 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/12/02 08:03 upstream 40384c840ea1 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/11/30 05:47 upstream 509f806f7f70 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/11/30 05:44 upstream 509f806f7f70 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/11/28 18:47 upstream b86545e02e8c 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/11/28 18:46 upstream b86545e02e8c 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/11/28 18:46 upstream b86545e02e8c 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/11/28 13:04 upstream b86545e02e8c 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/11/26 06:36 upstream 9f16d5e6f220 11dbc254 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/10/29 08:25 upstream e42b1a9a2557 66aeb999 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/09/27 07:37 upstream 075dbe9f6e3c 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/09/17 00:16 upstream adfc3ded5c33 c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/06/06 17:38 upstream 2df0193e62cf 121701b6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/06/06 17:27 upstream 2df0193e62cf 121701b6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in vfs_unlink
2024/06/01 10:46 upstream d8ec19857b09 3113787f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/04/13 18:55 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/04/07 11:04 upstream fe46a7dd189e ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in vfs_unlink
2024/04/01 01:54 upstream fe46a7dd189e 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in vfs_unlink
2024/04/01 01:51 upstream fe46a7dd189e 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in vfs_unlink
2024/03/31 19:23 upstream fe46a7dd189e 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/03/31 13:47 upstream fe46a7dd189e 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/03/31 13:33 upstream fe46a7dd189e 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/03/23 00:57 upstream fe46a7dd189e 7a239ce7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/11 01:59 linux-next af2ea8ab7a54 cfc402b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/11 01:59 linux-next af2ea8ab7a54 cfc402b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/11 00:52 linux-next af2ea8ab7a54 cfc402b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/11 00:52 linux-next af2ea8ab7a54 cfc402b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/12/11 00:51 linux-next af2ea8ab7a54 cfc402b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/09/29 01:23 linux-next 40e0c9d414f5 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/06/12 06:27 linux-next a957267fa7e9 4d75f4f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/06/11 02:27 linux-next d35b2284e966 048c640a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/02/17 23:11 linux-next 2c3b09aac00d 578f7538 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in vfs_unlink
2024/09/26 09:37 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 5f5673607153 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/07/29 04:12 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci c912bf709078 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/07/02 15:24 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci fdd6064ff31c 8373af66 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/07/02 15:22 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci fdd6064ff31c 8373af66 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/07/02 15:21 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci fdd6064ff31c 8373af66 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/05/15 12:02 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci fda5695d692c fdb4c10c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/04/26 00:41 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 6a71d2909427 8bdc0f22 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/04/25 23:59 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 6a71d2909427 8bdc0f22 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/04/18 17:45 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci b5d2afe8745b af24b050 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
2024/04/16 12:40 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci b5d2afe8745b 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in vfs_unlink
* Struck through repros no longer work on HEAD.