loop2: detected capacity change from 0 to 1024
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.2.2186/16795 is trying to acquire lock:
ffff88802510e0b0 (&tree->tree_lock/1){+.+.}-{4:4}, at: hfsplus_find_init+0x186/0x2d0 fs/hfsplus/bfind.c:28

but task is already holding lock:
ffff888055eab048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{4:4}, at: hfsplus_file_truncate+0x211/0xcb0 fs/hfsplus/extents.c:573

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:598 [inline]
       __mutex_lock+0x193/0x1060 kernel/locking/mutex.c:760
       hfsplus_file_extend+0x1ca/0x12b0 fs/hfsplus/extents.c:453
       hfsplus_bmap_reserve+0x31f/0x420 fs/hfsplus/btree.c:358
       __hfsplus_ext_write_extent+0x474/0x5e0 fs/hfsplus/extents.c:104
       __hfsplus_ext_cache_extent+0x98/0x9d0 fs/hfsplus/extents.c:186
       hfsplus_file_truncate+0x44d/0xcb0 fs/hfsplus/extents.c:596
       hfsplus_setattr+0x19f/0x320 fs/hfsplus/inode.c:266
       notify_change+0x6d2/0x12a0 fs/attr.c:546
       do_truncate+0x1d7/0x230 fs/open.c:68
       vfs_truncate+0x5d6/0x6e0 fs/open.c:118
       do_sys_truncate fs/open.c:141 [inline]
       __do_sys_truncate fs/open.c:153 [inline]
       __se_sys_truncate fs/open.c:151 [inline]
       __x64_sys_truncate+0x172/0x1e0 fs/open.c:151
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&tree->tree_lock/1){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x126f/0x1c90 kernel/locking/lockdep.c:5237
       lock_acquire kernel/locking/lockdep.c:5868 [inline]
       lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5825
       __mutex_lock_common kernel/locking/mutex.c:598 [inline]
       __mutex_lock+0x193/0x1060 kernel/locking/mutex.c:760
       hfsplus_find_init+0x186/0x2d0 fs/hfsplus/bfind.c:28
       hfsplus_file_truncate+0x2b8/0xcb0 fs/hfsplus/extents.c:579
       hfsplus_delete_inode+0x18f/0x220 fs/hfsplus/inode.c:452
       hfsplus_unlink+0x581/0x7f0 fs/hfsplus/dir.c:405
       hfsplus_rename+0xbc/0x200 fs/hfsplus/dir.c:547
       vfs_rename+0xfa3/0x2290 fs/namei.c:5216
       do_renameat2+0x7d8/0xc20 fs/namei.c:5364
       __do_sys_rename fs/namei.c:5411 [inline]
       __se_sys_rename fs/namei.c:5409 [inline]
       __x64_sys_rename+0x7d/0xa0 fs/namei.c:5409
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock/1);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&tree->tree_lock/1);

 *** DEADLOCK ***

6 locks held by syz.2.2186/16795:
 #0: ffff888059b7a420 (sb_writers#12){.+.+}-{0:0}, at: do_renameat2+0x439/0xc20 fs/namei.c:5306
 #1: ffff888055eaeef8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1025 [inline]
 #1: ffff888055eaeef8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3360 [inline]
 #1: ffff888055eaeef8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3357 [inline]
 #1: ffff888055eaeef8 (&type->i_mutex_dir_key#8/1){+.+.}-{4:4}, at: do_renameat2+0xb42/0xc20 fs/namei.c:5311
 #2: ffff888055eab238 (&sb->s_type->i_mutex_key#46){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:980 [inline]
 #2: ffff888055eab238 (&sb->s_type->i_mutex_key#46){+.+.}-{4:4}, at: lock_two_nondirectories+0xd1/0x200 fs/inode.c:1232
 #3: ffff888055eaf5b8 (&sb->s_type->i_mutex_key#46/4){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1025 [inline]
 #3: ffff888055eaf5b8 (&sb->s_type->i_mutex_key#46/4){+.+.}-{4:4}, at: lock_two_nondirectories+0xed/0x200 fs/inode.c:1234
 #4: ffff88803c2cc998 (&sbi->vh_mutex){+.+.}-{4:4}, at: hfsplus_unlink+0x183/0x7f0 fs/hfsplus/dir.c:370
 #5: ffff888055eab048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{4:4}, at: hfsplus_file_truncate+0x211/0xcb0 fs/hfsplus/extents.c:573

stack backtrace:
CPU: 0 UID: 0 PID: 16795 Comm: syz.2.2186 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x275/0x350 kernel/locking/lockdep.c:2043
 check_noncircular+0x14c/0x170 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x126f/0x1c90 kernel/locking/lockdep.c:5237
 lock_acquire kernel/locking/lockdep.c:5868 [inline]
 lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5825
 __mutex_lock_common kernel/locking/mutex.c:598 [inline]
 __mutex_lock+0x193/0x1060 kernel/locking/mutex.c:760
 hfsplus_find_init+0x186/0x2d0 fs/hfsplus/bfind.c:28
 hfsplus_file_truncate+0x2b8/0xcb0 fs/hfsplus/extents.c:579
 hfsplus_delete_inode+0x18f/0x220 fs/hfsplus/inode.c:452
 hfsplus_unlink+0x581/0x7f0 fs/hfsplus/dir.c:405
 hfsplus_rename+0xbc/0x200 fs/hfsplus/dir.c:547
 vfs_rename+0xfa3/0x2290 fs/namei.c:5216
 do_renameat2+0x7d8/0xc20 fs/namei.c:5364
 __do_sys_rename fs/namei.c:5411 [inline]
 __se_sys_rename fs/namei.c:5409 [inline]
 __x64_sys_rename+0x7d/0xa0 fs/namei.c:5409
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fea9978efc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fea9a573038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007fea999e5fa0 RCX: 00007fea9978efc9
RDX: 0000000000000000 RSI: 00002000000002c0 RDI: 0000200000000580
RBP: 00007fea99811f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fea999e6038 R14: 00007fea999e5fa0 R15: 00007ffd0184ee58