syzbot
loop1: detected capacity change from 0 to 1024
hfsplus: xattr searching failed
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.1.580/6760 is trying to acquire lock:
ffff888076ec60f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xc1/0x4d0 fs/hfsplus/bitmap.c:182
but task is already holding lock:
ffff8880197b3708 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x2a0/0xb40 fs/hfsplus/extents.c:574
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
__mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfsplus_get_block+0x39b/0x1530 fs/hfsplus/extents.c:260
block_read_full_page+0x2e8/0xd10 fs/buffer.c:2290
do_read_cache_page+0x8a1/0x1030 mm/filemap.c:-1
read_mapping_page include/linux/pagemap.h:515 [inline]
hfsplus_block_allocate+0xf4/0x900 fs/hfsplus/bitmap.c:37
hfsplus_file_extend+0xa8e/0x1950 fs/hfsplus/extents.c:466
hfsplus_get_block+0x40e/0x1530 fs/hfsplus/extents.c:245
__block_write_begin_int+0x54e/0x15a0 fs/buffer.c:2012
__block_write_begin fs/buffer.c:2062 [inline]
block_write_begin fs/buffer.c:2122 [inline]
cont_write_begin+0x58a/0x7b0 fs/buffer.c:2471
hfsplus_write_begin+0x92/0xe0 fs/hfsplus/inode.c:53
__page_symlink+0xf6/0x1f0 fs/namei.c:5199
hfsplus_symlink+0xc6/0x260 fs/hfsplus/dir.c:449
vfs_symlink+0x247/0x3d0 fs/namei.c:4437
do_symlinkat+0x1be/0x6c0 fs/namei.c:4466
__do_sys_symlink fs/namei.c:4488 [inline]
__se_sys_symlink fs/namei.c:4486 [inline]
__x64_sys_symlink+0x7a/0x90 fs/namei.c:4486
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfsplus_block_free+0xc1/0x4d0 fs/hfsplus/bitmap.c:182
hfsplus_free_extents+0x42d/0xa50 fs/hfsplus/extents.c:371
hfsplus_file_truncate+0x745/0xb40 fs/hfsplus/extents.c:589
hfsplus_setattr+0x1c0/0x280 fs/hfsplus/inode.c:267
notify_change+0xbcd/0xee0 fs/attr.c:505
do_truncate+0x197/0x220 fs/open.c:65
vfs_truncate+0x262/0x2f0 fs/open.c:111
do_sys_truncate+0xdc/0x190 fs/open.c:134
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&HFSPLUS_I(inode)->extents_lock);
lock(&sbi->alloc_mutex);
lock(&HFSPLUS_I(inode)->extents_lock);
lock(&sbi->alloc_mutex);
*** DEADLOCK ***
3 locks held by syz.1.580/6760:
#0: ffff888076d26460 (sb_writers#17){.+.+}-{0:0}, at: mnt_want_write+0x3d/0x90 fs/namespace.c:386
#1: ffff8880197b3900 (&sb->s_type->i_mutex_key#30){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
#1: ffff8880197b3900 (&sb->s_type->i_mutex_key#30){+.+.}-{3:3}, at: do_truncate+0x183/0x220 fs/open.c:63
#2: ffff8880197b3708 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x2a0/0xb40 fs/hfsplus/extents.c:574
stack backtrace:
CPU: 0 PID: 6760 Comm: syz.1.580 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
<TASK>
dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2133
check_prev_add kernel/locking/lockdep.c:3053 [inline]
check_prevs_add kernel/locking/lockdep.c:3172 [inline]
validate_chain kernel/locking/lockdep.c:3788 [inline]
__lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
__mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
__mutex_lock kernel/locking/mutex.c:729 [inline]
mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
hfsplus_block_free+0xc1/0x4d0 fs/hfsplus/bitmap.c:182
hfsplus_free_extents+0x42d/0xa50 fs/hfsplus/extents.c:371
hfsplus_file_truncate+0x745/0xb40 fs/hfsplus/extents.c:589
hfsplus_setattr+0x1c0/0x280 fs/hfsplus/inode.c:267
notify_change+0xbcd/0xee0 fs/attr.c:505
do_truncate+0x197/0x220 fs/open.c:65
vfs_truncate+0x262/0x2f0 fs/open.c:111
do_sys_truncate+0xdc/0x190 fs/open.c:134
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f00b6659fc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f00b48c1038 EFLAGS: 00000246 ORIG_RAX: 000000000000004c
RAX: ffffffffffffffda RBX: 00007f00b68b0fa0 RCX: 00007f00b6659fc9
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000200000000140
RBP: 00007f00b66dcf91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f00b68b1038 R14: 00007f00b68b0fa0 R15: 00007ffce0192e68
</TASK>
| Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2025/10/25 10:57 | linux-5.15.y | ac56c046adf4 | c0460fcd | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-5-15-kasan | possible deadlock in hfsplus_block_free |