syzbot


possible deadlock in hfsplus_get_block

Status: upstream: reported on 2025/06/17 11:25
Reported-by: syzbot+5620460156f848837a86@syzkaller.appspotmail.com
First crash: 21h13m ago, last: 21h13m ago
Similar bugs (5)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | possible deadlock in hfsplus_get_block (origin:upstream, missing-backport) | C | | | 1200 | 1d08h | 826d | 0/3 | upstream: reported C repro on 2023/03/14 12:03
linux-4.19 | possible deadlock in hfsplus_get_block (hfsplus) | C | | | 248 | 834d | 935d | 0/1 | upstream: reported C repro on 2022/11/26 01:19
linux-4.14 | possible deadlock in hfsplus_get_block (hfsplus) | C | | | 131 | 839d | 931d | 0/1 | upstream: reported C repro on 2022/11/30 01:33
linux-6.1 | possible deadlock in hfsplus_get_block (origin:lts-only) | C | | | 1079 | 1d23h | 827d | 0/3 | upstream: reported C repro on 2023/03/13 14:59
upstream | possible deadlock in hfsplus_get_block (hfs) | C | error | | 13244 | 2h51m | 935d | 0/28 | upstream: reported C repro on 2022/11/25 09:45

Sample crash report:
loop1: detected capacity change from 0 to 1024
============================================
WARNING: possible recursive locking detected
6.6.93-syzkaller #0 Not tainted
--------------------------------------------
syz.1.51/5975 is trying to acquire lock:
ffff888021915f88 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_get_block+0x39f/0x1530 fs/hfsplus/extents.c:260

but task is already holding lock:
ffff8880219173c8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x293/0xb40 fs/hfsplus/extents.c:577

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by syz.1.51/5975:
 #0: ffff88807e676418 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:403
 #1: ffff8880219175d0 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
 #1: ffff8880219175d0 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: do_truncate+0x187/0x220 fs/open.c:64
 #2: ffff8880219173c8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x293/0xb40 fs/hfsplus/extents.c:577
 #3: ffff888053ec00f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xc3/0x4b0 fs/hfsplus/bitmap.c:182

stack backtrace:
CPU: 0 PID: 5975 Comm: syz.1.51 Not tainted 6.6.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 check_deadlock kernel/locking/lockdep.c:3062 [inline]
 validate_chain kernel/locking/lockdep.c:3856 [inline]
 __lock_acquire+0x5d40/0x7c80 kernel/locking/lockdep.c:5137
 lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
 hfsplus_get_block+0x39f/0x1530 fs/hfsplus/extents.c:260
 block_read_full_folio+0x42e/0xf40 fs/buffer.c:2406
 filemap_read_folio+0x167/0x760 mm/filemap.c:2420
 do_read_cache_folio+0x470/0x7e0 mm/filemap.c:3789
 do_read_cache_page+0x32/0x250 mm/filemap.c:3855
 read_mapping_page include/linux/pagemap.h:892 [inline]
 hfsplus_block_free+0x12c/0x4b0 fs/hfsplus/bitmap.c:185
 hfsplus_free_extents+0x176/0xac0 fs/hfsplus/extents.c:363
 hfsplus_file_truncate+0x735/0xb40 fs/hfsplus/extents.c:592
 hfsplus_setattr+0x1c3/0x280 fs/hfsplus/inode.c:269
 notify_change+0xb0d/0xe10 fs/attr.c:499
 do_truncate+0x19b/0x220 fs/open.c:66
 handle_truncate fs/namei.c:3291 [inline]
 do_open fs/namei.c:3636 [inline]
 path_openat+0x298c/0x3190 fs/namei.c:3789
 do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
 do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
 do_sys_open fs/open.c:1434 [inline]
 __do_sys_creat fs/open.c:1512 [inline]
 __se_sys_creat fs/open.c:1506 [inline]
 __x64_sys_creat+0x90/0xb0 fs/open.c:1506
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fc4ea38e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fc4eb137038 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007fc4ea5b5fa0 RCX: 00007fc4ea38e929
RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000200000000200
RBP: 00007fc4ea410b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fc4ea5b5fa0 R15: 00007fffe84c8d78
 </TASK>
hfsplus: unable to mark blocks free: error -5
hfsplus: can't free extent

Crashes (2):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/06/17 11:24 | linux-6.6.y | c2603c511feb | cfebc887 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in hfsplus_get_block
2025/06/17 11:24 | linux-6.6.y | c2603c511feb | cfebc887 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in hfsplus_get_block