syzbot


possible deadlock in diFree (3)

Status: upstream: reported on 2025/12/10 21:46
Subsystems: jfs
Reported-by: syzbot+1bcae2d9e9040bb283cc@syzkaller.appspotmail.com
First crash: 64d, last: 2h09m
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [jfs?] possible deadlock in diFree (3) 0 (1) 2025/12/10 21:46
Similar bugs (3)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream possible deadlock in diFree (2) jfs 4 C 40 319d 444d 28/29 fixed on 2025/06/10 16:19
upstream possible deadlock in diFree jfs 4 91 551d 657d 0/29 auto-obsoleted due to no activity on 2024/10/15 15:16
linux-6.1 possible deadlock in diFree 4 1 320d 320d 0/3 auto-obsoleted due to no activity on 2025/07/03 19:22

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/72 is trying to acquire lock:
ffff888037ad8920 (&(imap->im_aglock[index])){+.+.}-{4:4}, at: diFree+0x2e9/0x2ca0 fs/jfs/jfs_imap.c:889

but task is already holding lock:
ffffffff8e67e8a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6975 [inline]
ffffffff8e67e8a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x90d/0x2800 mm/vmscan.c:7354

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4331 [inline]
       fs_reclaim_acquire+0x71/0x100 mm/page_alloc.c:4345
       might_alloc include/linux/sched/mm.h:317 [inline]
       slab_pre_alloc_hook mm/slub.c:4904 [inline]
       slab_alloc_node mm/slub.c:5239 [inline]
       __do_kmalloc_node mm/slub.c:5656 [inline]
       __kmalloc_noprof+0x9c/0x7e0 mm/slub.c:5669
       kmalloc_noprof include/linux/slab.h:961 [inline]
       __jfs_set_acl+0x9d/0x1c0 fs/jfs/acl.c:80
       jfs_set_acl+0x1da/0x320 fs/jfs/acl.c:115
       set_posix_acl fs/posix_acl.c:954 [inline]
       vfs_set_acl+0x87d/0xb00 fs/posix_acl.c:1133
       do_set_acl+0xf5/0x190 fs/posix_acl.c:1278
       do_setxattr fs/xattr.c:633 [inline]
       filename_setxattr+0x2fc/0x630 fs/xattr.c:665
       path_setxattrat+0x3f3/0x430 fs/xattr.c:713
       __do_sys_setxattr fs/xattr.c:747 [inline]
       __se_sys_setxattr fs/xattr.c:743 [inline]
       __x64_sys_setxattr+0xbc/0xe0 fs/xattr.c:743
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&jfs_ip->commit_mutex){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:614 [inline]
       __mutex_lock+0x19f/0x1300 kernel/locking/mutex.c:776
       diNewIAG fs/jfs/jfs_imap.c:2522 [inline]
       diAllocExt fs/jfs/jfs_imap.c:1905 [inline]
       diAllocAG+0x145b/0x1db0 fs/jfs/jfs_imap.c:1669
       diAlloc+0x1d5/0x1680 fs/jfs/jfs_imap.c:1590
       ialloc+0x8c/0x8f0 fs/jfs/jfs_inode.c:56
       jfs_mkdir+0x1e1/0xb00 fs/jfs/namei.c:225
       vfs_mkdir+0x753/0x870 fs/namei.c:5139
       do_mkdirat+0x27d/0x4b0 fs/namei.c:5173
       __do_sys_mkdirat fs/namei.c:5195 [inline]
       __se_sys_mkdirat fs/namei.c:5193 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:5193
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&jfs_ip->rdwrlock/1){++++}-{4:4}:
       down_read_nested+0x49/0x2e0 kernel/locking/rwsem.c:1662
       diAlloc+0x795/0x1680 fs/jfs/jfs_imap.c:1388
       ialloc+0x8c/0x8f0 fs/jfs/jfs_inode.c:56
       jfs_create+0x1da/0xb10 fs/jfs/namei.c:92
       lookup_open fs/namei.c:4449 [inline]
       open_last_lookups fs/namei.c:4549 [inline]
       path_openat+0x18dd/0x3e20 fs/namei.c:4793
       do_filp_open+0x22d/0x490 fs/namei.c:4823
       do_sys_openat2+0x12f/0x220 fs/open.c:1430
       do_sys_open fs/open.c:1436 [inline]
       __do_sys_openat fs/open.c:1452 [inline]
       __se_sys_openat fs/open.c:1447 [inline]
       __x64_sys_openat+0x138/0x170 fs/open.c:1447
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&(imap->im_aglock[index])){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/mutex.c:614 [inline]
       __mutex_lock+0x19f/0x1300 kernel/locking/mutex.c:776
       diFree+0x2e9/0x2ca0 fs/jfs/jfs_imap.c:889
       jfs_evict_inode+0x331/0x440 fs/jfs/inode.c:162
       evict+0x61e/0xb10 fs/inode.c:837
       __dentry_kill+0x1a2/0x5e0 fs/dcache.c:670
       shrink_kill+0xa9/0x2c0 fs/dcache.c:1147
       shrink_dentry_list+0x2e0/0x5e0 fs/dcache.c:1174
       prune_dcache_sb+0x119/0x180 fs/dcache.c:1256
       super_cache_scan+0x369/0x4b0 fs/super.c:222
       do_shrink_slab+0x6df/0x10d0 mm/shrinker.c:437
       shrink_slab_memcg mm/shrinker.c:550 [inline]
       shrink_slab+0x830/0x1150 mm/shrinker.c:628
       shrink_one+0x2d9/0x710 mm/vmscan.c:4921
       shrink_many mm/vmscan.c:4982 [inline]
       lru_gen_shrink_node mm/vmscan.c:5060 [inline]
       shrink_node+0x2f8b/0x35f0 mm/vmscan.c:6047
       kswapd_shrink_node mm/vmscan.c:6901 [inline]
       balance_pgdat mm/vmscan.c:7084 [inline]
       kswapd+0x144c/0x2800 mm/vmscan.c:7354
       kthread+0x726/0x8b0 kernel/kthread.c:463
       ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

other info that might help us debug this:

Chain exists of:
  &(imap->im_aglock[index]) --> &jfs_ip->commit_mutex --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&jfs_ip->commit_mutex);
                               lock(fs_reclaim);
  lock(&(imap->im_aglock[index]));

 *** DEADLOCK ***

2 locks held by kswapd0/72:
 #0: ffffffff8e67e8a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6975 [inline]
 #0: ffffffff8e67e8a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x90d/0x2800 mm/vmscan.c:7354
 #1: ffff88804272a0e0 (&type->s_umount_key#51){.+.+}-{4:4}, at: super_trylock_shared fs/super.c:563 [inline]
 #1: ffff88804272a0e0 (&type->s_umount_key#51){.+.+}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:197

stack backtrace:
CPU: 0 UID: 0 PID: 72 Comm: kswapd0 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/mutex.c:614 [inline]
 __mutex_lock+0x19f/0x1300 kernel/locking/mutex.c:776
 diFree+0x2e9/0x2ca0 fs/jfs/jfs_imap.c:889
 jfs_evict_inode+0x331/0x440 fs/jfs/inode.c:162
 evict+0x61e/0xb10 fs/inode.c:837
 __dentry_kill+0x1a2/0x5e0 fs/dcache.c:670
 shrink_kill+0xa9/0x2c0 fs/dcache.c:1147
 shrink_dentry_list+0x2e0/0x5e0 fs/dcache.c:1174
 prune_dcache_sb+0x119/0x180 fs/dcache.c:1256
 super_cache_scan+0x369/0x4b0 fs/super.c:222
 do_shrink_slab+0x6df/0x10d0 mm/shrinker.c:437
 shrink_slab_memcg mm/shrinker.c:550 [inline]
 shrink_slab+0x830/0x1150 mm/shrinker.c:628
 shrink_one+0x2d9/0x710 mm/vmscan.c:4921
 shrink_many mm/vmscan.c:4982 [inline]
 lru_gen_shrink_node mm/vmscan.c:5060 [inline]
 shrink_node+0x2f8b/0x35f0 mm/vmscan.c:6047
 kswapd_shrink_node mm/vmscan.c:6901 [inline]
 balance_pgdat mm/vmscan.c:7084 [inline]
 kswapd+0x144c/0x2800 mm/vmscan.c:7354
 kthread+0x726/0x8b0 kernel/kthread.c:463
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
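
The lockdep report above can be read as a directed graph of "held while acquiring" edges: #3 gives commit_mutex -> fs_reclaim (kmalloc under commit_mutex in __jfs_set_acl), #2 gives rdwrlock -> commit_mutex (diNewIAG), #1 gives im_aglock -> rdwrlock (diAlloc), and the new acquisition at #0 adds fs_reclaim -> im_aglock (kswapd holds fs_reclaim while diFree takes the AG lock), closing the cycle. As an illustrative sketch only (not part of the syzbot report; lock names are simplified strings), the cycle check that lockdep's check_noncircular() performs can be mirrored with a small depth-first search:

```python
# Illustrative sketch: the report's dependency edges as a directed
# graph, A -> B meaning "A was held while B was acquired".
deps = {
    "jfs_ip->commit_mutex": {"fs_reclaim"},        # -> #3: kmalloc under commit_mutex
    "jfs_ip->rdwrlock": {"jfs_ip->commit_mutex"},  # -> #2: diNewIAG takes commit_mutex
    "imap->im_aglock": {"jfs_ip->rdwrlock"},       # -> #1: diAlloc takes rdwrlock
    "fs_reclaim": set(),
}

# The new dependency reported at -> #0: kswapd holds fs_reclaim
# while diFree() tries to take im_aglock. This edge closes the loop.
deps["fs_reclaim"].add("imap->im_aglock")

def find_cycle(graph):
    """DFS returning one cycle as a node list (first == last), or None."""
    state = {}  # node -> "visiting" | "done"
    path = []

    def visit(node):
        state[node] = "visiting"
        path.append(node)
        for nxt in graph.get(node, ()):
            if state.get(nxt) == "visiting":
                # Back edge: slice out the loop and close it.
                return path[path.index(nxt):] + [nxt]
            if nxt not in state:
                cycle = visit(nxt)
                if cycle:
                    return cycle
        path.pop()
        state[node] = "done"
        return None

    for node in graph:
        if node not in state:
            cycle = visit(node)
            if cycle:
                return cycle
    return None

cycle = find_cycle(deps)
print(" -> ".join(cycle))
```

All four locks from the report participate in the cycle, which is why lockdep flags the diFree() acquisition: any lock taken under fs_reclaim (i.e., during direct reclaim or kswapd) must never, on another path, be held while an allocation can enter reclaim.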

Crashes (2):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/09 01:08 upstream e98f34af6116 4c131dc4 .config console log report [disk image (non-bootable)] [vmlinux] [kernel image] ci-snapshot-upstream-root possible deadlock in diFree
2025/12/06 21:41 upstream 416f99c3b16f d1b870e1 .config console log report [disk image (non-bootable)] [vmlinux] [kernel image] ci-snapshot-upstream-root possible deadlock in diFree
* Struck through repros no longer work on HEAD.