syzbot


possible deadlock in evict (4)

Status: upstream: reported on 2025/11/26 23:09
Subsystems: ext4
Reported-by: syzbot+a30a00d3e694e4fa1315@syzkaller.appspotmail.com
First crash: 52d, last: 4d08h
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [ext4?] possible deadlock in evict (4) 0 (1) 2025/11/26 23:09
Similar bugs (3)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream possible deadlock in evict (2) ext4 4 39 1061d 1364d 0/29 closed as dup on 2022/04/21 19:37
upstream possible deadlock in evict ext4 4 3 1768d 1796d 0/29 auto-closed as invalid on 2021/07/11 09:40
upstream possible deadlock in evict (3) ext4 4 569 567d 1050d 0/29 auto-obsoleted due to no activity on 2024/09/04 11:02

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Tainted: G             L     
------------------------------------------------------
syz.1.736/8722 is trying to acquire lock:
ffff88802cdf4610 (sb_internal){.+.+}-{0:0}, at: evict+0x3c2/0xad0 fs/inode.c:837

but task is already holding lock:
ffff88802cdf6b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: ext4_writepages_down_write fs/ext4/ext4.h:1832 [inline]
ffff88802cdf6b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: ext4_ext_migrate+0x39c/0x1ee0 fs/ext4/migrate.c:438

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sbi->s_writepages_rwsem){++++}-{0:0}:
       percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:77 [inline]
       ext4_writepages_down_read fs/ext4/ext4.h:1820 [inline]
       ext4_writepages+0x224/0x7d0 fs/ext4/inode.c:3025
       do_writepages+0x27a/0x600 mm/page-writeback.c:2598
       __writeback_single_inode+0x168/0x14a0 fs/fs-writeback.c:1737
       writeback_single_inode+0x425/0x10f0 fs/fs-writeback.c:1858
       write_inode_now+0x170/0x1e0 fs/fs-writeback.c:2924
       iput_final fs/inode.c:1944 [inline]
       iput.part.0+0x815/0x1190 fs/inode.c:2006
       iput+0x35/0x40 fs/inode.c:1969
       ext4_xattr_block_set+0x67c/0x3640 fs/ext4/xattr.c:2203
       ext4_xattr_move_to_block fs/ext4/xattr.c:2668 [inline]
       ext4_xattr_make_inode_space fs/ext4/xattr.c:2743 [inline]
       ext4_expand_extra_isize_ea+0x1442/0x1ab0 fs/ext4/xattr.c:2831
       __ext4_expand_extra_isize+0x346/0x480 fs/ext4/inode.c:6349
       ext4_try_to_expand_extra_isize fs/ext4/inode.c:6392 [inline]
       __ext4_mark_inode_dirty+0x544/0x840 fs/ext4/inode.c:6470
       ext4_evict_inode+0x713/0x1730 fs/ext4/inode.c:253
       evict+0x3c2/0xad0 fs/inode.c:837
       iput_final fs/inode.c:1954 [inline]
       iput.part.0+0x621/0x1190 fs/inode.c:2006
       iput+0x35/0x40 fs/inode.c:1969
       ext4_orphan_cleanup+0x731/0x11e0 fs/ext4/orphan.c:472
       __ext4_fill_super fs/ext4/super.c:5658 [inline]
       ext4_fill_super+0x7ec1/0xb570 fs/ext4/super.c:5777
       get_tree_bdev_flags+0x38c/0x620 fs/super.c:1691
       vfs_get_tree+0x8e/0x330 fs/super.c:1751
       fc_mount fs/namespace.c:1199 [inline]
       do_new_mount_fc fs/namespace.c:3636 [inline]
       do_new_mount fs/namespace.c:3712 [inline]
       path_mount+0x7bf/0x23a0 fs/namespace.c:4022
       do_mount fs/namespace.c:4035 [inline]
       __do_sys_mount fs/namespace.c:4224 [inline]
       __se_sys_mount fs/namespace.c:4201 [inline]
       __x64_sys_mount+0x293/0x310 fs/namespace.c:4201
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (sb_internal){.+.+}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x1669/0x2890 kernel/locking/lockdep.c:5237
       lock_acquire kernel/locking/lockdep.c:5868 [inline]
       lock_acquire+0x179/0x330 kernel/locking/lockdep.c:5825
       percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
       percpu_down_read_freezable include/linux/percpu-rwsem.h:83 [inline]
       __sb_start_write include/linux/fs/super.h:19 [inline]
       sb_start_intwrite include/linux/fs/super.h:177 [inline]
       ext4_evict_inode+0xccd/0x1730 fs/ext4/inode.c:214
       evict+0x3c2/0xad0 fs/inode.c:837
       iput_final fs/inode.c:1954 [inline]
       iput.part.0+0x621/0x1190 fs/inode.c:2006
       iput+0x35/0x40 fs/inode.c:1969
       ext4_ext_migrate+0xc6f/0x1ee0 fs/ext4/migrate.c:588
       __ext4_ioctl+0x1de3/0x4220 fs/ext4/ioctl.c:1688
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:597 [inline]
       __se_sys_ioctl fs/ioctl.c:583 [inline]
       __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sbi->s_writepages_rwsem);
                               lock(sb_internal);
                               lock(&sbi->s_writepages_rwsem);
  rlock(sb_internal);

 *** DEADLOCK ***

3 locks held by syz.1.736/8722:
 #0: ffff88802cdf4420 (sb_writers#4){.+.+}-{0:0}, at: __ext4_ioctl+0x1db1/0x4220 fs/ext4/ioctl.c:1678
 #1: ffff888058a4bda0 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #1: ffff888058a4bda0 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: __ext4_ioctl+0x1ddb/0x4220 fs/ext4/ioctl.c:1687
 #2: ffff88802cdf6b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: ext4_writepages_down_write fs/ext4/ext4.h:1832 [inline]
 #2: ffff88802cdf6b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: ext4_ext_migrate+0x39c/0x1ee0 fs/ext4/migrate.c:438

stack backtrace:
CPU: 0 UID: 0 PID: 8722 Comm: syz.1.736 Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x275/0x340 kernel/locking/lockdep.c:2043
 check_noncircular+0x146/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x1669/0x2890 kernel/locking/lockdep.c:5237
 lock_acquire kernel/locking/lockdep.c:5868 [inline]
 lock_acquire+0x179/0x330 kernel/locking/lockdep.c:5825
 percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
 percpu_down_read_freezable include/linux/percpu-rwsem.h:83 [inline]
 __sb_start_write include/linux/fs/super.h:19 [inline]
 sb_start_intwrite include/linux/fs/super.h:177 [inline]
 ext4_evict_inode+0xccd/0x1730 fs/ext4/inode.c:214
 evict+0x3c2/0xad0 fs/inode.c:837
 iput_final fs/inode.c:1954 [inline]
 iput.part.0+0x621/0x1190 fs/inode.c:2006
 iput+0x35/0x40 fs/inode.c:1969
 ext4_ext_migrate+0xc6f/0x1ee0 fs/ext4/migrate.c:588
 __ext4_ioctl+0x1de3/0x4220 fs/ext4/ioctl.c:1688
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl fs/ioctl.c:583 [inline]
 __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb59998f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fb597bf6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fb599be6090 RCX: 00007fb59998f749
RDX: 0000000000000000 RSI: 0000000000006609 RDI: 0000000000000004
RBP: 00007fb599a13f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb599be6128 R14: 00007fb599be6090 R15: 00007ffdd3aefbd8
 </TASK>

Crashes (10):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/01/10 07:01 upstream 372800cb95a3 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2026/01/10 07:01 upstream 372800cb95a3 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2026/01/01 02:13 upstream 9528d5c091c5 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/12/24 20:35 upstream b927546677c8 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/12/21 20:10 upstream 9094662f6707 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/12/02 15:56 upstream 4a26e7032d7d d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/12/01 00:04 upstream e69c7c175115 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/11/26 09:41 upstream 30f09200cc4a 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/11/26 09:41 upstream 30f09200cc4a 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
2025/11/22 22:59 upstream 2eba5e05d9bc 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in evict
* Struck through repros no longer work on HEAD.