possible deadlock in do_qc

Status: upstream: reported on 2025/06/17 09:35
Reported-by: syzbot+0523e0803f6b4091fb43@syzkaller.appspotmail.com
First crash: 22h35m, last: 22h35m
Similar bugs (3)

Kernel      Title                            Repro  Cause bisect  Fix bisect  Count  Last    Reported  Patched  Status
upstream    possible deadlock in do_qc gfs2  -      -             -           785    308d    437d      0/28     auto-obsoleted due to no activity on 2024/10/22 09:11
linux-5.15  possible deadlock in do_qc       -      -             -           514    13h33m  416d      0/3      upstream: reported on 2024/04/27 22:18
linux-6.1   possible deadlock in do_qc       -      -             -           382    1d22h   416d      0/3      upstream: reported on 2024/04/28 01:25

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.6.93-syzkaller #0 Not tainted
------------------------------------------------------
syz.0.2713/15850 is trying to acquire lock:
ffff8880797d4ae8 (&sdp->sd_quota_mutex){+.+.}-{3:3}, at: do_qc+0xaf/0x670 fs/gfs2/quota.c:689

but task is already holding lock:
ffff8880227c5688 (&ip->i_rw_mutex){++++}-{3:3}, at: sweep_bh_for_rgrps fs/gfs2/bmap.c:1526 [inline]
ffff8880227c5688 (&ip->i_rw_mutex){++++}-{3:3}, at: punch_hole+0x1c7d/0x2d70 fs/gfs2/bmap.c:1850

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&ip->i_rw_mutex){++++}-{3:3}:
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1520
       __gfs2_iomap_get+0x159/0x13f0 fs/gfs2/bmap.c:856
       gfs2_iomap_get+0xc5/0x120 fs/gfs2/bmap.c:1410
       bh_get+0x225/0x710 fs/gfs2/quota.c:401
       qdsb_get+0x210/0x330 fs/gfs2/quota.c:535
       gfs2_quota_hold+0x198/0x5e0 fs/gfs2/quota.c:615
       punch_hole+0x974/0x2d70 fs/gfs2/bmap.c:1811
       gfs2_iomap_end+0x4f6/0x6b0 fs/gfs2/bmap.c:1171
       iomap_iter+0x21e/0xec0 fs/iomap/iter.c:79
       iomap_file_buffered_write+0xb8b/0xd10 fs/iomap/buffered-io.c:974
       gfs2_file_buffered_write+0x4bd/0x820 fs/gfs2/file.c:1060
       gfs2_file_write_iter+0x427/0xe10 fs/gfs2/file.c:1158
       call_write_iter include/linux/fs.h:2018 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x43b/0x940 fs/read_write.c:584
       ksys_write+0x147/0x250 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&sdp->sd_quota_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
       lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
       do_qc+0xaf/0x670 fs/gfs2/quota.c:689
       gfs2_quota_change+0x2b9/0x800 fs/gfs2/quota.c:1289
       punch_hole+0x280c/0x2d70 fs/gfs2/bmap.c:1951
       gfs2_iomap_end+0x4f6/0x6b0 fs/gfs2/bmap.c:1171
       iomap_iter+0x21e/0xec0 fs/iomap/iter.c:79
       iomap_file_buffered_write+0xb8b/0xd10 fs/iomap/buffered-io.c:974
       gfs2_file_buffered_write+0x4bd/0x820 fs/gfs2/file.c:1060
       gfs2_file_write_iter+0x427/0xe10 fs/gfs2/file.c:1158
       call_write_iter include/linux/fs.h:2018 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x43b/0x940 fs/read_write.c:584
       ksys_write+0x147/0x250 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ip->i_rw_mutex);
                               lock(&sdp->sd_quota_mutex);
                               lock(&ip->i_rw_mutex);
  lock(&sdp->sd_quota_mutex);

 *** DEADLOCK ***

6 locks held by syz.0.2713/15850:
 #0: ffff888066b0c0c8 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x2a3/0x330 fs/file.c:1042
 #1: ffff88805ac2e418 (sb_writers#32){.+.+}-{0:0}, at: vfs_write+0x20e/0x940 fs/read_write.c:580
 #2: ffff8880227c5200 (&sb->s_type->i_mutex_key#38){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
 #2: ffff8880227c5200 (&sb->s_type->i_mutex_key#38){+.+.}-{3:3}, at: gfs2_file_write_iter+0x304/0xe10 fs/gfs2/file.c:1115
 #3: ffff88805ac2e608 (sb_internal#6){.+.+}-{0:0}, at: gfs2_trans_begin+0x6f/0xe0 fs/gfs2/trans.c:118
 #4: ffff8880797d5060 (&sdp->sd_log_flush_lock){.+.+}-{3:3}, at: __gfs2_trans_begin+0x511/0x880 fs/gfs2/trans.c:87
 #5: ffff8880227c5688 (&ip->i_rw_mutex){++++}-{3:3}, at: sweep_bh_for_rgrps fs/gfs2/bmap.c:1526 [inline]
 #5: ffff8880227c5688 (&ip->i_rw_mutex){++++}-{3:3}, at: punch_hole+0x1c7d/0x2d70 fs/gfs2/bmap.c:1850

stack backtrace:
CPU: 0 PID: 15850 Comm: syz.0.2713 Not tainted 6.6.93-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
 lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
 do_qc+0xaf/0x670 fs/gfs2/quota.c:689
 gfs2_quota_change+0x2b9/0x800 fs/gfs2/quota.c:1289
 punch_hole+0x280c/0x2d70 fs/gfs2/bmap.c:1951
 gfs2_iomap_end+0x4f6/0x6b0 fs/gfs2/bmap.c:1171
 iomap_iter+0x21e/0xec0 fs/iomap/iter.c:79
 iomap_file_buffered_write+0xb8b/0xd10 fs/iomap/buffered-io.c:974
 gfs2_file_buffered_write+0x4bd/0x820 fs/gfs2/file.c:1060
 gfs2_file_write_iter+0x427/0xe10 fs/gfs2/file.c:1158
 call_write_iter include/linux/fs.h:2018 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x43b/0x940 fs/read_write.c:584
 ksys_write+0x147/0x250 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f1a6098e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1a6173b038 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f1a60bb6080 RCX: 00007f1a6098e929
RDX: 000000000208e24b RSI: 0000200000000240 RDI: 0000000000000008
RBP: 00007f1a60a10b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1a60bb6080 R15: 00007ffcccd053a8
 </TASK>
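
The "Possible unsafe locking scenario" diagram above is a plain ABBA inversion: the quota-hold path (dependency #1) holds sd_quota_mutex when __gfs2_iomap_get() read-locks i_rw_mutex, while the punch_hole() path (dependency #0) already holds i_rw_mutex for write when do_qc() asks for sd_quota_mutex. Below is a minimal, hypothetical userspace model of that cycle, using pthread locks in place of the kernel mutex/rwsem; the names are borrowed from the report, and none of this is gfs2 code:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sd_quota_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t i_rw_mutex = PTHREAD_RWLOCK_INITIALIZER;

/* Models the #1 path: sd_quota_mutex held, then i_rw_mutex taken for
 * read (bh_get() -> gfs2_iomap_get() -> down_read()). */
static void *quota_hold_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sd_quota_mutex);
	pthread_rwlock_rdlock(&i_rw_mutex);
	pthread_rwlock_unlock(&i_rw_mutex);
	pthread_mutex_unlock(&sd_quota_mutex);
	return NULL;
}

/* Models the #0 path: i_rw_mutex held for write (punch_hole()), then
 * sd_quota_mutex requested (gfs2_quota_change() -> do_qc()). */
static void *punch_hole_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&i_rw_mutex);
	pthread_mutex_lock(&sd_quota_mutex);
	pthread_mutex_unlock(&sd_quota_mutex);
	pthread_rwlock_unlock(&i_rw_mutex);
	return NULL;
}

int main(void)
{
	/* Race the two orderings; if each thread gets its first lock
	 * before either gets its second, the joins never return. */
	for (int i = 0; i < 100000; i++) {
		pthread_t a, b;
		pthread_create(&a, NULL, quota_hold_path, NULL);
		pthread_create(&b, NULL, punch_hole_path, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
	}
	puts("survived this run; the deadlock is timing-dependent");
	return 0;
}

Unlike this model, lockdep does not need the timing to line up: it records the sd_quota_mutex -> i_rw_mutex order the first time the #1 path runs, then flags the reversed order in do_qc() as soon as it is attempted, which is why the report fires without an actual hang.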

Crashes (1):

Time:       2025/06/17 09:34
Kernel:     linux-6.6.y
Commit:     c2603c511feb
Syzkaller:  cfebc887
Config:     .config
Log/Report: console log, report
Syz repro:  -
C repro:    -
VM info:    info
Assets:     [disk image] [vmlinux] [kernel image]
Manager:    ci2-linux-6-6-kasan
Title:      possible deadlock in do_qc