syzbot
possible deadlock in ocfs2_lock_global_qf

Status: upstream: reported on 2024/10/03 18:26
Subsystems: ocfs2
Reported-by: syzbot+b53d753ae8fb473e2397@syzkaller.appspotmail.com
First crash: 315d, last: 19h08m
Discussions (4)
Title Replies (including bot) Last reply
[syzbot] Monthly ocfs2 report (Aug 2025) 0 (1) 2025/08/01 13:49
[syzbot] Monthly ocfs2 report (Jul 2025) 0 (1) 2025/07/01 10:01
[syzbot] Monthly ocfs2 report (May 2025) 0 (1) 2025/06/03 11:11
[syzbot] [ocfs2?] possible deadlock in ocfs2_lock_global_qf 0 (1) 2024/10/03 18:26
Similar bugs (3)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.6 possible deadlock in ocfs2_lock_global_qf 4 1 17d 17d 0/2 upstream: reported on 2025/07/24 13:37
linux-5.15 possible deadlock in ocfs2_lock_global_qf origin:lts-only 4 C done 252 1d14h 315d 0/3 upstream: reported C repro on 2024/09/29 22:07
linux-6.1 possible deadlock in ocfs2_lock_global_qf 4 184 8h34m 313d 0/3 upstream: reported on 2024/10/01 11:31

Sample crash report:
(syz.0.836,10926,0):ocfs2_block_check_validate:402 ERROR: CRC32 failed: stored: 0xb3775c19, computed 0x2dd1c265. Applying ECC.
JBD2: Ignoring recovery information on journal
ocfs2: Mounting device (7,0) on (node local, slot 0) with ordered data mode.
======================================================
WARNING: possible circular locking dependency detected
6.16.0-syzkaller-12288-g2b38afce25c4 #0 Tainted: G        W          
------------------------------------------------------
syz.0.836/10926 is trying to acquire lock:
ffff88805f0898d0 (&ocfs2_quota_ip_alloc_sem_key){++++}-{4:4}, at: ocfs2_lock_global_qf+0x1e8/0x270 fs/ocfs2/quota_global.c:314

but task is already holding lock:
ffff88805f089c80 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
ffff88805f089c80 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: ocfs2_lock_global_qf+0x1ca/0x270 fs/ocfs2/quota_global.c:313

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #6 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       inode_lock include/linux/fs.h:869 [inline]
       ocfs2_lock_global_qf+0x1ca/0x270 fs/ocfs2/quota_global.c:313
       ocfs2_acquire_dquot+0x2b0/0xb30 fs/ocfs2/quota_global.c:828
       dqget+0x7c1/0xf20 fs/quota/dquot.c:977
       __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1505
       ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:205
       ocfs2_mknod+0x863/0x2050 fs/ocfs2/namei.c:313
       ocfs2_mkdir+0x191/0x440 fs/ocfs2/namei.c:659
       vfs_mkdir+0x306/0x510 fs/namei.c:4366
       do_mkdirat+0x247/0x590 fs/namei.c:4399
       __do_sys_mkdirat fs/namei.c:4416 [inline]
       __se_sys_mkdirat fs/namei.c:4414 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:4414
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #5 (&dquot->dq_lock){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:535 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:547
       wait_on_dquot fs/quota/dquot.c:354 [inline]
       dqget+0x73a/0xf20 fs/quota/dquot.c:972
       dquot_transfer+0x4b8/0x6d0 fs/quota/dquot.c:2154
       ext4_setattr+0x865/0x1bc0 fs/ext4/inode.c:5902
       notify_change+0xb31/0xe60 fs/attr.c:552
       chown_common+0x40c/0x5c0 fs/open.c:791
       vfs_fchown fs/open.c:859 [inline]
       ksys_fchown+0xea/0x160 fs/open.c:871
       __do_sys_fchown fs/open.c:876 [inline]
       __se_sys_fchown fs/open.c:874 [inline]
       __x64_sys_fchown+0x7a/0x90 fs/open.c:874
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #4 (&ei->xattr_sem){++++}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       ext4_write_lock_xattr fs/ext4/xattr.h:157 [inline]
       ext4_xattr_set_handle+0x165/0x1590 fs/ext4/xattr.c:2362
       ext4_initxattrs+0x9f/0x110 fs/ext4/xattr_security.c:44
       security_inode_init_security+0x2a0/0x3f0 security/security.c:1852
       __ext4_new_inode+0x3314/0x3cb0 fs/ext4/ialloc.c:1325
       ext4_create+0x22d/0x460 fs/ext4/namei.c:2822
       lookup_open fs/namei.c:3708 [inline]
       open_last_lookups fs/namei.c:3807 [inline]
       path_openat+0x14fd/0x3840 fs/namei.c:4043
       do_filp_open+0x1fa/0x410 fs/namei.c:4073
       do_sys_openat2+0x121/0x1c0 fs/open.c:1435
       do_sys_open fs/open.c:1450 [inline]
       __do_sys_openat fs/open.c:1466 [inline]
       __se_sys_openat fs/open.c:1461 [inline]
       __x64_sys_openat+0x138/0x170 fs/open.c:1461
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (jbd2_handle){++++}-{0:0}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       start_this_handle+0x1fa7/0x21c0 fs/jbd2/transaction.c:444
       jbd2__journal_start+0x2c1/0x5b0 fs/jbd2/transaction.c:501
       jbd2_journal_start+0x2a/0x40 fs/jbd2/transaction.c:540
       ocfs2_start_trans+0x377/0x6d0 fs/ocfs2/journal.c:374
       ocfs2_modify_bh+0xe8/0x470 fs/ocfs2/quota_local.c:101
       ocfs2_local_read_info+0x1465/0x17e0 fs/ocfs2/quota_local.c:767
       dquot_load_quota_sb+0x791/0xbd0 fs/quota/dquot.c:2459
       dquot_load_quota_inode+0x2e1/0x5d0 fs/quota/dquot.c:2496
       ocfs2_enable_quotas+0x1c6/0x450 fs/ocfs2/super.c:930
       ocfs2_fill_super+0x5197/0x65f0 fs/ocfs2/super.c:1140
       get_tree_bdev_flags+0x40e/0x4d0 fs/super.c:1692
       vfs_get_tree+0x8f/0x2b0 fs/super.c:1815
       do_new_mount+0x2a2/0x9e0 fs/namespace.c:3805
       do_mount fs/namespace.c:4133 [inline]
       __do_sys_mount fs/namespace.c:4344 [inline]
       __se_sys_mount+0x317/0x410 fs/namespace.c:4321
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&journal->j_trans_barrier){.+.+}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_read+0x97/0x1f0 kernel/locking/rwsem.c:1537
       ocfs2_start_trans+0x36b/0x6d0 fs/ocfs2/journal.c:372
       ocfs2_modify_bh+0xe8/0x470 fs/ocfs2/quota_local.c:101
       ocfs2_local_read_info+0x1465/0x17e0 fs/ocfs2/quota_local.c:767
       dquot_load_quota_sb+0x791/0xbd0 fs/quota/dquot.c:2459
       dquot_load_quota_inode+0x2e1/0x5d0 fs/quota/dquot.c:2496
       ocfs2_enable_quotas+0x1c6/0x450 fs/ocfs2/super.c:930
       ocfs2_fill_super+0x5197/0x65f0 fs/ocfs2/super.c:1140
       get_tree_bdev_flags+0x40e/0x4d0 fs/super.c:1692
       vfs_get_tree+0x8f/0x2b0 fs/super.c:1815
       do_new_mount+0x2a2/0x9e0 fs/namespace.c:3805
       do_mount fs/namespace.c:4133 [inline]
       __do_sys_mount fs/namespace.c:4344 [inline]
       __se_sys_mount+0x317/0x410 fs/namespace.c:4321
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (sb_internal#4){.+.+}-{0:0}:
       reacquire_held_locks+0x127/0x1d0 kernel/locking/lockdep.c:5385
       __lock_release kernel/locking/lockdep.c:5574 [inline]
       lock_release+0x1b4/0x3e0 kernel/locking/lockdep.c:5889
       up_write+0x1a/0x60 kernel/locking/rwsem.c:1642
       inode_unlock include/linux/fs.h:879 [inline]
       ocfs2_free_ac_resource fs/ocfs2/suballoc.c:130 [inline]
       ocfs2_free_alloc_context+0x97/0x1a0 fs/ocfs2/suballoc.c:144
       ocfs2_write_begin_nolock+0x4296/0x4340 fs/ocfs2/aops.c:1804
       ocfs2_write_begin+0x1bb/0x310 fs/ocfs2/aops.c:1884
       generic_perform_write+0x29d/0x8c0 mm/filemap.c:4175
       ocfs2_file_write_iter+0x157d/0x1d20 fs/ocfs2/file.c:2469
       new_sync_write fs/read_write.c:593 [inline]
       vfs_write+0x5d2/0xb40 fs/read_write.c:686
       ksys_write+0x14b/0x260 fs/read_write.c:738
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&ocfs2_quota_ip_alloc_sem_key){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       ocfs2_lock_global_qf+0x1e8/0x270 fs/ocfs2/quota_global.c:314
       ocfs2_acquire_dquot+0x2b0/0xb30 fs/ocfs2/quota_global.c:828
       dqget+0x7c1/0xf20 fs/quota/dquot.c:977
       __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1505
       ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:205
       ocfs2_mknod+0x863/0x2050 fs/ocfs2/namei.c:313
       ocfs2_mkdir+0x191/0x440 fs/ocfs2/namei.c:659
       vfs_mkdir+0x306/0x510 fs/namei.c:4366
       do_mkdirat+0x247/0x590 fs/namei.c:4399
       __do_sys_mkdirat fs/namei.c:4416 [inline]
       __se_sys_mkdirat fs/namei.c:4414 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:4414
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &ocfs2_quota_ip_alloc_sem_key --> &dquot->dq_lock --> &ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]);
                               lock(&dquot->dq_lock);
                               lock(&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]);
  lock(&ocfs2_quota_ip_alloc_sem_key);

 *** DEADLOCK ***

5 locks held by syz.0.836/10926:
 #0: ffff88803af64488 (sb_writers#24){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88805f088b80 (&type->i_mutex_dir_key#17/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #1: ffff88805f088b80 (&type->i_mutex_dir_key#17/1){+.+.}-{4:4}, at: filename_create+0x1f8/0x3c0 fs/namei.c:4139
 #2: ffff88805dc29c80 (&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #2: ffff88805dc29c80 (&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]){+.+.}-{4:4}, at: ocfs2_reserve_suballoc_bits+0x15e/0x4640 fs/ocfs2/suballoc.c:788
 #3: ffff88803e0c4098 (&dquot->dq_lock){+.+.}-{4:4}, at: ocfs2_acquire_dquot+0x2a3/0xb30 fs/ocfs2/quota_global.c:823
 #4: ffff88805f089c80 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #4: ffff88805f089c80 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: ocfs2_lock_global_qf+0x1ca/0x270 fs/ocfs2/quota_global.c:313

stack backtrace:
CPU: 0 UID: 0 PID: 10926 Comm: syz.0.836 Tainted: G        W           6.16.0-syzkaller-12288-g2b38afce25c4 #0 PREEMPT_{RT,(full)} 
Tainted: [W]=WARN
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
 down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
 ocfs2_lock_global_qf+0x1e8/0x270 fs/ocfs2/quota_global.c:314
 ocfs2_acquire_dquot+0x2b0/0xb30 fs/ocfs2/quota_global.c:828
 dqget+0x7c1/0xf20 fs/quota/dquot.c:977
 __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1505
 ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:205
 ocfs2_mknod+0x863/0x2050 fs/ocfs2/namei.c:313
 ocfs2_mkdir+0x191/0x440 fs/ocfs2/namei.c:659
 vfs_mkdir+0x306/0x510 fs/namei.c:4366
 do_mkdirat+0x247/0x590 fs/namei.c:4399
 __do_sys_mkdirat fs/namei.c:4416 [inline]
 __se_sys_mkdirat fs/namei.c:4414 [inline]
 __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:4414
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f35116ad457
Code: 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 02 01 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f350f915e68 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007f350f915ef0 RCX: 00007f35116ad457
RDX: 00000000000001ff RSI: 0000200000000040 RDI: 00000000ffffff9c
RBP: 00002000000002c0 R08: 00002000000000c0 R09: 0000000000000000
R10: 00002000000002c0 R11: 0000000000000246 R12: 0000200000000040
R13: 00007f350f915eb0 R14: 0000000000000000 R15: 0000000000000000
 </TASK>
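The lockdep splat above reduces to a directed graph over lock classes: each existing dependency "#6 .. #1" is an edge "held while acquiring", and the report fires because the new acquisition (holding &ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE], taking &ocfs2_quota_ip_alloc_sem_key) closes a cycle through &dquot->dq_lock. A minimal illustrative sketch of that cycle check, in the spirit of lockdep's check_noncircular() — abbreviated lock names, not the kernel's actual data structures or algorithm:

```python
# Detect whether adding a lock-ordering edge src -> dst closes a cycle.
# Illustrative sketch only; lock-class names are abbreviated from the
# report above, and this is a plain DFS, not lockdep's implementation.

def creates_cycle(graph, src, dst):
    """True if dst can already reach src, so the new edge src -> dst
    would close a cycle in the lock-dependency graph."""
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return False

# Existing dependencies from the chain in the report (abbreviated):
#   quota_ip_alloc_sem --> dq_lock --> sysfile_lock[USER_QUOTA]
graph = {
    "quota_ip_alloc_sem": {"dq_lock"},
    "dq_lock": {"sysfile_lock[USER_QUOTA]"},
}

# The task holds sysfile_lock[USER_QUOTA] and now wants quota_ip_alloc_sem:
print(creates_cycle(graph, "sysfile_lock[USER_QUOTA]", "quota_ip_alloc_sem"))
# prints True -- the new edge completes the circular dependency
```

This is why lockdep flags the mkdir path even though no task has actually deadlocked yet: the edge set alone proves two CPUs interleaving these paths could block each other.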

Crashes (2836):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/08/10 09:43 upstream 2b38afce25c4 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/10 06:35 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/09 19:30 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/09 07:05 upstream 0227b49b5027 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/09 05:55 upstream 0227b49b5027 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/08 23:15 upstream 37816488247d 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/08 17:02 upstream 37816488247d 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/08 05:39 upstream bec077162bd0 6a893178 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/08 04:22 upstream 6e64f4580381 6a893178 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/07 20:22 upstream 6e64f4580381 04cffc22 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/07 12:32 upstream 6e64f4580381 04cffc22 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/07 04:22 upstream cca7a0aae895 9a42d6b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 19:51 upstream 479058002c32 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 18:32 upstream 479058002c32 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 17:22 upstream 479058002c32 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 14:42 upstream 479058002c32 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 13:29 upstream 479058002c32 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 10:48 upstream 6bcdbd62bd56 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 08:07 upstream 6bcdbd62bd56 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/06 07:06 upstream 6bcdbd62bd56 ffe1dd46 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/05 21:52 upstream 7e161a991ea7 37880f40 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/05 09:49 upstream 7e161a991ea7 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/05 07:04 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/05 02:26 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 23:26 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 22:02 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 20:31 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 18:54 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 18:53 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 16:57 upstream d2eedaa3909b 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/04 07:02 upstream 352af6a011d5 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/03 23:31 upstream 352af6a011d5 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/03 19:58 upstream 186f3edfdd41 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/03 11:30 upstream 186f3edfdd41 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/03 04:46 upstream 186f3edfdd41 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/03 03:14 upstream a6923c06a3b2 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/02 11:38 upstream 0905809b38bd 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/02 02:12 upstream 0905809b38bd 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/01 18:47 upstream f2d282e1dfb3 40127d41 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/01 13:58 upstream f2d282e1dfb3 40127d41 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/08/01 12:54 upstream f2d282e1dfb3 40127d41 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/07/31 17:00 upstream 260f6f4fda93 0c075d67 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/07/31 10:03 upstream e8d780dcd957 f8f2b4da .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/07/31 09:06 upstream e8d780dcd957 f8f2b4da .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/07/24 11:07 upstream 01a412d06bc5 0c1d6ded .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in ocfs2_lock_global_qf
2025/04/19 04:53 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in ocfs2_lock_global_qf
2024/10/03 17:15 upstream 7ec462100ef9 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2024/09/29 18:22 upstream e7ed34365879 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/01/09 19:06 linux-next 7b4b9bf203da 40f46913 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in ocfs2_lock_global_qf
2025/08/10 11:21 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 82af5ea7c611 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/08/05 19:05 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 82af5ea7c611 904e669c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
* Struck through repros no longer work on HEAD.