possible deadlock in ocfs2_read_folio

Status: upstream: reported C repro on 2025/02/13 14:33
Subsystems: ocfs2
Reported-by: syzbot+bd316bb736c7dc2f318e@syzkaller.appspotmail.com
First crash: 197d, last: 20d
Cause bisection: failed (error log, bisect log)
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [ocfs2?] possible deadlock in ocfs2_read_folio | 0 (2) | 2025/05/28 14:48
Last patch testing requests (3)
Created | Duration | User | Patch | Repo | Result
2025/08/20 15:48 | 20m | retest | repro | upstream | OK (log)
2025/08/05 03:47 | 17m | retest | repro | linux-next | report (log)
2025/06/11 14:59 | 18m | retest | repro | upstream | report (log)
Cause bisection attempts (2)
Created | Duration | User | Patch | Repo | Result
2025/08/02 15:07 | 12h44m | bisect | - | linux-next | error (job log)
2025/07/22 04:19 | 11h03m | bisect | - | linux-next | error (job log)

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.16.0-rc6-next-20250718-syzkaller #0 Not tainted
------------------------------------------------------
syz.0.32/6187 is trying to acquire lock:
ffff88805b22bf60 (&oi->ip_alloc_sem){++++}-{4:4}, at: ocfs2_read_folio+0x353/0x970 fs/ocfs2/aops.c:287

but task is already holding lock:
ffff88805b22c460 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:934 [inline]
ffff88805b22c460 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_fault+0x59e/0x13d0 mm/filemap.c:3445

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (mapping.invalidate_lock#3){.+.+}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1539
       filemap_invalidate_lock_shared include/linux/fs.h:934 [inline]
       filemap_fault+0x59e/0x13d0 mm/filemap.c:3445
       ocfs2_fault+0xa4/0x3f0 fs/ocfs2/mmap.c:38
       __do_fault+0x138/0x390 mm/memory.c:5158
       do_read_fault mm/memory.c:5579 [inline]
       do_fault mm/memory.c:5713 [inline]
       do_pte_missing mm/memory.c:4240 [inline]
       handle_pte_fault mm/memory.c:6058 [inline]
       __handle_mm_fault+0x3611/0x5440 mm/memory.c:6201
       handle_mm_fault+0x40a/0x8e0 mm/memory.c:6370
       faultin_page mm/gup.c:1144 [inline]
       __get_user_pages+0x1699/0x2ce0 mm/gup.c:1446
       populate_vma_page_range+0x29f/0x3a0 mm/gup.c:1880
       __mm_populate+0x24c/0x380 mm/gup.c:1983
       mm_populate include/linux/mm.h:3388 [inline]
       vm_mmap_pgoff+0x387/0x4d0 mm/util.c:584
       ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:604
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
       __might_fault+0xcc/0x130 mm/memory.c:6964
       _inline_copy_to_user include/linux/uaccess.h:192 [inline]
       _copy_to_user+0x2c/0xb0 lib/usercopy.c:26
       copy_to_user include/linux/uaccess.h:225 [inline]
       fiemap_fill_next_extent+0x1c0/0x390 fs/ioctl.c:145
       ocfs2_fiemap+0x888/0xc90 fs/ocfs2/extent_map.c:806
       ioctl_fiemap fs/ioctl.c:220 [inline]
       do_vfs_ioctl+0x1170/0x1430 fs/ioctl.c:532
       __do_sys_ioctl fs/ioctl.c:596 [inline]
       __se_sys_ioctl+0x82/0x170 fs/ioctl.c:584
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&oi->ip_alloc_sem){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3168 [inline]
       check_prevs_add kernel/locking/lockdep.c:3287 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1539
       ocfs2_read_folio+0x353/0x970 fs/ocfs2/aops.c:287
       filemap_read_folio+0x114/0x380 mm/filemap.c:2413
       filemap_fault+0xcf6/0x13d0 mm/filemap.c:3549
       ocfs2_fault+0xa4/0x3f0 fs/ocfs2/mmap.c:38
       __do_fault+0x138/0x390 mm/memory.c:5158
       do_read_fault mm/memory.c:5579 [inline]
       do_fault mm/memory.c:5713 [inline]
       do_pte_missing mm/memory.c:4240 [inline]
       handle_pte_fault mm/memory.c:6058 [inline]
       __handle_mm_fault+0x3611/0x5440 mm/memory.c:6201
       handle_mm_fault+0x40a/0x8e0 mm/memory.c:6370
       faultin_page mm/gup.c:1144 [inline]
       __get_user_pages+0x1699/0x2ce0 mm/gup.c:1446
       populate_vma_page_range+0x29f/0x3a0 mm/gup.c:1880
       __mm_populate+0x24c/0x380 mm/gup.c:1983
       mm_populate include/linux/mm.h:3388 [inline]
       vm_mmap_pgoff+0x387/0x4d0 mm/util.c:584
       ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:604
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &oi->ip_alloc_sem --> &mm->mmap_lock --> mapping.invalidate_lock#3

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(mapping.invalidate_lock#3);
                               lock(&mm->mmap_lock);
                               lock(mapping.invalidate_lock#3);
  rlock(&oi->ip_alloc_sem);

 *** DEADLOCK ***

1 lock held by syz.0.32/6187:
 #0: ffff88805b22c460 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:934 [inline]
 #0: ffff88805b22c460 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_fault+0x59e/0x13d0 mm/filemap.c:3445

stack backtrace:
CPU: 1 UID: 0 PID: 6187 Comm: syz.0.32 Not tainted 6.16.0-rc6-next-20250718-syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2046
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2178
 check_prev_add kernel/locking/lockdep.c:3168 [inline]
 check_prevs_add kernel/locking/lockdep.c:3287 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
 down_read+0x46/0x2e0 kernel/locking/rwsem.c:1539
 ocfs2_read_folio+0x353/0x970 fs/ocfs2/aops.c:287
 filemap_read_folio+0x114/0x380 mm/filemap.c:2413
 filemap_fault+0xcf6/0x13d0 mm/filemap.c:3549
 ocfs2_fault+0xa4/0x3f0 fs/ocfs2/mmap.c:38
 __do_fault+0x138/0x390 mm/memory.c:5158
 do_read_fault mm/memory.c:5579 [inline]
 do_fault mm/memory.c:5713 [inline]
 do_pte_missing mm/memory.c:4240 [inline]
 handle_pte_fault mm/memory.c:6058 [inline]
 __handle_mm_fault+0x3611/0x5440 mm/memory.c:6201
 handle_mm_fault+0x40a/0x8e0 mm/memory.c:6370
 faultin_page mm/gup.c:1144 [inline]
 __get_user_pages+0x1699/0x2ce0 mm/gup.c:1446
 populate_vma_page_range+0x29f/0x3a0 mm/gup.c:1880
 __mm_populate+0x24c/0x380 mm/gup.c:1983
 mm_populate include/linux/mm.h:3388 [inline]
 vm_mmap_pgoff+0x387/0x4d0 mm/util.c:584
 ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:604
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f88d538e9a9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f88d620a038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f88d55b6080 RCX: 00007f88d538e9a9
RDX: 0000000001000003 RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007f88d5410d69 R08: 0000000000000006 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f88d55b6080 R15: 00007ffe3e10dbb8
 </TASK>
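
The chain above is a three-lock inversion. In the fiemap path, ocfs2_fiemap() holds &oi->ip_alloc_sem while fiemap_fill_next_extent() copies an extent to user memory; that copy can fault and take &mm->mmap_lock (edge #1). In the page-fault path, the task already holds &mm->mmap_lock, filemap_fault() takes mapping.invalidate_lock (edge #2), and ocfs2_read_folio() then takes &oi->ip_alloc_sem (edge #0), closing the cycle.

Below is a minimal userspace sketch of that collision, not the actual syzkaller reproducer: the mount point and file path are hypothetical, it assumes an ocfs2 filesystem is already mounted there, and the flags in the real repro differ. It pairs a FIEMAP loop whose output buffer lives in a mapping of the same file (so copy_to_user() can genuinely fault) with a MAP_POPULATE mapping loop that drives faults through ocfs2_read_folio(), matching the mm_populate frames in the traces above.

/*
 * Minimal sketch of the inversion, assuming an ocfs2 filesystem mounted
 * at /mnt/ocfs2 (hypothetical path; not the syzkaller reproducer).
 * Build with: cc -pthread sketch.c
 */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fiemap.h>
#include <linux/fs.h>

#define FILE_SZ (1 << 20)

static int fd;

/* Edge #1: ocfs2_fiemap() holds ip_alloc_sem while
 * fiemap_fill_next_extent() copies to user memory; placing the output
 * buffer in a mapping of the same file lets that copy actually fault. */
static void *fiemap_loop(void *arg)
{
	struct fiemap *fm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);

	if (fm == MAP_FAILED)
		return NULL;
	for (;;) {
		memset(fm, 0, sizeof(*fm));
		fm->fm_length = ~0ULL;
		fm->fm_extent_count = 1;	/* room for one extent after *fm */
		ioctl(fd, FS_IOC_FIEMAP, fm);
	}
}

/* Edges #2 and #0: MAP_POPULATE drives faults, so filemap_fault() takes
 * invalidate_lock and ocfs2_read_folio() then takes ip_alloc_sem. */
static void *fault_loop(void *arg)
{
	for (;;) {
		void *p = mmap(NULL, FILE_SZ, PROT_READ,
			       MAP_SHARED | MAP_POPULATE, fd, 0);

		if (p != MAP_FAILED)
			munmap(p, FILE_SZ);
	}
}

int main(void)
{
	pthread_t a, b;

	fd = open("/mnt/ocfs2/file", O_RDWR | O_CREAT, 0600); /* hypothetical */
	if (fd < 0)
		return 1;
	ftruncate(fd, FILE_SZ);

	pthread_create(&a, NULL, fiemap_loop, NULL);
	pthread_create(&b, NULL, fault_loop, NULL);
	pthread_join(a, NULL);
	return 0;
}

Note that with lockdep enabled, edge #1 is recorded by the might_fault annotation on copy_to_user() even when no fault occurs, so a single pass down each path is enough for the warning; an actual hang additionally needs the fault timing to line up, which is why both loops retry.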

Crashes (8):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/07/22 01:57 | linux-next | d086c886ceb9 | 0b3788a0 | .config | console log | report | syz / log | C | - | [disk image] [vmlinux] [kernel image] [mounted in repro (clean fs)] | ci-upstream-linux-next-kasan-gce-root | possible deadlock in ocfs2_read_folio
2025/07/10 18:08 | upstream | 8c2e52ebbe88 | 3cda49cf | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in ocfs2_read_folio
2025/06/12 15:23 | upstream | 2c4a1f3fe03e | 98683f8f | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in ocfs2_read_folio
2025/04/24 19:07 | upstream | e72e9e693307 | 9882047a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in ocfs2_read_folio
2025/04/15 18:15 | upstream | 1a1d569a75f3 | 23b969b7 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in ocfs2_read_folio
2025/02/09 14:24 | upstream | 9946eaf552b1 | ef44b750 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in ocfs2_read_folio
2025/05/28 14:47 | upstream | c89756bcf406 | 874a1386 | .config | console log | report | syz / log | C | - | [disk image (non-bootable)] [vmlinux] [kernel image] [mounted in repro (clean fs)] | ci-snapshot-upstream-root | possible deadlock in ocfs2_read_folio
2025/05/28 13:19 | upstream | c89756bcf406 | 874a1386 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in ocfs2_read_folio
* Struck-through repros no longer work on HEAD.