loop5: detected capacity change from 0 to 4096
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.5.4080/28029 is trying to acquire lock:
ffff88804d4379b8 (mapping.invalidate_lock#5){++++}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:1045 [inline]
ffff88804d4379b8 (mapping.invalidate_lock#5){++++}-{4:4}, at: filemap_fault+0x5d0/0x12b0 mm/filemap.c:3477

but task is already holding lock:
ffff8880255757e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:368 [inline]
ffff8880255757e0 (&mm->mmap_lock){++++}-{4:4}, at: __mm_populate+0x16f/0x380 mm/gup.c:1942

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __might_fault+0xcc/0x130 mm/memory.c:7081
       _inline_copy_to_user include/linux/uaccess.h:192 [inline]
       _copy_to_user+0x2c/0xb0 lib/usercopy.c:26
       copy_to_user include/linux/uaccess.h:225 [inline]
       fiemap_fill_next_extent+0x1c0/0x390 fs/ioctl.c:144
       ni_fiemap+0x391/0xbf0 fs/ntfs3/frecord.c:1896
       ntfs_fiemap+0x11d/0x1a0 fs/ntfs3/file.c:1354
       ioctl_fiemap fs/ioctl.c:219 [inline]
       do_vfs_ioctl+0x1173/0x1430 fs/ioctl.c:531
       __do_sys_ioctl fs/ioctl.c:595 [inline]
       __se_sys_ioctl+0x82/0x170 fs/ioctl.c:583
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ni->ni_lock#2/5){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/mutex.c:598 [inline]
       __mutex_lock+0x187/0x1350 kernel/locking/mutex.c:760
       ni_lock fs/ntfs3/ntfs_fs.h:1113 [inline]
       ntfs_fallocate+0x57a/0x10b0 fs/ntfs3/file.c:589
       vfs_fallocate+0x669/0x7e0 fs/open.c:342
       madvise_remove mm/madvise.c:1049 [inline]
       madvise_vma_behavior+0x31b3/0x3a10 mm/madvise.c:1346
       madvise_walk_vmas+0x51c/0xa30 mm/madvise.c:1669
       madvise_do_behavior+0x38e/0x550 mm/madvise.c:1885
       do_madvise+0x1bc/0x270 mm/madvise.c:1978
       __do_sys_madvise mm/madvise.c:1987 [inline]
       __se_sys_madvise mm/madvise.c:1985 [inline]
       __x64_sys_madvise+0xa7/0xc0 mm/madvise.c:1985
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (mapping.invalidate_lock#5){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1537
       filemap_invalidate_lock_shared include/linux/fs.h:1045 [inline]
       filemap_fault+0x5d0/0x12b0 mm/filemap.c:3477
       __do_fault+0x138/0x390 mm/memory.c:5280
       do_read_fault mm/memory.c:5698 [inline]
       do_fault mm/memory.c:5832 [inline]
       do_pte_missing mm/memory.c:4361 [inline]
       handle_pte_fault mm/memory.c:6177 [inline]
       __handle_mm_fault+0x35e3/0x5400 mm/memory.c:6318
       handle_mm_fault+0x40a/0x8e0 mm/memory.c:6487
       faultin_page mm/gup.c:1126 [inline]
       __get_user_pages+0x165c/0x2a00 mm/gup.c:1428
       populate_vma_page_range+0x29f/0x3a0 mm/gup.c:1860
       __mm_populate+0x24c/0x380 mm/gup.c:1963
       mm_populate include/linux/mm.h:3466 [inline]
       vm_mmap_pgoff+0x387/0x4d0 mm/util.c:585
       ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:604
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  mapping.invalidate_lock#5 --> &ni->ni_lock#2/5 --> &mm->mmap_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(&mm->mmap_lock);
                               lock(&ni->ni_lock#2/5);
                               lock(&mm->mmap_lock);
  rlock(mapping.invalidate_lock#5);

 *** DEADLOCK ***

1 lock held by syz.5.4080/28029:
 #0: ffff8880255757e0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:368 [inline]
 #0: ffff8880255757e0 (&mm->mmap_lock){++++}-{4:4}, at: __mm_populate+0x16f/0x380 mm/gup.c:1942

stack backtrace:
CPU: 0 UID: 0 PID: 28029 Comm: syz.5.4080 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
 down_read+0x46/0x2e0 kernel/locking/rwsem.c:1537
 filemap_invalidate_lock_shared include/linux/fs.h:1045 [inline]
 filemap_fault+0x5d0/0x12b0 mm/filemap.c:3477
 __do_fault+0x138/0x390 mm/memory.c:5280
 do_read_fault mm/memory.c:5698 [inline]
 do_fault mm/memory.c:5832 [inline]
 do_pte_missing mm/memory.c:4361 [inline]
 handle_pte_fault mm/memory.c:6177 [inline]
 __handle_mm_fault+0x35e3/0x5400 mm/memory.c:6318
 handle_mm_fault+0x40a/0x8e0 mm/memory.c:6487
 faultin_page mm/gup.c:1126 [inline]
 __get_user_pages+0x165c/0x2a00 mm/gup.c:1428
 populate_vma_page_range+0x29f/0x3a0 mm/gup.c:1860
 __mm_populate+0x24c/0x380 mm/gup.c:1963
 mm_populate include/linux/mm.h:3466 [inline]
 vm_mmap_pgoff+0x387/0x4d0 mm/util.c:585
 ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:604
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f62c598eec9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f62c67f7038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f62c5be5fa0 RCX: 00007f62c598eec9
RDX: 000000000000000a RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007f62c5a11f91 R08: 0000000000000009 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f62c5be6038 R14: 00007f62c5be5fa0 R15: 00007ffd923959b8
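
The cycle closes across three paths visible in the chain above: FIEMAP on an ntfs3 file nests mmap_lock inside ni_lock (the copy_to_user() of each extent may fault), MADV_REMOVE reaches ntfs_fallocate()'s punch-hole path and nests ni_lock inside the mapping's invalidate_lock, and a MAP_POPULATE fault nests invalidate_lock inside mmap_lock. Below is a minimal userspace sketch of three threads racing those paths. It is NOT the actual syzkaller reproducer; the file path, mapping size, and thread layout are assumptions for illustration only.

/* deadlock-sketch.c: race the three lock paths from the lockdep report.
 * Assumes an ntfs3 filesystem mounted at the hypothetical /mnt/ntfs3. */
#include <fcntl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>
#include <pthread.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

static const char *path = "/mnt/ntfs3/f";	/* hypothetical test file */

/* Path of -> #0: MAP_POPULATE faults pages in under mmap_lock;
 * filemap_fault() then takes invalidate_lock (shared). */
static void *populate(void *arg)
{
	int fd = open(path, O_RDWR);
	for (;;) {
		void *p = mmap(NULL, 0x10000, PROT_READ,
			       MAP_SHARED | MAP_POPULATE, fd, 0);
		if (p != MAP_FAILED)
			munmap(p, 0x10000);
	}
	return NULL;
}

/* Path of -> #2: ntfs_fiemap() holds ni_lock while
 * fiemap_fill_next_extent() copies extents to user memory,
 * which may fault and thus depends on mmap_lock. */
static void *fiemap(void *arg)
{
	int fd = open(path, O_RDWR);
	union {
		struct fiemap fm;
		char pad[sizeof(struct fiemap) +
			 16 * sizeof(struct fiemap_extent)];
	} u;
	for (;;) {
		memset(&u, 0, sizeof(u));
		u.fm.fm_length = ~0ULL;
		u.fm.fm_extent_count = 16;
		ioctl(fd, FS_IOC_FIEMAP, &u.fm);
	}
	return NULL;
}

/* Path of -> #1: MADV_REMOVE on a shared file mapping goes through
 * vfs_fallocate() into ntfs_fallocate(), which takes ni_lock while
 * the mapping's invalidate_lock is held. */
static void *punch(void *arg)
{
	int fd = open(path, O_RDWR);
	void *p = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	for (;;)
		madvise(p, 0x10000, MADV_REMOVE);
	return NULL;
}

int main(void)
{
	pthread_t t[3];
	int fd = open(path, O_RDWR | O_CREAT, 0644);

	ftruncate(fd, 0x10000);		/* give the file some extents */
	pthread_create(&t[0], NULL, populate, NULL);
	pthread_create(&t[1], NULL, fiemap, NULL);
	pthread_create(&t[2], NULL, punch, NULL);
	pthread_join(t[0], NULL);	/* threads loop forever */
	return 0;
}

With lockdep enabled, running the three threads long enough should reproduce the dependency cycle even if the timing never produces an actual hang, since lockdep records lock ordering rather than waiting for the deadlock itself.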