syzbot

KCSAN: data-race in munlock_folio / need_mlock_drain (7)

Status: moderation: reported on 2025/07/12 13:25
Subsystems: mm
Reported-by: syzbot+1b2b913aa57b1c233c99@syzkaller.appspotmail.com
First crash: 109d, last: 12d
Similar bugs (6)
Kernel | Title | Subsystems | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | KCSAN: data-race in munlock_folio / need_mlock_drain | mm | 6 | - | - | - | 1 | 912d | 912d | 0/29 | auto-obsoleted due to no activity on 2023/06/05 22:24
upstream | KCSAN: data-race in munlock_folio / need_mlock_drain (2) | mm | 6 | - | - | - | 1 | 757d | 757d | 0/29 | auto-obsoleted due to no activity on 2023/11/07 10:03
upstream | KCSAN: data-race in munlock_folio / need_mlock_drain (5) | mm | 6 | - | - | - | 16 | 359d | 514d | 0/29 | auto-obsoleted due to no activity on 2024/12/31 00:27
upstream | KCSAN: data-race in munlock_folio / need_mlock_drain (4) | mm | 6 | - | - | - | 1 | 551d | 551d | 0/29 | auto-obsoleted due to no activity on 2024/05/31 11:19
upstream | KCSAN: data-race in munlock_folio / need_mlock_drain (6) | mm | 6 | - | - | - | 7 | 179d | 276d | 0/29 | auto-obsoleted due to no activity on 2025/06/29 06:03
upstream | KCSAN: data-race in munlock_folio / need_mlock_drain (3) | mm | 6 | - | - | - | 2 | 654d | 683d | 0/29 | auto-obsoleted due to no activity on 2024/02/18 23:01

Sample crash report:
loop3: detected capacity change from 0 to 256
==================================================================
BUG: KCSAN: data-race in munlock_folio / need_mlock_drain

read-write to 0xffff888237d26a90 of 1 bytes by task 3659 on cpu 1:
 folio_batch_add include/linux/pagevec.h:77 [inline]
 munlock_folio+0x44/0x120 mm/mlock.c:301
 munlock_vma_folio mm/internal.h:1060 [inline]
 __folio_remove_rmap mm/rmap.c:1775 [inline]
 folio_remove_rmap_ptes+0x197/0x1a0 mm/rmap.c:1792
 zap_present_folio_ptes mm/memory.c:1651 [inline]
 zap_present_ptes mm/memory.c:1709 [inline]
 do_zap_pte_range mm/memory.c:1810 [inline]
 zap_pte_range mm/memory.c:1854 [inline]
 zap_pmd_range mm/memory.c:1946 [inline]
 zap_pud_range mm/memory.c:1975 [inline]
 zap_p4d_range mm/memory.c:1996 [inline]
 unmap_page_range+0x1523/0x25c0 mm/memory.c:2017
 unmap_single_vma mm/memory.c:2060 [inline]
 unmap_vmas+0x23d/0x3a0 mm/memory.c:2104
 exit_mmap+0x1b0/0x6c0 mm/mmap.c:1280
 __mmput+0x28/0x1c0 kernel/fork.c:1133
 mmput+0x40/0x50 kernel/fork.c:1156
 exit_mm+0xe4/0x180 kernel/exit.c:582
 do_exit+0x417/0x15c0 kernel/exit.c:954
 do_group_exit+0xff/0x140 kernel/exit.c:1107
 get_signal+0xe58/0xf70 kernel/signal.c:3034
 arch_do_signal_or_restart+0x96/0x440 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop+0x77/0x110 kernel/entry/common.c:40
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x1d6/0x200 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff888237d26a90 of 1 bytes by task 3681 on cpu 0:
 folio_batch_count include/linux/pagevec.h:56 [inline]
 need_mlock_drain+0x30/0x50 mm/mlock.c:235
 cpu_needs_drain mm/swap.c:786 [inline]
 __lru_add_drain_all+0x273/0x450 mm/swap.c:877
 lru_add_drain_all+0x10/0x20 mm/swap.c:893
 invalidate_bdev+0x47/0x70 block/bdev.c:101
 reconfigure_super+0x417/0x580 fs/super.c:1099
 do_remount fs/namespace.c:3279 [inline]
 path_mount+0xad2/0xb70 fs/namespace.c:4029
 do_mount fs/namespace.c:4050 [inline]
 __do_sys_mount fs/namespace.c:4238 [inline]
 __se_sys_mount+0x28c/0x2e0 fs/namespace.c:4215
 __x64_sys_mount+0x67/0x80 fs/namespace.c:4215
 x64_sys_call+0x2b51/0x3000 arch/x86/include/generated/asm/syscalls_64.h:166
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xd2/0x200 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x09 -> 0x1f

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 3681 Comm: syz.3.41 Not tainted syzkaller #0 PREEMPT(voluntary) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
==================================================================
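
Both stack traces touch the same 1-byte per-CPU counter: munlock_folio() bumps a folio_batch's element count via folio_batch_add() (mm/mlock.c:301) on the CPU that owns the batch, while need_mlock_drain() on another CPU peeks at that count through folio_batch_count() (mm/mlock.c:235) to decide whether a drain is worth scheduling. The 1-byte access width matches folio_batch's unsigned char counter, and the reported value change (0x09 -> 0x1f, with 0x1f being PAGEVEC_SIZE, a full batch) is consistent with the remote reader sampling the counter while the owning CPU fills the batch. The following is a minimal userspace sketch of that access pattern, not the kernel's code: the struct, thread names, and loop count are hypothetical stand-ins, and it exists only to show why KCSAN flags this pair of unannotated accesses.

/*
 * Minimal userspace sketch of the access pattern KCSAN reports above.
 * All names here are hypothetical stand-ins, not kernel code: "owner"
 * plays the CPU running munlock_folio(), "drainer" plays the CPU
 * running need_mlock_drain() from lru_add_drain_all().
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>

struct batch {
	unsigned char nr;	/* 1-byte count, like folio_batch's nr */
};

static struct batch cpu_batch;	/* stand-in for the per-CPU mlock batch */

/* folio_batch_add() side: the owning CPU bumps the count locklessly. */
static void *owner(void *arg)
{
	for (int i = 0; i < 31; i++)	/* 31 == PAGEVEC_SIZE, a full batch */
		cpu_batch.nr++;		/* plain read-modify-write */
	return NULL;
}

/* folio_batch_count() side: a remote CPU peeks at the count, no lock. */
static void *drainer(void *arg)
{
	/*
	 * This plain load races with the increment above; KCSAN flags
	 * the pair. In the kernel, an intentionally lockless peek like
	 * this is usually annotated with READ_ONCE()/WRITE_ONCE() to
	 * mark it as such.
	 */
	printf("drain needed: %s\n", cpu_batch.nr ? "yes" : "no");
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, owner, NULL);
	pthread_create(&b, NULL, drainer, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

A stale read here is typically harmless for correctness (a missed drain gets retried, an extra drain is cheap), which is why races of this shape are often resolved by annotating the accesses rather than by adding locking.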

Crashes (3):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/10/17 22:38 | upstream | cf1ea8854e4f | 1c8c8cd8 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-kcsan-gce | KCSAN: data-race in munlock_folio / need_mlock_drain
2025/08/30 03:19 | upstream | fb679c832b64 | 807a3b61 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-kcsan-gce | KCSAN: data-race in munlock_folio / need_mlock_drain
2025/07/12 13:24 | upstream | 379f604cc3dc | 3cda49cf | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-kcsan-gce | KCSAN: data-race in munlock_folio / need_mlock_drain