syzbot

KASAN: out-of-bounds Write in end_buffer_read_sync

Status: upstream: reported on 2025/06/17 22:10
Reported-by: syzbot+ea13088d57bd525c3722@syzkaller.appspotmail.com
First crash: 72d, last: 2d22h
Similar bugs (3)

Kernel     | Title                                                                    | Rank | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | KASAN: out-of-bounds Write in end_buffer_read_sync ntfs3                 | 24   | C     | done         |            | 4896  | 9d17h | 1222d    | 0/29    | upstream: reported C repro on 2022/04/25 03:07
linux-6.1  | KASAN: out-of-bounds Write in end_buffer_read_sync origin:upstream missing-backport | 23 | C | unreliable |       | 89    | 17d   | 822d     | 0/3     | upstream: reported C repro on 2023/05/29 19:57
linux-5.15 | KASAN: out-of-bounds Write in end_buffer_read_sync origin:lts-only       | 23   | C     | done         |            | 100   | 1d23h | 877d     | 0/3     | upstream: reported C repro on 2023/04/04 06:42

Sample crash report:
==================================================================
BUG: KASAN: out-of-bounds in instrument_atomic_read_write include/linux/instrumented.h:96 [inline]
BUG: KASAN: out-of-bounds in atomic_dec include/linux/atomic/atomic-instrumented.h:592 [inline]
BUG: KASAN: out-of-bounds in put_bh include/linux/buffer_head.h:308 [inline]
BUG: KASAN: out-of-bounds in end_buffer_read_sync+0xc3/0xd0 fs/buffer.c:161
Write of size 4 at addr ffffc9000378f740 by task syz.1.2601/16120

CPU: 0 PID: 16120 Comm: syz.1.2601 Not tainted 6.6.102-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <IRQ>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:364 [inline]
 print_report+0xac/0x220 mm/kasan/report.c:468
 kasan_report+0x117/0x150 mm/kasan/report.c:581
 check_region_inline mm/kasan/generic.c:-1 [inline]
 kasan_check_range+0x288/0x290 mm/kasan/generic.c:187
 instrument_atomic_read_write include/linux/instrumented.h:96 [inline]
 atomic_dec include/linux/atomic/atomic-instrumented.h:592 [inline]
 put_bh include/linux/buffer_head.h:308 [inline]
 end_buffer_read_sync+0xc3/0xd0 fs/buffer.c:161
 end_bio_bh_io_sync+0xb7/0x110 fs/buffer.c:2784
 req_bio_endio block/blk-mq.c:784 [inline]
 blk_update_request+0x597/0xe40 block/blk-mq.c:929
 blk_mq_end_request+0x3e/0x70 block/blk-mq.c:1056
 blk_complete_reqs block/blk-mq.c:1136 [inline]
 blk_done_softirq+0x10b/0x160 block/blk-mq.c:1141
 handle_softirqs+0x280/0x820 kernel/softirq.c:578
 __do_softirq kernel/softirq.c:612 [inline]
 invoke_softirq kernel/softirq.c:452 [inline]
 __irq_exit_rcu+0xc7/0x190 kernel/softirq.c:661
 irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
 instr_sysvec_call_function_single arch/x86/kernel/smp.c:262 [inline]
 sysvec_call_function_single+0xa1/0xc0 arch/x86/kernel/smp.c:262
 </IRQ>
 <TASK>
 asm_sysvec_call_function_single+0x1a/0x20 arch/x86/include/asm/idtentry.h:694
RIP: 0010:finish_task_switch+0x26a/0x920 kernel/sched/core.c:5254
Code: 0f 84 37 01 00 00 48 85 db 0f 85 56 01 00 00 e9 f6 04 00 00 4c 8b 75 d0 4c 89 e7 e8 c0 a3 1a 09 e8 ab a2 2f 00 fb 4c 8b 65 c0 <49> 8d bc 24 f8 15 00 00 48 89 f8 48 c1 e8 03 42 0f b6 04 28 84 c0
RSP: 0000:ffffc9000444fc78 EFLAGS: 00000286
RAX: 475c07b61f18d700 RBX: 0000000000000000 RCX: 475c07b61f18d700
RDX: dffffc0000000000 RSI: ffffffff8aaab9c0 RDI: ffffffff8afc66c0
RBP: ffffc9000444fcd0 R08: ffffffff8e4a882f R09: 1ffffffff1c95105
R10: dffffc0000000000 R11: fffffbfff1c95106 R12: ffff88802216bc00
R13: dffffc0000000000 R14: ffff88801ba45a00 R15: ffff8880b8e3cf08
 context_switch kernel/sched/core.c:5383 [inline]
 __schedule+0x14da/0x44d0 kernel/sched/core.c:6699
 schedule+0xbd/0x170 kernel/sched/core.c:6773
 exit_to_user_mode_loop+0x47/0x110 kernel/entry/common.c:165
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
 irqentry_exit_to_user_mode+0x9/0x40 kernel/entry/common.c:315
 asm_sysvec_call_function_single+0x1a/0x20 arch/x86/include/asm/idtentry.h:694
RIP: 0033:0x7f47e606802d
Code: 08 48 83 c3 08 48 39 d1 72 f3 48 83 e8 08 48 39 f2 73 17 66 2e 0f 1f 84 00 00 00 00 00 48 8b 70 f8 48 83 e8 08 48 39 f2 72 f3 <48> 39 c3 73 3e 48 89 33 48 83 c3 08 48 8b 70 f8 48 89 08 48 8b 0b
RSP: 002b:00007ffc40d7b0d0 EFLAGS: 00000216
RAX: 00007f47e5818f68 RBX: 00007f47e58125c8 RCX: ffffffff841d4422
RDX: ffffffff813aa799 RSI: ffffffff81009abe RDI: 00007f47e584f0f8
RBP: 00007f47e57fd010 R08: 00007f47e5826080 R09: 00007f47e63a2000
R10: 00007f47e57fd008 R11: 0000000000000009 R12: 00007f47e57fd008
R13: 000000000000001d R14: 000000000014aa66 R15: 00007f47e57fd008
 </TASK>

The buggy address belongs to a 8-page vmalloc region starting at 0xffffc90003788000 allocated at copy_process+0x549/0x3d70 kernel/fork.c:2331
The buggy address belongs to the physical page:
page:ffffea0001a1b840 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x686e1
memcg:ffff88802edfb302
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000000 0000000000000000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000001ffffffff ffff88802edfb302
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x102dc2(GFP_HIGHUSER|__GFP_NOWARN|__GFP_ZERO), pid 16086, tgid 16084 (syz.1.2590), ts 1348040312802, free_ts 1346454923953
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
 prep_new_page mm/page_alloc.c:1561 [inline]
 get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
 __alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
 vm_area_alloc_pages mm/vmalloc.c:3089 [inline]
 __vmalloc_area_node mm/vmalloc.c:3158 [inline]
 __vmalloc_node_range+0x96b/0x1320 mm/vmalloc.c:3339
 alloc_thread_stack_node kernel/fork.c:310 [inline]
 dup_task_struct+0x3d0/0x7c0 kernel/fork.c:1124
 copy_process+0x549/0x3d70 kernel/fork.c:2331
 kernel_clone+0x21b/0x840 kernel/fork.c:2914
 __do_sys_clone kernel/fork.c:3057 [inline]
 __se_sys_clone kernel/fork.c:3041 [inline]
 __x64_sys_clone+0x18c/0x1e0 kernel/fork.c:3041
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1154 [inline]
 free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
 free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
 tlb_batch_list_free mm/mmu_gather.c:114 [inline]
 tlb_finish_mmu+0x112/0x1d0 mm/mmu_gather.c:395
 exit_mmap+0x3f0/0xb50 mm/mmap.c:3311
 __mmput+0x118/0x3c0 kernel/fork.c:1355
 exit_mm+0x1da/0x2c0 kernel/exit.c:569
 do_exit+0x88e/0x23c0 kernel/exit.c:870
 do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
 __do_sys_exit_group kernel/exit.c:1035 [inline]
 __se_sys_exit_group kernel/exit.c:1033 [inline]
 __x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1033
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

Memory state around the buggy address:
 ffffc9000378f600: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffffc9000378f680: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffffc9000378f700: 00 00 00 00 f1 f1 f1 f1 00 00 00 00 00 00 00 f2
                                              ^
 ffffc9000378f780: f2 f2 f2 f2 00 f2 f2 f2 01 f3 f3 f3 00 00 00 00
 ffffc9000378f800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
----------------
Code disassembly (best guess):
   0:	0f 84 37 01 00 00    	je     0x13d
   6:	48 85 db             	test   %rbx,%rbx
   9:	0f 85 56 01 00 00    	jne    0x165
   f:	e9 f6 04 00 00       	jmp    0x50a
  14:	4c 8b 75 d0          	mov    -0x30(%rbp),%r14
  18:	4c 89 e7             	mov    %r12,%rdi
  1b:	e8 c0 a3 1a 09       	call   0x91aa3e0
  20:	e8 ab a2 2f 00       	call   0x2fa2d0
  25:	fb                   	sti
  26:	4c 8b 65 c0          	mov    -0x40(%rbp),%r12
* 2a:	49 8d bc 24 f8 15 00 	lea    0x15f8(%r12),%rdi <-- trapping instruction
  31:	00
  32:	48 89 f8             	mov    %rdi,%rax
  35:	48 c1 e8 03          	shr    $0x3,%rax
  39:	42 0f b6 04 28       	movzbl (%rax,%r13,1),%eax
  3e:	84 c0                	test   %al,%al

Crashes (10):
Time             | Kernel      | Commit       | Syzkaller | Manager             | Title
2025/08/26 04:52 | linux-6.6.y | bb9c90ab9c5a | bf27483f  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/08/25 00:04 | linux-6.6.y | bb9c90ab9c5a | bf27483f  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/08/24 23:44 | linux-6.6.y | bb9c90ab9c5a | bf27483f  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/08/20 11:18 | linux-6.6.y | bb9c90ab9c5a | 79512909  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/08/02 13:35 | linux-6.6.y | 3a8ababb8b6a | 7368264b  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/07/19 18:02 | linux-6.6.y | d96eb99e2f0e | 7117feec  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/07/08 13:24 | linux-6.6.y | a5df3a702b2c | 4f67c4ae  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/07/02 11:24 | linux-6.6.y | 3f5b4c104b7d | bc80e4f0  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/06/28 22:53 | linux-6.6.y | 3f5b4c104b7d | fc9d8ee5  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync
2025/06/17 22:09 | linux-6.6.y | c2603c511feb | e77fae15  | ci2-linux-6-6-kasan | KASAN: out-of-bounds Write in end_buffer_read_sync