syzbot


INFO: task hung in block_read_full_folio (3)

Status: upstream: reported C repro on 2026/02/11 16:37
Subsystems: ext4
Reported-by: syzbot+03afbb29537f0336b7ad@syzkaller.appspotmail.com
First crash: 26d, last: 6d19h
Cause bisection: failed (error log, bisect log)
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [ext4?] INFO: task hung in block_read_full_folio (3) 0 (2) 2026/02/26 09:36
Similar bugs (2)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in block_read_full_folio (2) ext4 1 1 106d 106d 0/29 auto-obsoleted due to no activity on 2026/01/18 06:02
upstream INFO: task hung in block_read_full_folio udf 1 1 610d 610d 0/29 auto-obsoleted due to no activity on 2024/10/01 11:05
Last patch testing requests (1)
Created Duration User Patch Repo Result
2026/02/21 16:47 23m retest repro upstream OK log

Sample crash report:
INFO: task udevd:5880 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:24960 pid:5880  tgid:5880  ppid:5186   task_flags:0x400140 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0x1585/0x5340 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7004
 io_schedule+0x7f/0xd0 kernel/sched/core.c:7831
 bit_wait_io+0x11/0xd0 kernel/sched/wait_bit.c:250
 __wait_on_bit_lock+0xec/0x4e0 kernel/sched/wait_bit.c:93
 out_of_line_wait_on_bit_lock+0x13b/0x190 kernel/sched/wait_bit.c:120
 wait_on_bit_lock_io include/linux/wait_bit.h:221 [inline]
 __lock_buffer fs/buffer.c:72 [inline]
 lock_buffer include/linux/buffer_head.h:432 [inline]
 block_read_full_folio+0x38f/0x830 fs/buffer.c:2436
 filemap_read_folio+0x137/0x3b0 mm/filemap.c:2496
 filemap_update_page mm/filemap.c:2583 [inline]
 filemap_get_pages+0x1744/0x1f10 mm/filemap.c:2713
 filemap_read+0x447/0x1230 mm/filemap.c:2800
 blkdev_read_iter+0x30a/0x440 block/fops.c:855
 new_sync_read fs/read_write.c:493 [inline]
 vfs_read+0x582/0xa70 fs/read_write.c:574
 ksys_read+0x150/0x270 fs/read_write.c:717
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f30023de407
RSP: 002b:00007ffef214d590 EFLAGS: 00000202 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 00007f3002352880 RCX: 00007f30023de407
RDX: 0000000000000200 RSI: 00007f300234e000 RDI: 0000000000000009
RBP: 0000556fb2cd5c50 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000018
R13: 0000000000001000 R14: 0000556fb2ce78f8 R15: 00007f30025f739c
 </TASK>
INFO: task syz.4.874:8117 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.874       state:D stack:25632 pid:8117  tgid:8117  ppid:5974   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0x1585/0x5340 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7004
 io_schedule+0x7f/0xd0 kernel/sched/core.c:7831
 folio_wait_bit_common+0x6d8/0xbc0 mm/filemap.c:1323
 folio_lock include/linux/pagemap.h:1170 [inline]
 __find_get_block_slow fs/buffer.c:206 [inline]
 find_get_block_common+0x34f/0xe10 fs/buffer.c:1405
 bdev_getblk+0x53/0x6e0 include/linux/gfp.h:-1
 __getblk include/linux/buffer_head.h:380 [inline]
 sb_getblk include/linux/buffer_head.h:386 [inline]
 __ext4_get_inode_loc+0x7d8/0xfa0 fs/ext4/inode.c:4812
 ext4_get_inode_loc fs/ext4/inode.c:4915 [inline]
 ext4_reserve_inode_write+0x18b/0x360 fs/ext4/inode.c:6235
 __ext4_mark_inode_dirty+0x14b/0x730 fs/ext4/inode.c:6413
 __ext4_new_inode+0x3383/0x3d20 fs/ext4/ialloc.c:1333
 ext4_mkdir+0x3da/0xbf0 fs/ext4/namei.c:3005
 vfs_mkdir+0x413/0x630 fs/namei.c:5233
 filename_mkdirat+0x285/0x510 fs/namei.c:5266
 __do_sys_mkdir fs/namei.c:5293 [inline]
 __se_sys_mkdir+0x34/0x150 fs/namei.c:5290
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f08d499c629
RSP: 002b:00007ffd2b703978 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
RAX: ffffffffffffffda RBX: 00007f08d4c15fa0 RCX: 00007f08d499c629
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000680
RBP: 00007f08d4a32b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f08d4c15fac R14: 00007f08d4c15fa0 R15: 00007f08d4c15fa0
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:1/13:
 #0: ffff88801fabd948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff88801fabd948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9ea/0x1830 kernel/workqueue.c:3358
 #1: ffffc90000127c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc90000127c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa25/0x1830 kernel/workqueue.c:3358
 #2: ffff88801b6f20e0 (&type->s_umount_key#41){.+.+}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:565
1 lock held by khungtaskd/32:
 #0: ffffffff8e7602e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7602e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7602e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:5/127:
 #0: ffff8880b863ade0 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x31/0x150 kernel/sched/core.c:647
 #1: ffff8880b8624588 (psi_seq){-.-.}-{0:0}, at: psi_task_switch+0x53/0x880 kernel/sched/psi.c:933
1 lock held by udevd/5186:
2 locks held by getty/5573:
 #0: ffff888032cfe0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000331e2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x45c/0x13c0 drivers/tty/n_tty.c:2211
2 locks held by udevd/5880:
 #0: ffff888023452a28 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #0: ffff888023452a28 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: blkdev_read_iter+0x2f8/0x440 block/fops.c:854
 #1: ffff888023452bc8 (mapping.invalidate_lock){++++}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:1093 [inline]
 #1: ffff888023452bc8 (mapping.invalidate_lock){++++}-{4:4}, at: filemap_update_page mm/filemap.c:2549 [inline]
 #1: ffff888023452bc8 (mapping.invalidate_lock){++++}-{4:4}, at: filemap_get_pages+0x991/0x1f10 mm/filemap.c:2713
1 lock held by syz-executor/5965:
 #0: ffffffff8e766578 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
 #0: ffffffff8e766578 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x38d/0x770 kernel/rcu/tree_exp.h:961
1 lock held by syz-executor/5966:
4 locks held by udevd/6188:
2 locks held by syz.4.874/8117:
 #0: ffff88807bcaa420 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88805b1d03e0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88805b1d03e0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2923 [inline]
 #1: ffff88805b1d03e0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2934 [inline]
 #1: ffff88805b1d03e0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: filename_create+0x200/0x370 fs/namei.c:4922
2 locks held by udevd/8626:
 #0: ffff888023451328 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #0: ffff888023451328 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: blkdev_read_iter+0x2f8/0x440 block/fops.c:854
 #1: ffff8880b873ade0 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x31/0x150 kernel/sched/core.c:647
1 lock held by udevd/8653:
1 lock held by udevd/8654:
 #0: ffff888023454128 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1043 [inline]
 #0: ffff888023454128 (&sb->s_type->i_mutex_key#11){++++}-{4:4}, at: blkdev_read_iter+0x2f8/0x440 block/fops.c:854
2 locks held by syz.5.2965/13038:
1 lock held by syz.6.2966/13039:
1 lock held by syz.0.2964/13042:
1 lock held by kmmpd-loop0/13048:
 #0: ffff8880b873ade0 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x31/0x150 kernel/sched/core.c:647

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 32 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 6188 Comm: udevd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
RIP: 0010:srso_alias_safe_ret+0x0/0x7 arch/x86/lib/retpoline.S:210
Code: cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc <48> 8d 64 24 08 c3 cc e8 f4 ff ff ff 0f 0b cc cc cc cc cc cc cc cc
RSP: 0018:ffffc90002ed73d0 EFLAGS: 00000293
RAX: ffffffff8ba52031 RBX: ffff888036ffd900 RCX: ffff88801ef73c80
RDX: 0000000000000000 RSI: 0000000000000300 RDI: 0000000000000300
RBP: 0000000000000001 R08: ffff88801ef73c80 R09: 0000000000000004
R10: 0000000000000003 R11: 0000000000000000 R12: dffffc0000000000
R13: ffff8880760d8800 R14: ffff8880595dbdc0 R15: ffffffffffffffff
FS:  00007f3002352880(0000) GS:ffff888125564000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f300234d000 CR3: 000000002c631000 CR4: 0000000000350ef0
Call Trace:
 <TASK>
 srso_alias_return_thunk+0x5/0xfbef5 arch/x86/lib/retpoline.S:220
 mt_locked lib/maple_tree.c:729 [inline]
 mt_slot lib/maple_tree.c:736 [inline]
 mas_next_slot+0x951/0xcf0 lib/maple_tree.c:4406
 mas_find+0xb0e/0xd30 lib/maple_tree.c:5622
 vma_next include/linux/mm.h:1323 [inline]
 validate_mm+0xfe/0x4c0 mm/vma.c:650
 mmap_region+0x1513/0x2240 mm/vma.c:2843
 do_mmap+0xc39/0x10c0 mm/mmap.c:559
 vm_mmap_pgoff+0x2c9/0x4f0 mm/util.c:581
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3002454822
Code: 00 00 00 0f 1f 44 00 00 41 f7 c1 ff 0f 00 00 75 27 55 89 cd 53 48 89 fb 48 85 ff 74 3b 41 89 ea 48 89 df b8 09 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 76 5b 5d c3 0f 1f 00 48 8b 05 a1 35 0d 00 64
RSP: 002b:00007ffef214d608 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f3002454822
RDX: 0000000000000003 RSI: 0000000000000200 RDI: 0000000000000000
RBP: 0000000000000022 R08: 00000000ffffffff R09: 0000000000000000
R10: 0000000000000022 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000008000 R14: 0000556fb2cd30b0 R15: 0000000000004000
 </TASK>

Crashes (5):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/26 09:35 upstream d9d32e5bd5a4 e0f78d93 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci-upstream-kasan-gce-root INFO: task hung in block_read_full_folio
2026/02/26 20:18 upstream f4d0ec0aa20d ffa54287 .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci2-upstream-fs INFO: task hung in block_read_full_folio
2026/02/26 01:43 upstream d9d32e5bd5a4 e0f78d93 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in block_read_full_folio
2026/02/07 16:21 upstream 2687c848e578 f20fc9f9 .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci-upstream-kasan-gce-root INFO: task hung in block_read_full_folio
2026/02/06 16:35 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in block_read_full_folio
* Struck through repros no longer work on HEAD.