syzbot

INFO: task hung in read_part_sector (3)

Status: upstream: reported on 2025/07/28 11:29
Reported-by: syzbot+910d3e8c08500bfcbc85@syzkaller.appspotmail.com
First crash: 173d, last: 1d00h
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in read_part_sector [origin:upstream] | 1 | C | error | - | 325 | 1d01h | 903d | 0/3 | upstream: reported C repro on 2023/07/29 13:11
linux-6.1 | INFO: task hung in read_part_sector | 1 | - | - | - | 1 | 710d | 710d | 0/3 | auto-obsoleted due to no activity on 2024/05/17 07:34
upstream | INFO: task hung in read_part_sector [block] | 1 | - | - | - | 2 | 726d | 792d | 0/29 | auto-obsoleted due to no activity on 2024/04/21 10:47
linux-6.6 | INFO: task hung in read_part_sector | 1 | - | - | - | 243 | 13h18m | 212d | 0/2 | upstream: reported on 2025/06/19 22:23
linux-6.1 | INFO: task hung in read_part_sector (2) | 1 | - | - | - | 1 | 588d | 588d | 0/3 | auto-obsoleted due to no activity on 2024/09/16 08:59
upstream | INFO: task hung in read_part_sector (2) [block] | 1 | syz | error | - | 12788 | now | 531d | 0/29 | upstream: reported syz repro on 2024/08/04 19:22

Sample crash report:
INFO: task udevd:4864 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:23216 pid:4864  ppid:3637   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5245 [inline]
 __schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
 schedule+0xb9/0x180 kernel/sched/core.c:6638
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8798
 folio_wait_bit_common+0x70a/0xfa0 mm/filemap.c:1324
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1a9/0x760 mm/filemap.c:3641
 read_mapping_folio include/linux/pagemap.h:799 [inline]
 read_part_sector+0xce/0x350 block/partitions/core.c:724
 adfspart_check_POWERTEC+0xc0/0x890 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:146 [inline]
 blk_add_partitions block/partitions/core.c:609 [inline]
 bdev_disk_changed+0x7d9/0x14a0 block/partitions/core.c:695
 blkdev_get_whole+0x2e8/0x370 block/bdev.c:704
 blkdev_get_by_dev+0x32e/0xa60 block/bdev.c:841
 blkdev_open+0x11e/0x2e0 block/fops.c:500
 do_dentry_open+0x7e9/0x10d0 fs/open.c:882
 do_open fs/namei.c:3634 [inline]
 path_openat+0x2635/0x2ee0 fs/namei.c:3791
 do_filp_open+0x1f1/0x430 fs/namei.c:3818
 do_sys_openat2+0x150/0x4b0 fs/open.c:1320
 do_sys_open fs/open.c:1336 [inline]
 __do_sys_openat fs/open.c:1352 [inline]
 __se_sys_openat fs/open.c:1347 [inline]
 __x64_sys_openat+0x135/0x160 fs/open.c:1347
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f6eb26a7407
RSP: 002b:00007ffea533da10 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f6eb2da0880 RCX: 00007f6eb26a7407
RDX: 00000000000a0800 RSI: 0000564367701430 RDI: ffffffffffffff9c
RBP: 0000564367700910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000564367725aa0
R13: 0000564367718410 R14: 0000000000000000 R15: 0000564367725aa0
 </TASK>
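The trace reads as follows: udevd opened a block device node; blkdev_get_by_dev() took disk->open_mutex and kicked off a partition-table rescan (bdev_disk_changed -> check_partition -> adfspart_check_POWERTEC -> read_part_sector), and the page-cache read of the partition sector never completed, leaving the task in D state in folio_wait_bit_common() waiting on a folio lock that is never dropped. The register dump pins down the syscall: ORIG_RAX 0x101 is __NR_openat on x86_64, RDI 0xffffffffffffff9c is AT_FDCWD, and the flags in RDX, 0xa0800, decode to O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC. A minimal C sketch of that call follows; the device path is a placeholder, since the report does not name the node udevd was opening.

/* Sketch of the openat() visible in the register dump above:
 * ORIG_RAX 0x101 == __NR_openat, RDI == AT_FDCWD, and the flags in
 * RDX, 0xa0800, decode to O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC.
 * "/dev/loop0" is a placeholder; the report does not say which
 * block device node udevd was opening. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Opening a whole-disk node takes disk->open_mutex and can
	 * rescan the partition table (blkdev_get_whole ->
	 * bdev_disk_changed -> read_part_sector); that rescan is
	 * where the task above is stuck waiting on a locked folio. */
	int fd = openat(AT_FDCWD, "/dev/loop0",
			O_RDONLY | O_NONBLOCK | O_NOFOLLOW | O_CLOEXEC);

	if (fd < 0) {
		perror("openat");
		return 1;
	}
	close(fd);
	return 0;
}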

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cb2bc30 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cb2c450 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
 #0: ffffffff8cb2b2a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2b2a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2b2a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by getty/4028:
 #0: ffff88802f907098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x429/0x1390 drivers/tty/n_tty.c:2198
2 locks held by kworker/1:6/4334:
 #0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
 #1: ffffc90004067d00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
3 locks held by kworker/1:15/4597:
 #0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
 #1: ffffc9000523fd00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8cb30f78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #2: ffffffff8cb30f78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3c0/0x890 kernel/rcu/tree_exp.h:962
1 lock held by udevd/4864:
 #0: ffff888143fe74c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x13d/0xa60 block/bdev.c:832
4 locks held by kworker/u4:18/10751:
 #0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
 #1: ffffc900037f7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8dd314d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x148/0xba0 net/core/net_namespace.c:594
 #3: ffffffff8cb30f78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #3: ffffffff8cb30f78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x2ec/0x890 kernel/rcu/tree_exp.h:962
2 locks held by kworker/1:19/11431:
 #0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
 #1: ffffc9000351fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7b0/0x1160 kernel/workqueue.c:2267
1 lock held by rm/16981:
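Note the udevd/4864 entry above: the hung task still holds disk->open_mutex, taken in blkdev_get_by_dev() at block/bdev.c:832, and it cannot release the mutex until the partition rescan finishes. Every other open of the same disk serializes on that mutex, so one stuck rescan is enough to wedge all subsequent openers as well. The sketch below illustrates that serialization from userspace; it is not from the report, and /dev/loop0 is a hypothetical path.

/* Illustration only (not from the report): two concurrent opens of
 * the same whole-disk node serialize on disk->open_mutex inside
 * blkdev_get_by_dev(), so once one opener is stuck in the partition
 * rescan, every later opener hangs behind it in D state as well.
 * Build with -pthread. */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *opener(void *path)
{
	/* The second thread blocks here until the first open's
	 * partition rescan releases disk->open_mutex. */
	int fd = open((const char *)path, O_RDONLY | O_NONBLOCK);

	if (fd < 0)
		perror("open");
	else
		close(fd);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;
	static char dev[] = "/dev/loop0";

	pthread_create(&t1, NULL, opener, dev);
	pthread_create(&t2, NULL, opener, dev);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}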

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x188/0x24e lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3e6/0x460 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xeee/0xf30 kernel/hung_task.c:377
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 1 skipped: idling at default_idle+0xb/0x10 arch/x86/kernel/process.c:741
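The report itself comes from the khungtaskd watchdog (kernel/hung_task.c): it periodically scans the task list, flags any task that has stayed in TASK_UNINTERRUPTIBLE without being scheduled for longer than hung_task_timeout_secs, dumps its stack, and then triggers the CPU backtraces shown above. A rough userspace analogue, which simply lists tasks currently in D state by walking /proc (assuming the standard /proc/<pid>/stat layout), looks like this:

#include <ctype.h>
#include <dirent.h>
#include <stdio.h>

int main(void)
{
	/* Walk /proc; numeric directory names are PIDs. */
	DIR *proc = opendir("/proc");
	struct dirent *de;

	if (!proc) {
		perror("opendir");
		return 1;
	}
	while ((de = readdir(proc)) != NULL) {
		char path[64], comm[64], state;
		FILE *f;

		if (!isdigit((unsigned char)de->d_name[0]))
			continue;
		snprintf(path, sizeof(path), "/proc/%s/stat", de->d_name);
		f = fopen(path, "r");
		if (!f)
			continue;
		/* /proc/<pid>/stat begins: pid (comm) state ... */
		if (fscanf(f, "%*d (%63[^)]) %c", comm, &state) == 2 &&
		    state == 'D')
			printf("task %s:%s is in D (uninterruptible) state\n",
			       comm, de->d_name);
		fclose(f);
	}
	closedir(proc);
	return 0;
}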

Crashes (16):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2026/01/17 04:48 | linux-6.1.y | bec0e10ee67e | 20d37d28 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2026/01/17 04:47 | linux-6.1.y | bec0e10ee67e | 20d37d28 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2026/01/16 12:32 | linux-6.1.y | bec0e10ee67e | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2026/01/15 22:39 | linux-6.1.y | bec0e10ee67e | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2026/01/14 09:54 | linux-6.1.y | bec0e10ee67e | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2026/01/13 16:23 | linux-6.1.y | bec0e10ee67e | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/12/09 01:33 | linux-6.1.y | 50cbba13faa2 | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/12/08 07:56 | linux-6.1.y | 50cbba13faa2 | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/30 08:37 | linux-6.1.y | f6e38ae624cf | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/30 08:35 | linux-6.1.y | f6e38ae624cf | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/30 08:34 | linux-6.1.y | f6e38ae624cf | d6526ea3 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/05 02:25 | linux-6.1.y | f6e38ae624cf | 686bf657 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/04 09:11 | linux-6.1.y | f6e38ae624cf | 686bf657 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/10/25 08:51 | linux-6.1.y | 8e6e2188d949 | c0460fcd | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/10/17 03:12 | linux-6.1.y | c2fda4b3f577 | 19568248 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/07/28 11:29 | linux-6.1.y | 3594f306da12 | fb8f743d | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector