syzbot


INFO: task hung in read_part_sector

Status: upstream: reported on 2025/06/19 22:23
Reported-by: syzbot+df988a9b3a05646bca4c@syzkaller.appspotmail.com
First crash: 74d, last: 15h17m
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in read_part_sector origin:upstream | 1 | C | error | - | 199 | 8h58m | 766d | 0/3 | upstream: reported C repro on 2023/07/29 13:11
linux-6.1 | INFO: task hung in read_part_sector | 1 | - | - | - | 1 | 573d | 573d | 0/3 | auto-obsoleted due to no activity on 2024/05/17 07:34
upstream | INFO: task hung in read_part_sector block | 1 | - | - | - | 2 | 589d | 655d | 0/29 | auto-obsoleted due to no activity on 2024/04/21 10:47
linux-6.1 | INFO: task hung in read_part_sector (2) | 1 | - | - | - | 1 | 451d | 451d | 0/3 | auto-obsoleted due to no activity on 2024/09/16 08:59
linux-6.1 | INFO: task hung in read_part_sector (3) | 1 | - | - | - | 1 | 36d | 36d | 0/3 | upstream: reported on 2025/07/28 11:29
upstream | INFO: task hung in read_part_sector (2) block | 1 | - | - | - | 2465 | now | 394d | 0/29 | upstream: reported on 2024/08/04 19:22

Sample crash report:
INFO: task udevd:5799 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:24584 pid:5799  ppid:5159   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5380 [inline]
 __schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
 schedule+0xbd/0x170 kernel/sched/core.c:6773
 io_schedule+0x80/0xd0 kernel/sched/core.c:9022
 folio_wait_bit_common+0x6eb/0xf70 mm/filemap.c:1329
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1c0/0x7e0 mm/filemap.c:3771
 read_mapping_folio include/linux/pagemap.h:898 [inline]
 read_part_sector+0xd2/0x350 block/partitions/core.c:718
 adfspart_check_POWERTEC+0x8d/0xf00 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:138 [inline]
 blk_add_partitions block/partitions/core.c:600 [inline]
 bdev_disk_changed+0x73a/0x1410 block/partitions/core.c:689
 blkdev_get_whole+0x30d/0x390 block/bdev.c:655
 blkdev_get_by_dev+0x279/0x600 block/bdev.c:797
 blkdev_open+0x152/0x360 block/fops.c:589
 do_dentry_open+0x8c6/0x1500 fs/open.c:929
 do_open fs/namei.c:3632 [inline]
 path_openat+0x274b/0x3190 fs/namei.c:3789
 do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
 do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
 do_sys_open fs/open.c:1434 [inline]
 __do_sys_openat fs/open.c:1450 [inline]
 __se_sys_openat fs/open.c:1445 [inline]
 __x64_sys_openat+0x139/0x160 fs/open.c:1445
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fb6ee2a7407
RSP: 002b:00007fff2ffc9530 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fb6eea6a880 RCX: 00007fb6ee2a7407
RDX: 00000000000a0800 RSI: 000056168c05d0f0 RDI: ffffffffffffff9c
RBP: 000056168c045910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000056168c059ba0
R13: 000056168c05d410 R14: 0000000000000000 R15: 000056168c059ba0
 </TASK>
INFO: task udevd:5801 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:23976 pid:5801  ppid:5159   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5380 [inline]
 __schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
 schedule+0xbd/0x170 kernel/sched/core.c:6773
 io_schedule+0x80/0xd0 kernel/sched/core.c:9022
 folio_wait_bit_common+0x6eb/0xf70 mm/filemap.c:1329
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1c0/0x7e0 mm/filemap.c:3771
 read_mapping_folio include/linux/pagemap.h:898 [inline]
 read_part_sector+0xd2/0x350 block/partitions/core.c:718
 adfspart_check_POWERTEC+0x8d/0xf00 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:138 [inline]
 blk_add_partitions block/partitions/core.c:600 [inline]
 bdev_disk_changed+0x73a/0x1410 block/partitions/core.c:689
 blkdev_get_whole+0x30d/0x390 block/bdev.c:655
 blkdev_get_by_dev+0x279/0x600 block/bdev.c:797
 blkdev_open+0x152/0x360 block/fops.c:589
 do_dentry_open+0x8c6/0x1500 fs/open.c:929
 do_open fs/namei.c:3632 [inline]
 path_openat+0x274b/0x3190 fs/namei.c:3789
 do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
 do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
 do_sys_open fs/open.c:1434 [inline]
 __do_sys_openat fs/open.c:1450 [inline]
 __se_sys_openat fs/open.c:1445 [inline]
 __x64_sys_openat+0x139/0x160 fs/open.c:1445
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fb6ee2a7407
RSP: 002b:00007fff2ffc9530 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fb6eea6a880 RCX: 00007fb6ee2a7407
RDX: 00000000000a0800 RSI: 000056168c058a40 RDI: ffffffffffffff9c
RBP: 000056168c045910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000056168c059ba0
R13: 000056168c05d410 R14: 0000000000000000 R15: 000056168c059ba0
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
 #0: ffffffff8cd2fbe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #0: ffffffff8cd2fbe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #0: ffffffff8cd2fbe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/u4:4/58:
 #0: ffff88802c223138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff88802c223138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc90001597d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc90001597d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4700
2 locks held by kworker/0:3/5174:
 #0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc90003297d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc90003297d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by dhcpcd/5455:
 #0: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: devinet_ioctl+0x32c/0x1c60 net/ipv4/devinet.c:1102
2 locks held by getty/5551:
 #0: ffff888030dea0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
1 lock held by udevd/5799:
 #0: ffff888140fe94c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by udevd/5801:
 #0: ffff888021f6b4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by udevd/6081:
 #0: ffff8881476d34c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
5 locks held by kworker/u4:12/6420:
 #0: ffff888017873938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017873938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc9000c9cfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc9000c9cfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfaf510 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x136/0xb90 net/core/net_namespace.c:606
 #3: ffff88804aefd3e8 (&wg->device_update_lock){+.+.}-{3:3}, at: wg_destruct+0x116/0x310 drivers/net/wireguard/device.c:250
 #4: ffffffff8cd35bb8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #4: ffffffff8cd35bb8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
3 locks held by kworker/u4:14/6422:
 #0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc9000c9dfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc9000c9dfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:286
2 locks held by syz-executor/7430:
 #0: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6472
 #1: ffffffff8cd35bb8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8cd35bb8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by syz.6.353/7521:
 #0: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8dfbc348 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x41/0x1c0 drivers/net/tun.c:3511

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xf41/0xf80 kernel/hung_task.c:379
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 9 Comm: kworker/0:1 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: rcu_gp process_srcu
RIP: 0010:preempt_count arch/x86/include/asm/preempt.h:27 [inline]
RIP: 0010:preempt_count_sub+0x37/0x160 kernel/sched/core.c:5872
Code: fc ff df 48 c7 c0 80 c4 ea 96 48 c1 e8 03 0f b6 04 18 84 c0 0f 85 bd 00 00 00 83 3d c2 58 8f 15 00 75 25 65 8b 05 41 4e a8 7e <89> c1 81 e1 ff ff ff 7f 39 cf 7f 25 81 ff ff 00 00 00 0f 93 c1 84
RSP: 0018:ffffc900000e79e8 EFLAGS: 00000246
RAX: 0000000080000001 RBX: dffffc0000000000 RCX: ffffffff96eac403
RDX: 00000000000000eb RSI: ffffffff8afc7020 RDI: 0000000000000001
RBP: 0000000000000000 R08: ffff8880b8f43f8f R09: 1ffff110171e87f1
R10: dffffc0000000000 R11: ffffed10171e87f2 R12: 0000000000000002
R13: ffff8880b8f43f88 R14: 000000ebae3a4dfa R15: 0000000000000ece
FS:  0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9d34794198 CR3: 0000000074d35000 CR4: 00000000003506f0
Call Trace:
 <TASK>
 delay_tsc+0x4e/0xc0 arch/x86/lib/delay.c:77
 try_check_zero+0x39a/0x3e0 kernel/rcu/srcutree.c:1090
 srcu_advance_state kernel/rcu/srcutree.c:1692 [inline]
 process_srcu+0x243/0x1330 kernel/rcu/srcutree.c:1795
 process_one_work kernel/workqueue.c:2634 [inline]
 process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
 worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>

Crashes (27):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/09/02 04:29 | linux-6.6.y | cc1a1c5b404a | 807a3b61 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/31 03:41 | linux-6.6.y | cc1a1c5b404a | 807a3b61 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/30 07:37 | linux-6.6.y | cc1a1c5b404a | 807a3b61 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/27 05:13 | linux-6.6.y | bb9c90ab9c5a | e12e5ba4 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/13 16:02 | linux-6.6.y | 3a8ababb8b6a | 22ec1469 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/13 02:18 | linux-6.6.y | 3a8ababb8b6a | 22ec1469 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/13 02:12 | linux-6.6.y | 3a8ababb8b6a | 22ec1469 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/11 12:35 | linux-6.6.y | 3a8ababb8b6a | 32a0e5ed | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/05 02:15 | linux-6.6.y | 3a8ababb8b6a | f5bcc8dc | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/04 08:19 | linux-6.6.y | 3a8ababb8b6a | 7368264b | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/03 22:41 | linux-6.6.y | 3a8ababb8b6a | 7368264b | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/03 12:32 | linux-6.6.y | 3a8ababb8b6a | 7368264b | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/08/03 12:28 | linux-6.6.y | 3a8ababb8b6a | 7368264b | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/27 05:07 | linux-6.6.y | dbcb8d8e4163 | fb8f743d | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/27 05:06 | linux-6.6.y | dbcb8d8e4163 | fb8f743d | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/27 05:06 | linux-6.6.y | dbcb8d8e4163 | fb8f743d | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/26 18:28 | linux-6.6.y | dbcb8d8e4163 | fb8f743d | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/21 20:35 | linux-6.6.y | d96eb99e2f0e | 56d87229 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/19 09:19 | linux-6.6.y | d96eb99e2f0e | 7117feec | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/12 02:53 | linux-6.6.y | 59a2de10b81a | 3cda49cf | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/05 20:08 | linux-6.6.y | 3f5b4c104b7d | 4f67c4ae | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/07/05 20:05 | linux-6.6.y | 3f5b4c104b7d | 4f67c4ae | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/06/30 01:50 | linux-6.6.y | 3f5b4c104b7d | fc9d8ee5 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/06/23 00:25 | linux-6.6.y | 6282921b6825 | d6cdfb8a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/06/21 14:00 | linux-6.6.y | 6282921b6825 | d6cdfb8a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/06/21 06:39 | linux-6.6.y | 6282921b6825 | d6cdfb8a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector
2025/06/19 22:23 | linux-6.6.y | 6282921b6825 | ed3e87f7 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | INFO: task hung in read_part_sector