syzbot


INFO: task hung in read_part_sector (3)

Status: upstream: reported on 2025/07/28 11:29
Reported-by: syzbot+910d3e8c08500bfcbc85@syzkaller.appspotmail.com
First crash: 90d, last: 1d02h
Similar bugs (6)
Kernel     | Title                                                 | Rank | Repro | Cause bisect | Fix bisect | Count | Last   | Reported | Patched | Status
linux-5.15 | INFO: task hung in read_part_sector (origin:upstream) | 1    | C     | error        |            | 252   | 23h43m | 819d     | 0/3     | upstream: reported C repro on 2023/07/29 13:11
linux-6.1  | INFO: task hung in read_part_sector                   | 1    |       |              |            | 1     | 627d   | 627d     | 0/3     | auto-obsoleted due to no activity on 2024/05/17 07:34
upstream   | INFO: task hung in read_part_sector (block)           | 1    |       |              |            | 2     | 643d   | 709d     | 0/29    | auto-obsoleted due to no activity on 2024/04/21 10:47
linux-6.6  | INFO: task hung in read_part_sector                   | 1    |       |              |            | 130   | 1d09h  | 128d     | 0/2     | upstream: reported on 2025/06/19 22:23
linux-6.1  | INFO: task hung in read_part_sector (2)               | 1    |       |              |            | 1     | 505d   | 505d     | 0/3     | auto-obsoleted due to no activity on 2024/09/16 08:59
upstream   | INFO: task hung in read_part_sector (2) (block)       | 1    |       |              |            | 4540  | now    | 447d     | 0/29    | upstream: reported on 2024/08/04 19:22

Sample crash report:
INFO: task udevd:6817 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:25288 pid:6817  ppid:3638   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
 schedule+0xb9/0x180 kernel/sched/core.c:6637
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8797
 folio_wait_bit_common+0x6e1/0xf60 mm/filemap.c:1324
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1a9/0x760 mm/filemap.c:3609
 read_mapping_folio include/linux/pagemap.h:797 [inline]
 read_part_sector+0xce/0x350 block/partitions/core.c:724
 adfspart_check_POWERTEC+0xb4/0x870 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:146 [inline]
 blk_add_partitions block/partitions/core.c:609 [inline]
 bdev_disk_changed+0x7bd/0x1480 block/partitions/core.c:695
 blkdev_get_whole+0x2e8/0x370 block/bdev.c:687
 blkdev_get_by_dev+0x32e/0xa60 block/bdev.c:824
 blkdev_open+0x11e/0x2e0 block/fops.c:500
 do_dentry_open+0x7e9/0x10d0 fs/open.c:882
 do_open fs/namei.c:3634 [inline]
 path_openat+0x25c6/0x2e70 fs/namei.c:3791
 do_filp_open+0x1c1/0x3c0 fs/namei.c:3818
 do_sys_openat2+0x142/0x490 fs/open.c:1318
 do_sys_open fs/open.c:1334 [inline]
 __do_sys_openat fs/open.c:1350 [inline]
 __se_sys_openat fs/open.c:1345 [inline]
 __x64_sys_openat+0x135/0x160 fs/open.c:1345
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f811eca7407
RSP: 002b:00007ffdae63a580 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f811f3c2880 RCX: 00007f811eca7407
RDX: 00000000000a0800 RSI: 00005588db6f80f0 RDI: ffffffffffffff9c
RBP: 00005588db6e0910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 00005588db6f8e00
R13: 00005588db6f8410 R14: 0000000000000000 R15: 00005588db6f8e00
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cb2b630 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cb2be50 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by kworker/0:2/952:
 #0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90004777d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by getty/4026:
 #0: ffff88802fa02098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
3 locks held by kworker/u4:7/4395:
 #0: ffff888017479138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900050dfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
5 locks held by kworker/u4:11/4433:
4 locks held by kworker/u4:12/4461:
 #0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900050cfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8dd34b90 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x132/0xb80 net/core/net_namespace.c:594
 #3: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: cangw_pernet_exit_batch+0x1c/0x90 net/can/gw.c:1281
1 lock held by udevd/6817:
 #0: ffff8880249d44c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x13d/0xa60 block/bdev.c:815
1 lock held by udevd/7176:
1 lock held by syz.8.1054/10387:
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6150
2 locks held by syz.8.1054/10390:
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6150
 #1: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
1 lock held by syz.8.1054/10392:
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6150
1 lock held by syz.9.1053/10393:
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: devinet_ioctl+0x288/0x1af0 net/ipv4/devinet.c:1080
2 locks held by syz.1.1055/10400:
 #0: ffffffff8dd34b90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x32e/0x5b0 net/core/net_namespace.c:504
 #1: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: register_nexthop_notifier+0x7d/0x210 net/ipv4/nexthop.c:3613
1 lock held by syz.1.1055/10401:
 #0: ffffffff8dd41868 (rtnl_mutex){+.+.}-{3:3}, at: ip6_mroute_setsockopt+0x8b3/0xe50 net/ipv6/ip6mr.c:1743
1 lock held by syz.6.1056/10422:
2 locks held by dhcpcd/10403:
 #0: ffff88807471d010 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
 #0: ffff88807471d010 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
 #0: ffff88807471d010 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1400
 #1: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #1: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x455/0x830 kernel/rcu/tree_exp.h:962
1 lock held by udevadm/10424:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xeee/0xf30 kernel/hung_task.c:377
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 10422 Comm: syz.6.1056 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:mark_lock+0x9c/0x320 kernel/locking/lockdep.c:4606
Code: 89 c7 41 81 e7 ff 1f 00 00 c1 e8 03 25 f8 03 00 00 48 8d b8 40 22 ae 90 be 08 00 00 00 e8 3c 54 6d 00 4c 0f a3 3d 34 73 4a 0f <73> 10 4b 8d 04 7f c1 e0 06 4c 8d b8 00 a1 46 90 eb 24 48 c7 c0 40
RSP: 0018:ffffc9000d84f1e0 EFLAGS: 00000057
RAX: 0000000000000001 RBX: ffff888019a81dc0 RCX: ffffffff8163af04
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff90ae22d8
RBP: 0000000000000008 R08: dffffc0000000000 R09: fffffbfff215c45c
R10: fffffbfff215c45c R11: 1ffffffff215c45b R12: 0000000000000100
R13: dffffc0000000000 R14: ffff888019a828c8 R15: 00000000000004ff
FS:  00007f68330b86c0(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005588db70ff18 CR3: 000000006878f000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 mark_usage kernel/locking/lockdep.c:4549 [inline]
 __lock_acquire+0xd6f/0x7c50 kernel/locking/lockdep.c:5003
 lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
 __skb_try_recv_datagram+0x147/0x4d0 net/core/datagram.c:262
 __unix_dgram_recvmsg+0x2d1/0xd70 net/unix/af_unix.c:2451
 ____sys_recvmsg+0x292/0x580 net/socket.c:-1
 ___sys_recvmsg+0x1b2/0x510 net/socket.c:2780
 do_recvmmsg+0x359/0x7d0 net/socket.c:2874
 __sys_recvmmsg net/socket.c:2953 [inline]
 __do_sys_recvmmsg net/socket.c:2976 [inline]
 __se_sys_recvmmsg net/socket.c:2969 [inline]
 __x64_sys_recvmmsg+0x18d/0x240 net/socket.c:2969
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f683218efc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f68330b8038 EFLAGS: 00000246 ORIG_RAX: 000000000000012b
RAX: ffffffffffffffda RBX: 00007f68323e6090 RCX: 00007f683218efc9
RDX: 0000000000010106 RSI: 00002000000000c0 RDI: 0000000000000004
RBP: 00007f6832211f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f68323e6128 R14: 00007f68323e6090 R15: 00007fffd7120d48
 </TASK>
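
The report above comes from the kernel's hung-task watchdog (khungtaskd), which fires when a task sits in uninterruptible D state past a threshold; the log's own hint names the controlling sysctl. A minimal, hedged config fragment for tuning it — the values below are illustrative defaults, not taken from this report:

```
# /etc/sysctl.d/hung-task.conf (hypothetical file name; values illustrative)
# Seconds a task may stay in D state before a report fires
# (0 disables the check entirely, as the log message suggests):
kernel.hung_task_timeout_secs = 240
# 0 = warn only; 1 = panic when a hung task is detected:
kernel.hung_task_panic = 0
```

These are standard kernel sysctls (documented in the kernel's admin-guide), applied with `sysctl --system` or written directly under /proc/sys/kernel/.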

Crashes (3):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                           | Manager             | Title
2025/10/25 08:51 | linux-6.1.y | 8e6e2188d949 | c0460fcd  | .config | console log | report |           |         | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/10/17 03:12 | linux-6.1.y | c2fda4b3f577 | 19568248  | .config | console log | report |           |         | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/07/28 11:29 | linux-6.1.y | 3594f306da12 | fb8f743d  | .config | console log | report |           |         | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
* Struck through repros no longer work on HEAD.