syzbot


INFO: task hung in loop_set_status (2)

Status: auto-obsoleted due to no activity on 2025/10/17 04:36
Subsystems: block
First crash: 145d, last: 106d
Similar bugs (2)
Kernel   | Title                                                    | Rank | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream | INFO: task hung in loop_set_status [block]               | 1    | -     | -            | -          | 109   | 1442d | 1444d    | 0/29    | closed as dup on 2021/11/21 14:00
upstream | INFO: task can't die in blk_mq_freeze_queue_wait [block] | 1    | -     | -            | -          | 221   | 1440d | 1448d    | 20/29   | fixed on 2022/03/08 16:11

Sample crash report:
INFO: task syz.2.526:10049 blocked for more than 146 seconds.
      Not tainted 6.16.0-rc6-syzkaller-gaaef6f251176 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.526       state:D stack:0     pid:10049 tgid:10047 ppid:6539   task_flags:0x400140 flags:0x00000011
Call trace:
 __switch_to+0x414/0x834 arch/arm64/kernel/process.c:741 (T)
 context_switch kernel/sched/core.c:5397 [inline]
 __schedule+0x1414/0x2a28 kernel/sched/core.c:6786
 __schedule_loop kernel/sched/core.c:6864 [inline]
 schedule+0xb4/0x230 kernel/sched/core.c:6879
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6936
 __mutex_lock_common+0xbd0/0x2190 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x2c/0x38 kernel/locking/mutex.c:799
 loop_reread_partitions drivers/block/loop.c:439 [inline]
 loop_set_status+0x6c4/0x9b4 drivers/block/loop.c:1266
 lo_ioctl+0x7f0/0x1b8c drivers/block/loop.c:-1
 blkdev_ioctl+0x610/0xac0 block/ioctl.c:704
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl fs/ioctl.c:893 [inline]
 __arm64_sys_ioctl+0x14c/0x1c4 fs/ioctl.c:893
 __invoke_syscall arch/arm64/kernel/syscall.c:35 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:49
 el0_svc_common+0x130/0x23c arch/arm64/kernel/syscall.c:132
 do_el0_svc+0x48/0x58 arch/arm64/kernel/syscall.c:151
 el0_svc+0x58/0x180 arch/arm64/kernel/entry-common.c:879
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596

Showing all locks held in the system:
1 lock held by kthreadd/2:
1 lock held by kworker/R-kvfre/6:
 #0: ffff80008f7113c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: raw_spin_rq_lock_nested kernel/sched/core.c:606 [inline]
 #0: ffff80008f7113c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: raw_spin_rq_lock kernel/sched/sched.h:1532 [inline]
 #0: ffff80008f7113c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rq_lock kernel/sched/sched.h:1856 [inline]
 #0: ffff80008f7113c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: __schedule+0x340/0x2a28 kernel/sched/core.c:6710
2 locks held by kworker/0:0/9:
5 locks held by kworker/0:1/11:
3 locks held by kworker/u8:0/12:
6 locks held by kworker/u8:1/13:
1 lock held by kworker/R-mm_pe/14:
 #0: ffff80008f7113c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffff80008f7113c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3531
3 locks held by kworker/1:0/24:
1 lock held by khungtaskd/32:
 #0: ffff80008f869980 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x4/0x48 include/linux/rcupdate.h:330
3 locks held by kworker/u8:2/41:
3 locks held by kworker/u8:3/42:
2 locks held by kworker/u8:4/62:
3 locks held by kworker/u8:5/270:
3 locks held by kworker/u8:6/581:
3 locks held by kworker/u8:7/592:
3 locks held by kworker/u8:8/606:
4 locks held by kworker/u8:9/686:
4 locks held by kworker/1:2/1816:
 #0: ffff0000f4bfb148 ((wq_completion)wg-kex-wg1#6){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3212
 #1: ffff8000a00c7bc0 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3212
 #2: ffff0000d4d85308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_response+0x180/0x988 drivers/net/wireguard/noise.c:742
 #3: ffff0000ced50338 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_response+0x20c/0x988 drivers/net/wireguard/noise.c:753
4 locks held by kworker/u8:10/2073:
3 locks held by kworker/u8:11/2342:
4 locks held by kworker/R-bat_e/4255:
1 lock held by klogd/6132:
2 locks held by udevd/6143:
1 lock held by dhcpcd/6198:
4 locks held by kworker/0:3/6276:
2 locks held by getty/6297:
 #0: ffff0000d6c3a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
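
For orientation, the blocked call trace above corresponds to a plain LOOP_SET_STATUS/LOOP_SET_STATUS64 ioctl on a loop device: blkdev_ioctl() dispatches to lo_ioctl(), which ends up in loop_set_status(), and when a partition rescan is requested that function calls loop_reread_partitions(), the frame shown waiting on a mutex. The sketch below is illustrative only; it is not the syzkaller reproducer (none is attached to this bug), and the device path and flag choice are assumptions made for the example.

/*
 * Illustrative sketch only, not the syzkaller reproducer.
 * Exercises the blkdev_ioctl -> lo_ioctl -> loop_set_status path seen in
 * the trace above. "/dev/loop0" and LO_FLAGS_PARTSCAN are assumptions for
 * the example; the loop device must already be bound to a backing file.
 */
#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
        struct loop_info64 info;
        int fd = open("/dev/loop0", O_RDWR);

        if (fd < 0) {
                perror("open /dev/loop0");
                return 1;
        }

        memset(&info, 0, sizeof(info));
        /* Requesting a partition rescan makes loop_set_status() call
         * loop_reread_partitions(), which is where the reported task
         * blocks on a mutex. */
        info.lo_flags = LO_FLAGS_PARTSCAN;

        if (ioctl(fd, LOOP_SET_STATUS64, &info))
                perror("LOOP_SET_STATUS64");

        close(fd);
        return 0;
}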

Crashes (2):
Time             | Kernel                                                                      | Commit       | Syzkaller | Manager               | Title
2025/07/19 04:35 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | aaef6f251176 | f550e092  | ci-upstream-gce-arm64 | INFO: task hung in loop_set_status
2025/06/09 16:11 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | d7fa1af5b33e | 4826c28e  | ci-upstream-gce-arm64 | INFO: task hung in loop_set_status