syzbot


INFO: task hung in __sync_dirty_buffer (3)

Status: auto-obsoleted due to no activity on 2025/11/28 10:20
Subsystems: nilfs
First crash: 257d, last: 105d
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-4.14 | INFO: task hung in __sync_dirty_buffer ext4 nilfs2 | 1 | C | error | - | 9 | 1025d | 1901d | 0/1 | upstream: reported C repro on 2020/09/29 05:34
linux-4.19 | INFO: task hung in __sync_dirty_buffer ext4 nilfs2 | 1 | C | error | - | 25 | 1014d | 1889d | 0/1 | upstream: reported C repro on 2020/10/11 09:03
linux-6.1 | INFO: task hung in __sync_dirty_buffer | 1 | - | - | - | 2 | 949d | 972d | 0/3 | auto-obsoleted due to no activity on 2023/08/23 09:07
linux-5.15 | INFO: task hung in __sync_dirty_buffer | 1 | - | - | - | 10 | 943d | 980d | 0/3 | auto-obsoleted due to no activity on 2023/08/22 15:19
upstream | INFO: task hung in __sync_dirty_buffer ext4 | 1 | C | inconclusive | error | 832 | 915d | 1564d | 22/29 | fixed on 2023/07/01 16:05
upstream | INFO: task hung in __sync_dirty_buffer (2) ntfs3 | 1 | - | - | - | 6 | 520d | 601d | 0/29 | auto-obsoleted due to no activity on 2024/10/09 06:51

Sample crash report:
INFO: task syz-executor:6556 blocked for more than 144 seconds.
      Not tainted 6.17.0-rc1-syzkaller-g8f5ae30d69d7 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D
 stack:0     pid:6556  tgid:6556  ppid:6545   task_flags:0x400140 flags:0x00000010
Call trace:
 __switch_to+0x418/0x87c arch/arm64/kernel/process.c:741 (T)
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x13b0/0x2864 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xb4/0x230 kernel/sched/core.c:7058
 io_schedule+0x84/0xf0 kernel/sched/core.c:7903
 bit_wait_io+0x1c/0xac kernel/sched/wait_bit.c:250
 __wait_on_bit kernel/sched/wait_bit.c:52 [inline]
 out_of_line_wait_on_bit+0x158/0x1f0 kernel/sched/wait_bit.c:67
 wait_on_bit_io include/linux/wait_bit.h:105 [inline]
 __wait_on_buffer fs/buffer.c:123 [inline]
 wait_on_buffer include/linux/buffer_head.h:420 [inline]
 __sync_dirty_buffer+0x204/0x304 fs/buffer.c:2868
 nilfs_sync_super fs/nilfs2/super.c:186 [inline]
 nilfs_commit_super+0x3a4/0x864 fs/nilfs2/super.c:302
 nilfs_sync_fs+0x258/0x3dc fs/nilfs2/super.c:541
 sync_filesystem+0xe8/0x218 fs/sync.c:56
 generic_shutdown_super+0x70/0x2b8 fs/super.c:622
 kill_block_super+0x44/0x90 fs/super.c:1766
 deactivate_locked_super+0xc4/0x12c fs/super.c:474
 deactivate_super+0xe0/0x100 fs/super.c:507
 cleanup_mnt+0x31c/0x3ac fs/namespace.c:1378
 __cleanup_mnt+0x20/0x30 fs/namespace.c:1385
 task_work_run+0x1dc/0x260 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 do_notify_resume+0x174/0x1f4 arch/arm64/kernel/entry-common.c:155
 exit_to_user_mode_prepare arch/arm64/kernel/entry-common.c:173 [inline]
 exit_to_user_mode arch/arm64/kernel/entry-common.c:182 [inline]
 el0_svc+0xb8/0x180 arch/arm64/kernel/entry-common.c:880
 el0t_64_sync_handler+0x84/0x12c arch/arm64/kernel/entry-common.c:898
 el0t_64_sync+0x198/0x19c arch/arm64/kernel/entry.S:596
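
The trace ends in the ordinary buffer-head write-and-wait path: __sync_dirty_buffer() submits the superblock buffer for write and then sleeps in wait_on_buffer() (the io_schedule()/bit_wait_io D-state sleep above) until the block layer's completion handler unlocks the buffer. A minimal sketch of that pattern, assuming only the standard <linux/buffer_head.h> API; the function name below is hypothetical and the real fs/buffer.c body has extra checks that are omitted here:

#include <linux/blk_types.h>
#include <linux/buffer_head.h>

/*
 * Sketch of the submit-then-wait pattern at the top of the stack above.
 * sketch_sync_dirty_buffer() is a hypothetical stand-in for
 * __sync_dirty_buffer(); mapping checks and write hints are omitted.
 */
static int sketch_sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
{
	int ret = 0;

	lock_buffer(bh);                        /* serialize writers on BH_Lock */
	if (test_clear_buffer_dirty(bh)) {
		get_bh(bh);
		bh->b_end_io = end_buffer_write_sync; /* unlocks bh on I/O completion */
		submit_bh(REQ_OP_WRITE | op_flags, bh);
		wait_on_buffer(bh);             /* uninterruptible sleep reported here */
		if (!buffer_uptodate(bh))
			ret = -EIO;
	} else {
		unlock_buffer(bh);              /* buffer was clean; nothing to write */
	}
	return ret;
}

If the submitted write never completes, the task stays in this uninterruptible wait and the hung-task watchdog fires after the configured timeout.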

Showing all locks held in the system:
1 lock held by kworker/R-kvfre/6:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
3 locks held by kworker/0:0/9:
3 locks held by kworker/u8:0/12:
1 lock held by kworker/R-mm_pe/13:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
3 locks held by kworker/u8:1/15:
2 locks held by kworker/1:0/24:
2 locks held by kworker/1:1/26:
1 lock held by khungtaskd/32:
 #0: ffff80008f9a9060 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire+0x4/0x48 include/linux/rcupdate.h:330
1 lock held by kworker/R-write/34:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3529
3 locks held by kworker/u8:2/41:
 #0: ffff0000c0032148 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3210
 #1: ffff800099277bc0 ((crda_timeout).work){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3210
 #2: ffff800092a58268 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
3 locks held by kworker/u8:3/42:
 #0: ffff0000c0032148 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3210
 #1: ffff800099287bc0 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3210
 #2: ffff800092a58268 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
2 locks held by pr/ttyAMA-1/43:
2 locks held by kswapd0/74:
3 locks held by kworker/u8:4/354:
3 locks held by kworker/u8:5/690:
2 locks held by kworker/0:2/1818:
3 locks held by kworker/R-ipv6_/4197:
 #0: ffff0000d2e51148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3210
 #1: ffff8000a22e7ba0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3210
 #2: ffff800092a58268 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
1 lock held by kworker/R-bat_e/4272:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3529
3 locks held by kworker/u8:6/4942:
1 lock held by klogd/6153:
1 lock held by dhcpcd/6218:
3 locks held by dhcpcd/6219:
2 locks held by getty/6308:
 #0: ffff0000d321a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
 #1: ffff80009bbae2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x34c/0xfa4 drivers/tty/n_tty.c:2222
4 locks held by udevd/6539:
3 locks held by syz-executor/6547:
1 lock held by syz-executor/6548:
3 locks held by syz-executor/6552:
2 locks held by kworker/1:3/6553:
2 locks held by syz-executor/6556:
 #0: ffff0000d89640e0 (&type->s_umount_key#82){+.+.}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff0000d89640e0 (&type->s_umount_key#82){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff0000d89640e0 (&type->s_umount_key#82){+.+.}-{4:4}, at: deactivate_super+0xd8/0x100 fs/super.c:506
 #1: ffff0000d5b44090 (&nilfs->ns_sem){++++}-{4:4}, at: nilfs_sync_fs+0xcc/0x3dc fs/nilfs2/super.c:536
4 locks held by kworker/1:4/6562:
 #0: ffff0000c0028d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3210
 #1: ffff8000a1cc7bc0 (reg_work){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3210
 #2: ffff800092a58268 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
 #3: ffff0000d2000768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6212 [inline]
 #3: ffff0000d2000768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: reg_process_self_managed_hints+0xc8/0x1dc net/wireless/reg.c:3209
1 lock held by kworker/R-wg-cr/6583:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6585:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6586:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6587:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6590:
1 lock held by kworker/R-wg-cr/6591:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6592:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6593:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6596:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6597:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6598:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x86c/0xec8 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/6599:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/6600:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676
4 locks held by kworker/1:5/6607:
 #0: ffff0000f1fd9948 ((wq_completion)wg-kex-wg2#4){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3210
 #1: ffff8000a4807bc0 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3210
 #2: ffff0000d6f91308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x130/0x748 drivers/net/wireguard/noise.c:598
 #3: ffff0000d47c34c0 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x414/0x748 drivers/net/wireguard/noise.c:632
4 locks held by kworker/1:6/6620:
4 locks held by kworker/0:3/6621:
3 locks held by kworker/0:4/6628:
2 locks held by kworker/1:7/6659:
2 locks held by kworker/0:5/6662:
3 locks held by kworker/u8:7/6692:
3 locks held by kworker/u8:8/6826:
1 lock held by syz.0.29/6929:
 #0: ffff800092a58268 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:80
3 locks held by kworker/u8:9/7655:
2 locks held by kworker/u8:10/8034:
3 locks held by kworker/u8:11/8035:
2 locks held by kworker/u8:12/8036:
4 locks held by kworker/u8:13/8037:
 #0: ffff0000d9056948 ((wq_completion)wg-kex-wg0#3){+.+.}-{0:0}, at: process_one_work+0x63c/0x155c kernel/workqueue.c:3210
 #1: ffff80009c2b7bc0 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work+0x6d4/0x155c kernel/workqueue.c:3210
 #2: ffff0000d6fb5308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x10c/0x6c8 drivers/net/wireguard/noise.c:529
 #3: ffff0000d47c0338 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x114/0x6c8 drivers/net/wireguard/noise.c:530
3 locks held by syz-executor/8038:
1 lock held by kworker/u8:14/8039:
 #0: ffff80008f850e28 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x40/0x348 kernel/workqueue.c:2676

=============================================
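
The lock dump explains the knock-on effect: syz-executor/6556 entered this wait from deactivate_super() during unmount, so it holds the superblock's s_umount rwsem and nilfs->ns_sem (taken in nilfs_sync_fs() before the superblock commit), and any other task that needs either lock queues behind the stuck buffer write. A minimal sketch of that ordering, with hypothetical example_* names standing in for struct the_nilfs and the fs/nilfs2/super.c path; only the lock/wait ordering is the point:

#include <linux/blk_types.h>
#include <linux/buffer_head.h>
#include <linux/rwsem.h>

/*
 * Illustration of the ordering reported above. example_nilfs and
 * example_sync_fs() are hypothetical stand-ins for struct the_nilfs and
 * the nilfs_sync_fs() -> nilfs_commit_super() path.
 */
struct example_nilfs {
	struct rw_semaphore ns_sem;   /* lock #1 held by task 6556 */
	struct buffer_head *ns_sbh;   /* on-disk superblock buffer */
};

static int example_sync_fs(struct example_nilfs *nilfs)
{
	int err;

	down_write(&nilfs->ns_sem);   /* taken while s_umount is already held */
	set_buffer_dirty(nilfs->ns_sbh);
	/*
	 * Blocks in wait_on_buffer() until the write completes; if it never
	 * does, ns_sem and the caller's s_umount stay held and other tasks
	 * pile up behind them, as in the lock dump above.
	 */
	err = __sync_dirty_buffer(nilfs->ns_sbh, REQ_SYNC);
	up_write(&nilfs->ns_sem);
	return err;
}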


Crashes (5):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/08/30 10:16 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | 8f5ae30d69d7 | 807a3b61 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-gce-arm64 | INFO: task hung in __sync_dirty_buffer
2025/06/19 18:39 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | 39dfc971e42d | ed3e87f7 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-gce-arm64 | INFO: task hung in __sync_dirty_buffer
2025/05/09 04:50 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | c32f8dc5aaf9 | bb813bcc | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-gce-arm64 | INFO: task hung in __sync_dirty_buffer
2025/05/04 00:44 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | e0f4c8dd9d2d | b0714e37 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-gce-arm64 | INFO: task hung in __sync_dirty_buffer
2025/03/31 02:39 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | a2392f333575 | d3999433 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-gce-arm64 | INFO: task hung in __sync_dirty_buffer