syzbot


INFO: task hung in xfs_buf_item_unpin (2)

Status: upstream: reported C repro on 2025/08/22 07:02
Reported-by: syzbot+140ba3fddd5e22a27d02@syzkaller.appspotmail.com
First crash: 10d, last: 9d20h
Similar bugs (4)
Kernel     | Title                                              | Rank | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-5.15 | INFO: task hung in xfs_buf_item_unpin              | 1    |       |              |            | 2     | 861d  | 873d     | 0/3     | auto-obsoleted due to no activity on 2023/08/21 23:02
upstream   | INFO: task hung in xfs_buf_item_unpin (xfs)        | 1    | C     | error        | done       | 147   | 609d  | 916d     | 0/29    | auto-obsoleted due to no activity on 2024/03/11 08:02
upstream   | INFO: task hung in xfs_buf_item_unpin (2) (kernel) | 1    | C     | done         | done       | 104   | 8d09h | 381d     | 0/29    | upstream: reported C repro on 2024/08/16 05:17
linux-6.1  | INFO: task hung in xfs_buf_item_unpin              | 1    |       |              |            | 10    | 791d  | 902d     | 0/3     | auto-obsoleted due to no activity on 2023/10/11 09:07

Sample crash report:
INFO: task syz.3.39:4742 blocked for more than 143 seconds.
      Not tainted 6.1.148-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.39        state:D stack:24608 pid:4742  ppid:4394   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
 schedule+0xb9/0x180 kernel/sched/core.c:6637
 schedule_timeout+0x97/0x280 kernel/time/timer.c:1941
 ___down_common kernel/locking/semaphore.c:229 [inline]
 __down_common+0x2e7/0x700 kernel/locking/semaphore.c:250
 down+0x7c/0xd0 kernel/locking/semaphore.c:64
 xfs_buf_lock+0x163/0x560 fs/xfs/xfs_buf.c:1120
 xfs_buf_item_unpin+0x1c7/0x770 fs/xfs/xfs_buf_item.c:582
 xfs_trans_committed_bulk+0x333/0x7f0 fs/xfs/xfs_trans.c:808
 xlog_cil_committed+0x26c/0xe60 fs/xfs/xfs_log_cil.c:795
 xlog_cil_process_committed+0x155/0x1a0 fs/xfs/xfs_log_cil.c:823
 xlog_state_shutdown_callbacks+0x266/0x360 fs/xfs/xfs_log.c:538
 xlog_force_shutdown+0x2c5/0x320 fs/xfs/xfs_log.c:3802
 xfs_do_force_shutdown+0x27d/0x660 fs/xfs/xfs_fsops.c:540
 xfs_fs_goingdown+0x6d/0x150 fs/xfs/xfs_fsops.c:-1
 xfs_file_ioctl+0x1031/0x1590 fs/xfs/xfs_ioctl.c:2132
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:870 [inline]
 __se_sys_ioctl+0xfa/0x170 fs/ioctl.c:856
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f32a258ebe9
RSP: 002b:00007f32a3472038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f32a27b5fa0 RCX: 00007f32a258ebe9
RDX: 0000200000000080 RSI: 000000008004587d RDI: 0000000000000005
RBP: 00007f32a2611e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f32a27b6038 R14: 00007f32a27b5fa0 R15: 00007ffc00e44428
 </TASK>
INFO: task syz.3.39:4794 blocked for more than 144 seconds.
      Not tainted 6.1.148-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.39        state:D stack:26784 pid:4794  ppid:4394   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
 schedule+0xb9/0x180 kernel/sched/core.c:6637
 xlog_wait fs/xfs/xfs_log_priv.h:617 [inline]
 xlog_wait_on_iclog+0x497/0x730 fs/xfs/xfs_log.c:890
 xlog_force_lsn+0x557/0x9d0 fs/xfs/xfs_log.c:3337
 __xfs_trans_commit+0x959/0xe00 fs/xfs/xfs_trans.c:1013
 xfs_sync_sb_buf+0xe7/0x180 fs/xfs/libxfs/xfs_sb.c:1162
 xfs_ioc_setlabel fs/xfs/xfs_ioctl.c:1827 [inline]
 xfs_file_ioctl+0x1290/0x1590 fs/xfs/xfs_ioctl.c:1925
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:870 [inline]
 __se_sys_ioctl+0xfa/0x170 fs/ioctl.c:856
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f32a258ebe9
RSP: 002b:00007f32a3451038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f32a27b6090 RCX: 00007f32a258ebe9
RDX: 0000200000000100 RSI: 0000000041009432 RDI: 0000000000000004
RBP: 00007f32a2611e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f32a27b6128 R14: 00007f32a27b6090 R15: 00007ffc00e44428
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u4:1/11:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cb2b770 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cb2bf90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
2 locks held by kworker/1:1/26:
 #0: ffff888024018538 ((wq_completion)xfs-sync/loop0){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90000a2fd00 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by khungtaskd/28:
 #0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2ade0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by kworker/u4:3/46:
 #0: ffff88814478f938 ((wq_completion)xfs-cil/loop0){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90000b77d00 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/u4:4/1011:
2 locks held by getty/4028:
 #0: ffff88814cd83098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by kworker/1:15/4351:
 #0: ffff88801fee1d38 ((wq_completion)xfs-sync/loop7){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900032a7d00 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/1:18/4354:
 #0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900032e7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/0:8/4457:
 #0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90003657d00 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by kworker/u4:8/4612:
1 lock held by udevd/4698:
1 lock held by syz.3.39/4794:
 #0: ffff88807a1f0460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.1.66/5109:
 #0: ffff88807acba460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.4.80/5266:
 #0: ffff88805a89a460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz-executor/5358:
 #0: ffff88805c7980e0 (&type->s_umount_key#53){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
1 lock held by syz.5.112/5634:
 #0: ffff88807405a460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.0.122/5755:
 #0: ffff88805783e460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
2 locks held by syz-executor/5878:
 #0: ffff8880589d60e0 (&type->s_umount_key#53){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
 #1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by syz-executor/6012:
 #0: ffff88801f6720e0 (&type->s_umount_key#53){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
 #1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8cb30ab8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by kworker/u4:10/6015:
 #0: ffff888074add138 ((wq_completion)xfs-cil/loop7){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900046b7d00 ((work_completion)(&ctx->push_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by syz.2.151/6086:
 #0: ffff888069a8a460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
1 lock held by syz.7.210/6736:
 #0: ffff8880284ba460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write_file+0x5c/0x200 fs/namespace.c:437
2 locks held by syz.1.334/7584:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.148-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xeee/0xf30 kernel/hung_task.c:377
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 11 Comm: kworker/u4:1 Not tainted 6.1.148-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:pvclock_clocksource_read+0x6a/0x760 arch/x86/kernel/pvclock.c:68
Code: 84 24 90 00 00 00 48 89 4c 24 50 48 c1 e9 03 48 89 8c 24 88 00 00 00 49 8d 49 03 4c 89 c8 48 c1 e8 03 48 89 84 24 80 00 00 00 <48> 89 4c 24 48 48 c1 e9 03 48 89 4c 24 78 48 89 f0 48 c1 e8 03 48
RSP: 0018:ffffc90000107460 EFLAGS: 00000a02
RAX: 1ffffffff1f3e60b RBX: ffffc90000107580 RCX: ffffffff8f9f305b
RDX: 1ffffffff1f3e608 RSI: ffffffff8f9f305c RDI: ffffffff8f9f3040
RBP: ffffc900001075e8 R08: ffffffff8f9f3048 R09: ffffffff8f9f3058
R10: ffffffff8f9f3050 R11: ffffffff8f9f3043 R12: 000000000000000b
R13: dffffc0000000000 R14: 1ffffffff1f3e608 R15: ffff888019c5ee30
FS:  0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe9653fe000 CR3: 0000000056141000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 kvm_clock_read arch/x86/kernel/kvmclock.c:79 [inline]
 kvm_sched_clock_read+0x14/0x40 arch/x86/kernel/kvmclock.c:91
 sched_clock_cpu+0x6e/0x250 kernel/sched/clock.c:369
 local_clock include/linux/sched/clock.h:84 [inline]
 __set_page_owner_handle+0x1a9/0x3c0 mm/page_owner.c:174
 __set_page_owner+0x41/0x60 mm/page_owner.c:195
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x173/0x1a0 mm/page_alloc.c:2532
 prep_new_page mm/page_alloc.c:2539 [inline]
 get_page_from_freelist+0x1a26/0x1ac0 mm/page_alloc.c:4328
 __alloc_pages+0x1df/0x4e0 mm/page_alloc.c:5614
 alloc_slab_page+0x5d/0x160 mm/slub.c:1794
 allocate_slab mm/slub.c:1939 [inline]
 new_slab+0x87/0x2c0 mm/slub.c:1992
 ___slab_alloc+0xbc6/0x1220 mm/slub.c:3180
 __slab_alloc mm/slub.c:3279 [inline]
 slab_alloc_node mm/slub.c:3364 [inline]
 __kmem_cache_alloc_node+0x1a0/0x260 mm/slub.c:3437
 __do_kmalloc_node mm/slab_common.c:935 [inline]
 __kmalloc_node_track_caller+0x9e/0x230 mm/slab_common.c:956
 kmalloc_reserve net/core/skbuff.c:446 [inline]
 __alloc_skb+0x22a/0x7e0 net/core/skbuff.c:515
 alloc_skb include/linux/skbuff.h:1271 [inline]
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
 nsim_dev_trap_report_work+0x28f/0xaf0 drivers/net/netdevsim/dev.c:851
 process_one_work+0x898/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>

Crashes (2):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                                                | Manager             | Title
2025/08/22 23:02 | linux-6.1.y | 0bc96de781b4 | bf27483f  | .config | console log | report | syz / log | C       |         | [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] | ci2-linux-6-1-kasan | INFO: task hung in xfs_buf_item_unpin
2025/08/22 07:01 | linux-6.1.y | 0bc96de781b4 | bf27483f  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image]                                 | ci2-linux-6-1-kasan | INFO: task hung in xfs_buf_item_unpin