syzbot


INFO: task hung in hfs_mdb_commit (4)

Status: upstream: reported on 2025/11/19 20:02
Reported-by: syzbot+0801789b05cd1d16eabf@syzkaller.appspotmail.com
First crash: 31d, last: 31d
Similar bugs (10)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in hfs_mdb_commit hfs | 1 | C | error | done | 25 | 691d | 1069d | 25/29 | fixed on 2024/03/20 11:33
linux-6.1 | INFO: task hung in hfs_mdb_commit | 1 | | | | 1 | 942d | 942d | 0/3 | auto-obsoleted due to no activity on 2023/08/31 05:50
linux-5.15 | INFO: task hung in hfs_mdb_commit (2) origin:upstream | 1 | C | error | | 9 | 35d | 621d | 0/3 | upstream: reported C repro on 2024/04/08 12:18
linux-4.19 | INFO: task hung in hfs_mdb_commit hfs | 1 | C | error | | 1 | 1060d | 1060d | 0/1 | upstream: reported C repro on 2023/01/25 02:58
linux-6.1 | INFO: task hung in hfs_mdb_commit (3) | 1 | | | | 1 | 310d | 310d | 0/3 | auto-obsoleted due to no activity on 2025/05/23 22:52
linux-5.15 | INFO: task hung in hfs_mdb_commit | 1 | | | | 1 | 924d | 924d | 0/3 | auto-obsoleted due to no activity on 2023/09/18 06:33
linux-6.1 | INFO: task hung in hfs_mdb_commit (2) | 1 | | | | 1 | 622d | 622d | 0/3 | auto-obsoleted due to no activity on 2024/07/16 10:21
linux-6.6 | INFO: task hung in hfs_mdb_commit | 1 | | | | 1 | 113d | 113d | 0/2 | auto-obsoleted due to no activity on 2025/12/07 18:32
upstream | INFO: task hung in hfs_mdb_commit (2) hfs | 1 | | | | 1 | 543d | 543d | 0/29 | auto-obsoleted due to no activity on 2024/09/23 18:09
upstream | INFO: task hung in hfs_mdb_commit (3) hfs | 1 | C | error | done | 12 | 33d | 190d | 0/29 | upstream: reported C repro on 2025/06/13 11:04

Sample crash report:
INFO: task kworker/1:1:26 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1     state:D stack:22112 pid:26    ppid:2      flags:0x00004000
Workqueue: events_long flush_mdb
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
 schedule+0xb9/0x180 kernel/sched/core.c:6637
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8797
 bit_wait_io+0xd/0xc0 kernel/sched/wait_bit.c:209
 __wait_on_bit_lock+0xd8/0x580 kernel/sched/wait_bit.c:90
 out_of_line_wait_on_bit_lock+0x11f/0x160 kernel/sched/wait_bit.c:117
 lock_buffer include/linux/buffer_head.h:397 [inline]
 hfs_mdb_commit+0x111/0x1110 fs/hfs/mdb.c:271
 process_one_work+0x898/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
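The trace above shows the events_long worker in uninterruptible sleep (state:D) inside lock_buffer(), which descends through __wait_on_bit_lock() and bit_wait_io() into io_schedule() until the buffer's BH_Lock bit clears. In hedged pseudocode (a sketch of the blocking pattern, not the actual fs/hfs/mdb.c source), the commit path hangs roughly like this:

```
/* Sketch: hfs_mdb_commit() flushing HFS volume metadata buffers.
 * lock_buffer() sleeps in TASK_UNINTERRUPTIBLE until the buffer's
 * lock bit clears; if the underlying I/O never completes (e.g. a
 * stalled or corrupted backing device), the worker blocks here
 * indefinitely and khungtaskd eventually reports the hang. */
for (each metadata buffer_head *bh touched by the commit) {
        lock_buffer(bh);       /* io_schedule() until BH_Lock clears */
        /* ... copy in-memory MDB state into the buffer ... */
        unlock_buffer(bh);
        mark_buffer_dirty(bh);
}
```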

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cb2b630 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cb2be50 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
2 locks held by kworker/1:1/26:
 #0: ffff888017471138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90000a1fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by khungtaskd/28:
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2aca0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
4 locks held by kworker/u4:2/41:
 #0: ffff888144e77138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90000b27d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffff8880b8e3aad8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
 #3: ffff888070c35e18 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_one_delalloc fs/xfs/libxfs/xfs_bmap.c:4576 [inline]
 #3: ffff888070c35e18 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_delalloc+0x329/0x1480 fs/xfs/libxfs/xfs_bmap.c:4698
2 locks held by getty/4027:
 #0: ffff88814d1de098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
2 locks held by syz-executor/4273:
2 locks held by syz-executor/4278:
 #0: ffff88807e6e40e0 (&type->s_umount_key#67){++++}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:362
 #1: ffff8880247407d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:362 [inline]
 #1: ffff8880247407d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x19c/0x9e0 fs/fs-writeback.c:2748
3 locks held by kworker/0:12/4996:
 #0: ffff88802eb98538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc9001c387d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xc4/0x14d0 net/ipv6/addrconf.c:4131
1 lock held by syz-executor/5412:
 #0: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #0: ffffffff8cb30978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by kworker/1:18/5546:
2 locks held by kworker/1:19/6746:
 #0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc9001d42fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
2 locks held by syz.7.1228/8186:
 #0: ffff888070f86e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
 #0: ffff888070f86e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:653 [inline]
 #0: ffff888070f86e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1400
 #1: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: gtp_encap_destroy+0xe/0x20 drivers/net/gtp.c:643
2 locks held by syz.7.1228/8187:
 #0: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8dd416e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
 #1: ffffffff8cb30978

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/11/19 20:01 | linux-6.1.y | f6e38ae624cf | 26ee5237 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in hfs_mdb_commit