syzbot


INFO: task hung in lock_two_nondirectories (2)

Status: auto-obsoleted due to no activity on 2025/10/04 07:10
Subsystems: bcachefs
First crash: 484d, last: 95d
Similar bugs (2)
- android-49: INFO: task hung in lock_two_nondirectories (rank 1, 2 crashes, last 2643d, reported 2661d, patched 0/3); auto-closed as invalid on 2019/02/22 12:36
- upstream: INFO: task hung in lock_two_nondirectories [ext4] (rank 1, 1 crash, last 853d, reported 853d, patched 0/29); auto-obsoleted due to no activity on 2023/09/08 04:44

Sample crash report:
INFO: task syz.4.111:7020 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc4-syzkaller-00319-g05df91921da6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.111       state:D stack:24632 pid:7020  tgid:6942  ppid:5839   task_flags:0x400040 flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:869 [inline]
 lock_two_nondirectories+0xe7/0x180 fs/inode.c:1233
 vfs_rename+0x69a/0xec0 fs/namei.c:5108
 do_renameat2+0x878/0xc50 fs/namei.c:5286
 __do_sys_rename fs/namei.c:5333 [inline]
 __se_sys_rename fs/namei.c:5331 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:5331
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa7f658e929
RSP: 002b:00007fa7f43b4038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007fa7f67b6160 RCX: 00007fa7f658e929
RDX: 0000000000000000 RSI: 00002000000002c0 RDI: 0000200000000780
RBP: 00007fa7f6610b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fa7f67b6160 R15: 00007fff85e5a1d8
 </TASK>
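The blocked task above is waiting in lock_two_nondirectories() (fs/inode.c:1233), which takes the i_rwsem of two non-directory inodes for rename. To avoid ABBA deadlocks between concurrent callers, that helper acquires the two locks in a single global order (by inode address) regardless of argument order. A minimal sketch of that ordering discipline, using Python threading.Lock objects as stand-ins for the inode rwsems (the id()-based ordering here is an analogy for the kernel's address ordering, not kernel code):

```python
import threading

# Two stand-ins for the two inode locks (rw_semaphores in the kernel).
lock_a = threading.Lock()
lock_b = threading.Lock()
counter = 0
ITERS = 20000

def lock_pair(m1, m2):
    """Acquire both locks in one globally consistent order (here: by id()),
    analogous to the inode-address ordering in lock_two_nondirectories()."""
    if id(m1) > id(m2):
        m1, m2 = m2, m1
    m1.acquire()
    m2.acquire()
    return m1, m2

def unlock_pair(m1, m2):
    m2.release()
    m1.release()

def worker(flip):
    global counter
    for _ in range(ITERS):
        # The two threads name the locks in opposite order; lock_pair()
        # still acquires them consistently, so no ABBA deadlock occurs.
        pair = lock_pair(lock_b, lock_a) if flip else lock_pair(lock_a, lock_b)
        counter += 1
        unlock_pair(*pair)

threads = [threading.Thread(target=worker, args=(f,)) for f in (False, True)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 2 * ITERS
print("no deadlock:", counter, "increments")
```

Note the hang reported here is not an ordering violation in this helper itself: the task is parked in the rwsem slowpath because the current holder of the lock never releases it (see the lockdep dump below).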

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
 #0: ffffffff8e144938 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #0: ffffffff8e144938 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
3 locks held by kworker/u8:0/12:
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90000117bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000117bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
5 locks held by kworker/u8:1/13:
 #0: ffff88801b2fb948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b2fb948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90000127bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000127bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f4fe290 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x800 net/core/net_namespace.c:662
 #3: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xdc/0x890 net/core/dev.c:12630
 #4: ffffffff8e144938 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #4: ffffffff8e144938 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
1 lock held by khungtaskd/31:
 #0: ffffffff8e13ee20 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13ee20 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13ee20 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
1 lock held by dhcpcd/5501:
 #0: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_delroute+0x107/0x2f0 net/ipv4/fib_frontend.c:887
2 locks held by getty/5597:
 #0: ffff88814cdc20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900036cb2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
5 locks held by syz.4.111/6947:
 #0: ffff888028ed4428 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff8880574baed8 (&sb->s_type->i_mutex_key#30){++++}-{4:4}, at: inode_lock_killable include/linux/fs.h:874 [inline]
 #1: ffff8880574baed8 (&sb->s_type->i_mutex_key#30){++++}-{4:4}, at: do_truncate+0x171/0x220 fs/open.c:63
 #2: ffff88804ab00a50 (&c->snapshot_create_lock){.+.+}-{4:4}, at: bch2_truncate+0xeb/0x200 fs/bcachefs/io_misc.c:295
 #3: ffff88804ab04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #3: ffff88804ab04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #3: ffff88804ab04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: bch2_trans_srcu_lock+0xaf/0x220 fs/bcachefs/btree_iter.c:3299
 #4: ffff88804ab26710 (&c->gc_lock){.+.+}-{4:4}, at: bch2_btree_update_start+0x542/0x1de0 fs/bcachefs/btree_update_interior.c:1211
5 locks held by syz.4.111/7020:
 #0: ffff888028ed4428 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff888028ed4738 (&type->s_vfs_rename_key#2){+.+.}-{4:4}, at: lock_rename fs/namei.c:3285 [inline]
 #1: ffff888028ed4738 (&type->s_vfs_rename_key#2){+.+.}-{4:4}, at: do_renameat2+0x37f/0xc50 fs/namei.c:5232
 #2: ffff8880574ba740 (&sb->s_type->i_mutex_key#30/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #2: ffff8880574ba740 (&sb->s_type->i_mutex_key#30/1){+.+.}-{4:4}, at: lock_two_directories+0x141/0x220 fs/namei.c:3251
 #3: ffff8880574bb670 (&sb->s_type->i_mutex_key#30/5){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #3: ffff8880574bb670 (&sb->s_type->i_mutex_key#30/5){+.+.}-{4:4}, at: lock_two_directories+0x16b/0x220 fs/namei.c:3252
 #4: ffff8880574baed8 (&sb->s_type->i_mutex_key#30){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #4: ffff8880574baed8 (&sb->s_type->i_mutex_key#30){++++}-{4:4}, at: lock_two_nondirectories+0xe7/0x180 fs/inode.c:1233
2 locks held by syz-executor/8846:
 #0: ffff8880311940e0 (&type->s_umount_key#76){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff8880311940e0 (&type->s_umount_key#76){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff8880311940e0 (&type->s_umount_key#76){++++}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:506
 #1: ffffffff8e144800 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3786
1 lock held by syz-executor/9507:
 #0: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054
4 locks held by syz.9.376/9606:
 #0: ffff888034ed6428 (sb_writers#10){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88805778f398 (&type->i_mutex_dir_key#7/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #1: ffff88805778f398 (&type->i_mutex_dir_key#7/1){+.+.}-{4:4}, at: filename_create+0x1f9/0x470 fs/namei.c:4148
 #2: ffffffff8e176608 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:387 [inline]
 #2: ffffffff8e176608 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x13c/0x230 kernel/cgroup/cgroup.c:1686
 #3: ffffffff8f50ae88 (rtnl_mutex){+.+.}-{4:4}, at: cgrp_css_online+0x91/0x300 net/core/netprio_cgroup.c:157
1 lock held by syz.9.376/9607:
 #0: ffff88805778f398 (&type->i_mutex_dir_key#7){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #0: ffff88805778f398 (&type->i_mutex_dir_key#7){++++}-{4:4}, at: lookup_slow+0x46/0x70 fs/namei.c:1833
2 locks held by syz.9.376/9608:
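One way to read the dump above is to look for the same lock address appearing under more than one task: ffff8880574baed8 (&sb->s_type->i_mutex_key#30) is held by syz.4.111/6947 (taken via do_truncate, while that task sits inside bch2_btree_update_start) and is listed again for syz.4.111/7020 at lock_two_nondirectories, which appears to be the lock it is blocked on. A small, hypothetical helper (not a syzbot tool) that scans a "Showing all locks held" dump for such shared addresses:

```python
import re
from collections import defaultdict

def shared_lock_addrs(dump: str):
    """Map each lock address in a 'Showing all locks held' dump to the
    set of tasks whose sections mention it (holders and blocked sites)."""
    owners = defaultdict(set)
    task = None
    for line in dump.splitlines():
        m = re.match(r"\d+ locks? held by (\S+):", line.strip())
        if m:
            task = m.group(1)
            continue
        m = re.search(r"#\d+: ([0-9a-f]{16}) ", line)
        if m and task:
            owners[m.group(1)].add(task)
    # Keep only addresses that show up under more than one task.
    return {addr: tasks for addr, tasks in owners.items() if len(tasks) > 1}

# Two lines excerpted from the report above.
sample = """\
5 locks held by syz.4.111/6947:
 #1: ffff8880574baed8 (&sb->s_type->i_mutex_key#30){++++}-{4:4}, at: do_truncate+0x171/0x220 fs/open.c:63
5 locks held by syz.4.111/7020:
 #4: ffff8880574baed8 (&sb->s_type->i_mutex_key#30){++++}-{4:4}, at: lock_two_nondirectories+0xe7/0x180 fs/inode.c:1233
"""
print(shared_lock_addrs(sample))
```

Caveat: lockdep's "locks held" list includes a lock the task is currently blocked acquiring, so a shared address distinguishes holder from waiter only when read together with the call sites, as above.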

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc4-syzkaller-00319-g05df91921da6 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 5833 Comm: syz-executor Not tainted 6.16.0-rc4-syzkaller-00319-g05df91921da6 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:check_kcov_mode kernel/kcov.c:185 [inline]
RIP: 0010:__sanitizer_cov_trace_pc+0x31/0x70 kernel/kcov.c:217
Code: 24 65 48 8b 0c 25 08 00 9d 92 65 8b 15 98 a2 dc 10 81 e2 00 01 ff 00 74 11 81 fa 00 01 00 00 75 35 83 b9 3c 16 00 00 00 74 2c <8b> 91 18 16 00 00 83 fa 02 75 21 48 8b 91 20 16 00 00 48 8b 32 48
RSP: 0018:ffffc9000404f500 EFLAGS: 00000246
RAX: ffffffff81f7b6c7 RBX: 0000000000000c71 RCX: ffff8880346ada00
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 1ffffd40003bf6e1 R08: ffffea0001dfb707 R09: 1ffffd40003bf6e0
R10: dffffc0000000000 R11: fffff940003bf6e1 R12: 0000000000000000
R13: 1ffffd40003bf6e0 R14: ffffea0001dfb700 R15: ffffea0001dfb708
FS:  0000555591394500(0000) GS:ffff888125d50000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055657f917840 CR3: 000000007667a000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 folio_contains+0x127/0x2b0 include/linux/pagemap.h:934
 find_lock_entries+0x7a8/0xa60 mm/filemap.c:2158
 shmem_undo_range+0x254/0x14b0 mm/shmem.c:1107
 shmem_truncate_range mm/shmem.c:1237 [inline]
 shmem_evict_inode+0x272/0xa70 mm/shmem.c:1365
 evict+0x504/0x9c0 fs/inode.c:810
 __dentry_kill+0x209/0x660 fs/dcache.c:669
 dput+0x19f/0x2b0 fs/dcache.c:911
 __fput+0x68e/0xa70 fs/file_table.c:473
 task_work_run+0x1d4/0x260 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop+0xec/0x110 kernel/entry/common.c:114
 exit_to_user_mode_prepare include/linux/entry-common.h:330 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:414 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:449 [inline]
 do_syscall_64+0x2bd/0x3b0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd0c418fc57
Code: a8 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 c7 c2 a8 ff ff ff f7 d8 64 89 02 b8
RSP: 002b:00007fff6bbcd538 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007fd0c4210925 RCX: 00007fd0c418fc57
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007fff6bbcd5f0
RBP: 00007fff6bbcd5f0 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fff6bbce680
R13: 00007fd0c4210925 R14: 0000000000048875 R15: 00007fff6bbce6c0
 </TASK>

Crashes (18):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/07/06 07:03 upstream 05df91921da6 4f67c4ae .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2025/04/27 08:49 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2025/04/23 06:22 upstream bc3372351d0c 53a8b9bd .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2025/04/20 12:13 upstream 119009db2674 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2025/04/18 07:01 upstream b5c6891b2c5b 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2025/04/09 19:40 upstream a24588245776 47d015b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2025/01/12 06:02 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/12/29 05:28 upstream 059dd502b263 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/12/21 02:25 upstream e9b8ffafd20a d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/12/07 01:18 upstream 9a6e8c7c3a02 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/12/04 07:37 upstream ceb8bf2ceaa7 b50eb251 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/11/26 23:20 upstream 7eef7e306d3c e9a9a9f2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/09/27 21:21 upstream 3630400697a3 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/09/21 18:59 upstream 1ec6d097897a 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/09/21 13:06 upstream 1868f9d0260e 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/09/17 10:55 upstream a430d95c5efa c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/07/17 01:07 upstream d67978318827 b66b37bd .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
2024/06/13 04:55 upstream 2ccbdf43d5e7 2aa5052f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_nondirectories
* Struck through repros no longer work on HEAD.