syzbot


INFO: task hung in do_rmdir (7)

Status: upstream: reported C repro on 2026/01/03 17:05
Subsystems: kernfs
Reported-by: syzbot+e68dbebd9617a9250e8d@syzkaller.appspotmail.com
First crash: 56d, last: 6d21h
Cause bisection: failed (error log, bisect log)

Discussions (1)
Title                                              | Replies (incl. bot) | Last reply
[syzbot] [kernfs?] INFO: task hung in do_rmdir (7) | 0 (1)               | 2026/01/03 17:05
Similar bugs (9)
Kernel     | Title                           | Labels          | Rank | Repro | Cause bisect | Fix bisect   | Count | Last  | Reported | Patched | Status
upstream   | INFO: task hung in do_rmdir (5) | nilfs ext4      | 1    | -     | -            | -            | 5     | 1089d | 1256d    | 0/29    | auto-obsoleted due to no activity on 2023/04/25 17:21
upstream   | INFO: task hung in do_rmdir (3) | fs              | 1    | -     | -            | -            | 10    | 1770d | 1845d    | 0/29    | auto-closed as invalid on 2021/06/01 08:24
linux-4.19 | INFO: task hung in do_rmdir     | -               | 1    | -     | -            | -            | 1     | 1273d | 1273d    | 0/1     | auto-obsoleted due to no activity on 2022/11/10 04:55
android-49 | INFO: task hung in do_rmdir     | -               | 1    | -     | -            | -            | 1     | 2795d | 2795d    | 0/3     | auto-closed as invalid on 2019/02/22 12:59
linux-5.15 | INFO: task hung in do_rmdir     | origin:lts-only | 1    | C     | done         | -            | 5     | 377d  | 400d     | 3/3     | fixed on 2025/01/30 02:24
upstream   | INFO: task hung in do_rmdir (4) | fs              | 1    | -     | -            | -            | 1     | 1429d | 1429d    | 0/29    | auto-closed as invalid on 2022/05/08 10:41
upstream   | INFO: task hung in do_rmdir (2) | exfat           | 1    | -     | -            | -            | 3     | 2553d | 2687d    | 0/29    | closed as dup on 2018/09/11 15:01
upstream   | INFO: task hung in do_rmdir (6) | fs              | 1    | C     | error        | inconclusive | 48    | 190d  | 438d     | 0/29    | auto-obsoleted due to no activity on 2025/10/08 09:26
upstream   | INFO: task hung in do_rmdir     | fs              | 1    | -     | -            | -            | 1     | 2842d | 2842d    | 0/29    | closed as invalid on 2018/03/27 11:14

Sample crash report:
INFO: task syz.0.50:6186 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disabl[  395.933528][   T38] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.50        state:D stack:25592 pid:6186  tgid:6173  ppid:5937   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x145f/0x5070 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1072 [inline]
 __start_dirop fs/namei.c:2864 [inline]
 start_dirop fs/namei.c:2875 [inline]
 do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
 __do_sys_rmdir fs/namei.c:5315 [inline]
 __se_sys_rmdir fs/namei.c:5313 [inline]
 __x64_sys_rmdir+0x47/0x50 fs/namei.c:5313
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0f295af749
RSP: 002b:00007f0f28bfd038 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f0f29806090 RCX: 00007f0f295af749
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000000
RBP: 00007f0f29633f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0f29806128 R14: 00007f0f29806090 R15: 00007ffedfcf38c8
 </TASK>
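
The trace shows syz.0.50 blocked acquiring the parent directory's i_rwsem for write in start_dirop() on the way into do_rmdir(); on this PREEMPT_RT build that rwsem is an rt_mutex-backed rwbase lock, hence the rt_mutex_slowlock frames. A minimal userspace sketch of the pattern that reaches this code path — concurrent create/remove under one parent directory — is below. This is illustrative only, NOT the syzbot C reproducer (which is linked from the Crashes table); names and paths are made up.

/* Illustrative sketch only -- not the syzbot reproducer.
 * Both mkdir(2) and rmdir(2) take the parent directory's i_rwsem
 * exclusively (the inode_lock_nested() seen in the trace), so the
 * two threads serialize on the same lock. Build with: gcc -pthread */
#include <pthread.h>
#include <sys/stat.h>
#include <unistd.h>

static void *churn(void *arg)
{
	const char *path = arg;

	for (;;) {
		mkdir(path, 0700);	/* takes parent i_rwsem exclusive */
		rmdir(path);		/* do_rmdir(): same parent lock again */
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	mkdir("parent", 0700);
	pthread_create(&a, NULL, churn, (void *)"parent/x");
	pthread_create(&b, NULL, churn, (void *)"parent/y");
	pthread_join(a, NULL);	/* loops forever; kill to stop */
	return 0;
}

Under normal conditions the threads simply take turns on the parent lock; the report above is about a writer that never gets the lock at all within the hung-task window.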

Showing all locks held in the system:
3 locks held by kworker/u8:0/12:
 #0: ffff88803da68938 ((wq_completion)loop5){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88803da68938 ((wq_completion)loop5){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000117bc0 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000117bc0 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888024649160 (&lo->lo_work_lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock_rt.h:93 [inline]
 #2: ffff888024649160 (&lo->lo_work_lock){+.+.}-{3:3}, at: loop_process_work+0xf8/0x11a0 drivers/block/loop.c:1954
3 locks held by kworker/u8:1/13:
 #0: ffff88803da68938 ((wq_completion)loop5){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88803da68938 ((wq_completion)loop5){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000127bc0 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000127bc0 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff888024649160 (&lo->lo_work_lock){+.+.}-{3:3}, at: spin_lock_irq include/linux/spinlock_rt.h:93 [inline]
 #2: ffff888024649160 (&lo->lo_work_lock){+.+.}-{3:3}, at: loop_process_work+0xb6e/0x11a0 drivers/block/loop.c:1964
1 lock held by khungtaskd/38:
 #0: ffffffff8d5ae940 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5ae940 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5ae940 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:12/2155:
 #0: ffff88814d6a9938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88814d6a9938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000695fbc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000695fbc0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x119/0x15a0 net/ipv6/addrconf.c:4194
2 locks held by getty/5558:
 #0: ffff88803530d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e762e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x44f/0x1460 drivers/tty/n_tty.c:2211
2 locks held by kworker/0:5/5949:
5 locks held by udevd/6131:
4 locks held by syz.0.50/6180:
2 locks held by syz.0.50/6186:
 #0: ffff88804a4ca480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88803dc004b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88803dc004b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88803dc004b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88803dc004b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
5 locks held by syz.2.180/6561:
2 locks held by syz.2.180/6570:
 #0: ffff8880381f2480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888042de1c80 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888042de1c80 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff888042de1c80 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff888042de1c80 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
3 locks held by syz.1.230/6724:
2 locks held by syz.1.230/6730:
 #0: ffff888036cb6480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888042f2c038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888042f2c038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff888042f2c038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff888042f2c038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
2 locks held by syz-executor/6791:
 #0: ffff888038d500d0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff888038d500d0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff888038d500d0 (&type->s_umount_key#32){++++}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:506
 #1: ffff88813ff74238 (&root->kernfs_rwsem){++++}-{4:4}, at: kernfs_remove_by_name_ns+0x3d/0x130 fs/kernfs/dir.c:1717
1 lock held by udevd/6931:
3 locks held by kworker/u8:20/6993:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90005b37bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90005b37bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
3 locks held by syz.4.404/7276:
2 locks held by syz.4.404/7284:
 #0: ffff8880351aa480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805a6e63f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805a6e63f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805a6e63f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805a6e63f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
4 locks held by syz.8.630/7972:
2 locks held by syz.8.630/7976:
 #0: ffff88805ce3c480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888042de5808 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888042de5808 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff888042de5808 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff888042de5808 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
4 locks held by syz.7.704/8189:
2 locks held by syz.7.704/8198:
 #0: ffff888038388480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff888059554038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff888059554038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff888059554038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff888059554038 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: do_rmdir+0x1bb/0x4a0 fs/namei.c:5284
1 lock held by syz-executor/8200:
 #0: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_newaddr+0x5b7/0xd20 net/ipv6/addrconf.c:5027
3 locks held by syz-executor/8366:
 #0: ffffffff8edd23c8 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8edd23c8 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8edd23c8 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
 #2: ffff88813ff74238 (&root->kernfs_rwsem){++++}-{4:4}, at: kernfs_activate fs/kernfs/dir.c:1430 [inline]
 #2: ffff88813ff74238 (&root->kernfs_rwsem){++++}-{4:4}, at: kernfs_add_one+0x2ae/0x5c0 fs/kernfs/dir.c:839
1 lock held by syz.5.810/8531:
 #0: ffff88803f80e0d0 (&type->s_umount_key#28/1){+.+.}-{4:4}, at: alloc_super+0x28c/0xab0 fs/super.c:344
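
The list above is lockdep's debug_show_all_locks() output: several syz.* tasks each hold sb_writers#4 plus a parent directory's i_mutex_dir_key while blocked in do_rmdir, and &root->kernfs_rwsem (matching the [kernfs?] label on this report) shows up both under deactivate_super (syz-executor/6791) and under rtnl_mutex in kernfs_add_one (syz-executor/8366). On a test machine with CONFIG_MAGIC_SYSRQ and a lockdep-enabled build, an equivalent dump can be requested by hand through the standard sysrq-trigger interface; a sketch (requires root):

/* Request similar diagnostics on demand via /proc/sysrq-trigger:
 * 'w' dumps blocked (D state) tasks, 'd' shows all locks held
 * (the latter needs a lockdep-enabled kernel). */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sysrq-trigger", O_WRONLY);

	if (fd < 0)
		return 1;
	write(fd, "w", 1);	/* show blocked tasks */
	write(fd, "d", 1);	/* show all locks held */
	close(fd);
	return 0;
}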

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xf95/0xfe0 kernel/hung_task.c:515
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 7276 Comm: syz.4.404 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:check_preemption_disabled+0x22/0xe0 lib/smp_processor_id.c:53
Code: 90 90 90 90 90 90 90 90 55 41 57 41 56 53 65 8b 05 77 64 e1 06 65 8b 0d 6c 64 e1 06 f7 c1 ff ff ff 7f 74 0c 5b 41 5e 41 5f 5d <e9> c9 a3 03 00 cc 9c 59 f7 c1 00 02 00 00 74 ea 65 4c 8b 3c 25 08
RSP: 0018:ffffc90006137438 EFLAGS: 00000046
RAX: 0000000000000001 RBX: 0000000000000202 RCX: 0000000000000046
RDX: 00000000d11f428d RSI: ffffffff8ce1f6dc RDI: ffffffff8b3f57e0
RBP: 0000000000000001 R08: ffffffff8ad3e001 R09: ffffffff8d5ae940
R10: 0000000000000000 R11: fffffbfff1db668f R12: 1ffff11004811458
R13: ffffffff8ad3eb60 R14: ffffffff8d5ae940 R15: ffff888024089e40
FS:  00007f5261f266c0(0000) GS:ffff888126def000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f5ab30a5000 CR3: 0000000027c10000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lockdep_recursion_inc kernel/locking/lockdep.c:465 [inline]
 lock_release+0xa2/0x3b0 kernel/locking/lockdep.c:5888
 rcu_lock_release include/linux/rcupdate.h:341 [inline]
 rcu_read_unlock include/linux/rcupdate.h:897 [inline]
 rt_spin_unlock+0x15c/0x200 kernel/locking/spinlock_rt.c:82
 __dquot_free_space+0x852/0xc00 fs/quota/dquot.c:1898
 dquot_free_space_nodirty include/linux/quotaops.h:374 [inline]
 dquot_free_space include/linux/quotaops.h:379 [inline]
 dquot_free_block include/linux/quotaops.h:390 [inline]
 ext4_xattr_block_set+0x1481/0x2ac0 fs/ext4/xattr.c:2082
 ext4_xattr_set_handle+0xdfb/0x1590 fs/ext4/xattr.c:2456
 ext4_initxattrs+0x9f/0x110 fs/ext4/xattr_security.c:44
 security_inode_init_security+0x290/0x3d0 security/security.c:1344
 __ext4_new_inode+0x32f7/0x3c90 fs/ext4/ialloc.c:1324
 ext4_mkdir+0x3cb/0xc50 fs/ext4/namei.c:3005
 vfs_mkdir+0x52d/0x5d0 fs/namei.c:5130
 do_mkdirat+0x27a/0x4b0 fs/namei.c:5164
 __do_sys_mkdirat fs/namei.c:5186 [inline]
 __se_sys_mkdirat fs/namei.c:5184 [inline]
 __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:5184
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f52628bf749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f5261f26038 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007f5262b15fa0 RCX: 00007f52628bf749
RDX: 0000000000000000 RSI: 0000200000000100 RDI: ffffffffffffff9c
RBP: 00007f5262943f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5262b16038 R14: 00007f5262b15fa0 R15: 00007fffb28998f8
 </TASK>
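
Both backtraces are produced by the hung-task watchdog running as khungtaskd on CPU 0 (watchdog() in kernel/hung_task.c in the trace above), which sends NMIs to all CPUs once a task has sat in uninterruptible sleep past the timeout. The CPU 1 backtrace caught syz.4.404 mid-mkdirat inside ext4 — a task actively running in the same directory-op path, not itself hung. The watchdog thresholds are ordinary runtime sysctls; a minimal sketch of adjusting them for debugging (values illustrative):

/* Hung-task watchdog tunables (kernel/hung_task.c). Illustrative
 * values; both files are standard procfs sysctls, root required. */
#include <stdio.h>

static void set_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return;
	fputs(val, f);
	fclose(f);
}

int main(void)
{
	/* report sooner than the default 120s window */
	set_sysctl("/proc/sys/kernel/hung_task_timeout_secs", "60");
	/* panic on the first hung task so the console log ends at the bug */
	set_sysctl("/proc/sys/kernel/hung_task_panic", "1");
	return 0;
}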

Crashes (3):
Time             | Kernel   | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                                              | Manager         | Title
2025/12/30 16:52 | upstream | 8640b74557fc | d6526ea3  | .config | console log | report | syz / log | C       | -       | [disk image] [vmlinux] [kernel image] [mounted in repro (clean fs)] | ci2-upstream-fs | INFO: task hung in do_rmdir
2025/12/30 13:33 | upstream | 8640b74557fc | d6526ea3  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image]                               | ci2-upstream-fs | INFO: task hung in do_rmdir
2025/11/11 11:38 | upstream | 4427259cc7f7 | 4e1406b4  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image]                               | ci2-upstream-fs | INFO: task hung in do_rmdir
* Struck through repros no longer work on HEAD.