syzbot


INFO: task hung in find_inode_fast (4)

Status: upstream: reported syz repro on 2024/12/23 01:23
Subsystems: ext4
Reported-by: syzbot+fd5533bcd0f7343bb8ca@syzkaller.appspotmail.com
First crash: 255d, last: 38d
Cause bisection: failed (error log, bisect log)
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [ext4?] INFO: task hung in find_inode_fast (4) 0 (1) 2024/12/23 01:23
Similar bugs (9)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-5.15 INFO: task hung in find_inode_fast (3) 1 1 147d 147d 0/3 auto-obsoleted due to no activity on 2025/07/11 13:08
upstream INFO: task hung in find_inode_fast (3) ext4 1 7 380d 478d 0/29 auto-obsoleted due to no activity on 2024/11/10 11:05
upstream INFO: task hung in find_inode_fast (2) ext4 1 C unreliable done 10 623d 776d 25/29 fixed on 2024/01/30 15:47
linux-5.15 INFO: task hung in find_inode_fast 1 3 840d 858d 0/3 auto-obsoleted due to no activity on 2023/08/23 09:07
linux-6.6 INFO: task hung in find_inode_fast 1 1 29d 29d 0/2 upstream: reported on 2025/07/30 02:41
upstream INFO: task hung in find_inode_fast ext4 1 C error 28 814d 963d 22/29 fixed on 2023/06/08 14:41
linux-6.1 INFO: task hung in find_inode_fast (2) origin:lts-only 1 syz inconclusive 2 125d 132d 0/3 upstream: reported syz repro on 2025/04/17 13:02
linux-6.1 INFO: task hung in find_inode_fast 1 1 280d 280d 0/3 auto-obsoleted due to no activity on 2025/02/28 14:47
linux-5.15 INFO: task hung in find_inode_fast (2) 1 1 299d 299d 0/3 auto-obsoleted due to no activity on 2025/02/09 11:15
Last patch testing requests (4)
Created Duration User Patch Repo Result
2025/06/17 08:22 22m retest repro linux-next OK log
2025/04/07 16:57 43m retest repro linux-next error
2025/03/24 06:17 31m retest repro git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci OK log
2025/01/26 19:12 1h02m retest repro linux-next error

Sample crash report:
INFO: task syz.4.1203:11769 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc5-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.1203      state:D stack:24552 pid:11769 tgid:11756 ppid:5851   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x16f5/0x4d00 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6883
 __wait_on_freeing_inode+0x1c5/0x2f0 fs/inode.c:2454
 find_inode_fast+0x2b0/0x470 fs/inode.c:1070
 iget_locked+0x99/0x570 fs/inode.c:1427
 __ext4_iget+0x23f/0x4260 fs/ext4/inode.c:5154
 ext4_xattr_inode_cache_find fs/ext4/xattr.c:1540 [inline]
 ext4_xattr_inode_lookup_create+0x433/0x1c20 fs/ext4/xattr.c:1579
 ext4_xattr_block_set+0x223/0x2ac0 fs/ext4/xattr.c:1908
 ext4_xattr_set_handle+0xdfb/0x1590 fs/ext4/xattr.c:2447
 ext4_xattr_set+0x230/0x320 fs/ext4/xattr.c:2549
 __vfs_setxattr+0x43c/0x480 fs/xattr.c:200
 __vfs_setxattr_noperm+0x12d/0x660 fs/xattr.c:234
 vfs_setxattr+0x16b/0x2f0 fs/xattr.c:321
 do_setxattr fs/xattr.c:636 [inline]
 filename_setxattr+0x274/0x600 fs/xattr.c:665
 path_setxattrat+0x364/0x3a0 fs/xattr.c:713
 __do_sys_setxattr fs/xattr.c:747 [inline]
 __se_sys_setxattr fs/xattr.c:743 [inline]
 __x64_sys_setxattr+0xbc/0xe0 fs/xattr.c:743
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2f2218e929
RSP: 002b:00007f2f230a8038 EFLAGS: 00000246 ORIG_RAX: 00000000000000bc
RAX: ffffffffffffffda RBX: 00007f2f223b6080 RCX: 00007f2f2218e929
RDX: 0000200000001400 RSI: 00002000000001c0 RDI: 0000200000000380
RBP: 00007f2f22210b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000835 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f2f223b6080 R15: 00007ffd69991568
 </TASK>
INFO: task syz.4.1203:11775 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc5-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.1203      state:D stack:25064 pid:11775 tgid:11756 ppid:5851   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x16f5/0x4d00 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6883
 mb_cache_entry_wait_unused+0x165/0x250 fs/mbcache.c:148
 ext4_evict_ea_inode+0x14d/0x2f0 fs/ext4/xattr.c:477
 ext4_evict_inode+0x16f/0xee0 fs/ext4/inode.c:183
 evict+0x504/0x9c0 fs/inode.c:810
 ext4_xattr_set_entry+0x12dc/0x1e20 fs/ext4/xattr.c:1839
 ext4_xattr_ibody_set+0x254/0x6a0 fs/ext4/xattr.c:2263
 ext4_xattr_set_handle+0xc9a/0x1590 fs/ext4/xattr.c:2435
 ext4_xattr_set+0x230/0x320 fs/ext4/xattr.c:2549
 __vfs_setxattr+0x43c/0x480 fs/xattr.c:200
 __vfs_setxattr_noperm+0x12d/0x660 fs/xattr.c:234
 vfs_setxattr+0x16b/0x2f0 fs/xattr.c:321
 do_setxattr fs/xattr.c:636 [inline]
 filename_setxattr+0x274/0x600 fs/xattr.c:665
 path_setxattrat+0x364/0x3a0 fs/xattr.c:713
 __do_sys_setxattr fs/xattr.c:747 [inline]
 __se_sys_setxattr fs/xattr.c:743 [inline]
 __x64_sys_setxattr+0xbc/0xe0 fs/xattr.c:743
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2f2218e929
RSP: 002b:00007f2f23087038 EFLAGS: 00000246 ORIG_RAX: 00000000000000bc
RAX: ffffffffffffffda RBX: 00007f2f223b6160 RCX: 00007f2f2218e929
RDX: 0000200000000000 RSI: 0000200000000080 RDI: 0000200000000180
RBP: 00007f2f22210b39 R08: 0000000000000002 R09: 0000000000000000
R10: 0000000000000835 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f2f223b6160 R15: 00007ffd69991568
 </TASK>
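
The two traces above look like a circular wait between two threads of the same process: pid 11769 appears to hold a reference on an mb_cache entry for an ext4 EA (xattr) inode and blocks in __wait_on_freeing_inode() until that inode leaves I_FREEING, while pid 11775 is evicting what is presumably the same EA inode and blocks in mb_cache_entry_wait_unused() until the entry's reference count drops. That reading is an inference from the stacks only, not confirmed by a bisection. A minimal userspace model of the suspected cycle, with the kernel objects replaced by a counter and a flag (pthread-based, hypothetical names, not kernel code), hangs by construction in the same way:

/*
 * Sketch of the suspected wait cycle (assumption drawn from the two
 * traces above, not from the kernel sources). Build: cc -pthread model.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int entry_refcount;   /* stands in for the mb_cache entry reference */
static bool inode_freeing;   /* stands in for I_FREEING on the EA inode    */

static void *lookup_task(void *arg)   /* models syz.4.1203/11769 */
{
	(void)arg;
	/* ext4_xattr_inode_cache_find(): take a reference on the cache entry */
	pthread_mutex_lock(&lock);
	entry_refcount++;
	pthread_mutex_unlock(&lock);

	usleep(1000);   /* let the evicting thread mark the inode as freeing */

	/* __wait_on_freeing_inode(): sleep until I_FREEING clears */
	pthread_mutex_lock(&lock);
	while (inode_freeing)
		pthread_cond_wait(&cond, &lock);
	/* the entry reference would only be dropped after iget returns */
	entry_refcount--;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

static void *evict_task(void *arg)    /* models syz.4.1203/11775 */
{
	(void)arg;
	usleep(500);    /* run after the lookup thread has taken its reference */

	/* evict() marks the inode as freeing ... */
	pthread_mutex_lock(&lock);
	inode_freeing = true;
	/* ... then mb_cache_entry_wait_unused() waits for the refcount to drop */
	while (entry_refcount > 0)
		pthread_cond_wait(&cond, &lock);
	inode_freeing = false;
	pthread_cond_broadcast(&cond);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, lookup_task, NULL);
	pthread_create(&b, NULL, evict_task, NULL);
	pthread_join(a, NULL);   /* never returns: both threads sleep forever */
	pthread_join(b, NULL);
	return 0;
}

Run as written, both threads end up sleeping on each other, mirroring the two D-state tasks in the report; the cycle would only break if one side dropped its resource before sleeping, which is what a fix in this area would presumably have to arrange.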

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e13ee60 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13ee60 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13ee60 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
3 locks held by kworker/u8:6/2955:
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000baf7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000baf7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by kworker/u8:10/3036:
 #0: ffff888144ea7948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff888144ea7948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000bda7bc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000bda7bc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
1 lock held by udevd/5213:
1 lock held by dhcpcd/5508:
 #0: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x323/0x1b50 net/ipv4/devinet.c:1121
2 locks held by getty/5604:
 #0: ffff88803515a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000333b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
6 locks held by kworker/u8:14/6048:
 #0: ffff88801b2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc900042e7bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900042e7bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f510290 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x800 net/core/net_namespace.c:662
 #3: ffff8880558520e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #3: ffff8880558520e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff8880558520e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x10a/0x3d0 net/devlink/core.c:506
 #4: ffff888055856250 (&devlink->lock_key#22){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff888055856250 (&devlink->lock_key#22){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888055856250 (&devlink->lock_key#22){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x11c/0x3d0 net/devlink/core.c:506
 #5: ffffffff8e144840 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3786
7 locks held by kworker/0:3/8338:
 #0: ffff8881446d3948 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff8881446d3948 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90003af7bc0 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003af7bc0 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffff8880281bb198 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #2: ffff8880281bb198 (&dev->mutex){....}-{4:4}, at: hub_event+0x184/0x4a00 drivers/usb/core/hub.c:5894
 #3: ffff88807faee198 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #3: ffff88807faee198 (&dev->mutex){....}-{4:4}, at: usb_disconnect+0xf8/0x950 drivers/usb/core/hub.c:2335
 #4: ffff88807fc49160 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #4: ffff88807fc49160 (&dev->mutex){....}-{4:4}, at: __device_driver_lock drivers/base/dd.c:1094 [inline]
 #4: ffff88807fc49160 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0xb6/0x7c0 drivers/base/dd.c:1292
 #5: ffffffff8ef93b28 (input_mutex){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:225 [inline]
 #5: ffffffff8ef93b28 (input_mutex){+.+.}-{4:4}, at: __input_unregister_device+0x2d8/0x5e0 drivers/input/input.c:2221
 #6: ffffffff8e144978 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #6: ffffffff8e144978 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
6 locks held by kworker/0:5/8356:
 #0: ffff8880b8639f98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0xad/0x140 kernel/sched/core.c:614
 #1: ffff8880b8623f08 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x39a/0x6d0 kernel/sched/psi.c:987
 #2: ffff8880b8625958 (&base->lock){-.-.}-{2:2}, at: lock_timer_base kernel/time/timer.c:1004 [inline]
 #2: ffff8880b8625958 (&base->lock){-.-.}-{2:2}, at: __mod_timer+0x1ae/0xf30 kernel/time/timer.c:1085
 #3: ffffffff99d4fa50 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0xbb/0x420 lib/debugobjects.c:818
 #4: ffffffff8e13ee60 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #4: ffffffff8e13ee60 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #4: ffffffff8e13ee60 (rcu_read_lock){....}-{1:3}, at: ip6_finish_output2+0x701/0x16a0 net/ipv6/ip6_output.c:126
 #5: ffffffff8e13eec0 (rcu_read_lock_bh){....}-{1:3}, at: local_bh_disable include/linux/bottom_half.h:20 [inline]
 #5: ffffffff8e13eec0 (rcu_read_lock_bh){....}-{1:3}, at: rcu_read_lock_bh include/linux/rcupdate.h:892 [inline]
 #5: ffffffff8e13eec0 (rcu_read_lock_bh){....}-{1:3}, at: __dev_queue_xmit+0x27e/0x3a70 net/core/dev.c:4638
3 locks held by syz.4.1203/11769:
 #0: ffff888028738428 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806c1883e0 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #1: ffff88806c1883e0 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: vfs_setxattr+0x144/0x2f0 fs/xattr.c:320
 #2: ffff88806c1880c8 (&ei->xattr_sem){++++}-{4:4}, at: ext4_write_lock_xattr fs/ext4/xattr.h:157 [inline]
 #2: ffff88806c1880c8 (&ei->xattr_sem){++++}-{4:4}, at: ext4_xattr_set_handle+0x165/0x1590 fs/ext4/xattr.c:2362
3 locks held by syz.4.1203/11775:
 #0: ffff888028738428 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806c188d70 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #1: ffff88806c188d70 (&sb->s_type->i_mutex_key#9){++++}-{4:4}, at: vfs_setxattr+0x144/0x2f0 fs/xattr.c:320
 #2: ffff88806c188a58 (&ei->xattr_sem){++++}-{4:4}, at: ext4_write_lock_xattr fs/ext4/xattr.h:157 [inline]
 #2: ffff88806c188a58 (&ei->xattr_sem){++++}-{4:4}, at: ext4_xattr_set_handle+0x165/0x1590 fs/ext4/xattr.c:2362
3 locks held by syz-executor/13069:
 #0: ffffffff8eca4940 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eca4940 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eca4940 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054
 #2: ffffffff8e144978 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:304 [inline]
 #2: ffffffff8e144978 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f6/0x730 kernel/rcu/tree_exp.h:998
2 locks held by syz-executor/13098:
 #0: ffffffff8fa22098 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8fa22098 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8fa22098 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f51ce88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc5-syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 6050 Comm: kworker/u8:15 Not tainted 6.16.0-rc5-syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: bat_events batadv_tt_purge
RIP: 0010:trace_irq_disable+0x30/0x110 include/trace/events/preemptirq.h:36
Code: 8b 05 78 7d d8 10 83 f8 08 73 3c 89 c3 c1 e8 06 48 8d 3c c5 f0 e5 a1 8f be 08 00 00 00 e8 48 f8 5b 00 48 0f a3 1d 10 33 da 0d <73> 12 e8 c9 6c df ff 84 c0 75 09 80 3d 7c 17 c4 0d 00 74 0f 5b 41
RSP: 0018:ffffc90002ed78a8 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 0000000000000000 RCX: ffffffff81c7b2d8
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8fa1e5f0
RBP: ffffc90002ed7958 R08: ffffffff8fa1e5f7 R09: 1ffffffff1f43cbe
R10: dffffc0000000000 R11: fffffbfff1f43cbf R12: ffffffff8b3fad67
R13: dffffc0000000000 R14: dffffc0000000000 R15: 1ffff920005daf18
FS:  0000000000000000(0000) GS:ffff888125c1d000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6b9ad84338 CR3: 000000004004f000 CR4: 0000000000350ef0
Call Trace:
 <TASK>
 __local_bh_enable_ip+0xce/0x1c0 kernel/softirq.c:389
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_tt_local_purge+0x2a7/0x340 net/batman-adv/translation-table.c:1315
 batadv_tt_purge+0x35/0x9e0 net/batman-adv/translation-table.c:3509
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3321
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3402
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (12):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/07/07 18:25 upstream d7b8f8e20813 4f67c4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2025/05/07 22:16 upstream 707df3375124 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2025/03/24 11:57 upstream 586de92313fc 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2025/02/27 00:06 upstream 5394eea10651 6a8fcbc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2025/01/12 18:52 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2024/12/31 14:05 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2024/12/17 14:00 upstream f44d154d6e3d f93b2b55 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2024/12/16 01:55 upstream dccbe2047a5b 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in find_inode_fast
2025/07/20 18:48 linux-next d086c886ceb9 7117feec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in find_inode_fast
2025/07/19 20:27 linux-next d086c886ceb9 7117feec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in find_inode_fast
2024/12/19 01:10 linux-next 7fa366f1b6e3 1432fc84 .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in find_inode_fast
2025/03/09 22:13 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 77c95b8c7a16 163f510d .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro (clean fs)] ci-upstream-gce-arm64 INFO: task hung in find_inode_fast
* Struck through repros no longer work on HEAD.