syzbot


INFO: task hung in switchdev_deferred_process_work

Status: upstream: reported on 2025/07/23 04:22
Reported-by: syzbot+44fb317673596387e12c@syzkaller.appspotmail.com
First crash: 35d, last: 35d
Similar bugs (9)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in switchdev_deferred_process_work (2) net | 1 | C | inconclusive | - | 1226 | 414d | 1857d | 26/29 | fixed on 2024/07/09 19:14
upstream | INFO: task hung in switchdev_deferred_process_work net | 1 | - | - | - | 35 | 2147d | 2684d | 0/29 | closed as invalid on 2019/10/23 07:26
upstream | INFO: task hung in switchdev_deferred_process_work (3) net | 1 | syz | inconclusive | - | 508 | 13h57m | 384d | 0/29 | upstream: reported syz repro on 2024/08/08 21:14
linux-5.15 | INFO: task hung in switchdev_deferred_process_work | 1 | - | - | - | 1 | 868d | 868d | 0/3 | auto-obsoleted due to no activity on 2023/08/10 05:46
linux-4.19 | INFO: task hung in switchdev_deferred_process_work | 1 | - | - | - | 95 | 920d | 1998d | 0/1 | upstream: reported on 2020/03/08 04:06
linux-4.14 | INFO: task hung in switchdev_deferred_process_work | 1 | - | - | - | 1 | 1209d | 1209d | 0/1 | auto-obsoleted due to no activity on 2022/09/03 16:45
linux-5.15 | INFO: task hung in switchdev_deferred_process_work (2) | 1 | - | - | - | 36 | 522d | 594d | 0/3 | auto-obsoleted due to no activity on 2024/06/01 16:21
linux-5.15 | INFO: task hung in switchdev_deferred_process_work (3) | 1 | - | - | - | 8 | 306d | 449d | 0/3 | auto-obsoleted due to no activity on 2025/02/02 03:46
linux-6.1 | INFO: task hung in switchdev_deferred_process_work | 1 | - | - | - | 8 | 413d | 576d | 0/3 | auto-obsoleted due to no activity on 2024/10/18 09:59

Sample crash report:
INFO: task kworker/1:0:16192 blocked for more than 143 seconds.
      Not tainted 6.6.99-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0     state:D stack:24616 pid:16192 ppid:2      flags:0x00004000
Workqueue: events switchdev_deferred_process_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x14e2/0x4580 kernel/sched/core.c:6700
 schedule+0xbd/0x170 kernel/sched/core.c:6774
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6833
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x6b7/0xcc0 kernel/locking/mutex.c:747
 switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
 process_one_work kernel/workqueue.c:2634 [inline]
 process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
 worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>
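
The trace shows the worker asleep in __mutex_lock(), i.e. waiting for
rtnl_mutex: switchdev_deferred_process_work() takes the RTNL lock as its
very first action (frame +0xe, and lock #2 of this kworker in the dump
below). A minimal sketch of that function and of how it is queued,
paraphrased from the upstream source around net/switchdev/switchdev.c:104
(consult the exact linux-6.6.y tree for the authoritative code):

    /* Deferred switchdev ops are drained under RTNL, so this worker
     * sleeps in rtnl_lock() whenever another task holds rtnl_mutex. */
    static void switchdev_deferred_process_work(struct work_struct *work)
    {
            rtnl_lock();                  /* <- the trace blocks here */
            switchdev_deferred_process(); /* run all deferred ops */
            rtnl_unlock();
    }

    /* The "deferred_process_work" item seen as lock #1 in the dump. */
    static DECLARE_WORK(deferred_process_work, switchdev_deferred_process_work);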

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
 #0: ffffffff8cd35b78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #0: ffffffff8cd35b78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
3 locks held by kworker/0:0/8:
 #0: ffff88802cf1d938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff88802cf1d938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc900000d7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc900000d7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4701
1 lock held by khungtaskd/29:
 #0: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #0: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #0: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/1:2/3382:
2 locks held by getty/5553:
 #0: ffff88802d7860a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by kworker/0:3/5781:
3 locks held by kworker/1:3/5833:
 #0: ffff88802cf1d938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff88802cf1d938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc900047ffd00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc900047ffd00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4701
2 locks held by kworker/0:6/5849:
 #0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc9000499fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc9000499fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
1 lock held by syz-executor/8913:
 #0: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x41/0x1c0 drivers/net/tun.c:3511
3 locks held by kworker/1:0/16192:
 #0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017870938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc9000428fd00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc9000428fd00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by syz.1.7584/23255:
 #0: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8dfbb008 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x76f/0xf10 net/core/rtnetlink.c:6472
 #1: ffffffff8cd35b78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8cd35b78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by dhcpcd/23271:
 #0: ffff88807f6d6130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1731 [inline]
 #0: ffff88807f6d6130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3259
1 lock held by dhcpcd/23272:
 #0: ffff88807f6d2130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1731 [inline]
 #0: ffff88807f6d2130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3259
1 lock held by dhcpcd/23273:
 #0: ffff88805f6da130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1731 [inline]
 #0: ffff88805f6da130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3259
1 lock held by dhcpcd/23275:
 #0: ffff88805b932130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1731 [inline]
 #0: ffff88805b932130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcc0 net/packet/af_packet.c:3259
1 lock held by dhcpcd/23278:
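
One cautious reading of the dump (lockdep also lists blocked waiters as
"holding" the lock they are acquiring, so the dump alone does not prove
ownership): syz.1.7584/23255 took rtnl_mutex via rtnetlink_rcv_msg() and is
now serialized behind rcu_state.exp_mutex inside synchronize_rcu_expedited(),
while every other task parked at rtnl_lock(), including the hung kworker,
queues behind it. A hypothetical kernel-style sketch of that chain
(task_a_sketch/task_b_sketch are illustrative names, not from the report;
only the APIs they call are real):

    /* Task A (cf. syz.1.7584/23255): holds rtnl_mutex while waiting for
     * an expedited RCU grace period, which itself serializes on
     * rcu_state.exp_mutex. */
    static void task_a_sketch(void)
    {
            rtnl_lock();
            synchronize_rcu_expedited();  /* slow => rtnl_mutex stays pinned */
            rtnl_unlock();
    }

    /* Task B (cf. kworker/1:0/16192): never reaches its real work,
     * because switchdev_deferred_process_work() starts with rtnl_lock(). */
    static void task_b_sketch(void)
    {
            rtnl_lock();                  /* sleeps >143s => hung-task report */
            switchdev_deferred_process();
            rtnl_unlock();
    }

If that reading is right, the switchdev worker is a victim rather than the
culprit, and the open question is why the expedited grace period (see the
rcu_gp kworker and pool_workqueue_/3 entries above) is not completing.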

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/07/23 04:22 | linux-6.6.y | d96eb99e2f0e | e1dd4f22 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci2-linux-6-6-kasan | INFO: task hung in switchdev_deferred_process_work