BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 192s!
BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 192s!
BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=-20 stuck for 186s!
Showing busy workqueues and worker pools:
workqueue events: flags=0x0
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=12/256 refcnt=13
    pending: 3*nsim_dev_hwstats_traffic_work, 3*psi_avgs_work, 4*ovs_dp_masks_rebalance, kfree_rcu_monitor, ima_keys_handler
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=9/256 refcnt=10
    pending: nsim_dev_hwstats_traffic_work, vmstat_shepherd, ovs_dp_masks_rebalance, 2*psi_avgs_work, xfrm_state_gc_task, kfree_rcu_monitor, switchdev_deferred_process_work, rht_deferred_worker
workqueue events_highpri: flags=0x10
  pwq 1: cpus=0 node=0 flags=0x0 nice=-20 active=2/256 refcnt=3
    in-flight: 95:snd_vmidi_output_work snd_vmidi_output_work
workqueue events_long: flags=0x0
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=4/256 refcnt=5
    pending: 3*defense_work_handler, br_fdb_cleanup
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=4/256 refcnt=5
    pending: 3*defense_work_handler, br_multicast_gc_work
workqueue events_unbound: flags=0x2
  pwq 4: cpus=0-1 flags=0x4 nice=0 active=11/512 refcnt=12
    pending: toggle_allocation_gate, 4*nsim_dev_trap_report_work, cfg80211_wiphy_work, flush_memcg_stats_dwork, crng_reseed, macvlan_process_broadcast, 2*idle_cull_fn
  pwq 4: cpus=0-1 flags=0x4 nice=0 active=3/512 refcnt=4
    pending: cfg80211_wiphy_work, 2*idle_cull_fn
workqueue events_freezable: flags=0x4
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: update_balloon_stats_func
workqueue events_power_efficient: flags=0x80
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=6/256 refcnt=7
    pending: wg_ratelimiter_gc_entries, neigh_managed_work, neigh_periodic_work, do_cache_clean, gc_worker, check_lifetime
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256 refcnt=3
    pending: neigh_managed_work, neigh_periodic_work
workqueue rcu_gp: flags=0x8
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    in-flight: 786:wait_rcu_exp_gp
workqueue netns: flags=0xe000a
  pwq 4: cpus=0-1 flags=0x4 nice=0 active=1/1 refcnt=4
    in-flight: 3475:cleanup_net
workqueue mm_percpu_wq: flags=0x8
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_update
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: vmstat_update
workqueue writeback: flags=0x4a
  pwq 4: cpus=0-1 flags=0x4 nice=0 active=3/256 refcnt=4
    pending: wb_update_bandwidth_workfn, 2*wb_workfn
workqueue kblockd: flags=0x18
  pwq 3: cpus=1 node=0 flags=0x0 nice=-20 active=1/256 refcnt=2
    pending: blk_mq_timeout_work
workqueue dm_bufio_cache: flags=0x8
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: work_fn
workqueue ipv6_addrconf: flags=0xe000a
  pwq 4: cpus=0-1 flags=0x4 nice=0 active=1/1 refcnt=8
    pending: addrconf_verify_work
    inactive: 4*addrconf_verify_work
workqueue krxrpcd: flags=0xa001a
  pwq 5: cpus=0-1 node=0 flags=0x4 nice=-20 active=1/1 refcnt=8
    pending: rxrpc_peer_keepalive_worker
    inactive: 4*rxrpc_peer_keepalive_worker
workqueue bat_events: flags=0xe000a
  pwq 4: cpus=0-1 flags=0x4 nice=0 active=1/1 refcnt=41
    in-flight: 1124:batadv_nc_worker
    inactive: 3*batadv_purge_orig, batadv_nc_worker, 4*batadv_mcast_mla_update, 2*batadv_nc_worker, 13*batadv_iv_send_outstanding_bat_ogm_packet, batadv_dat_purge, batadv_bla_periodic_work, batadv_dat_purge, batadv_bla_periodic_work, batadv_dat_purge, batadv_bla_periodic_work, batadv_dat_purge, batadv_bla_periodic_work, batadv_purge_orig, batadv_iv_send_outstanding_bat_ogm_packet, 4*batadv_tt_purge
workqueue wg-crypt-wg0: flags=0x28
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: wg_packet_encrypt_worker
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: wg_packet_encrypt_worker
workqueue wg-crypt-wg1: flags=0x28
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: wg_packet_encrypt_worker
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: wg_packet_encrypt_worker
workqueue wg-crypt-wg2: flags=0x28
  pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: wg_packet_encrypt_worker
  pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
    pending: wg_packet_encrypt_worker
pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=192s workers=7 idle: 8 9 5866 968 5865 5847
pool 1: cpus=0 node=0 flags=0x0 nice=-20 hung=0s workers=3 idle: 10 6313
pool 4: cpus=0-1 flags=0x4 nice=0 hung=192s workers=10 idle: 11 48 1142 63 42 2902 3444 12
Showing backtraces of running workers in stalled CPU-bound worker pools:
pool 0:
task:kworker/0:2 state:R running task stack:25096 pid:786 ppid:2 flags:0x00004000
Workqueue: rcu_gp wait_rcu_exp_gp
Call Trace:
 context_switch kernel/sched/core.c:5380 [inline]
 __schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
 preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6866
 preempt_schedule+0xab/0xc0 kernel/sched/core.c:6890
 preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
 _raw_spin_unlock_irq+0x40/0x50 kernel/locking/spinlock.c:202
 sched_submit_work kernel/sched/core.c:6737 [inline]
 schedule+0x6b/0x170 kernel/sched/core.c:6770
 schedule_timeout+0x160/0x280 kernel/time/timer.c:2167
 synchronize_rcu_expedited_wait_once kernel/rcu/tree_exp.h:581 [inline]
 synchronize_rcu_expedited_wait kernel/rcu/tree_exp.h:633 [inline]
 rcu_exp_wait_wake kernel/rcu/tree_exp.h:702 [inline]
 rcu_exp_sel_wait_wake+0x7f0/0x2070 kernel/rcu/tree_exp.h:736
 process_one_work kernel/workqueue.c:2634 [inline]
 process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
 worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
hrtimer: interrupt took 209719430 ns