INFO: task kworker/0:8:9238 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:8     state:D stack:30440 pid:9238 tgid:9238 ppid:2 task_flags:0x208040 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5397 [inline]
 __schedule+0x16fd/0x4cf0 kernel/sched/core.c:6786
 __schedule_loop kernel/sched/core.c:6864 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6879
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6936
 kthread+0x2bc/0x8a0 kernel/kthread.c:452
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Showing all locks held in the system:
1 lock held by kworker/R-rcu_g/4:
5 locks held by kworker/0:0/9:
4 locks held by kworker/u8:0/12:
 #0: ffff88801b2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90000117bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000117bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f50f510 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x800 net/core/net_namespace.c:662
 #3: ffff88807db114e8 (&wg->device_update_lock){+.+.}-{4:4}, at: wg_destruct+0x116/0x2f0 drivers/net/wireguard/device.c:249
1 lock held by kworker/R-mm_pe/14:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
1 lock held by khungtaskd/31:
 #0: ffffffff8e13f0e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13f0e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13f0e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
3 locks held by kworker/1:1/54:
 #0: ffff88801a478d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a478d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90000be7bc0 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000be7bc0 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8e144bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #2: ffffffff8e144bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
2 locks held by kworker/u8:5/1097:
2 locks held by kworker/0:2/1203:
3 locks held by kworker/u8:6/2959:
2 locks held by kworker/u8:7/3486:
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000bba7bc0 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000bba7bc0 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
3 locks held by kworker/u8:8/3522:
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000bc87bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000bc87bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by getty/5601:
 #0: ffff8880307470a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000363c2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
2 locks held by syz-executor/5848:
 #0: ffff8880544d40e0 (&type->s_umount_key#61){+.+.}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff8880544d40e0 (&type->s_umount_key#61){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff8880544d40e0 (&type->s_umount_key#61){+.+.}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:506
 #1: ffffffff8e144ac0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3786
2 locks held by kworker/0:3/5858:
2 locks held by kworker/R-wg-cr/5891:
1 lock held by kworker/R-wg-cr/5895:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5896:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
2 locks held by kworker/R-wg-cr/5897:
2 locks held by kworker/R-wg-cr/5898:
1 lock held by kworker/R-wg-cr/5899:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5900:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
2 locks held by kworker/R-wg-cr/5902:
1 lock held by kworker/R-wg-cr/5904:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
2 locks held by kworker/R-wg-cr/5907:
2 locks held by kworker/R-wg-cr/5908:
1 lock held by kworker/R-wg-cr/5909:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
2 locks held by kworker/0:5/5944:
2 locks held by kworker/0:6/5951:
5 locks held by kworker/1:7/6005:
 #0: ffff8881446e4d48 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff8881446e4d48 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90004a97bc0 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90004a97bc0 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffff88814471b198 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #2: ffff88814471b198 (&dev->mutex){....}-{4:4}, at: hub_event+0x184/0x4a20 drivers/usb/core/hub.c:5898
 #3: ffff88805f7a7198 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #3: ffff88805f7a7198 (&dev->mutex){....}-{4:4}, at: usb_disconnect+0xf8/0x950 drivers/usb/core/hub.c:2335
 #4: ffff888053252160 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #4: ffff888053252160 (&dev->mutex){....}-{4:4}, at: __device_driver_lock drivers/base/dd.c:1094 [inline]
 #4: ffff888053252160 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0xb6/0x7c0 drivers/base/dd.c:1292
2 locks held by kworker/0:7/6045:
1 lock held by syz-executor/9317:
 #0: ffff888051a3a0e0 (&type->s_umount_key#87){+.+.}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff888051a3a0e0 (&type->s_umount_key#87){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff888051a3a0e0 (&type->s_umount_key#87){+.+.}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:506
1 lock held by kworker/R-wg-cr/9393:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
2 locks held by kworker/R-wg-cr/9394:
1 lock held by kworker/R-wg-cr/9396:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
2 locks held by kworker/u8:9/9467:
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9001e5f7bc0 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9001e5f7bc0 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
1 lock held by kworker/R-wg-cr/9518:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/9519:
1 lock held by kworker/R-wg-cr/9520:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
2 locks held by kworker/R-wg-cr/9601:
3 locks held by kworker/R-wg-cr/9602:
1 lock held by kworker/R-wg-cr/9603:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
1 lock held by kworker/R-scsi_/10414:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3327 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xd01/0xdd0 kernel/workqueue.c:3546
2 locks held by syz-executor/10456:
 #0: ffffffff8eca3700 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eca3700 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eca3700 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054
1 lock held by syz-executor/10459:
 #0: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054
2 locks held by syz-executor/10485:
 #0: ffffffff8f50f510 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x304/0x4d0 net/core/net_namespace.c:570
 #1: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x2ab/0x800 net/ipv4/ip_tunnel.c:1160
2 locks held by syz-executor/10501:
 #0: ffffffff8f50f510 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x304/0x4d0 net/core/net_namespace.c:570
 #1: ffffffff8f51c108 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x2ab/0x800 net/ipv4/ip_tunnel.c:1160
1 lock held by kworker/R-xfs-c/10509:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3327 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xd01/0xdd0 kernel/workqueue.c:3546
1 lock held by kworker/R-wg-cr/10511:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3327 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x82/0xdd0 kernel/workqueue.c:3454
1 lock held by kworker/R-wg-cr/10512:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3327 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x82/0xdd0 kernel/workqueue.c:3454
1 lock held by kworker/R-wg-cr/10514:
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3327 [inline]
 #0: ffffffff8dfe5d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x82/0xdd0 kernel/workqueue.c:3454

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc7-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5893 Comm: kworker/R-wg-cr Not tainted 6.16.0-rc7-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: 0x0 (wg-crypt-wg0)
RIP: 0010:arch_atomic_read arch/x86/include/asm/atomic.h:23 [inline]
RIP: 0010:raw_atomic_read include/linux/atomic/atomic-arch-fallback.h:457 [inline]
RIP: 0010:rcu_is_watching_curr_cpu include/linux/context_tracking.h:128 [inline]
RIP: 0010:rcu_is_watching+0x5a/0xb0 kernel/rcu/tree.c:745
Code: f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 fc 0c 7b 00 48 c7 c3 58 ff a0 92 49 03 1e 48 89 d8 48 c1 e8 03 42 0f b6 04 38 <84> c0 75 34 8b 03 65 ff 0d 39 a4 f8 10 74 11 83 e0 04 c1 e8 02 5b
RSP: 0018:ffffc900000074b8 EFLAGS: 00000016
RAX: 0000000000000000 RBX: ffff8880b8632f58 RCX: 7e5a09160c9ecd00
RDX: ffffc90000007501 RSI: ffffffff8be28ce0 RDI: ffffffff8be28ca0
RBP: dffffc0000000000 R08: ffffc90000007fe0 R09: ffffc900000075f8
R10: dffffc0000000000 R11: fffff52000000ec1 R12: ffffc90000007ff0
R13: ffffc90000000000 R14: ffffffff8dbbcc70 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff888125c23000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b30f10ff8 CR3: 0000000079335000 CR4: 0000000000350ef0
Call Trace:
 rcu_read_unlock include/linux/rcupdate.h:869 [inline]
 class_rcu_destructor include/linux/rcupdate.h:1155 [inline]
 unwind_next_frame+0x1965/0x2390 arch/x86/kernel/unwind_orc.c:680
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x62/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2381 [inline]
 slab_free mm/slub.c:4643 [inline]
 kfree+0x18e/0x440 mm/slub.c:4842
 dummy_timer+0x801/0x4550 drivers/usb/gadget/udc/dummy_hcd.c:1989
 __run_hrtimer kernel/time/hrtimer.c:1761 [inline]
 __hrtimer_run_queues+0x52c/0xc60 kernel/time/hrtimer.c:1825
 hrtimer_run_softirq+0x187/0x2b0 kernel/time/hrtimer.c:1842
 handle_softirqs+0x286/0x870 kernel/softirq.c:579
 __do_softirq kernel/softirq.c:613 [inline]
 invoke_softirq kernel/softirq.c:453 [inline]
 __irq_exit_rcu+0xca/0x1f0 kernel/softirq.c:680
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:696
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1050 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1050
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:__mutex_lock_common kernel/locking/mutex.c:581 [inline]
RIP: 0010:__mutex_lock+0x149/0xe80 kernel/locking/mutex.c:747
Code: 0d 0c 00 00 83 3d c6 0d 3c 0e 00 75 23 49 8d 7c 24 60 48 89 f8 48 c1 e8 03 42 80 3c 28 00 74 05 e8 9c 33 b3 f6 4d 39 64 24 60 <0f> 85 7b 0b 00 00 bf 01 00 00 00 e8 87 dc 21 f6 49 8d 7c 24 68 45
RSP: 0018:ffffc900041dfac0 EFLAGS: 00000246
RAX: 1ffffffff1bfcba8 RBX: 0000000000000000 RCX: ffffffff99ab1203
RDX: ffff888030dcda00 RSI: ffffffff8db8406b RDI: ffffffff8dfe5d40
RBP: ffffc900041dfc60 R08: ffffc900041dfbc7 R09: ffffc900041dfba0
R10: dffffc0000000000 R11: fffff5200083bf79 R12: ffffffff8dfe5ce0
R13: dffffc0000000000 R14: ffff88802919d358 R15: 0000000000000000
 worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3531
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245