INFO: task syz.3.10768:11154 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.10768 state:D stack:23896 pid:11154 tgid:11153 ppid:5867 task_flags:0x400140 flags:0x00080002
Call Trace:
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6960
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7017
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0x7e6/0x1350 kernel/locking/mutex.c:776
 nfsd_nl_listener_get_doit+0x10a/0x5e0 fs/nfsd/nfsctl.c:2030
 genl_family_rcv_msg_doit+0x215/0x300 net/netlink/genetlink.c:1115
 genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
 genl_rcv_msg+0x60e/0x790 net/netlink/genetlink.c:1210
 netlink_rcv_skb+0x208/0x470 net/netlink/af_netlink.c:2550
 genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
 netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
 netlink_unicast+0x82f/0x9e0 net/netlink/af_netlink.c:1344
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1894
 sock_sendmsg_nosec net/socket.c:718 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:733
 ____sys_sendmsg+0x505/0x820 net/socket.c:2608
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2662
 __sys_sendmsg net/socket.c:2694 [inline]
 __do_sys_sendmsg net/socket.c:2699 [inline]
 __se_sys_sendmsg net/socket.c:2697 [inline]
 __x64_sys_sendmsg+0x19b/0x260 net/socket.c:2697
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f957d78f749
RSP: 002b:00007f957e5e9038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f957d9e5fa0 RCX: 00007f957d78f749
RDX: 0000000020048000 RSI: 0000200000000000 RDI: 0000000000000009
RBP: 00007f957d813f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f957d9e6038 R14: 00007f957d9e5fa0 R15: 00007ffdf075e7d8

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:14/3580:
 #0: ffff88814cc9b148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88814cc9b148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000c35fb80 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000c35fb80 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x112/0x14b0 net/ipv6/addrconf.c:4194
2 locks held by getty/5593:
 #0: ffff8880301030a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
3 locks held by kworker/1:7/5918:
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90004417b80 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90004417b80 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:311 [inline]
 #2: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f6/0x730 kernel/rcu/tree_exp.h:956
7 locks held by kworker/0:8/11366:
3 locks held by kworker/u8:34/5378:
 #0: ffff88801a069948 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801a069948 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000bdffb80 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000bdffb80 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by syz.2.10373/9732:
 #0: ffffffff8f35d1b0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e233a08 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x2cb/0x960 fs/nfsd/nfsctl.c:1590
5 locks held by syz-executor/10483:
 #0: ffff888057b08ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff888057b08ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2715
 #1: ffff888057b080c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x640/0xff0 net/bluetooth/hci_sync.c:5314
 #2: ffffffff8f467308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2143 [inline]
 #2: ffffffff8f467308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2637
 #3: ffff888059f3eb38 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x70/0x680 net/bluetooth/l2cap_core.c:1763
 #4: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
 #4: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:956
2 locks held by syz.3.10768/11154:
 #0: ffffffff8f35d1b0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e233a08 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_get_doit+0x10a/0x5e0 fs/nfsd/nfsctl.c:2030
2 locks held by syz-executor/16697:
 #0: ffffffff8ea7b708 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8ea7b708 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8ea7b708 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
1 lock held by syz.4.12400/16833:
 #0: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff8f2f5608 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3436
3 locks held by syz.6.12401/16838:
 #0: ffff888029ed0ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff888029ed0ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2715
 #1: ffff888029ed00c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x640/0xff0 net/bluetooth/hci_sync.c:5314
 #2: ffffffff8f467308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2143 [inline]
 #2: ffffffff8f467308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2637
3 locks held by syz.1.12403/16848:
 #0: ffff88804a764ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff88804a764ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2715
 #1: ffff88804a7640c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x640/0xff0 net/bluetooth/hci_sync.c:5314
 #2: ffffffff8f467308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2143 [inline]
 #2: ffffffff8f467308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2637

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3c/0xf80 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Sending NMI from CPU 1 to CPUs 0:
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5374 Comm: kworker/u8:32 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:orc_ip arch/x86/kernel/unwind_orc.c:80 [inline]
RIP: 0010:__orc_find arch/x86/kernel/unwind_orc.c:102 [inline]
RIP: 0010:orc_find arch/x86/kernel/unwind_orc.c:227 [inline]
RIP: 0010:unwind_next_frame+0x1315/0x2390 arch/x86/kernel/unwind_orc.c:494
Code: 83 e0 fe 4c 8d 3c 45 00 00 00 00 49 01 ef 4c 89 f8 48 c1 e8 03 48 b9 00 00 00 00 00 fc ff df 0f b6 04 08 84 c0 75 27 49 63 07 <4c> 01 f8 49 8d 4f 04 4c 39 e0 48 0f 46 e9 49 8d 47 fc 48 0f 47 d8
RSP: 0018:ffffc9000c19ee78 EFLAGS: 00000246
RAX: fffffffff27c4b53 RBX: ffffffff8fa07c48 RCX: dffffc0000000000
RDX: ffffffff8fa07c48 RSI: ffffffff901f90e6 RDI: ffffffff8bbfc600
RBP: ffffffff8fa07c48 R08: 0000000000000001 R09: ffffffff8df41cc0
R10: ffffc9000c19ef98 R11: ffffffff81ad4ad0 R12: ffffffff821cc815
R13: ffffffff8fa07c48 R14: ffffc9000c19ef48 R15: ffffffff8fa07c48
FS:  0000000000000000(0000) GS:ffff8881260b1000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000561f71502f40 CR3: 000000000dd3a000 CR4: 00000000003526f0
Call Trace:
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
 save_stack+0xf5/0x1f0 mm/page_owner.c:156
 __set_page_owner+0x8d/0x4c0 mm/page_owner.c:332
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x234/0x290 mm/page_alloc.c:1845
 prep_new_page mm/page_alloc.c:1853 [inline]
 get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3879
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5183
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
 alloc_slab_page mm/slub.c:3075 [inline]
 allocate_slab+0x86/0x3b0 mm/slub.c:3248
 new_slab mm/slub.c:3302 [inline]
 ___slab_alloc+0xf2b/0x1960 mm/slub.c:4651
 __slab_alloc+0x65/0x100 mm/slub.c:4774
 __slab_alloc_node mm/slub.c:4850 [inline]
 slab_alloc_node mm/slub.c:5246 [inline]
 __do_kmalloc_node mm/slub.c:5651 [inline]
 __kmalloc_node_track_caller_noprof+0x5cb/0x810 mm/slub.c:5759
 kmalloc_reserve+0x136/0x290 net/core/skbuff.c:608
 __alloc_skb+0x27e/0x430 net/core/skbuff.c:690
 alloc_skb include/linux/skbuff.h:1383 [inline]
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:818 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:875 [inline]
 nsim_dev_trap_report_work+0x29a/0xb80 drivers/net/netdevsim/dev.c:921
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog