INFO: task udevd:6014 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd state:D stack:21920 pid:6014 tgid:6014 ppid:5201 task_flags:0x400140 flags:0x00080000
Call Trace:
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6960
 schedule_timeout+0x12b/0x270 kernel/time/sleep_timeout.c:99
 wait_for_reconnect drivers/block/nbd.c:1107 [inline]
 nbd_handle_cmd drivers/block/nbd.c:1149 [inline]
 nbd_queue_rq+0x662/0xf10 drivers/block/nbd.c:1207
 blk_mq_dispatch_rq_list+0x4c0/0x1900 block/blk-mq.c:2129
 __blk_mq_do_dispatch_sched block/blk-mq-sched.c:168 [inline]
 blk_mq_do_dispatch_sched block/blk-mq-sched.c:182 [inline]
 __blk_mq_sched_dispatch_requests+0xda4/0x1570 block/blk-mq-sched.c:307
 blk_mq_sched_dispatch_requests+0xd7/0x190 block/blk-mq-sched.c:329
 blk_mq_run_hw_queue+0x348/0x4f0 block/blk-mq.c:2367
 blk_mq_dispatch_list+0xd0c/0xe00 include/linux/spinlock.h:-1
 blk_mq_flush_plug_list+0x469/0x550 block/blk-mq.c:2976
 __blk_flush_plug+0x3d3/0x4b0 block/blk-core.c:1225
 blk_finish_plug block/blk-core.c:1252 [inline]
 __submit_bio+0x2d3/0x5a0 block/blk-core.c:651
 __submit_bio_noacct_mq block/blk-core.c:724 [inline]
 submit_bio_noacct_nocheck+0x2eb/0xa30 block/blk-core.c:755
 submit_bh fs/buffer.c:2829 [inline]
 block_read_full_folio+0x599/0x830 fs/buffer.c:2447
 filemap_read_folio+0x117/0x380 mm/filemap.c:2489
 do_read_cache_folio+0x350/0x590 mm/filemap.c:4082
 read_mapping_folio include/linux/pagemap.h:1009 [inline]
 read_part_sector+0xb6/0x2b0 block/partitions/core.c:722
 adfspart_check_ICS+0xa4/0xa50 block/partitions/acorn.c:360
 check_partition block/partitions/core.c:141 [inline]
 blk_add_partitions block/partitions/core.c:589 [inline]
 bdev_disk_changed+0x75f/0x14b0 block/partitions/core.c:693
 blkdev_get_whole+0x380/0x510 block/bdev.c:765
 bdev_open+0x31e/0xd30 block/bdev.c:974
 blkdev_open+0x457/0x600 block/fops.c:702
 do_dentry_open+0x7ce/0x1420 fs/open.c:962
 vfs_open+0x3b/0x340 fs/open.c:1094
 do_open fs/namei.c:4628 [inline]
 path_openat+0x340e/0x3dd0 fs/namei.c:4787
 do_filp_open+0x1fa/0x410 fs/namei.c:4814
 do_sys_openat2+0x121/0x200 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_openat fs/open.c:1452 [inline]
 __se_sys_openat fs/open.c:1447 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1447
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6831ca7407
RSP: 002b:00007ffffc87cf60 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f6832498880 RCX: 00007f6831ca7407
RDX: 00000000000a0800 RSI: 000055d9cada4e50 RDI: ffffffffffffff9c
RBP: 000055d9cad59910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055d9cad71e00
R13: 000055d9cad67190 R14: 0000000000000000 R15: 000055d9cad71e00

Showing all locks held in the system:
1 lock held by rcu_exp_gp_kthr/18:
 #0: ffff8880b893a7d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:639
1 lock held by khungtaskd/31:
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by getty/5589:
 #0: ffff888032e480a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
4 locks held by syz-executor/5837:
 #0: ffff88807316cec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff88807316cec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2715
 #1: ffff88807316c0c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x640/0xff0 net/bluetooth/hci_sync.c:5314
 #2: ffffffff8f467348 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2143 [inline]
 #2: ffffffff8f467348 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2637
 #3: ffff8880285c4338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x70/0x680 net/bluetooth/l2cap_core.c:1763
1 lock held by syz-executor/5838:
 #0: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
 #0: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:956
3 locks held by kworker/1:7/5920:
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000466fb80 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000466fb80 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
 #2: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:956
2 locks held by kworker/0:7/5953:
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90004effb80 (xfrm_state_gc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90004effb80 (xfrm_state_gc_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
3 locks held by udevd/6014:
 #0: ffff888025013358 (&disk->open_mutex){+.+.}-{4:4}, at: bdev_open+0xe0/0xd30 block/bdev.c:962
 #1: ffff888143733e18 (set->srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:185 [inline]
 #1: ffff888143733e18 (set->srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:277 [inline]
 #1: ffff888143733e18 (set->srcu){.+.+}-{0:0}, at: blk_mq_run_hw_queue+0x31f/0x4f0 block/blk-mq.c:2367
 #2: ffff8880251451f8 (&cmd->lock){+.+.}-{4:4}, at: nbd_queue_rq+0xc8/0xf10 drivers/block/nbd.c:1199

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3c/0xf80 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 5829 Comm: syz-executor Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:26 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:109 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:127 [inline]
RIP: 0010:lock_is_held_type+0x65/0x190 kernel/locking/lockdep.c:5936
Code: 85 ef 00 00 00 65 4c 8b 2c 25 08 f0 76 92 41 83 bd 2c 0b 00 00 00 0f 85 d8 00 00 00 89 f5 49 89 fe 48 c7 04 24 00 00 00 00 9c <8f> 04 24 4c 8b 24 24 fa 48 c7 c7 94 09 78 8d e8 67 1a 00 00 65 ff
RSP: 0018:ffffc9000400f9c0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 00000000ffffffff RCX: 32e75bc684637d00
RDX: 0000000000000000 RSI: 00000000ffffffff RDI: ffffffff8df41cc0
RBP: 00000000ffffffff R08: ffffffff8230866e R09: ffffffff8df41cc0
R10: 000000000000000b R11: ffffffff81ad4ad0 R12: 0000000000062073
R13: ffff888027645b80 R14: ffffffff8df41cc0 R15: 0000000000001000
FS:  0000000000000000(0000) GS:ffff8881261b1000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2270117d60 CR3: 000000000dd3a000 CR4: 00000000003526f0
Call Trace:
 lookup_page_ext mm/page_ext.c:254 [inline]
 page_ext_lookup+0xe7/0x180 mm/page_ext.c:509
 page_ext_iter_begin include/linux/page_ext.h:132 [inline]
 __update_page_owner_free_handle+0x103/0x470 mm/page_owner.c:275
 __reset_page_owner+0x85/0x1f0 mm/page_owner.c:312
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1394 [inline]
 __free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2901
 vfree+0x25a/0x400 mm/vmalloc.c:3440
 kcov_put kernel/kcov.c:439 [inline]
 kcov_close+0x28/0x50 kernel/kcov.c:535
 __fput+0x44c/0xa70 fs/file_table.c:468
 task_work_run+0x1d4/0x260 kernel/task_work.c:233
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x6c5/0x2310 kernel/exit.c:973
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1114
 __do_sys_exit_group kernel/exit.c:1125 [inline]
 __se_sys_exit_group kernel/exit.c:1123 [inline]
 __x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1123
 x64_sys_call+0x2210/0x2210 arch/x86/include/generated/asm/syscalls_64.h:232
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f226f38f749
Code: Unable to access opcode bytes at 0x7f226f38f71f.
RSP: 002b:00007ffc6e613198 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00007f226f41585d RCX: 00007f226f38f749
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000043
RBP: 00007f226f41586f R08: 00007ffc6e610f37 R09: 00000000000927c0
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000006
R13: 00000000000927c0 R14: 0000000000044054 R15: 00007ffc6e613340
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog
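
For context on where udevd is parked: the blocked frame is wait_for_reconnect (drivers/block/nbd.c:1107, inlined into nbd_handle_cmd), reached from nbd_queue_rq while udevd holds &disk->open_mutex and &cmd->lock per the lock dump above. The sketch below is a paraphrase of that reconnect wait, not the exact code in this tree: the function and field names are taken from the trace and from mainline nbd.c, but details vary across kernel versions.

	/* Hedged sketch of nbd's reconnect wait, paraphrased from
	 * drivers/block/nbd.c near the cited lines; exact code in this
	 * kernel may differ.
	 */
	#include <linux/wait.h>
	#include <linux/atomic.h>

	static int wait_for_reconnect(struct nbd_device *nbd)
	{
		struct nbd_config *config = nbd->config;

		/* No dead-connection timeout configured: fail fast. */
		if (!config->dead_conn_timeout)
			return 0;

		/* Sleep (uninterruptibly, hence state D above) until a
		 * live connection returns, the device is marked
		 * disconnected, or the dead-connection timeout expires.
		 */
		if (!wait_event_timeout(config->conn_wait,
					test_bit(NBD_RT_DISCONNECTED,
						 &config->runtime_flags) ||
					atomic_read(&config->live_connections) > 0,
					config->dead_conn_timeout))
			return 0;

		return !test_bit(NBD_RT_DISCONNECTED, &config->runtime_flags);
	}

Because this wait runs inside ->queue_rq() with &cmd->lock held, and the partition-rescan open path above it still holds &disk->open_mutex, any other opener of the same nbd disk queues up behind udevd until the server reconnects or the timeout fires; with a long enough dead_conn_timeout that is what trips the 143-second hung-task watchdog here.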