INFO: task kworker/1:0H:25 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0H    state:D stack:25096 pid:25    tgid:25    ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: kblockd blk_mq_requeue_work
Call Trace:
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x14bc/0x5000 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6960
 schedule_timeout+0x12b/0x270 kernel/time/sleep_timeout.c:99
 wait_for_reconnect drivers/block/nbd.c:1107 [inline]
 nbd_handle_cmd drivers/block/nbd.c:1149 [inline]
 nbd_queue_rq+0x662/0xf10 drivers/block/nbd.c:1207
 blk_mq_dispatch_rq_list+0x4c0/0x1900 block/blk-mq.c:2129
 __blk_mq_do_dispatch_sched block/blk-mq-sched.c:168 [inline]
 blk_mq_do_dispatch_sched block/blk-mq-sched.c:182 [inline]
 __blk_mq_sched_dispatch_requests+0xda4/0x1570 block/blk-mq-sched.c:307
 blk_mq_sched_dispatch_requests+0xd7/0x190 block/blk-mq-sched.c:329
 blk_mq_run_hw_queue+0x348/0x4f0 block/blk-mq.c:2367
 blk_mq_run_hw_queues+0x33e/0x430 block/blk-mq.c:2416
 blk_mq_requeue_work+0x717/0x760 block/blk-mq.c:1583
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

Showing all locks held in the system:
1 lock held by rcu_exp_gp_kthr/18:
4 locks held by kworker/1:0H/25:
 #0: ffff8881404cd148 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff8881404cd148 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc900001f7b80 ((work_completion)(&(&q->requeue_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc900001f7b80 ((work_completion)(&(&q->requeue_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff88814338a218 (set->srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:185 [inline]
 #2: ffff88814338a218 (set->srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:277 [inline]
 #2: ffff88814338a218 (set->srcu){.+.+}-{0:0}, at: blk_mq_run_hw_queue+0x31f/0x4f0 block/blk-mq.c:2367
 #3: ffff8880250a51f8 (&cmd->lock){+.+.}-{4:4}, at: nbd_queue_rq+0xc8/0xf10 drivers/block/nbd.c:1199
1 lock held by khungtaskd/31:
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8df41cc0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by getty/5597:
 #0: ffff8880340cb0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
1 lock held by udevd/6827:
 #0: ffff888024fd0358 (&disk->open_mutex){+.+.}-{4:4}, at: bdev_open+0xe0/0xd30 block/bdev.c:962
2 locks held by kworker/u8:24/8284:
 #0: ffff88801a069148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801a069148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffff8880b8924448 (psi_seq){-.-.}-{0:0}, at: psi_task_switch+0x53/0x880 kernel/sched/psi.c:933
1 lock held by syz.1.1808/13326:
2 locks held by syz.2.1824/13384:
 #0: ffff88804fb10d88 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88804fb10d88 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: __sock_release net/socket.c:652 [inline]
 #0: ffff88804fb10d88 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1446
 #1: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:311 [inline]
 #1: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f6/0x730 kernel/rcu/tree_exp.h:956
2 locks held by syz.0.1828/13403:
 #0: ffff88804fb12a48 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #0: ffff88804fb12a48 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: __sock_release net/socket.c:652 [inline]
 #0: ffff88804fb12a48 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1446
 #1: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
 #1: ffffffff8df477f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:956

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3c/0xf80 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 13326 Comm: syz.1.1808 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:-1 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:109 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:127 [inline]
RIP: 0010:lock_acquire+0xc9/0x340 kernel/locking/lockdep.c:5864
Code: 00 65 48 8b 04 25 08 f0 76 92 83 b8 2c 0b 00 00 00 0f 85 d5 00 00 00 48 c7 44 24 30 00 00 00 00 9c 8f 44 24 30 4c 89 74 24 10 <4d> 89 fe 4c 8b 7c 24 30 fa 48 c7 c7 50 09 78 8d e8 a2 34 ab 09 65
RSP: 0018:ffffc9000bfaf428 EFLAGS: 00000246
RAX: ffff8880290d5b80 RBX: 0000000000000000 RCX: fb0af9b936ba5100
RDX: 0000000000000000 RSI: ffffffff81f9cab8 RDI: 1ffffffff1be8398
RBP: ffffffff81f9ca9c R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000406 R11: 0000000000000000 R12: 0000000000000002
R13: ffffffff8df41cc0 R14: 0000000000000000 R15: 0000000000000000
FS:  00007f90cd5f66c0(0000) GS:ffff8881260b1000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000e09000 CR3: 0000000077a5a000 CR4: 00000000003526f0
Call Trace:
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:867 [inline]
 page_ref_add_unless include/linux/page_ref.h:235 [inline]
 folio_ref_add_unless include/linux/page_ref.h:248 [inline]
 folio_try_get+0x38/0x340 include/linux/page_ref.h:264
 next_uptodate_folio+0xcb/0x5d0 mm/filemap.c:3694
 filemap_map_pages+0x14d7/0x1fd0 mm/filemap.c:3914
 do_fault_around mm/memory.c:5674 [inline]
 do_read_fault mm/memory.c:5707 [inline]
 do_fault mm/memory.c:5850 [inline]
 do_pte_missing mm/memory.c:4362 [inline]
 handle_pte_fault mm/memory.c:6234 [inline]
 __handle_mm_fault+0x34ab/0x5420 mm/memory.c:6366
 handle_mm_fault+0x40a/0x8e0 mm/memory.c:6535
 do_user_addr_fault+0x764/0x1380 arch/x86/mm/fault.c:1387
 handle_page_fault arch/x86/mm/fault.c:1476 [inline]
 exc_page_fault+0x82/0x100 arch/x86/mm/fault.c:1532
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
RIP: 0010:rep_movs_alternative+0x30/0x90 arch/x86/lib/copy_user_64.S:60
Code: 83 f9 08 73 25 85 c9 74 0f 8a 06 88 07 48 ff c7 48 ff c6 48 ff c9 75 f1 e9 8d 48 04 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 <48> 8b 06 48 89 07 48 83 c6 08 48 83 c7 08 83 e9 08 74 db 83 f9 08
RSP: 0018:ffffc9000bfafb78 EFLAGS: 00050216
RAX: 00007ffffffff001 RBX: 0000000000000020 RCX: 0000000000000020
RDX: 0000000000000001 RSI: 0000200000e09000 RDI: ffff888053f4fa40
RBP: ffffc9000bfafcc0 R08: ffff888053f4fa5f R09: 1ffff1100a7e9f4b
R10: dffffc0000000000 R11: ffffed100a7e9f4c R12: ffffc9000bfafd88
R13: 1ffff920017f5fb1 R14: ffff888053f4fa40 R15: 0000200000e09000
 copy_user_generic arch/x86/include/asm/uaccess_64.h:126 [inline]
 raw_copy_from_user arch/x86/include/asm/uaccess_64.h:141 [inline]
 _inline_copy_from_user include/linux/uaccess.h:187 [inline]
 _copy_from_user+0x7a/0xb0 lib/usercopy.c:18
 copy_from_user include/linux/uaccess.h:221 [inline]
 generic_map_update_batch+0x566/0x810 kernel/bpf/syscall.c:2035
 bpf_map_do_batch+0x39b/0x630 kernel/bpf/syscall.c:5647
 __sys_bpf+0x690/0x860 kernel/bpf/syscall.c:-1
 __do_sys_bpf kernel/bpf/syscall.c:6274 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:6272 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:6272
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f90cf38f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f90cd5f6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f90cf5e5fa0 RCX: 00007f90cf38f749
RDX: 0000000000000086 RSI: 0000200000000300 RDI: 000000000000001a
RBP: 00007f90cf413f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f90cf5e6038 R14: 00007f90cf5e5fa0 R15: 00007ffe26298908