INFO: task syz.0.6212:23363 blocked for more than 143 seconds.
      Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.6212      state:D stack:25160 pid:23363 tgid:23363 ppid:4422   task_flags:0x40044c flags:0x00080002
Call Trace:
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x16f3/0x4c20 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 request_wait_answer fs/fuse/dev.c:585 [inline]
 __fuse_request_send fs/fuse/dev.c:599 [inline]
 __fuse_simple_request+0x11d2/0x1bb0 fs/fuse/dev.c:693
 fuse_simple_request fs/fuse/fuse_i.h:1250 [inline]
 fuse_flush+0x5dd/0x810 fs/fuse/file.c:482
 filp_flush+0xc0/0x190 fs/open.c:1549
 filp_close+0x1d/0x40 fs/open.c:1562
 close_files fs/file.c:494 [inline]
 put_files_struct+0x1bd/0x360 fs/file.c:509
 do_exit+0x6a0/0x2300 kernel/exit.c:961
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1107
 get_signal+0x125d/0x1310 kernel/signal.c:3034
 arch_do_signal_or_restart+0xa0/0x790 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop+0x72/0x130 kernel/entry/common.c:40
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x2bd/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff2b9f21885
RSP: 002b:00007ffd1b1c47c0 EFLAGS: 00000293 ORIG_RAX: 00000000000000e6
RAX: fffffffffffffdfc RBX: 00007ff2ba145fa0 RCX: 00007ff2b9f21885
RDX: 00007ffd1b1c4800 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007ff2ba147da0 R08: 0000000000000000 R09: 3fffffffffffffff
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000212070
R13: 00007ff2ba146090 R14: ffffffffffffffff R15: 00007ffd1b1c4940
INFO: task syz.0.6212:23364 blocked for more than 143 seconds.
      Not tainted syzkaller #0
Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.6212      state:D stack:22264 pid:23364 tgid:23363 ppid:4422   task_flags:0x40074c flags:0x00080003
Call Trace:
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x16f3/0x4c20 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 request_wait_answer fs/fuse/dev.c:585 [inline]
 __fuse_request_send fs/fuse/dev.c:599 [inline]
 __fuse_simple_request+0x11d2/0x1bb0 fs/fuse/dev.c:693
 fuse_simple_request fs/fuse/fuse_i.h:1250 [inline]
 fuse_flush+0x5dd/0x810 fs/fuse/file.c:482
 filp_flush+0xc0/0x190 fs/open.c:1549
 filp_close+0x1d/0x40 fs/open.c:1562
 close_files fs/file.c:494 [inline]
 put_files_struct+0x1bd/0x360 fs/file.c:509
 do_exit+0x66f/0x2300 kernel/exit.c:961
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1107
 get_signal+0x125d/0x1310 kernel/signal.c:3034
 arch_do_signal_or_restart+0xa0/0x790 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop+0x72/0x130 kernel/entry/common.c:40
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x2bd/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff2b9daf980
RSP: 002b:00007ff2b814da78 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: 0000000000000000 RBX: 00007ff2ba145fa0 RCX: 00007ff2b9eeefc9
RDX: 00007ff2b814da80 RSI: 00007ff2b814dbb0 RDI: 000000000000000b
RBP: 00007ff2b9f71f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff2ba146038 R14: 00007ff2ba145fa0 R15: 00007ffd1b1c46c8

Showing all locks held in the system:
1 lock held by khungtaskd/38:
 #0: ffffffff8d5aa4c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5aa4c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5aa4c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by getty/5559:
 #0: ffff88823bf668a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x444/0x1400 drivers/tty/n_tty.c:2222
6 locks held by kworker/u8:17/10800:
 #0: ffff888019ad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff888019ad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000d2b7ba0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000d2b7ba0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8e855fa0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x820 net/core/net_namespace.c:669
 #3: ffff88804c5580d8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:914 [inline]
 #3: ffff88804c5580d8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff88804c5580d8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x10a/0x3d0 net/devlink/core.c:506
 #4: ffff888026320300 (&devlink->lock_key#35){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff888026320300 (&devlink->lock_key#35){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888026320300 (&devlink->lock_key#35){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x11c/0x3d0 net/devlink/core.c:506
 #5: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: nsim_destroy+0xed/0x680 drivers/net/netdevsim/netdev.c:1173
3 locks held by kworker/u8:56/22435:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000ca57ba0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000ca57ba0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by kworker/u8:72/23776:
 #0: ffff88801c3ff938 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801c3ff938 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9001599fba0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9001599fba0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
3 locks held by kworker/u8:82/24206:
 #0: ffff88814da20938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88814da20938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc90023447ba0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc90023447ba0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x119/0x15a0 net/ipv6/addrconf.c:4194
1 lock held by syz-executor/26079:
 #0: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
7 locks held by syz-executor/26086:
 #0: ffff8880337be480 (sb_writers#7){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:3111 [inline]
 #0: ffff8880337be480 (sb_writers#7){.+.+}-{0:0}, at: vfs_write+0x217/0xb40 fs/read_write.c:682
 #1: ffff88805bcf0078 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1df/0x540 fs/kernfs/file.c:343
 #2: ffff8881453e4a58 (kn->active#53){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8881453e4a58 (kn->active#53){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x232/0x540 fs/kernfs/file.c:344
 #3: ffffffff8e0f3c58 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: new_device_store+0x12c/0x6f0 drivers/net/netdevsim/bus.c:184
 #4: ffff8880105f90d8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:914 [inline]
 #4: ffff8880105f90d8 (&dev->mutex){....}-{4:4}, at: __device_attach+0x88/0x400 drivers/base/dd.c:1006
 #5: ffff8880105fc300 (&devlink->lock_key#37){+.+.}-{4:4}, at: nsim_drv_probe+0xc2/0xba0 drivers/net/netdevsim/dev.c:1582
 #6: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: nsim_init_netdevsim drivers/net/netdevsim/netdev.c:1047 [inline]
 #6: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: nsim_create+0x800/0x1060 drivers/net/netdevsim/netdev.c:1141
2 locks held by syz-executor/27047:
 #0: ffffffff8dfecfa0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8dfecfa0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8dfecfa0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8e862eb8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8e9/0x1c80 net/core/rtnetlink.c:4064
1 lock held by syz.3.6476/27480:
 #0: ffff88804c7bf550 (&mm->mmap_lock){++++}-{4:4}, at: mmap_write_lock_killable include/linux/mmap_lock.h:330 [inline]
 #0: ffff88804c7bf550 (&mm->mmap_lock){++++}-{4:4}, at: vm_mmap_pgoff+0x214/0x4d0 mm/util.c:579
1 lock held by syz.3.6476/27481:
6 locks held by syz.1.6477/27482:
 #0: ffff888142f6c280 (&dev->clientlist_mutex){+.+.}-{4:4}, at: drm_client_dev_restore+0xb0/0x280 drivers/gpu/drm/drm_client_event.c:112
 #1: ffff888019b25aa0 (&helper->lock){+.+.}-{4:4}, at: __drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:229 [inline]
 #1: ffff888019b25aa0 (&helper->lock){+.+.}-{4:4}, at: drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:268 [inline]
 #1: ffff888019b25aa0 (&helper->lock){+.+.}-{4:4}, at: drm_fb_helper_lastclose+0x9c/0x1c0 drivers/gpu/drm/drm_fb_helper.c:1986
 #2: ffff888142f6c158 (&dev->master_mutex){+.+.}-{4:4}, at: drm_master_internal_acquire+0x20/0x80 drivers/gpu/drm/drm_auth.c:435
 #3: ffff888019b25888 (&client->modeset_mutex){+.+.}-{4:4}, at: drm_client_modeset_commit_locked+0x4c/0x4d0 drivers/gpu/drm/drm_client_modeset.c:1204
 #4: ffffc90004fcfaf0 (crtc_ww_class_acquire){+.+.}-{0:0}, at: drm_client_modeset_commit_atomic+0xda/0x760 drivers/gpu/drm/drm_client_modeset.c:1042
 #5: ffffc90004fcfb18 (crtc_ww_class_mutex){+.+.}-{4:4}, at: drm_client_modeset_commit_atomic+0xda/0x760 drivers/gpu/drm/drm_client_modeset.c:1042

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf60/0xfa0 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 27481 Comm: syz.3.6476 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
RIP: 0010:lockdep_enabled kernel/locking/lockdep.c:124 [inline]
RIP: 0010:lock_acquire+0xb2/0x360 kernel/locking/lockdep.c:5844
Code: e8 13 5d 84 00 83 3d cc 4b 3a 0d 00 0f 84 fa 00 00 00 65 8b 05 cf 93 06 10 85 c0 0f 85 eb 00 00 00 65 48 8b 04 25 08 40 a2 91 <83> b8 5c 0b 00 00 00 0f 85 d5 00 00 00 48 c7 44 24 30 00 00 00 00
RSP: 0018:ffffc9000505f0e0 EFLAGS: 00000246
RAX: ffff88802bcf0000 RBX: 0000000000000000 RCX: fb05b3c723f6d600
RDX: 0000000000000000 RSI: ffffffff82257737 RDI: 1ffffffff1ab5498
RBP: ffffffff8225771a R08: 0000000000000000 R09: 0000000000000000
R10: ffff88802bcf08f0 R11: ffffffff81aadb70 R12: 0000000000000002
R13: ffffffff8d5aa4c0 R14: 0000000000000000 R15: 0000000000000000
FS:  00007fe3313366c0(0000) GS:ffff888126efc000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000110c3185e9 CR3: 0000000055ee4000 CR4: 00000000003526f0
DR0: 0000000000000001 DR1: fffffffffffffff7 DR2: 0000000000000000
DR3: 000000000000000a DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:867 [inline]
 __update_page_owner_handle+0x77/0x570 mm/page_owner.c:246
 __set_page_owner+0x10b/0x4b0 mm/page_owner.c:333
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1850
 prep_new_page mm/page_alloc.c:1858 [inline]
 get_page_from_freelist+0x28c0/0x2960 mm/page_alloc.c:3884
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5183
 alloc_pages_mpol+0xd1/0x380 mm/mempolicy.c:2416
 folio_alloc_mpol_noprof+0x39/0xe0 mm/mempolicy.c:2435
 shmem_alloc_folio mm/shmem.c:1871 [inline]
 shmem_alloc_and_add_folio mm/shmem.c:1910 [inline]
 shmem_get_folio_gfp+0x633/0x1a70 mm/shmem.c:2533
 shmem_fault+0x170/0x380 mm/shmem.c:2734
 __do_fault+0x138/0x390 mm/memory.c:5280
 do_read_fault mm/memory.c:5698 [inline]
 do_fault mm/memory.c:5832 [inline]
 do_pte_missing mm/memory.c:4361 [inline]
 handle_pte_fault mm/memory.c:6177 [inline]
 __handle_mm_fault mm/memory.c:6318 [inline]
 handle_mm_fault+0x23c6/0x3400 mm/memory.c:6487
 faultin_page mm/gup.c:1126 [inline]
 __get_user_pages+0x1685/0x2860 mm/gup.c:1428
 populate_vma_page_range+0x29f/0x3a0 mm/gup.c:1860
 __mm_populate+0x24c/0x380 mm/gup.c:1963
 mm_populate include/linux/mm.h:3471 [inline]
 vm_mmap_pgoff+0x38a/0x4d0 mm/util.c:586
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe3330cefc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe331336038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fe333325fa0 RCX: 00007fe3330cefc9
RDX: b635773f07ebbeeb RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007fe333151f91 R08: ffffffffffffffff R09: 00000000c36e5000
R10: 0000000000008031 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe333326038 R14: 00007fe333325fa0 R15: 00007ffe08a56728
vkms_vblank_simulate: vblank timer overrun