INFO: task syz.4.170:7108 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.170 state:D stack:28856 pid:7108 tgid:7085 ppid:5796 task_flags:0x400040 flags:0x00080002
Call Trace:
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0x1553/0x5240 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7286
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1073 [inline]
 lock_rename fs/namei.c:3756 [inline]
 __start_renaming+0x148/0x410 fs/namei.c:3852
 filename_renameat2+0x38c/0x9c0 fs/namei.c:6119
 __do_sys_renameat2 fs/namei.c:6173 [inline]
 __se_sys_renameat2+0x5a/0x2c0 fs/namei.c:6168
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2f4d35c799
RSP: 002b:00007f2f4b595028 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 00007f2f4d5d6090 RCX: 00007f2f4d35c799
RDX: ffffffffffffff9c RSI: 00002000000000c0 RDI: ffffffffffffff9c
RBP: 00007f2f4d3f2c99 R08: 0000000000000002 R09: 0000000000000000
R10: 0000200000000140 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2f4d5d6128 R14: 00007f2f4d5d6090 R15: 00007fff7679db48

Showing all locks held in the system:
6 locks held by kworker/u8:0/12:
1 lock held by khungtaskd/37:
 #0: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:5/100:
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9ea/0x1830 kernel/workqueue.c:3358
 #1: ffffc900015afc40 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc900015afc40 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0xa25/0x1830 kernel/workqueue.c:3358
4 locks held by kworker/u8:6/683:
 #0: ffff88801f6af138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff88801f6af138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9ea/0x1830 kernel/workqueue.c:3358
 #1: ffffc90003bffc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc90003bffc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa25/0x1830 kernel/workqueue.c:3358
 #2: ffff888038ee20d0 (&type->s_umount_key#65){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:565
 #3: ffff88803d9ff6e8 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:108
2 locks held by kworker/u8:14/3011:
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff888019c44138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9ea/0x1830 kernel/workqueue.c:3358
 #1: ffffc9000e337c40 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc9000e337c40 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa25/0x1830 kernel/workqueue.c:3358
6 locks held by kworker/u8:16/4477:
 #0: ffff88801aee1138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff88801aee1138 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9ea/0x1830 kernel/workqueue.c:3358
 #1: ffffc90011187c40 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc90011187c40 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0xa25/0x1830 kernel/workqueue.c:3358
 #2: ffffffff8f14c000 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf4/0x800 net/core/net_namespace.c:675
 #3: ffff888063ef70d8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:895 [inline]
 #3: ffff888063ef70d8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff888063ef70d8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x117/0x3f0 net/devlink/core.c:504
 #4: ffff888069760300 (&devlink->lock_key#11){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:274 [inline]
 #4: ffff888069760300 (&devlink->lock_key#11){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888069760300 (&devlink->lock_key#11){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x129/0x3f0 net/devlink/core.c:504
 #5: ffffffff8ddd1af0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:3828
2 locks held by udevd/5167:
2 locks held by getty/5554:
 #0: ffff8880332cc0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e762e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
1 lock held by syz-executor/5783:
 #0: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #0: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: free_one_page+0x43/0x250 mm/page_alloc.c:1585
3 locks held by syz-executor/5795:
 #0: ffff8880b8942f58 (&pcp->lock){+.+.}-{3:3}, at: free_unref_folios+0x14c0/0x1c70 mm/page_alloc.c:3091
 #1: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #1: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #1: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: __rt_spin_trylock kernel/locking/spinlock_rt.c:110 [inline]
 #1: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rt_spin_trylock+0x10c/0x2b0 kernel/locking/spinlock_rt.c:118
 #2: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #2: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: free_pcppages_bulk+0x61/0x4f0 mm/page_alloc.c:1503
2 locks held by udevd/5845:
 #0: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8ddcb880 (rcu_read_lock){....}-{1:3}, at: path_init+0x12d/0x14d0 fs/namei.c:2680
 #1: ffff888034cde480 (&lockref->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #1: ffff888034cde480 (&lockref->lock){+.+.}-{3:3}, at: lockref_get_not_dead+0x28/0xd0 lib/lockref.c:155
4 locks held by udevd/5939:
4 locks held by syz.4.170/7087:
2 locks held by syz.4.170/7108:
 #0: ffff888038ee2480 (sb_writers#16){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88803d9ffab8 (&type->i_mutex_dir_key#12/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88803d9ffab8 (&type->i_mutex_dir_key#12/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff88803d9ffab8 (&type->i_mutex_dir_key#12/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
2 locks held by syz.8.274/7917:
 #0: ffff888038ee20d0 (&type->s_umount_key#65){++++}-{4:4}, at: __super_lock fs/super.c:60 [inline]
 #0: ffff888038ee20d0 (&type->s_umount_key#65){++++}-{4:4}, at: super_lock+0x2d6/0x3d0 fs/super.c:122
 #1: ffff88802642a8c0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:398 [inline]
 #1: ffff88802642a8c0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x1c5/0xc10 fs/fs-writeback.c:2922
2 locks held by syz-executor/8955:
2 locks held by dhcpcd-run-hook/8957:
 #0: ffff888050b741c8 (vm_lock){++++}-{0:0}, at: lock_vma_under_rcu+0x1d1/0x500 mm/mmap_lock.c:310
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: rmqueue_buddy mm/page_alloc.c:3252 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: rmqueue mm/page_alloc.c:3424 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: get_page_from_freelist+0xd9f/0x2950 mm/page_alloc.c:3959
1 lock held by syz.4.388/8962:
1 lock held by syz.4.388/8963:
2 locks held by syz-executor/8961:
 #0: ffff888033cf83b0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_trylock include/linux/mmap_lock.h:611 [inline]
 #0: ffff888033cf83b0 (&mm->mmap_lock){++++}-{4:4}, at: get_mmap_lock_carefully mm/mmap_lock.c:441 [inline]
 #0: ffff888033cf83b0 (&mm->mmap_lock){++++}-{4:4}, at: lock_mm_and_find_vma+0x36/0x340 mm/mmap_lock.c:501
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: rmqueue_buddy mm/page_alloc.c:3252 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: rmqueue mm/page_alloc.c:3424 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: get_page_from_freelist+0xd9f/0x2950 mm/page_alloc.c:3959
2 locks held by dhcpcd-run-hook/8964:
 #0: ffff88803a09a9f0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_trylock include/linux/mmap_lock.h:611 [inline]
 #0: ffff88803a09a9f0 (&mm->mmap_lock){++++}-{4:4}, at: get_mmap_lock_carefully mm/mmap_lock.c:441 [inline]
 #0: ffff88803a09a9f0 (&mm->mmap_lock){++++}-{4:4}, at: lock_mm_and_find_vma+0x36/0x340 mm/mmap_lock.c:501
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: rmqueue_buddy mm/page_alloc.c:3252 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: rmqueue mm/page_alloc.c:3424 [inline]
 #1: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: get_page_from_freelist+0xd9f/0x2950 mm/page_alloc.c:3959

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 37 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/27/2026
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xfd9/0x1030 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5167 Comm: udevd Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/27/2026
RIP: 0010:lock_release+0xb/0x3d0 kernel/locking/lockdep.c:5876
Code: be 2f 00 00 00 e9 0e ff ff ff 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 55 41 57 41 56 41 55 <41> 54 53 48 83 ec 30 49 89 f5 49 89 fe 65 48 8b 05 90 fb ad 10 48
RSP: 0018:ffffc90003b7f370 EFLAGS: 00000202
RAX: 0000000000000001 RBX: ffffffff90067301 RCX: 0000000080000001
RDX: ffffc90003b7f401 RSI: ffffffff81765db5 RDI: ffffffff8ddcb880
RBP: dffffc0000000000 R08: ffffc90003b7fdb0 R09: 0000000000000000
R10: ffffc90003b7f4b8 R11: fffff5200076fe99 R12: ffffc90003b7fdc0
R13: ffffc90003b78000 R14: ffffc90003b7f468 R15: ffffffff81765db5
FS:  00007f8779509880(0000) GS:ffff88812633c000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffcad8d6ca0 CR3: 000000003a198000 CR4: 00000000003526f0
Call Trace:
 rcu_lock_release include/linux/rcupdate.h:322 [inline]
 rcu_read_unlock include/linux/rcupdate.h:881 [inline]
 class_rcu_destructor include/linux/rcupdate.h:1193 [inline]
 unwind_next_frame+0x1aaa/0x23c0 arch/x86/kernel/unwind_orc.c:695
 arch_stack_walk+0x11b/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0xa9/0x100 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 unpoison_slab_object mm/kasan/common.c:340 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:366
 kasan_slab_alloc include/linux/kasan.h:253 [inline]
 slab_post_alloc_hook mm/slub.c:4542 [inline]
 slab_alloc_node mm/slub.c:4869 [inline]
 kmem_cache_alloc_noprof+0x33b/0x680 mm/slub.c:4876
 lsm_file_alloc security/security.c:169 [inline]
 security_file_alloc+0x34/0x310 security/security.c:2380
 init_file+0x96/0x2d0 fs/file_table.c:159
 alloc_empty_file+0x6e/0x1d0 fs/file_table.c:241
 path_openat+0x11b/0x38a0 fs/namei.c:4816
 do_file_open+0x23e/0x4a0 fs/namei.c:4859
 do_sys_openat2+0x113/0x200 fs/open.c:1366
 do_sys_open fs/open.c:1372 [inline]
 __do_sys_openat fs/open.c:1388 [inline]
 __se_sys_openat fs/open.c:1383 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1383
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f87795f7407
Code: 48 89 fa 4c 89 df e8 38 aa 00 00 8b 93 08 03 00 00 59 5e 48 83 f8 fc 74 1a 5b c3 0f 1f 84 00 00 00 00 00 48 8b 44 24 10 0f 05 <5b> c3 0f 1f 80 00 00 00 00 83 e2 39 83 fa 08 75 de e8 23 ff ff ff
RSP: 002b:00007ffff72b6c30 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f8779509880 RCX: 00007f87795f7407
RDX: 0000000000080000 RSI: 00007ffff72b6db0 RDI: ffffffffffffff9c
RBP: 0000000000000008 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055a9be1057f5
R13: 000055a9be1057f5 R14: 0000000000000001 R15: 00007ffff72bb400