INFO: task udevd:13076 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:24488 pid:13076 ppid:5156   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5380 [inline]
 __schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
 schedule+0xbd/0x170 kernel/sched/core.c:6773
 io_schedule+0x80/0xd0 kernel/sched/core.c:9022
 folio_wait_bit_common+0x6eb/0xf70 mm/filemap.c:1329
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1c0/0x7e0 mm/filemap.c:3771
 read_mapping_folio include/linux/pagemap.h:898 [inline]
 read_part_sector+0xd2/0x350 block/partitions/core.c:718
 adfspart_check_POWERTEC+0x8d/0xf00 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:138 [inline]
 blk_add_partitions block/partitions/core.c:600 [inline]
 bdev_disk_changed+0x73a/0x1410 block/partitions/core.c:689
 blkdev_get_whole+0x30d/0x390 block/bdev.c:655
 blkdev_get_by_dev+0x279/0x600 block/bdev.c:797
 blkdev_open+0x152/0x360 block/fops.c:589
 do_dentry_open+0x8c6/0x1500 fs/open.c:929
 do_open fs/namei.c:3632 [inline]
 path_openat+0x274b/0x3190 fs/namei.c:3789
 do_filp_open+0x1c5/0x3d0 fs/namei.c:3816
 do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
 do_sys_open fs/open.c:1434 [inline]
 __do_sys_openat fs/open.c:1450 [inline]
 __se_sys_openat fs/open.c:1445 [inline]
 __x64_sys_openat+0x139/0x160 fs/open.c:1445
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f9b0f0a7407
RSP: 002b:00007ffec8482ce0 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f9b0f7b4880 RCX: 00007f9b0f0a7407
RDX: 00000000000a0800 RSI: 000055af3460f730 RDI: ffffffffffffff9c
RBP: 000055af34600910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055af34622040
R13: 000055af34618410 R14: 0000000000000000 R15: 000055af34622040
 </TASK>
Showing all locks held in the system:
1 lock held by khungtaskd/29:
 #0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #0: ffffffff8cd2ff20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/u4:3/48:
 #0: ffff8880b8e3c218 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:558
 #1: ffff88802dbde418 (&p->pi_lock){-.-.}-{2:2}, at: psi_sched_switch kernel/sched/stats.h:189 [inline]
 #1: ffff88802dbde418 (&p->pi_lock){-.-.}-{2:2}, at: __schedule+0x20ee/0x44d0 kernel/sched/core.c:6694
 #2: ffff8880b8e3c218 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:558
2 locks held by kworker/1:2/965:
 #0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017872538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc900042efd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc900042efd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
4 locks held by kworker/u4:6/3537:
2 locks held by getty/5546:
 #0: ffff88803110a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
2 locks held by syz-executor/5774:
 #0: ffff8880b8e3c218 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:558
 #1: ffff8880b8e289c0 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:189 [inline]
 #1: ffff8880b8e289c0 (psi_seq){-.-.}-{0:0}, at: __schedule+0x20ee/0x44d0 kernel/sched/core.c:6694
1 lock held by udevd/12605:
 #0: ffff888021df04c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by udevd/12627:
 #0: ffff888021ddb4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by udevd/12882:
 #0: ffff888021df54c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
1 lock held by udevd/13076:
 #0: ffff888021d854c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:788
3 locks held by kworker/u4:10/13194:
 #0: ffff88802c15a138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff88802c15a138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc9001919fd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc9001919fd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbb548 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x14e0 net/ipv6/addrconf.c:4158
3 locks held by kworker/u4:4/16362:
 #0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017871538 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #1: ffffc90003537d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc90003537d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x957/0x15b0 kernel/workqueue.c:2711
 #2: ffffffff8dfbb548 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:286
4 locks held by syz-executor/17687:
 #0: ffff88802f67c418 (sb_writers#11){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:403
 #1: ffff88807c668670 (&type->i_mutex_dir_key#7/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
 #1: ffff88807c668670 (&type->i_mutex_dir_key#7/1){+.+.}-{3:3}, at: filename_create+0x1f6/0x460 fs/namei.c:3882
 #2: ffffffff8cd59888 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:369 [inline]
 #2: ffffffff8cd59888 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0xf2/0x230 kernel/cgroup/cgroup.c:1677
 #3: ffffffff8dfbb548 (rtnl_mutex){+.+.}-{3:3}, at: cgrp_css_online+0x91/0x2f0 net/core/netprio_cgroup.c:157
2 locks held by syz.4.3107/18000:
 #0: ffffffff8dfbb548 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8dfbb548 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x41/0x1c0 drivers/net/tun.c:3511
 #1: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
 #1: ffffffff8cd358f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x448/0x830 kernel/rcu/tree_exp.h:1004
1 lock held by kvm-nx-lpage-re/18010:
 #0: ffffffff8cd59888 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:369 [inline]
 #0: ffffffff8cd59888 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_attach_task_all+0x26/0xe0 kernel/cgroup/cgroup-v1.c:61
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xf41/0xf80 kernel/hung_task.c:379
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 48 Comm: kworker/u4:3 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Workqueue: bat_events batadv_nc_worker
RIP: 0010:mark_held_locks kernel/locking/lockdep.c:4274 [inline]
RIP: 0010:__trace_hardirqs_on_caller kernel/locking/lockdep.c:4292 [inline]
RIP: 0010:lockdep_hardirqs_on_prepare+0x24a/0x760 kernel/locking/lockdep.c:4359
Code: 00 74 2f 25 00 00 03 00 83 f8 01 ba 03 00 00 00 83 da 00 48 8b 7c 24 10 4c 89 fe e8 30 f1 00 00 48 ba 00 00 00 00 00 fc ff df <85> c0 0f 84 a2 01 00 00 41 0f b6 04 16 84 c0 75 5b 49 ff c5 48 63
RSP: 0018:ffffc90000b97960 EFLAGS: 00000086
RAX: 0000000000000001 RBX: ffff88801b6e0ad8 RCX: ffffffff8167b074
RDX: dffffc0000000000 RSI: 0000000000000008 RDI: ffffffff90da75d8
RBP: ffffc90000b97a08 R08: ffffffff90da75df R09: 1ffffffff21b4ebb
R10: dffffc0000000000 R11: fffffbfff21b4ebc R12: ffff88801b6e0b28
R13: 0000000000000001 R14: 1ffff110036dc15b R15: ffff88801b6e0b08
FS:  0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb78fbb2d58 CR3: 00000000635e1000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 trace_hardirqs_on+0x28/0x40 kernel/trace/trace_preemptirq.c:61
 __local_bh_enable_ip+0x12e/0x1c0 kernel/softirq.c:411
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_nc_purge_paths+0x311/0x3a0 net/batman-adv/network-coding.c:471
 batadv_nc_worker+0x328/0x610 net/batman-adv/network-coding.c:720
 process_one_work kernel/workqueue.c:2634 [inline]
 process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
 worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293