Warning: Permanently added '10.128.1.62' (ED25519) to the list of known hosts.
[  113.252910][   T30] audit: type=1400 audit(1766926175.702:62): avc:  denied  { execmem } for  pid=5835 comm="syz-executor232" scontext=root:sysadm_r:sysadm_t tcontext=root:sysadm_r:sysadm_t tclass=process permissive=1
[  113.368182][ T5836] SELinux:  Context root:object_r:swapfile_t is not valid (left unmapped).
[  113.377618][   T30] audit: type=1400 audit(1766926175.832:63): avc:  denied  { relabelto } for  pid=5836 comm="mkswap" name="swap-file" dev="sda1" ino=2023 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:unlabeled_t tclass=file permissive=1 trawcon="root:object_r:swapfile_t"
Setting up swapspace version 1, size = 127995904 bytes
[  113.403143][   T30] audit: type=1400 audit(1766926175.832:64): avc:  denied  { write } for  pid=5836 comm="mkswap" path="/root/swap-file" dev="sda1" ino=2023 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:unlabeled_t tclass=file permissive=1 trawcon="root:object_r:swapfile_t"
[  113.436444][   T30] audit: type=1400 audit(1766926175.882:65): avc:  denied  { read } for  pid=5835 comm="syz-executor232" name="swap-file" dev="sda1" ino=2023 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:unlabeled_t tclass=file permissive=1 trawcon="root:object_r:swapfile_t"
[  113.466676][   T30] audit: type=1400 audit(1766926175.882:66): avc:  denied  { open } for  pid=5835 comm="syz-executor232" path="/root/swap-file" dev="sda1" ino=2023 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:unlabeled_t tclass=file permissive=1 trawcon="root:object_r:swapfile_t"
[  114.396429][ T5835] Adding 124996k swap on ./swap-file.  Priority:0 extents:1 across:124996k
[  114.417306][   T30] audit: type=1400 audit(1766926176.862:67): avc:  denied  { mounton } for  pid=5843 comm="syz-executor232" path="/" dev="sda1" ino=2 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:root_t tclass=dir permissive=1
[  114.618210][   T30] audit: type=1400 audit(1766926177.062:68): avc:  denied  { mounton } for  pid=5844 comm="syz-executor232" path="/root/syzkaller.Rov7PA/syz-tmp" dev="sda1" ino=2029 scontext=root:sysadm_r:sysadm_t tcontext=root:object_r:user_home_t tclass=dir permissive=1
[  114.642791][   T30] audit: type=1400 audit(1766926177.062:69): avc:  denied  { mount } for  pid=5844 comm="syz-executor232" name="/" dev="tmpfs" ino=1 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:tmpfs_t tclass=filesystem permissive=1
executing program
[  114.665376][   T30] audit: type=1400 audit(1766926177.062:70): avc:  denied  { mounton } for  pid=5844 comm="syz-executor232" path="/root/syzkaller.Rov7PA/syz-tmp/newroot/dev" dev="tmpfs" ino=3 scontext=root:sysadm_r:sysadm_t tcontext=root:object_r:user_tmpfs_t tclass=dir permissive=1
executing program
executing program
[  114.697236][   T30] audit: type=1400 audit(1766926177.082:71): avc:  denied  { mount } for  pid=5844 comm="syz-executor232" name="/" dev="proc" ino=1 scontext=root:sysadm_r:sysadm_t tcontext=system_u:object_r:proc_t tclass=filesystem permissive=1
[  122.821937][    C1] sched: DL replenish lagged too much
[  310.041878][    C0] BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 191s!
[  310.051036][    C0] BUG: workqueue lockup - pool cpus=1 node=0 flags=0x0 nice=0 stuck for 192s!
[  310.059957][    C0] Showing busy workqueues and worker pools:
[  310.065937][    C0] workqueue events: flags=0x100
[  310.070811][    C0]   pwq 2: cpus=0 node=0 flags=0x0 nice=0 active=1 refcnt=2
[  310.070860][    C0]     pending: vmstat_shepherd
[  310.070928][    C0]   pwq 6: cpus=1 node=0 flags=0x0 nice=0 active=7 refcnt=8
[  310.070974][    C0]     pending: psi_avgs_work, 6*ovs_dp_masks_rebalance
[  310.071039][    C0] workqueue events_long: flags=0x100
[  310.102481][    C0]   pwq 2: cpus=0 node=0 flags=0x0 nice=0 active=3 refcnt=4
[  310.102531][    C0]     pending: 3*defense_work_handler
[  310.102573][    C0]   pwq 6: cpus=1 node=0 flags=0x0 nice=0 active=3 refcnt=4
[  310.102616][    C0]     pending: 3*defense_work_handler
[  310.102656][    C0] workqueue events_unbound: flags=0x2
[  310.133258][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=2
[  310.133301][    C0]     pending: flush_memcg_stats_dwork
[  310.133354][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=2
[  310.133393][    C0]     pending: toggle_allocation_gate
[  310.133428][    C0] workqueue events_unbound: flags=0x2
[  310.163263][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=2
[  310.163307][    C0]     pending: crng_reseed
[  310.163346][    C0] workqueue events_power_efficient: flags=0x180
[  310.180801][    C0]   pwq 2: cpus=0 node=0 flags=0x0 nice=0 active=3 refcnt=4
[  310.180851][    C0]     pending: neigh_managed_work, neigh_periodic_work, reg_check_chans_work
[  310.180935][    C0]   pwq 6: cpus=1 node=0 flags=0x0 nice=0 active=5 refcnt=6
[  310.180980][    C0]     pending: neigh_periodic_work, neigh_managed_work, do_cache_clean, gc_worker, check_lifetime
[  310.181114][    C0] workqueue kvfree_rcu_reclaim: flags=0xa
[  310.220682][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=2
[  310.220732][    C0]     pending: kfree_rcu_monitor
[  310.220776][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=2
[  310.220816][    C0]     pending: kfree_rcu_monitor
[  310.220857][    C0] workqueue mm_percpu_wq: flags=0x8
[  310.249559][    C0]   pwq 2: cpus=0 node=0 flags=0x0 nice=0 active=1 refcnt=2
[  310.249609][    C0]     pending: vmstat_update
[  310.249651][    C0]   pwq 6: cpus=1 node=0 flags=0x0 nice=0 active=1 refcnt=2
[  310.249696][    C0]     pending: vmstat_update
[  310.249743][    C0] workqueue writeback: flags=0x4a
[  310.278437][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=2
[  310.278482][    C0]     pending: wb_workfn
[  310.278522][    C0] workqueue kblockd: flags=0x18
[  310.294435][    C0]   pwq 3: cpus=0 node=0 flags=0x0 nice=-20 active=1 refcnt=2
[  310.294485][    C0]     pending: blk_mq_timeout_work
[  310.294599][    C0] workqueue ipv6_addrconf: flags=0x6000a
[  310.312783][    C0]   pwq 8: cpus=0-1 flags=0x4 nice=0 active=1 refcnt=4
[  310.312828][    C0]     pending: addrconf_verify_work
[  310.312872][    C0] workqueue krxrpcd: flags=0x2001a
[  310.329988][    C0]   pwq 9: cpus=0-1 node=0 flags=0x4 nice=-20 active=1 refcnt=9
[  310.330038][    C0]     pending: rxrpc_peer_keepalive_worker
[  310.330071][    C0]     inactive: 5*rxrpc_peer_keepalive_worker
[  310.330138][    C0] Showing backtraces of running workers in stalled CPU-bound worker pools: