==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read_write include/linux/instrumented.h:96 [inline]
BUG: KASAN: slab-use-after-free in atomic_dec_and_test include/linux/atomic/atomic-instrumented.h:1383 [inline]
BUG: KASAN: slab-use-after-free in gfs2_qd_dealloc+0x81/0xe0 fs/gfs2/quota.c:112
Write of size 4 at addr ffff888033854b68 by task ksoftirqd/1/23

CPU: 1 UID: 0 PID: 23 Comm: ksoftirqd/1 Tainted: G L syzkaller #0 PREEMPT(full)
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 check_region_inline mm/kasan/generic.c:-1 [inline]
 kasan_check_range+0x2b0/0x2c0 mm/kasan/generic.c:200
 instrument_atomic_read_write include/linux/instrumented.h:96 [inline]
 atomic_dec_and_test include/linux/atomic/atomic-instrumented.h:1383 [inline]
 gfs2_qd_dealloc+0x81/0xe0 fs/gfs2/quota.c:112
 rcu_do_batch kernel/rcu/tree.c:2605 [inline]
 rcu_core+0xc8e/0x1720 kernel/rcu/tree.c:2857
 handle_softirqs+0x22b/0x7c0 kernel/softirq.c:622
 run_ksoftirqd+0x36/0x60 kernel/softirq.c:1063
 smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

Allocated by task 10189:
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 poison_kmalloc_redzone mm/kasan/common.c:398 [inline]
 __kasan_kmalloc+0x93/0xb0 mm/kasan/common.c:415
 kasan_kmalloc include/linux/kasan.h:263 [inline]
 __kmalloc_cache_noprof+0x3e2/0x700 mm/slub.c:5776
 kmalloc_noprof include/linux/slab.h:957 [inline]
 kzalloc_noprof include/linux/slab.h:1094 [inline]
 init_sbd fs/gfs2/ops_fstype.c:79 [inline]
 gfs2_fill_super+0x11f/0x21b0 fs/gfs2/ops_fstype.c:1121
 get_tree_bdev_flags+0x40e/0x4d0 fs/super.c:1691
 gfs2_get_tree+0x51/0x1e0 fs/gfs2/ops_fstype.c:1332
 vfs_get_tree+0x92/0x2a0 fs/super.c:1751
 fc_mount fs/namespace.c:1199 [inline]
 do_new_mount_fc fs/namespace.c:3636 [inline]
 do_new_mount+0x302/0xa10 fs/namespace.c:3712
 do_mount fs/namespace.c:4035 [inline]
 __do_sys_mount fs/namespace.c:4224 [inline]
 __se_sys_mount+0x313/0x410 fs/namespace.c:4201
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 10189:
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
 poison_slab_object mm/kasan/common.c:253 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
 kasan_slab_free include/linux/kasan.h:235 [inline]
 slab_free_hook mm/slub.c:2540 [inline]
 slab_free mm/slub.c:6670 [inline]
 kfree+0x1c0/0x660 mm/slub.c:6878
 free_sbd fs/gfs2/ops_fstype.c:72 [inline]
 gfs2_fill_super+0x14ef/0x21b0 fs/gfs2/ops_fstype.c:1316
 get_tree_bdev_flags+0x40e/0x4d0 fs/super.c:1691
 gfs2_get_tree+0x51/0x1e0 fs/gfs2/ops_fstype.c:1332
 vfs_get_tree+0x92/0x2a0 fs/super.c:1751
 fc_mount fs/namespace.c:1199 [inline]
 do_new_mount_fc fs/namespace.c:3636 [inline]
 do_new_mount+0x302/0xa10 fs/namespace.c:3712
 do_mount fs/namespace.c:4035 [inline]
 __do_sys_mount fs/namespace.c:4224 [inline]
 __se_sys_mount+0x313/0x410 fs/namespace.c:4201
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff888033854000
 which belongs to the cache kmalloc-8k of size 8192
The buggy address is located 2920 bytes inside of
 freed 8192-byte region [ffff888033854000, ffff888033856000)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x33850
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000040 ffff88813ffa7280 ffffea00007ae400 dead000000000006
raw: 0000000000000000 0000000000020002 00000000f5000000 0000000000000000
head: 00fff00000000040 ffff88813ffa7280 ffffea00007ae400 dead000000000006
head: 0000000000000000 0000000000020002 00000000f5000000 0000000000000000
head: 00fff00000000003 ffffea0000ce1401 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5487, tgid 5487 (start-stop-daem), ts 50546695292, free_ts 50544411150
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x234/0x290 mm/page_alloc.c:1857
 prep_new_page mm/page_alloc.c:1865 [inline]
 get_page_from_freelist+0x24e0/0x2580 mm/page_alloc.c:3915
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5210
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2486
 alloc_slab_page mm/slub.c:3075 [inline]
 allocate_slab+0x86/0x3b0 mm/slub.c:3248
 new_slab mm/slub.c:3302 [inline]
 ___slab_alloc+0xe53/0x1820 mm/slub.c:4656
 __slab_alloc+0x65/0x100 mm/slub.c:4779
 __slab_alloc_node mm/slub.c:4855 [inline]
 slab_alloc_node mm/slub.c:5251 [inline]
 __kmalloc_cache_noprof+0x41e/0x700 mm/slub.c:5771
 kmalloc_noprof include/linux/slab.h:957 [inline]
 kzalloc_noprof include/linux/slab.h:1094 [inline]
 tomoyo_print_bprm security/tomoyo/audit.c:26 [inline]
 tomoyo_init_log+0x111f/0x1f70 security/tomoyo/audit.c:264
 tomoyo_supervisor+0x340/0x1480 security/tomoyo/common.c:2198
 tomoyo_audit_env_log security/tomoyo/environ.c:36 [inline]
 tomoyo_env_perm+0x149/0x1e0 security/tomoyo/environ.c:63
 tomoyo_environ security/tomoyo/domain.c:672 [inline]
 tomoyo_find_next_domain+0x15ce/0x1aa0 security/tomoyo/domain.c:888
 tomoyo_bprm_check_security+0x11c/0x180 security/tomoyo/tomoyo.c:102
 security_bprm_check+0x89/0x270 security/security.c:794
 search_binary_handler fs/exec.c:1659 [inline]
 exec_binprm fs/exec.c:1701 [inline]
 bprm_execve+0x887/0x1400 fs/exec.c:1753
 do_execveat_common+0x510/0x6a0 fs/exec.c:1859
page last free pid 5487 tgid 5487 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1406 [inline]
 __free_frozen_pages+0xbc8/0xd30 mm/page_alloc.c:2943
 discard_slab mm/slub.c:3346 [inline]
 __put_partials+0x146/0x170 mm/slub.c:3886
 __slab_free+0x294/0x320 mm/slub.c:5952
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x97/0x100 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:350
 kasan_slab_alloc include/linux/kasan.h:253 [inline]
 slab_post_alloc_hook mm/slub.c:4953 [inline]
 slab_alloc_node mm/slub.c:5263 [inline]
 kmem_cache_alloc_noprof+0x37d/0x710 mm/slub.c:5270
 new_handle fs/jbd2/transaction.c:457 [inline]
 jbd2__journal_start+0x146/0x5b0 fs/jbd2/transaction.c:484
 __ext4_journal_start_sb+0x203/0x580 fs/ext4/ext4_jbd2.c:114
 __ext4_journal_start fs/ext4/ext4_jbd2.h:242 [inline]
 ext4_dirty_inode+0x93/0x110 fs/ext4/inode.c:6499
 __mark_inode_dirty+0x390/0x1330 fs/fs-writeback.c:2587
 generic_update_time fs/inode.c:2158 [inline]
 inode_update_time fs/inode.c:2171 [inline]
 touch_atime+0x59b/0x6d0 fs/inode.c:2243
 file_accessed include/linux/fs.h:2254 [inline]
 filemap_read+0x1002/0x11a0 mm/filemap.c:2872
 __kernel_read+0x4cf/0x960 fs/read_write.c:530
 prepare_binprm fs/exec.c:1608 [inline]
 search_binary_handler fs/exec.c:1655 [inline]
 exec_binprm fs/exec.c:1701 [inline]
 bprm_execve+0x867/0x1400 fs/exec.c:1753
 do_execveat_common+0x510/0x6a0 fs/exec.c:1859

Memory state around the buggy address:
 ffff888033854a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888033854a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888033854b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                          ^
 ffff888033854b80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888033854c00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
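
Reading the traces together: the gfs2_sbd is allocated in init_sbd() during mount, the failing mount path frees it via free_sbd() in gfs2_fill_super(), and a quota RCU callback (gfs2_qd_dealloc(), run later from ksoftirqd) still writes to a counter inside that freed object. Below is a minimal, hypothetical C sketch of that generic lifetime pattern and of one conventional way to close it (waiting for pending call_rcu() callbacks before freeing the parent object). All names (demo_parent, demo_child, demo_*()) are invented for illustration; this is not the gfs2 code or the actual upstream fix.

/*
 * Sketch only: a parent object freed while an RCU callback that still
 * dereferences it is pending reproduces the slab-use-after-free above.
 */
#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_parent {
	atomic_t child_count;		/* counter touched by the callback */
};

struct demo_child {
	struct demo_parent *parent;
	struct rcu_head rcu;
};

/* RCU callback: may run much later, e.g. from ksoftirqd, as in the report. */
static void demo_child_dealloc(struct rcu_head *rcu)
{
	struct demo_child *c = container_of(rcu, struct demo_child, rcu);
	struct demo_parent *p = c->parent;

	kfree(c);
	atomic_dec(&p->child_count);	/* writes into the parent object */
}

static void demo_child_release(struct demo_child *c)
{
	call_rcu(&c->rcu, demo_child_dealloc);
}

static void demo_parent_teardown(struct demo_parent *p)
{
	/*
	 * Calling kfree(p) immediately here, with demo_child_dealloc()
	 * callbacks still queued, is the buggy pattern. rcu_barrier()
	 * waits for all previously queued call_rcu() callbacks to finish,
	 * after which no callback can touch p and freeing it is safe.
	 */
	rcu_barrier();
	kfree(p);
}

The same reasoning applies to the report: the mount error path must not free the gfs2_sbd until every queued gfs2_qd_dealloc() callback that references it has run (whether that is done with rcu_barrier(), or by waiting on a per-sb counter, is a design choice for the actual fix).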