loop0: detected capacity change from 0 to 32768
gfs2: fsid=syz:syz: Trying to join cluster "lock_nolock", "syz:syz"
gfs2: fsid=syz:syz: Now mounting FS (format 1801)...
gfs2: fsid=syz:syz.0: journal 0 mapped with 3 extents in 0ms
gfs2: fsid=syz:syz.0: first mount done, others may mount
gfs2: fsid=syz:syz.0: found 1 quota changes
loop0: detected capacity change from 32768 to 64
==================================================================
BUG: KASAN: slab-use-after-free in list_empty include/linux/list.h:381 [inline]
BUG: KASAN: slab-use-after-free in gfs2_discard fs/gfs2/aops.c:593 [inline]
BUG: KASAN: slab-use-after-free in gfs2_invalidate_folio+0x40b/0x750 fs/gfs2/aops.c:631
Read of size 8 at addr ffff88800b4ea248 by task syz.0.0/5322

CPU: 0 UID: 0 PID: 5322 Comm: syz.0.0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 list_empty include/linux/list.h:381 [inline]
 gfs2_discard fs/gfs2/aops.c:593 [inline]
 gfs2_invalidate_folio+0x40b/0x750 fs/gfs2/aops.c:631
 folio_invalidate mm/truncate.c:140 [inline]
 truncate_cleanup_folio+0x2d8/0x430 mm/truncate.c:160
 truncate_inode_pages_range+0x233/0xda0 mm/truncate.c:404
 gfs2_evict_inode+0x87a/0x1000 fs/gfs2/super.c:1439
 evict+0x504/0x9c0 fs/inode.c:810
 gfs2_evict_inodes fs/gfs2/ops_fstype.c:1763 [inline]
 gfs2_kill_sb+0x234/0x340 fs/gfs2/ops_fstype.c:1789
 deactivate_locked_super+0xbc/0x130 fs/super.c:473
 cleanup_mnt+0x425/0x4c0 fs/namespace.c:1327
 task_work_run+0x1d4/0x260 kernel/task_work.c:227
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x6b5/0x2300 kernel/exit.c:966
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1107
 get_signal+0x1285/0x1340 kernel/signal.c:3034
 arch_do_signal_or_restart+0xa0/0x790 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop+0x72/0x130 kernel/entry/common.c:40
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x2bd/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ffb5638f6c9
Code: Unable to access opcode bytes at 0x7ffb5638f69f.
RSP: 002b:00007ffb572f20e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007ffb565e5fa8 RCX: 00007ffb5638f6c9
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007ffb565e5fa8
RBP: 00007ffb565e5fa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffb565e6038 R14: 00007ffd01a67af0 R15: 00007ffd01a67bd8

Allocated by task 5322:
 kasan_save_stack mm/kasan/common.c:56 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
 unpoison_slab_object mm/kasan/common.c:342 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:368
 kasan_slab_alloc include/linux/kasan.h:252 [inline]
 slab_post_alloc_hook mm/slub.c:4978 [inline]
 slab_alloc_node mm/slub.c:5288 [inline]
 kmem_cache_alloc_noprof+0x367/0x6e0 mm/slub.c:5295
 gfs2_alloc_bufdata fs/gfs2/trans.c:168 [inline]
 gfs2_trans_add_data+0x200/0x620 fs/gfs2/trans.c:209
 gfs2_trans_add_databufs+0x12f/0x1a0 fs/gfs2/trans.c:246
 gfs2_iomap_put_folio+0x223/0x480 fs/gfs2/bmap.c:995
 iomap_write_iter fs/iomap/buffered-io.c:1020 [inline]
 iomap_file_buffered_write+0x593/0x9b0 fs/iomap/buffered-io.c:1071
 gfs2_file_buffered_write+0x4ed/0x880 fs/gfs2/file.c:1061
 gfs2_file_write_iter+0x94e/0x1100 fs/gfs2/file.c:1166
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x5c9/0xb30 fs/read_write.c:686
 ksys_write+0x145/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 13:
 kasan_save_stack mm/kasan/common.c:56 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
 __kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:587
 kasan_save_free_info mm/kasan/kasan.h:406 [inline]
 poison_slab_object mm/kasan/common.c:252 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:284
 kasan_slab_free include/linux/kasan.h:234 [inline]
 slab_free_hook mm/slub.c:2543 [inline]
 slab_free mm/slub.c:6638 [inline]
 kmem_cache_free+0x19b/0x690 mm/slub.c:6748
 trans_drain fs/gfs2/log.c:1025 [inline]
 gfs2_log_flush+0x18df/0x24c0 fs/gfs2/log.c:1165
 gfs2_write_inode+0x23f/0x3e0 fs/gfs2/super.c:447
 write_inode fs/fs-writeback.c:1564 [inline]
 __writeback_single_inode+0x6f1/0xff0 fs/fs-writeback.c:1784
 writeback_sb_inodes+0x6c7/0x1010 fs/fs-writeback.c:2015
 __writeback_inodes_wb+0x111/0x240 fs/fs-writeback.c:2086
 wb_writeback+0x44f/0xaf0 fs/fs-writeback.c:2197
 wb_check_start_all fs/fs-writeback.c:2323 [inline]
 wb_do_writeback fs/fs-writeback.c:2349 [inline]
 wb_workfn+0x90b/0xef0 fs/fs-writeback.c:2382
 process_one_work kernel/workqueue.c:3263 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

The buggy address belongs to the object at ffff88800b4ea230
 which belongs to the cache gfs2_bufdata of size 80
The buggy address is located 24 bytes inside of
 freed 80-byte region [ffff88800b4ea230, ffff88800b4ea280)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xb4ea
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000000 ffff88801c2e1b40 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000240024 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52c40(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 5322, tgid 5321 (syz.0.0), ts 76148996125, free_ts 75586058267
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1850
 prep_new_page mm/page_alloc.c:1858 [inline]
 get_page_from_freelist+0x2365/0x2440 mm/page_alloc.c:3884
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5183
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
 alloc_slab_page mm/slub.c:3059 [inline]
 allocate_slab+0x96/0x350 mm/slub.c:3232
 new_slab mm/slub.c:3286 [inline]
 ___slab_alloc+0xf56/0x1990 mm/slub.c:4655
 __slab_alloc+0x65/0x100 mm/slub.c:4778
 __slab_alloc_node mm/slub.c:4854 [inline]
 slab_alloc_node mm/slub.c:5276 [inline]
 kmem_cache_alloc_noprof+0x3f9/0x6e0 mm/slub.c:5295
 gfs2_alloc_bufdata fs/gfs2/trans.c:168 [inline]
 gfs2_trans_add_meta+0x2cf/0xa10 fs/gfs2/trans.c:272
 do_gfs2_set_flags fs/gfs2/file.c:265 [inline]
 gfs2_fileattr_set+0x780/0x9b0 fs/gfs2/file.c:311
 vfs_fileattr_set+0x932/0xb90 fs/file_attr.c:298
 ioctl_setflags+0x180/0x1e0 fs/file_attr.c:334
 do_vfs_ioctl+0x8ed/0x1430 fs/ioctl.c:560
 __do_sys_ioctl fs/ioctl.c:595 [inline]
 __se_sys_ioctl+0x82/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 72 tgid 72 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1394 [inline]
 free_unref_folios+0xdb3/0x14f0 mm/page_alloc.c:2963
 shrink_folio_list+0x44ab/0x4c70 mm/vmscan.c:1638
 evict_folios+0x471e/0x57c0 mm/vmscan.c:4745
 try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4908
 shrink_one+0x21b/0x7c0 mm/vmscan.c:4953
 shrink_many mm/vmscan.c:5016 [inline]
 lru_gen_shrink_node mm/vmscan.c:5094 [inline]
 shrink_node+0x315d/0x3780 mm/vmscan.c:6081
 kswapd_shrink_node mm/vmscan.c:6941 [inline]
 balance_pgdat mm/vmscan.c:7124 [inline]
 kswapd+0x147c/0x2800 mm/vmscan.c:7389
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Memory state around the buggy address:
 ffff88800b4ea100: fb fb fb fb fb fb fc fc fc fc fa fb fb fb fb fb
 ffff88800b4ea180: fb fb fb fb fc fc fc fc fa fb fb fb fb fb fb fb
>ffff88800b4ea200: fb fb fc fc fc fc fa fb fb fb fb fb fb fb fb fb
                                              ^
 ffff88800b4ea280: fc fc fc fc fa fb fb fb fb fb fb fb fb fb fc fc
 ffff88800b4ea300: fc fc fa fb fb fb fb fb fb fb fb fb fc fc fc fc
==================================================================