==================================================================
BUG: KASAN: slab-use-after-free in drm_atomic_helper_wait_for_vblanks+0x30b/0x910 drivers/gpu/drm/drm_atomic_helper.c:1700
Read of size 1 at addr ffff888044072009 by task kworker/u4:7/1038

CPU: 0 UID: 0 PID: 1038 Comm: kworker/u4:7 Not tainted 6.15.0-rc1-syzkaller #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: events_unbound commit_work
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:408 [inline]
 print_report+0x16e/0x5b0 mm/kasan/report.c:521
 kasan_report+0x143/0x180 mm/kasan/report.c:634
 drm_atomic_helper_wait_for_vblanks+0x30b/0x910 drivers/gpu/drm/drm_atomic_helper.c:1700
 drm_atomic_helper_commit_tail+0x314/0x510 drivers/gpu/drm/drm_atomic_helper.c:1796
 commit_tail+0x2c4/0x3d0 drivers/gpu/drm/drm_atomic_helper.c:1873
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac3/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd50 kernel/workqueue.c:3400
 kthread+0x7b7/0x940 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Allocated by task 5324:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x9d/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __kmalloc_cache_noprof+0x236/0x370 mm/slub.c:4362
 kmalloc_noprof include/linux/slab.h:905 [inline]
 drm_atomic_helper_crtc_duplicate_state+0x72/0xb0 drivers/gpu/drm/drm_atomic_state_helper.c:177
 drm_atomic_get_crtc_state+0x182/0x410 drivers/gpu/drm/drm_atomic.c:360
 drm_atomic_get_plane_state+0x44e/0x510 drivers/gpu/drm/drm_atomic.c:561
 drm_atomic_set_property+0x281/0x3240 drivers/gpu/drm/drm_atomic_uapi.c:1073
 drm_mode_atomic_ioctl+0x7f0/0x1420 drivers/gpu/drm/drm_atomic_uapi.c:1514
 drm_ioctl_kernel+0x34e/0x450 drivers/gpu/drm/drm_ioctl.c:796
 drm_ioctl+0x687/0xbb0 drivers/gpu/drm/drm_ioctl.c:893
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 5316:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2389 [inline]
 slab_free mm/slub.c:4646 [inline]
 kfree+0x198/0x430 mm/slub.c:4845
 drm_atomic_state_default_clear+0x3bd/0xb80 drivers/gpu/drm/drm_atomic.c:224
 drm_atomic_state_clear drivers/gpu/drm/drm_atomic.c:293 [inline]
 __drm_atomic_state_free+0xb8/0x210 drivers/gpu/drm/drm_atomic.c:310
 kref_put include/linux/kref.h:65 [inline]
 drm_atomic_state_put include/drm/drm_atomic.h:588 [inline]
 drm_atomic_helper_dirtyfb+0xde9/0xe90 drivers/gpu/drm/drm_damage_helper.c:193
 drm_fbdev_shmem_helper_fb_dirty+0x151/0x2e0 drivers/gpu/drm/drm_fbdev_shmem.c:117
 drm_fb_helper_fb_dirty drivers/gpu/drm/drm_fb_helper.c:379 [inline]
 drm_fb_helper_damage_work+0x26c/0x910 drivers/gpu/drm/drm_fb_helper.c:402
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac3/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd50 kernel/workqueue.c:3400
 kthread+0x7b7/0x940 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

The buggy address belongs to the object at ffff888044072000
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 9 bytes inside of
 freed 512-byte region [ffff888044072000, ffff888044072200)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x44072
head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
anon flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801b041c80 0000000000000000 dead000000000001
raw: 0000000000000000 0000000000080008 00000000f5000000 0000000000000000
head: 04fff00000000040 ffff88801b041c80 0000000000000000 dead000000000001
head: 0000000000000000 0000000000080008 00000000f5000000 0000000000000000
head: 04fff00000000001 ffffea0001101c81 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000002
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 1, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 4816, tgid 4816 (kworker/0:3), ts 46101071793, free_ts 45025142752
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1717
 prep_new_page mm/page_alloc.c:1725 [inline]
 get_page_from_freelist+0x352b/0x36c0 mm/page_alloc.c:3652
 __alloc_frozen_pages_noprof+0x211/0x5b0 mm/page_alloc.c:4934
 alloc_pages_mpol+0x339/0x690 mm/mempolicy.c:2301
 alloc_slab_page mm/slub.c:2459 [inline]
 allocate_slab+0x8f/0x3a0 mm/slub.c:2623
 new_slab mm/slub.c:2676 [inline]
 ___slab_alloc+0xc3b/0x1500 mm/slub.c:3862
 __slab_alloc+0x58/0xa0 mm/slub.c:3952
 __slab_alloc_node mm/slub.c:4027 [inline]
 slab_alloc_node mm/slub.c:4188 [inline]
 __kmalloc_cache_noprof+0x26a/0x370 mm/slub.c:4357
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 drm_atomic_helper_setup_commit+0x1d5/0x1490 drivers/gpu/drm/drm_atomic_helper.c:2326
 drm_atomic_helper_commit+0x62/0xa00 drivers/gpu/drm/drm_atomic_helper.c:2061
 drm_atomic_commit+0x296/0x2f0 drivers/gpu/drm/drm_atomic.c:1518
 drm_atomic_helper_dirtyfb+0xd34/0xe90 drivers/gpu/drm/drm_damage_helper.c:181
 drm_fbdev_shmem_helper_fb_dirty+0x151/0x2e0 drivers/gpu/drm/drm_fbdev_shmem.c:117
 drm_fb_helper_fb_dirty drivers/gpu/drm/drm_fb_helper.c:379 [inline]
 drm_fb_helper_damage_work+0x26c/0x910 drivers/gpu/drm/drm_fb_helper.c:402
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac3/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd50 kernel/workqueue.c:3400
page last free pid 5145 tgid 5145 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1262 [inline]
 __free_frozen_pages+0xde8/0x10a0 mm/page_alloc.c:2680
 discard_slab mm/slub.c:2720 [inline]
 __put_partials+0x160/0x1c0 mm/slub.c:3189
 put_cpu_partial+0x17e/0x250 mm/slub.c:3264
 __slab_free+0x294/0x390 mm/slub.c:4516
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4151 [inline]
 slab_alloc_node mm/slub.c:4200 [inline]
 kmem_cache_alloc_noprof+0x1e1/0x390 mm/slub.c:4207
 getname_flags+0xb6/0x530 fs/namei.c:146
 getname include/linux/fs.h:2852 [inline]
 do_sys_openat2+0xbf/0x1d0 fs/open.c:1423
 do_sys_open fs/open.c:1444 [inline]
 __do_sys_openat fs/open.c:1460 [inline]
 __se_sys_openat fs/open.c:1455 [inline]
 __x64_sys_openat+0x249/0x2a0 fs/open.c:1455
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff888044071f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888044071f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff888044072000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                      ^
 ffff888044072080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888044072100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
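
Reading the traces together: the state was allocated by drm_atomic_helper_crtc_duplicate_state() from an atomic ioctl, freed when drm_atomic_helper_dirtyfb() dropped what was apparently the last reference via drm_atomic_state_put() (which clears and frees the duplicated CRTC states), and then read again by the still-running commit_work when drm_atomic_helper_wait_for_vblanks() walked the old state's CRTCs. Below is a minimal userspace sketch, not the kernel code, of the reference-counting discipline that prevents this class of race: the path that queues the async commit worker must pin the state with its own reference so a concurrent put cannot free it mid-wait. All names (struct state, state_get, commit_worker, etc.) are illustrative stand-ins, not DRM identifiers.

/* Build with: cc -std=c11 -pthread sketch.c
 *
 * Illustrative only: mirrors the shape of the race in the report.
 * struct state stands in for drm_atomic_state plus its duplicated
 * CRTC state; state_put() stands in for drm_atomic_state_put(). */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct state {
	atomic_int ref;      /* stands in for drm_atomic_state.ref */
	int crtc_active;     /* stands in for the duplicated CRTC state */
};

static struct state *state_get(struct state *s)
{
	atomic_fetch_add(&s->ref, 1);
	return s;
}

static void state_put(struct state *s)
{
	/* fetch_sub returns the old value; old == 1 means we dropped
	 * the last reference, mirroring __drm_atomic_state_free(). */
	if (atomic_fetch_sub(&s->ref, 1) == 1)
		free(s);
}

/* Stand-in for commit_work -> wait_for_vblanks: reads the state
 * asynchronously, long after the submitter may have moved on. */
static void *commit_worker(void *arg)
{
	struct state *s = arg;   /* reference taken by the submitter */
	printf("worker reads crtc_active=%d\n", s->crtc_active);
	state_put(s);            /* drop our reference only when done */
	return NULL;
}

int main(void)
{
	struct state *s = calloc(1, sizeof(*s));
	atomic_init(&s->ref, 1); /* submitter's initial reference */
	s->crtc_active = 1;

	pthread_t worker;
	/* Pin the state for the async worker BEFORE queuing it... */
	pthread_create(&worker, NULL, commit_worker, state_get(s));
	/* ...so this put (the dirtyfb path in the report) can never be
	 * the final one while the worker still dereferences the state. */
	state_put(s);

	pthread_join(worker, NULL);
	return 0;
}

If the state_get() at queue time is missing, or a put runs before the worker's accesses finish, the worker's read lands in freed memory, which is exactly the 9-bytes-into-a-freed-512-byte-region access KASAN flags above.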