==================================================================
BUG: KASAN: slab-use-after-free in drm_atomic_helper_wait_for_vblanks+0x30b/0x910 drivers/gpu/drm/drm_atomic_helper.c:1700
Read of size 1 at addr ffff888043945409 by task kworker/u4:10/4629

CPU: 0 UID: 0 PID: 4629 Comm: kworker/u4:10 Not tainted 6.15.0-rc1-syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: events_unbound commit_work
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:408 [inline]
 print_report+0x16e/0x5b0 mm/kasan/report.c:521
 kasan_report+0x143/0x180 mm/kasan/report.c:634
 drm_atomic_helper_wait_for_vblanks+0x30b/0x910 drivers/gpu/drm/drm_atomic_helper.c:1700
 drm_atomic_helper_commit_tail+0x314/0x510 drivers/gpu/drm/drm_atomic_helper.c:1796
 commit_tail+0x2c4/0x3d0 drivers/gpu/drm/drm_atomic_helper.c:1873
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac3/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd50 kernel/workqueue.c:3400
 kthread+0x7b7/0x940 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Allocated by task 5339:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x9d/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __kmalloc_cache_noprof+0x236/0x370 mm/slub.c:4362
 kmalloc_noprof include/linux/slab.h:905 [inline]
 drm_atomic_helper_crtc_duplicate_state+0x72/0xb0 drivers/gpu/drm/drm_atomic_state_helper.c:177
 drm_atomic_get_crtc_state+0x182/0x410 drivers/gpu/drm/drm_atomic.c:360
 drm_atomic_get_plane_state+0x44e/0x510 drivers/gpu/drm/drm_atomic.c:561
 drm_atomic_set_property+0x281/0x3240 drivers/gpu/drm/drm_atomic_uapi.c:1073
 drm_mode_atomic_ioctl+0x7f0/0x1420 drivers/gpu/drm/drm_atomic_uapi.c:1514
 drm_ioctl_kernel+0x34e/0x450 drivers/gpu/drm/drm_ioctl.c:796
 drm_ioctl+0x687/0xbb0 drivers/gpu/drm/drm_ioctl.c:893
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 5338:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2389 [inline]
 slab_free mm/slub.c:4646 [inline]
 kfree+0x198/0x430 mm/slub.c:4845
 drm_atomic_state_default_clear+0x3bd/0xb80 drivers/gpu/drm/drm_atomic.c:224
 drm_atomic_state_clear drivers/gpu/drm/drm_atomic.c:293 [inline]
 __drm_atomic_state_free+0xb8/0x210 drivers/gpu/drm/drm_atomic.c:310
 kref_put include/linux/kref.h:65 [inline]
 drm_atomic_state_put include/drm/drm_atomic.h:588 [inline]
 drm_client_modeset_commit_atomic+0x727/0x7d0 drivers/gpu/drm/drm_client_modeset.c:1085
 drm_client_modeset_commit_locked+0xe0/0x520 drivers/gpu/drm/drm_client_modeset.c:1182
 drm_client_modeset_commit+0x4a/0x70 drivers/gpu/drm/drm_client_modeset.c:1208
 __drm_fb_helper_restore_fbdev_mode_unlocked+0xbd/0x200 drivers/gpu/drm/drm_fb_helper.c:237
 drm_fbdev_client_restore+0x34/0x40 drivers/gpu/drm/clients/drm_fbdev_client.c:31
 drm_client_dev_restore+0x132/0x270 drivers/gpu/drm/drm_client_event.c:116
 drm_lastclose drivers/gpu/drm/drm_file.c:396 [inline]
 drm_release+0x335/0x410 drivers/gpu/drm/drm_file.c:429
 __fput+0x3e9/0x9f0 fs/file_table.c:465
 task_work_run+0x251/0x310 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x13f/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff888043945400
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 9 bytes inside of
 freed 512-byte region [ffff888043945400, ffff888043945600)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x43944
head: order:1 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0x4fff00000000040(head|node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000040 ffff88801b041c80 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000080008 00000000f5000000 0000000000000000
head: 04fff00000000040 ffff88801b041c80 dead000000000122 0000000000000000
head: 0000000000000000 0000000000080008 00000000f5000000 0000000000000000
head: 04fff00000000001 ffffea00010e5101 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000002
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 1, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5323, tgid 5323 (udevd), ts 149165511422, free_ts 146267261596
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1717
 prep_new_page mm/page_alloc.c:1725 [inline]
 get_page_from_freelist+0x352b/0x36c0 mm/page_alloc.c:3652
 __alloc_frozen_pages_noprof+0x211/0x5b0 mm/page_alloc.c:4934
 alloc_pages_mpol+0x339/0x690 mm/mempolicy.c:2301
 alloc_slab_page mm/slub.c:2459 [inline]
 allocate_slab+0x8f/0x3a0 mm/slub.c:2623
 new_slab mm/slub.c:2676 [inline]
 ___slab_alloc+0xc3b/0x1500 mm/slub.c:3862
 __slab_alloc+0x58/0xa0 mm/slub.c:3952
 __slab_alloc_node mm/slub.c:4027 [inline]
 slab_alloc_node mm/slub.c:4188 [inline]
 __kmalloc_cache_noprof+0x26a/0x370 mm/slub.c:4357
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 kernfs_fop_open+0x3a3/0xdf0 fs/kernfs/file.c:623
 do_dentry_open+0xdec/0x1960 fs/open.c:956
 vfs_open+0x3b/0x370 fs/open.c:1086
 do_open fs/namei.c:3845 [inline]
 path_openat+0x2caf/0x35d0 fs/namei.c:4004
 do_filp_open+0x284/0x4e0 fs/namei.c:4031
 do_sys_openat2+0x12b/0x1d0 fs/open.c:1429
 do_sys_open fs/open.c:1444 [inline]
 __do_sys_openat fs/open.c:1460 [inline]
 __se_sys_openat fs/open.c:1455 [inline]
 __x64_sys_openat+0x249/0x2a0 fs/open.c:1455
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
page last free pid 4730 tgid 4730 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1262 [inline]
 __free_frozen_pages+0xde8/0x10a0 mm/page_alloc.c:2680
 __slab_free+0x2c6/0x390 mm/slub.c:4557
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4151 [inline]
 slab_alloc_node mm/slub.c:4200 [inline]
 kmem_cache_alloc_noprof+0x1e1/0x390 mm/slub.c:4207
 getname_flags+0xb6/0x530 fs/namei.c:146
 getname include/linux/fs.h:2852 [inline]
 do_sys_openat2+0xbf/0x1d0 fs/open.c:1423
 do_sys_open fs/open.c:1444 [inline]
 __do_sys_openat fs/open.c:1460 [inline]
 __se_sys_openat fs/open.c:1455 [inline]
 __x64_sys_openat+0x249/0x2a0 fs/open.c:1455
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff888043945300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff888043945380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888043945400: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                      ^
 ffff888043945480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888043945500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
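
Decoded, the report says: task 5339 allocated a 512-byte CRTC state object via drm_atomic_helper_crtc_duplicate_state() during an ATOMIC ioctl; task 5338, restoring the fbdev mode on last close of the DRM file, dropped the final state reference with drm_atomic_state_put(), which freed that object; the unbound commit_work worker then read one byte at offset 9 of the freed object inside drm_atomic_helper_wait_for_vblanks(). Below is a minimal userspace sketch of that race shape. It is not kernel code: fake_state, fake_crtc_state, fake_state_put and commit_worker are hypothetical stand-ins for drm_atomic_state, the duplicated CRTC state, drm_atomic_state_put() and the commit_work worker, chosen only to illustrate the pattern.

/*
 * Sketch of the race KASAN reports above. Deliberately contains a
 * use-after-free; build with "cc -pthread -fsanitize=address" and
 * AddressSanitizer will usually (timing permitting) print a
 * heap-use-after-free report analogous to the KASAN output.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_crtc_state {
	bool active;	/* stand-in for the 1-byte field the worker reads */
};

struct fake_state {
	atomic_int ref;	/* stand-in for drm_atomic_state's kref */
	struct fake_crtc_state *crtc_state;
};

static void fake_state_put(struct fake_state *s)
{
	/* Mirrors kref_put() -> __drm_atomic_state_free(): the final
	 * put frees the duplicated per-CRTC state as well. */
	if (atomic_fetch_sub(&s->ref, 1) == 1) {
		free(s->crtc_state);
		free(s);
	}
}

static void *commit_worker(void *arg)
{
	struct fake_state *s = arg;

	/* Like drm_atomic_helper_wait_for_vblanks(): dereferences the
	 * CRTC state without holding its own reference. If the other
	 * thread's put already ran, this is a use-after-free. */
	if (s->crtc_state->active)
		puts("waiting for vblank");
	return NULL;
}

int main(void)
{
	struct fake_state *s = malloc(sizeof(*s));
	pthread_t worker;

	atomic_init(&s->ref, 1);
	s->crtc_state = malloc(sizeof(*s->crtc_state));
	s->crtc_state->active = true;

	/* The asynchronous commit worker (kworker running commit_work). */
	pthread_create(&worker, NULL, commit_worker, s);

	/* The fbdev-restore path (task 5338): drops the only reference
	 * while the worker may still be running. */
	fake_state_put(s);

	pthread_join(worker, NULL);
	return 0;
}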
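
One generic way to close this class of race, offered purely as an illustration and not as the actual upstream fix for this report, is for the asynchronous consumer to own a reference for its whole lifetime, so the producer's final put cannot free the state underneath it. Continuing the sketch above (commit_worker_safe is another hypothetical name):

/* Producer side, before pthread_create():
 *	atomic_fetch_add(&s->ref, 1);	// extra reference for the worker
 */
static void *commit_worker_safe(void *arg)
{
	struct fake_state *s = arg;

	if (s->crtc_state->active)	/* safe: worker holds its own ref */
		puts("waiting for vblank");
	fake_state_put(s);		/* drop the worker's reference */
	return NULL;
}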