syzbot


INFO: task hung in migrate_pages_batch (2)

Status: upstream: reported on 2026/02/02 06:38
Reported-by: syzbot+fb7f46c00136f93b092c@syzkaller.appspotmail.com
First crash: 20d, last: 10d
Similar bugs (5)
Kernel    | Title                                              | Rank | Repro | Count | Last  | Reported | Patched | Status
linux-6.6 | INFO: task hung in migrate_pages_batch             | 1    |       | 2     | 140d  | 164d     | 0/2     | auto-obsoleted due to no activity on 2026/01/13 03:58
upstream  | INFO: task hung in migrate_pages_batch (4) [mm]    | 1    |       | 90    | 2h33m | 131d     | 0/29    | upstream: reported on 2025/10/14 11:22
upstream  | INFO: task hung in migrate_pages_batch [nilfs]     | 1    | C     | 22    | 744d  | 755d     | 25/29   | fixed on 2024/03/25 11:41
upstream  | INFO: task hung in migrate_pages_batch (3) [mm]    | 1    |       | 4     | 373d  | 388d     | 0/29    | auto-obsoleted due to no activity on 2025/05/15 10:31
upstream  | INFO: task hung in migrate_pages_batch (2) [mm fs] | 1    |       | 2     | 543d  | 625d     | 0/29    | auto-obsoleted due to no activity on 2024/11/25 18:13

Sample crash report:
INFO: task syz.1.592:7613 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.592       state:D stack:22800 pid:7613  ppid:5767   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x1553/0x45a0 kernel/sched/core.c:6700
 schedule+0xbd/0x170 kernel/sched/core.c:6774
 io_schedule+0x80/0xd0 kernel/sched/core.c:9023
 folio_wait_bit_common+0x714/0xfa0 mm/filemap.c:1329
 migrate_folio_unmap mm/migrate.c:1162 [inline]
 migrate_pages_batch+0x1393/0x3440 mm/migrate.c:1672
 migrate_pages_sync mm/migrate.c:1865 [inline]
 migrate_pages+0x1f5a/0x27a0 mm/migrate.c:1947
 compact_zone+0x2200/0x43a0 mm/compaction.c:2515
 compact_node+0x195/0x300 mm/compaction.c:2807
 compact_nodes mm/compaction.c:2820 [inline]
 sysctl_compaction_handler+0xf9/0x1a0 mm/compaction.c:2866
 proc_sys_call_handler+0x463/0x6d0 fs/proc/proc_sysctl.c:599
 do_iter_readv_writev fs/read_write.c:-1 [inline]
 do_iter_write+0x738/0xc30 fs/read_write.c:860
 iter_file_splice_write+0x6a3/0xcb0 fs/splice.c:736
 do_splice_from fs/splice.c:933 [inline]
 direct_splice_actor+0xe8/0x130 fs/splice.c:1142
 splice_direct_to_actor+0x304/0x8c0 fs/splice.c:1088
 do_splice_direct+0x1d5/0x2f0 fs/splice.c:1194
 do_sendfile+0x5f2/0xef0 fs/read_write.c:1254
 __do_sys_sendfile64 fs/read_write.c:1316 [inline]
 __se_sys_sendfile64+0xe0/0x1a0 fs/read_write.c:1308
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f835539bf79
RSP: 002b:00007f83562fa028 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f8355616090 RCX: 00007f835539bf79
RDX: 00002000000000c0 RSI: 0000000000000005 RDI: 0000000000000006
RBP: 00007f83554327e0 R08: 0000000000000000 R09: 0000000000000000
R10: 000000000000000a R11: 0000000000000246 R12: 0000000000000000
R13: 00007f8355616128 R14: 00007f8355616090 R15: 00007fffea9b9128
 </TASK>

Showing all locks held in the system:
1 lock held by pool_workqueue_/3:
3 locks held by kworker/0:1/9:
1 lock held by khungtaskd/28:
 #0: ffffffff8d131fa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #0: ffffffff8d131fa0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #0: ffffffff8d131fa0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
2 locks held by kworker/1:1/42:
 #0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017c72538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
 #1: ffffc90000b2fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc90000b2fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
4 locks held by kworker/u4:6/2987:
 #0: ffff888017c73938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #0: ffff888017c73938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
 #1: ffffc9000be87d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2609 [inline]
 #1: ffffc9000be87d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x96f/0x15d0 kernel/workqueue.c:2711
 #2: ffffffff8e3b31d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x14c/0xbb0 net/core/net_namespace.c:606
 #3: ffffffff8e3c0208 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xf2/0xa80 net/core/dev.c:11619
2 locks held by kworker/u4:8/3524:
1 lock held by dhcpcd/5433:
 #0: ffffffff8e3c0208 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3c0208 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x811/0xfa0 net/core/rtnetlink.c:6469
2 locks held by getty/5526:
 #0: ffff8880314ca0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000326e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x433/0x1390 drivers/tty/n_tty.c:2217
1 lock held by udevd/5758:
 #0: ffff888021dbf4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by syz.1.592/7613:
 #0: ffff88801b73e418 (sb_writers#3){.+.+}-{0:0}, at: do_sendfile+0x5cf/0xef0 fs/read_write.c:1253
1 lock held by syz-executor/9089:
2 locks held by syz.6.1243/10498:
1 lock held by syz.5.1257/10592:
 #0: ffffffff8e3c0208 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8e3c0208 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x41/0x1c0 drivers/net/tun.c:3511
1 lock held by syz.5.1257/10593:
 #0: ffffffff8e3c0208 (rtnl_mutex){+.+.}-{3:3}, at: dev_ioctl+0x7a4/0x1140 net/core/dev_ioctl.c:769
1 lock held by syz.6.1259/10603:
 #0: ffffffff8d137840 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x580 kernel/rcu/tree.c:4089
2 locks held by dhcpcd/10616:
 #0: ffff88805dadb220 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
 #0: ffff88805dadb220 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
 #0: ffff88805dadb220 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
 #1: ffffffff8d137978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8d137978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3da/0x880 kernel/rcu/tree_exp.h:1004
2 locks held by dhcpcd/10617:
 #0: ffff888078173220 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
 #0: ffff888078173220 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
 #0: ffff888078173220 (&sb->s_type->i_mutex_key#11){+.+.}-{3:3}, at: sock_close+0x9b/0x230 net/socket.c:1420
 #1: ffffffff8d137978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8d137978 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3da/0x880 kernel/rcu/tree_exp.h:1004

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0x18c/0x250 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3a6/0x3e0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xf3d/0xf80 kernel/hung_task.c:379
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 10593 Comm: syz.5.1257 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
RIP: 0010:__lock_acquire+0x38c/0x7d40 kernel/locking/lockdep.c:5070
Code: 48 8b 44 24 20 4c 8d b0 e0 0a 00 00 48 8b 74 24 60 83 fe 31 0f 83 7c 65 00 00 48 8d 04 b6 4d 8d 24 c6 4c 89 b4 24 a8 00 00 00 <49> 8d 4c c6 20 48 89 4c 24 10 48 c1 e9 03 48 89 4c 24 48 42 0f b6
RSP: 0018:ffffc9000ff9f0e0 EFLAGS: 00000083
RAX: 0000000000000005 RBX: 0000000000000000 RCX: 0000000000000001
RDX: ffff888029126500 RSI: 0000000000000001 RDI: ffffffff97533de8
RBP: ffffc9000ff9f328 R08: dffffc0000000000 R09: 0000000000000001
R10: dffffc0000000000 R11: fffffbfff1d15bb6 R12: ffff888029126508
R13: 0000000000000001 R14: ffff8880291264e0 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000056268a002660 CR3: 000000002cccc000 CR4: 00000000003526e0
Call Trace:
 <TASK>
 lock_acquire+0x19e/0x420 kernel/locking/lockdep.c:5754
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xb4/0x100 kernel/locking/spinlock.c:162
 __debug_check_no_obj_freed lib/debugobjects.c:979 [inline]
 debug_check_no_obj_freed+0x13a/0x540 lib/debugobjects.c:1020
 slab_free_hook mm/slub.c:1786 [inline]
 slab_free_freelist_hook+0xd2/0x1a0 mm/slub.c:1837
 slab_free mm/slub.c:3830 [inline]
 kmem_cache_free+0xf8/0x270 mm/slub.c:3852
 pagetable_pte_dtor include/linux/mm.h:2998 [inline]
 ___pte_free_tlb+0x2d/0x130 arch/x86/mm/pgtable.c:55
 __pte_free_tlb arch/x86/include/asm/pgalloc.h:61 [inline]
 free_pte_range mm/memory.c:194 [inline]
 free_pmd_range mm/memory.c:212 [inline]
 free_pud_range mm/memory.c:246 [inline]
 free_p4d_range mm/memory.c:280 [inline]
 free_pgd_range+0x628/0xb10 mm/memory.c:360
 free_pgtables+0x697/0x770 mm/memory.c:-1
 exit_mmap+0x420/0xb90 mm/mmap.c:3313
 __mmput+0x118/0x3c0 kernel/fork.c:1355
 exit_mm+0x1f2/0x2c0 kernel/exit.c:569
 do_exit+0x8dd/0x2460 kernel/exit.c:870
 do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
 get_signal+0x12fc/0x13f0 kernel/signal.c:2902
 arch_do_signal_or_restart+0xc2/0x800 arch/x86/kernel/signal.c:310
 exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
 exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
 syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
 do_syscall_64+0x61/0xa0 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fb92299bf79
Code: Unable to access opcode bytes at 0x7fb92299bf4f.
RSP: 002b:00007fb9237730e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fb922c15fa8 RCX: 00007fb92299bf79
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fb922c15fa8
RBP: 00007fb922c15fa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb922c16038 R14: 00007ffd74c2b460 R15: 00007ffd74c2b548
 </TASK>

Crashes (2):
Time             | Kernel      | Commit       | Syzkaller | Config  | Assets                           | Manager             | Title
2026/02/11 16:44 | linux-6.6.y | 1b4ef5214f17 | 75707236  | .config | disk image, vmlinux, kernel image | ci2-linux-6-6-kasan | INFO: task hung in migrate_pages_batch
2026/02/02 06:37 | linux-6.6.y | 2cf6f68313dc | 6b8752f2  | .config | disk image, vmlinux, kernel image | ci2-linux-6-6-kasan | INFO: task hung in migrate_pages_batch