syzbot

INFO: task hung in migrate_pages (5)

Status: upstream: reported on 2026/04/02 11:36
Reported-by: syzbot+2bb05a81cf78d25dcebe@syzkaller.appspotmail.com
First crash: 22d, last: 21d
Similar bugs (8)
Kernel     | Title                                | Subsystems | Rank | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | INFO: task hung in migrate_pages     | mm         | 1    | -     | -            | -          | 14    | 2021d | 2145d    | 0/29    | auto-closed as invalid on 2021/01/09 03:04
linux-6.1  | INFO: task hung in migrate_pages     |            | 1    | -     | -            | -          | 1     | 928d  | 928d     | 0/3     | auto-obsoleted due to no activity on 2024/01/17 09:52
linux-5.15 | INFO: task hung in migrate_pages     |            | 1    | -     | -            | -          | 4     | 802d  | 849d     | 0/3     | auto-obsoleted due to no activity on 2024/05/22 13:48
linux-6.1  | INFO: task hung in migrate_pages (3) |            | 1    | -     | -            | -          | 1     | 258d  | 258d     | 0/3     | auto-obsoleted due to no activity on 2025/11/17 16:31
linux-6.1  | INFO: task hung in migrate_pages (4) |            | 1    | -     | -            | -          | 1     | 154d  | 154d     | 0/3     | auto-obsoleted due to no activity on 2026/03/01 17:12
linux-6.1  | INFO: task hung in migrate_pages (2) |            | 1    | -     | -            | -          | 1     | 765d  | 765d     | 0/3     | auto-obsoleted due to no activity on 2024/06/28 12:05
upstream   | INFO: task hung in migrate_pages (2) | mm         | 1    | -     | -            | -          | 70    | 1597d | 1624d    | 0/29    | auto-closed as invalid on 2022/04/08 00:51
upstream   | INFO: task hung in migrate_pages (3) | fs mm      | 1    | -     | -            | -          | 10    | 1153d | 1368d    | 0/29    | auto-obsoleted due to no activity on 2023/05/27 13:42

Sample crash report:
INFO: task syz.2.1248:8119 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.1248      state:D stack:26032 pid:8119  ppid:4271   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5245 [inline]
 __schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
 schedule+0xb9/0x180 kernel/sched/core.c:6638
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8798
 folio_wait_bit_common+0x70a/0xfa0 mm/filemap.c:1324
 __migrate_folio_unmap mm/migrate.c:1088 [inline]
 migrate_folio_unmap mm/migrate.c:1270 [inline]
 migrate_pages_batch mm/migrate.c:1635 [inline]
 migrate_pages+0x2748/0x55e0 mm/migrate.c:1843
 compact_zone+0x276b/0x3e90 mm/compaction.c:2414
 compact_node+0x1c4/0x400 mm/compaction.c:2691
 compact_nodes mm/compaction.c:2707 [inline]
 sysctl_compaction_handler+0x99/0x130 mm/compaction.c:2749
 proc_sys_call_handler+0x45e/0x6d0 fs/proc/proc_sysctl.c:602
 call_write_iter include/linux/fs.h:2265 [inline]
 aio_write+0x545/0x780 fs/aio.c:1615
 __io_submit_one fs/aio.c:-1 [inline]
 io_submit_one+0x6f2/0x1380 fs/aio.c:2034
 __do_sys_io_submit fs/aio.c:2093 [inline]
 __se_sys_io_submit+0x19d/0x310 fs/aio.c:2063
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7ff91c79c819
RSP: 002b:00007ff91d63f028 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
RAX: ffffffffffffffda RBX: 00007ff91ca15fa0 RCX: 00007ff91c79c819
RDX: 0000200000000340 RSI: 0000000000000001 RDI: 00007ff91d61e000
RBP: 00007ff91c832c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ff91ca16038 R14: 00007ff91ca15fa0 R15: 00007ffe6a1f3388
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cb2df30 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cb2e750 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
 #0: ffffffff8cb2d5a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2d5a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2d5a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
2 locks held by getty/4026:
 #0: ffff88807e738098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x429/0x1390 drivers/tty/n_tty.c:2198
7 locks held by kworker/u4:7/4339:
4 locks held by kworker/1:14/4952:
 #0: ffff8880b8f3ab18 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:538
 #1: ffff8880b8f27888 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x398/0x6d0 kernel/sched/psi.c:999
 #2: ffff88805422cee0 (&r->consumer_lock#2){+.+.}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
 #2: ffff88805422cee0 (&r->consumer_lock#2){+.+.}-{2:2}, at: ptr_ring_consume_bh include/linux/ptr_ring.h:365 [inline]
 #2: ffff88805422cee0 (&r->consumer_lock#2){+.+.}-{2:2}, at: wg_packet_decrypt_worker+0x980/0xe00 drivers/net/wireguard/receive.c:499
 #3: ffff8880b8f3ab18 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:538
1 lock held by udevd/4957:
 #0: ffff8881443964c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x13d/0xa60 block/bdev.c:832

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0x188/0x24e lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3e6/0x460 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xeee/0xf30 kernel/hung_task.c:377
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 4339 Comm: kworker/u4:7 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:__should_failslab+0x0/0xf0 mm/failslab.c:18
Code: 89 f9 80 e1 07 80 c1 03 38 c1 0f 8c 64 fd ff ff 4c 89 ff e8 f2 7e ff ff e9 57 fd ff ff 00 00 cc cc 00 00 cc cc 00 00 cc cc 00 <41> 57 41 56 41 54 53 89 f3 49 89 fe 49 bc 00 00 00 00 00 fc ff df
RSP: 0018:ffffc90004b97a38 EFLAGS: 00000246
RAX: 0000000004208060 RBX: ffffc90004b97aa8 RCX: dffffc0000000000
RDX: ffffc90004b97aa8 RSI: 0000000000000a20 RDI: ffff888018452140
RBP: 0000000000000a20 R08: 0000000000000a20 R09: 0000000000000004
R10: dffffc0000000000 R11: fffff52000972f58 R12: 0000000000000000
R13: 0000000000000a20 R14: ffff888018452140 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1378be7158 CR3: 0000000020f5a000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 should_failslab+0x5/0x20 mm/slab_common.c:1440
 slab_pre_alloc_hook+0x59/0x310 mm/slab.h:712
 slab_alloc_node mm/slub.c:3279 [inline]
 kmem_cache_alloc_node+0x5a/0x320 mm/slub.c:3404
 __alloc_skb+0xfc/0x7e0 net/core/skbuff.c:505
 alloc_skb include/linux/skbuff.h:1303 [inline]
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
 nsim_dev_trap_report_work+0x28f/0xaf0 drivers/net/netdevsim/dev.c:851
 process_one_work+0x8a2/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1270 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>

Crashes (2):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | VM info | Syz repro | C repro | Assets                                | Manager             | Title
2026/04/02 23:59 | linux-6.1.y | 1989cd3d56e2 | 4440e7c2  | .config | console log | report | info    | -         | -       | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in migrate_pages
2026/04/02 11:35 | linux-6.1.y | 1989cd3d56e2 | 91bc79b0  | .config | console log | report | info    | -         | -       | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in migrate_pages