INFO: task syz.3.182:4604 blocked for more than 146 seconds.
      Not tainted 6.1.112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.182       state:D stack:25752 pid:4604  ppid:4080   flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x143f/0x4570 kernel/sched/core.c:6558
 schedule+0xbf/0x180 kernel/sched/core.c:6634
 io_schedule+0x88/0x100 kernel/sched/core.c:8786
 folio_wait_bit_common+0x878/0x1290 mm/filemap.c:1296
 lock_page include/linux/pagemap.h:995 [inline]
 pickup_page_for_submission fs/erofs/zdata.c:1346 [inline]
 z_erofs_submit_queue fs/erofs/zdata.c:1539 [inline]
 z_erofs_runqueue+0x993/0x1ca0 fs/erofs/zdata.c:1611
 z_erofs_readahead+0xc26/0x1030 fs/erofs/zdata.c:1758
 read_pages+0x17f/0x830 mm/readahead.c:161
 page_cache_ra_unbounded+0x68b/0x7b0 mm/readahead.c:270
 do_page_cache_ra mm/readahead.c:300 [inline]
 force_page_cache_ra+0x2a3/0x300 mm/readahead.c:331
 force_page_cache_readahead mm/internal.h:106 [inline]
 generic_fadvise+0x553/0x7b0 mm/fadvise.c:107
 vfs_fadvise mm/fadvise.c:185 [inline]
 ksys_fadvise64_64 mm/fadvise.c:199 [inline]
 __do_sys_fadvise64 mm/fadvise.c:214 [inline]
 __se_sys_fadvise64 mm/fadvise.c:212 [inline]
 __x64_sys_fadvise64+0x138/0x180 mm/fadvise.c:212
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f122397dff9
RSP: 002b:00007f1224808038 EFLAGS: 00000246 ORIG_RAX: 00000000000000dd
RAX: ffffffffffffffda RBX: 00007f1223b35f80 RCX: 00007f122397dff9
RDX: 0080000000000006 RSI: 0000000000008001 RDI: 0000000000000006
RBP: 00007f12239f0296 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000003 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1223b35f80 R15: 00007fff947f7e38

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8d32b1d0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8d32b9d0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:517
3 locks held by kworker/1:0/22:
1 lock held by khungtaskd/27:
 #0: ffffffff8d32b000 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8d32b000 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8d32b000 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
2 locks held by getty/3394:
 #0: ffff88814c220098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2198
2 locks held by kworker/1:7/3690:
 #0: ffff888017c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc9000448fd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
2 locks held by kworker/1:12/4002:
 #0: ffff888017c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc90005577d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
2 locks held by kworker/1:13/4004:
 #0: ffff888017c72138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc90005587d20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
1 lock held by syz.3.182/4604:
 #0: ffff888070320338 (mapping.invalidate_lock#4){.+.+}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:813 [inline]
 #0: ffff888070320338 (mapping.invalidate_lock#4){.+.+}-{3:3}, at: page_cache_ra_unbounded+0xed/0x7b0 mm/readahead.c:226
3 locks held by kworker/u4:16/4804:
 #0: ffff8880b8f3a9d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
 #1: ffff8880b8f27788 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x3a3/0x770 kernel/sched/psi.c:989
 #2: ffff8880b8f3a9d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
1 lock held by syz.3.543/5963:
 #0: ffffffff8d3305f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #0: ffffffff8d3305f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x930 kernel/rcu/tree_exp.h:962
1 lock held by syz.4.542/5968:
 #0: ffffffff8e4fa7e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8e4fa7e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3a/0x1b0 drivers/net/tun.c:3492

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 6.1.112-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1ae/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf88/0xfd0 kernel/hung_task.c:377
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 7 Comm: kworker/0:0 Not tainted 6.1.112-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: events drain_vmap_area_work
RIP: 0010:unwind_next_frame+0x113c/0x2220 arch/x86/kernel/unwind_orc.c:637
Code: 00 00 00 e8 d6 b1 21 00 65 8b 05 a7 a9 c5 7e 85 c0 0f 84 ca 09 00 00 48 b8 00 00 00 00 00 fc ff df 48 8b 4c 24 28 0f b6 04 01 <84> c0 48 8b 1c 24 0f 85 66 0a 00 00 c7 03 00 00 00 00 31 c0 48 81
RSP: 0018:ffffc900000c7520 EFLAGS: 00000282
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff92000018ec0
RDX: ffffffff8f103a7a RSI: ffffffff810058af RDI: 0000000000000001
RBP: ffffffff8f103b1c R08: 0000000000000021 R09: ffffc900000c76f0
R10: 0000000000000000 R11: dffffc0000000001 R12: 000000000000001b
R13: ffffffff8f103b20 R14: ffffffff8ea5d460 R15: 1ffffffff1e20764
FS:  0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000d08e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 arch_stack_walk+0x10d/0x140 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x113/0x1c0 kernel/stacktrace.c:122
 save_stack+0xf6/0x1e0 mm/page_owner.c:127
 __reset_page_owner+0x52/0x1a0 mm/page_owner.c:148
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1444 [inline]
 free_pcp_prepare mm/page_alloc.c:1494 [inline]
 free_unref_page_prepare+0xf63/0x1120 mm/page_alloc.c:3369
 free_unref_page+0x33/0x3e0 mm/page_alloc.c:3464
 kasan_depopulate_vmalloc_pte+0x66/0x80 mm/kasan/shadow.c:375
 apply_to_pte_range mm/memory.c:2662 [inline]
 apply_to_pmd_range mm/memory.c:2706 [inline]
 apply_to_pud_range mm/memory.c:2742 [inline]
 apply_to_p4d_range mm/memory.c:2778 [inline]
 __apply_to_page_range+0x9c5/0xcc0 mm/memory.c:2812
 kasan_release_vmalloc+0x96/0xb0 mm/kasan/shadow.c:492
 __purge_vmap_area_lazy+0x157c/0x1720 mm/vmalloc.c:1774
 drain_vmap_area_work+0x3c/0xd0 mm/vmalloc.c:1803
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295