syzbot

INFO: task hung in __purge_vmap_area_lazy
Status: auto-obsoleted due to no activity on 2024/12/24 06:27
Subsystems: kernel
First crash: 214d, last: 117d

Sample crash report:
INFO: task kworker/0:4:5278 blocked for more than 144 seconds.
      Not tainted 6.11.0-syzkaller-02574-ga430d95c5efa #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:4     state:D stack:22720 pid:5278  tgid:5278  ppid:2      flags:0x00004000
Workqueue: events drain_vmap_area_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5188 [inline]
 __schedule+0xe37/0x5490 kernel/sched/core.c:6529
 __schedule_loop kernel/sched/core.c:6606 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6621
 schedule_timeout+0x258/0x2a0 kernel/time/timer.c:2557
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common+0x3de/0x5f0 kernel/sched/completion.c:116
 __flush_work+0x776/0xc30 kernel/workqueue.c:4216
 __purge_vmap_area_lazy+0x84d/0xc10 mm/vmalloc.c:2302
 drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2327
 process_one_work+0x9c5/0x1b40 kernel/workqueue.c:3231
 process_scheduled_works kernel/workqueue.c:3312 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3393
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Showing all locks held in the system:
2 locks held by kthreadd/2:
4 locks held by kworker/0:0/8:
2 locks held by kworker/0:1/9:
2 locks held by kworker/u8:0/11:
6 locks held by kworker/u8:1/12:
1 lock held by khungtaskd/30:
 #0: ffffffff8ddba6a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline]
 #0: ffffffff8ddba6a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #0: ffffffff8ddba6a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x75/0x340 kernel/locking/lockdep.c:6626
2 locks held by kworker/u8:2/35:
4 locks held by kworker/u8:4/62:
3 locks held by kworker/1:2/942:
4 locks held by kworker/u8:7/1114:
 #0: ffff8880611f3948 ((wq_completion)wg-kex-wg1#9){+.+.}-{0:0}, at: process_one_work+0x1277/0x1b40 kernel/workqueue.c:3206
 #1: ffffc900044e7d80 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1b40 kernel/workqueue.c:3207
 #2: ffff88804125d248 (&wg->static_identity.lock){++++}-{3:3}, at: wg_noise_handshake_create_initiation+0xed/0x650 drivers/net/wireguard/noise.c:529
 #3: ffff88807dc05798 (&handshake->lock){++++}-{3:3}, at: wg_noise_handshake_create_initiation+0x101/0x650 drivers/net/wireguard/noise.c:530
2 locks held by syslogd/4652:
3 locks held by udevd/4670:
5 locks held by kworker/1:3/4862:
2 locks held by getty/4974:
 #0: ffff8880309a40a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xfba/0x1480 drivers/tty/n_tty.c:2211
1 lock held by syz-executor/5229:
1 lock held by syz-executor/5240:
1 lock held by kworker/R-wg-cr/5261:
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2729 [inline]
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0x856/0xe20 kernel/workqueue.c:3528
1 lock held by kworker/R-wg-cr/5262:
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/5265:
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/5268:
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2729 [inline]
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0x856/0xe20 kernel/workqueue.c:3528
1 lock held by kworker/R-wg-cr/5270:
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2729 [inline]
 #0: ffffffff8dc74668 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0x856/0xe20 kernel/workqueue.c:3528
1 lock held by kworker/R-wg-cr/5273:
1 lock held by kworker/R-wg-cr/5274:
1 lock held by kworker/R-wg-cr/5276:
4 locks held by kworker/0:3/5277:
3 locks held by kworker/0:4/5278:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1277/0x1b40 kernel/workqueue.c:3206
 #1: ffffc90004147d80 (drain_vmap_work){+.+.}-{0:0}, at: process_one_work+0x921/0x1b40 kernel/workqueue.c:3207
 #2: ffffffff8df39d68 (vmap_purge_lock){+.+.}-{3:3}, at: drain_vmap_area_work+0x17/0x40 mm/vmalloc.c:2326
3 locks held by kworker/0:7/5300:
2 locks held by kworker/1:4/5306:
3 locks held by kworker/1:6/5342:
1 lock held by syz-executor/5733:
2 locks held by syz.0.4647/15260:
1 lock held by syz.1.4676/15325:
2 locks held by syz-executor/15368:
2 locks held by syz-executor/15393:
6 locks held by syz.4.4727/15464:
3 locks held by kworker/0:2/15533:
2 locks held by syz-executor/15536:
5 locks held by kworker/0:5/15543:
2 locks held by syz-executor/15546:
3 locks held by kworker/0:6/15552:
 #0: ffff88803468a548 ((wq_completion)wg-kex-wg1#10){+.+.}-{0:0}, at: process_one_work+0x1277/0x1b40 kernel/workqueue.c:3206
 #1: ffffc900042f7d80 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((typeof(*((worker))) *)((worker)))); (typeof((typeof(*((worker))) *)((worker)))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1b40 kernel/workqueue.c:3207
 #2: ffff88807dc05798 (&handshake->lock){++++}-{3:3}, at: wg_noise_handshake_begin_session+0x30/0xe80 drivers/net/wireguard/noise.c:822
3 locks held by kworker/1:0/15553:
4 locks held by syz-executor/15555:
 #0: ffff88807e160420 (sb_writers#5){.+.+}-{0:0}, at: filename_create+0x10d/0x530 fs/namei.c:4019
 #1: ffff888041b8d3b8 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:850 [inline]
 #1: ffff888041b8d3b8 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: filename_create+0x1c2/0x530 fs/namei.c:4026
 #2: ffff88807e164958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xf6c/0x1430 fs/jbd2/transaction.c:448
 #3: ffff888079bf14b8 (&ei->xattr_sem){++++}-{3:3}, at: ext4_write_lock_xattr fs/ext4/xattr.h:155 [inline]
 #3: ffff888079bf14b8 (&ei->xattr_sem){++++}-{3:3}, at: ext4_xattr_set_handle+0x159/0x1420 fs/ext4/xattr.c:2373
2 locks held by kworker/0:9/15557:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.11.0-syzkaller-02574-ga430d95c5efa #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:93 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:119
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xf0c/0x1240 kernel/hung_task.c:379
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5277 Comm: kworker/0:3 Not tainted 6.11.0-syzkaller-02574-ga430d95c5efa #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Workqueue: events_power_efficient neigh_periodic_work
RIP: 0010:preempt_count arch/x86/include/asm/preempt.h:26 [inline]
RIP: 0010:check_kcov_mode kernel/kcov.c:182 [inline]
RIP: 0010:write_comp_data+0x11/0x90 kernel/kcov.c:245
Code: cc cc cc cc 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 49 89 d2 49 89 f8 49 89 f1 65 48 8b 15 4f ff 77 7e <65> 8b 05 50 ff 77 7e a9 00 01 ff 00 74 1d f6 c4 01 74 67 a9 00 00
RSP: 0018:ffffc90000006848 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffffc900000068d0 RCX: ffffffff813d050d
RDX: ffff888022b3da00 RSI: 0000000000000001 RDI: 0000000000000001
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000002 R11: 0000000000000000 R12: ffffffff913d62ec
R13: ffffffff913d62f1 R14: 0000000000000002 R15: ffffc90000006905
FS:  0000000000000000(0000) GS:ffff8880b8800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd5166ad038 CR3: 000000004d3da000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 unwind_next_frame+0x60d/0x23a0 arch/x86/kernel/unwind_orc.c:508
 arch_stack_walk+0x100/0x170 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x95/0xd0 kernel/stacktrace.c:122
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 kasan_save_free_info+0x3b/0x60 mm/kasan/generic.c:579
 poison_slab_object+0xf7/0x160 mm/kasan/common.c:240
 __kasan_slab_free+0x32/0x50 mm/kasan/common.c:256
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2250 [inline]
 slab_free mm/slub.c:4474 [inline]
 kmem_cache_free+0x12f/0x3a0 mm/slub.c:4549
 __skb_ext_put+0x102/0x2c0 net/core/skbuff.c:7101
 __skb_ext_del+0xf3/0x340 net/core/skbuff.c:7068
 skb_ext_del include/linux/skbuff.h:4843 [inline]
 nf_bridge_info_free net/bridge/br_netfilter_hooks.c:155 [inline]
 br_nf_dev_queue_xmit+0x6f2/0x2900 net/bridge/br_netfilter_hooks.c:942
 NF_HOOK include/linux/netfilter.h:314 [inline]
 NF_HOOK include/linux/netfilter.h:308 [inline]
 br_nf_post_routing+0x8ee/0x11b0 net/bridge/br_netfilter_hooks.c:989
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:626
 nf_hook+0x474/0x7d0 include/linux/netfilter.h:269
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_forward_finish+0xcd/0x130 net/bridge/br_forward.c:66
 br_nf_hook_thresh+0x303/0x410 net/bridge/br_netfilter_hooks.c:1190
 br_nf_forward_finish+0x66a/0xba0 net/bridge/br_netfilter_hooks.c:689
 NF_HOOK include/linux/netfilter.h:314 [inline]
 NF_HOOK include/linux/netfilter.h:308 [inline]
 br_nf_forward_ip.part.0+0x610/0x820 net/bridge/br_netfilter_hooks.c:743
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:703 [inline]
 br_nf_forward+0xf11/0x1bd0 net/bridge/br_netfilter_hooks.c:800
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:626
 nf_hook+0x474/0x7d0 include/linux/netfilter.h:269
 NF_HOOK include/linux/netfilter.h:312 [inline]
 __br_forward+0x1be/0x5b0 net/bridge/br_forward.c:115
 deliver_clone+0x5b/0xa0 net/bridge/br_forward.c:131
 br_flood+0x493/0x5c0 net/bridge/br_forward.c:245
 br_handle_frame_finish+0xda5/0x1c80 net/bridge/br_input.c:215
 br_nf_hook_thresh+0x303/0x410 net/bridge/br_netfilter_hooks.c:1190
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x3ce/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:532
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x9eb/0x1490 net/bridge/br_input.c:424
 __netif_receive_skb_core.constprop.0+0xa3d/0x4330 net/core/dev.c:5556
 __netif_receive_skb_one_core+0xb1/0x1e0 net/core/dev.c:5660
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:5775
 process_backlog+0x443/0x15f0 net/core/dev.c:6107
 __napi_poll.constprop.0+0xb7/0x550 net/core/dev.c:6771
 napi_poll net/core/dev.c:6840 [inline]
 net_rx_action+0xa92/0x1010 net/core/dev.c:6962
 handle_softirqs+0x216/0x8f0 kernel/softirq.c:554
 do_softirq kernel/softirq.c:455 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:442
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:382
 neigh_periodic_work+0x6be/0xc40 net/core/neighbour.c:1019
 process_one_work+0x9c5/0x1b40 kernel/workqueue.c:3231
 process_scheduled_works kernel/workqueue.c:3312 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3393
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
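The report above is generated by the hung-task watchdog (khungtaskd), and its own log line names the knob that controls it. On a live system the detector's thresholds can be inspected and tuned via sysctl; a minimal sketch, assuming a Linux host built with CONFIG_DETECT_HUNG_TASK and root privileges:

```shell
# Inspect the hung-task watchdog settings referenced in the report.
# These files exist only when the kernel is built with CONFIG_DETECT_HUNG_TASK.
cat /proc/sys/kernel/hung_task_timeout_secs   # seconds in D state before a report (0 = disabled)
cat /proc/sys/kernel/hung_task_warnings       # reports left before the watchdog goes quiet (-1 = unlimited)

# Disable the message entirely, as the report itself suggests (requires root):
echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# Or make a detected hang fatal, to capture a crash dump instead of a warning:
echo 1 > /proc/sys/kernel/hung_task_panic
```

Note that the 144-second figure in this log is the effective threshold on the syzkaller instance; the upstream default for hung_task_timeout_secs is 120.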

Crashes (8):
Time              Kernel      Commit        Syzkaller  Manager                                 Title
2024/09/25 06:24  upstream    a430d95c5efa  349a68c4   ci-upstream-kasan-gce-selinux-root      INFO: task hung in __purge_vmap_area_lazy
2024/09/23 20:05  upstream    a430d95c5efa  89298aad   ci-upstream-kasan-gce-selinux-root      INFO: task hung in __purge_vmap_area_lazy
2024/09/23 13:11  upstream    a430d95c5efa  89298aad   ci-upstream-kasan-gce-selinux-root      INFO: task hung in __purge_vmap_area_lazy
2024/08/05 20:42  upstream    de9c2c66ad8e  e35c337f   ci-upstream-kasan-gce-root              INFO: task hung in __purge_vmap_area_lazy
2024/07/28 09:25  upstream    6342649c33d2  46eb10b7   ci-upstream-kasan-gce-selinux-root      INFO: task hung in __purge_vmap_area_lazy
2024/07/04 15:12  upstream    795c58e4c7fc  dc6bbff0   ci-upstream-kasan-gce-root              INFO: task hung in __purge_vmap_area_lazy
2024/06/20 22:16  upstream    50736169ecc8  dac2aa43   ci-upstream-kasan-gce                   INFO: task hung in __purge_vmap_area_lazy
2024/07/08 17:18  linux-next  0b58e108042b  cde64f7d   ci-upstream-linux-next-kasan-gce-root   INFO: task hung in __purge_vmap_area_lazy