=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.1.84-syzkaller #0 Not tainted
-----------------------------------------------------
kworker/0:8/4200 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88801bbc7820 (&htab->buckets[i].lock){+.-.}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932

and this task is already holding:
ffffffff8d12f818 (rcu_node_0){-.-.}-{2:2}, at: sync_rcu_exp_done_unlocked+0xe/0x140 kernel/rcu/tree_exp.h:168
which would create a new lock dependency:
 (rcu_node_0){-.-.}-{2:2} -> (&htab->buckets[i].lock){+.-.}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (rcu_node_0){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
  rcu_report_exp_cpu_mult+0x27/0x2e0 kernel/rcu/tree_exp.h:238
  __flush_smp_call_function_queue+0x60c/0xd00 kernel/smp.c:676
  __sysvec_call_function_single+0xbb/0x360 arch/x86/kernel/smp.c:267
  sysvec_call_function_single+0x89/0xb0 arch/x86/kernel/smp.c:262
  asm_sysvec_call_function_single+0x16/0x20 arch/x86/include/asm/idtentry.h:661
  clear_page_erms+0x7/0x10 arch/x86/lib/clear_page_64.S:49
  clear_page arch/x86/include/asm/page_64.h:57 [inline]
  clear_highpage include/linux/highmem.h:242 [inline]
  clear_highpage_kasan_tagged include/linux/highmem.h:252 [inline]
  kernel_init_pages mm/page_alloc.c:1377 [inline]
  post_alloc_hook+0x145/0x1b0 mm/page_alloc.c:2508
  prep_new_page mm/page_alloc.c:2520 [inline]
  get_page_from_freelist+0x31a1/0x3320 mm/page_alloc.c:4279
  __alloc_pages+0x28d/0x770 mm/page_alloc.c:5547
  __alloc_pages_node include/linux/gfp.h:237 [inline]
  alloc_pages_node include/linux/gfp.h:260 [inline]
  alloc_pages_exact_nid+0x115/0x1b9 mm/page_alloc.c:5847
  alloc_page_ext+0x1f/0x48 mm/page_ext.c:294
  init_section_page_ext+0x101/0x15e mm/page_ext.c:317
  page_ext_init+0x5b8/0x782 mm/page_ext.c:511
  kernel_init_freeable+0x450/0x60f init/main.c:1623
  kernel_init+0x19/0x290 init/main.c:1513
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+.-.}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
  _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
  sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
  process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
  worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
  kthread+0x28d/0x320 kernel/kthread.c:376
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(rcu_node_0);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(rcu_node_0);

 *** DEADLOCK ***

4 locks held by kworker/0:8/4200:
 #0: ffff888012472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc90005c37d20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #2: ffffffff8d12f818 (rcu_node_0){-.-.}-{2:2}, at: sync_rcu_exp_done_unlocked+0xe/0x140 kernel/rcu/tree_exp.h:168
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (rcu_node_0){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
     lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
     __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
     _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
     rcu_report_exp_cpu_mult+0x27/0x2e0 kernel/rcu/tree_exp.h:238
     __flush_smp_call_function_queue+0x60c/0xd00 kernel/smp.c:676
     __sysvec_call_function_single+0xbb/0x360 arch/x86/kernel/smp.c:267
     sysvec_call_function_single+0x89/0xb0 arch/x86/kernel/smp.c:262
     asm_sysvec_call_function_single+0x16/0x20 arch/x86/include/asm/idtentry.h:661
     clear_page_erms+0x7/0x10 arch/x86/lib/clear_page_64.S:49
     clear_page arch/x86/include/asm/page_64.h:57 [inline]
     clear_highpage include/linux/highmem.h:242 [inline]
     clear_highpage_kasan_tagged include/linux/highmem.h:252 [inline]
     kernel_init_pages mm/page_alloc.c:1377 [inline]
     post_alloc_hook+0x145/0x1b0 mm/page_alloc.c:2508
     prep_new_page mm/page_alloc.c:2520 [inline]
     get_page_from_freelist+0x31a1/0x3320 mm/page_alloc.c:4279
     __alloc_pages+0x28d/0x770 mm/page_alloc.c:5547
     __alloc_pages_node include/linux/gfp.h:237 [inline]
     alloc_pages_node include/linux/gfp.h:260 [inline]
     alloc_pages_exact_nid+0x115/0x1b9 mm/page_alloc.c:5847
     alloc_page_ext+0x1f/0x48 mm/page_ext.c:294
     init_section_page_ext+0x101/0x15e mm/page_ext.c:317
     page_ext_init+0x5b8/0x782 mm/page_ext.c:511
     kernel_init_freeable+0x450/0x60f init/main.c:1623
     kernel_init+0x19/0x290 init/main.c:1513
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   IN-SOFTIRQ-W at:
     lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
     __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
     rcu_accelerate_cbs_unlocked+0x8a/0x230 kernel/rcu/tree.c:1184
     rcu_core+0x5a0/0x17e0 kernel/rcu/tree.c:2547
     __do_softirq+0x2e9/0xa4c kernel/softirq.c:571
     invoke_softirq kernel/softirq.c:445 [inline]
     __irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
     irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
     sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1106
     asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
     lock_acquire+0x26f/0x5a0 kernel/locking/lockdep.c:5666
     __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
     spin_lock include/linux/spinlock.h:351 [inline]
     kasan_populate_vmalloc_pte+0x83/0xf0 mm/kasan/shadow.c:278
     apply_to_pte_range mm/memory.c:2645 [inline]
     apply_to_pmd_range mm/memory.c:2689 [inline]
     apply_to_pud_range mm/memory.c:2725 [inline]
     apply_to_p4d_range mm/memory.c:2761 [inline]
     __apply_to_page_range+0x9c5/0xcc0 mm/memory.c:2795
     alloc_vmap_area+0x1977/0x1ac0 mm/vmalloc.c:1646
     __get_vm_area_node+0x16c/0x360 mm/vmalloc.c:2505
     get_vm_area_caller mm/vmalloc.c:2558 [inline]
     vmap+0xf5/0x2d0 mm/vmalloc.c:2853
     map_irq_stack arch/x86/kernel/irq_64.c:48 [inline]
     irq_init_percpu_irqstack+0x333/0x490 arch/x86/kernel/irq_64.c:75
     common_cpu_up+0xe0/0x1e0 arch/x86/kernel/smpboot.c:1072
     native_cpu_up+0x2a8/0x15c0 arch/x86/kernel/smpboot.c:1243
     __cpu_up arch/x86/include/asm/smp.h:84 [inline]
     bringup_cpu+0x62/0x380 kernel/cpu.c:630
     cpuhp_invoke_callback+0x49f/0x820 kernel/cpu.c:192
     __cpuhp_invoke_callback_range kernel/cpu.c:700 [inline]
     cpuhp_invoke_callback_range kernel/cpu.c:724 [inline]
     cpuhp_up_callbacks kernel/cpu.c:755 [inline]
     _cpu_up+0x490/0x880 kernel/cpu.c:1458
     cpu_up+0x204/0x290 kernel/cpu.c:1494
     bringup_nonboot_cpus+0x12c/0x1d0 kernel/cpu.c:1560
     smp_init+0x30/0x149 kernel/smp.c:1123
     kernel_init_freeable+0x40c/0x60f init/main.c:1616
     kernel_init+0x19/0x290 init/main.c:1513
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   INITIAL USE at:
     lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
     __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
     _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
     rcutree_prepare_cpu+0x6d/0x520 kernel/rcu/tree.c:4173
     rcu_init+0xb4/0x200 kernel/rcu/tree.c:4857
     start_kernel+0x20d/0x53f init/main.c:1032
     secondary_startup_64_no_verify+0xcf/0xdb
 }
 ... key at: [] rcu_init_one.rcu_node_class+0x0/0x20

the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+.-.}-{2:2} {
   HARDIRQ-ON-W at:
     lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
     __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
     _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
     sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
     process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
     worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
     kthread+0x28d/0x320 kernel/kthread.c:376
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   IN-SOFTIRQ-W at:
     lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
     __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
     _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
     sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
     bpf_prog_e42f6260c1b72fb3+0x22/0x37
     bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
     __bpf_prog_run include/linux/filter.h:603 [inline]
     bpf_prog_run include/linux/filter.h:610 [inline]
     __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
     bpf_trace_run4+0x253/0x470 kernel/trace/bpf_trace.c:2314
     __bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:177
     trace_mm_page_alloc include/trace/events/kmem.h:177 [inline]
     __alloc_pages+0x717/0x770 mm/page_alloc.c:5569
     __alloc_pages_node include/linux/gfp.h:237 [inline]
     alloc_pages_node+0x127/0x1b0 include/linux/gfp.h:260
     page_frag_alloc_1k net/core/skbuff.c:163 [inline]
     __napi_alloc_skb+0x34b/0x520 net/core/skbuff.c:681
     napi_alloc_skb include/linux/skbuff.h:3231 [inline]
     page_to_skb+0x282/0xb60 drivers/net/virtio_net.c:501
     receive_mergeable drivers/net/virtio_net.c:1128 [inline]
     receive_buf+0x436/0x5520 drivers/net/virtio_net.c:1267
     virtnet_receive drivers/net/virtio_net.c:1562 [inline]
     virtnet_poll+0x6d3/0x1470 drivers/net/virtio_net.c:1680
     __napi_poll+0xc7/0x470 net/core/dev.c:6537
     napi_poll net/core/dev.c:6604 [inline]
     net_rx_action+0x70f/0xeb0 net/core/dev.c:6718
     __do_softirq+0x2e9/0xa4c kernel/softirq.c:571
     invoke_softirq kernel/softirq.c:445 [inline]
     __irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
     irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
     common_interrupt+0xa4/0xc0 arch/x86/kernel/irq.c:240
     asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:644
     __preempt_count_add kernel/rcu/tree.c:717 [inline]
     rcu_is_watching+0x4/0xb0 kernel/rcu/tree.c:720
     trace_lock_acquire include/trace/events/lock.h:24 [inline]
     lock_acquire+0xfa/0x5a0 kernel/locking/lockdep.c:5633
     rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
     rcu_read_lock include/linux/rcupdate.h:791 [inline]
     inet_twsk_purge+0x150/0xa40 net/ipv4/inet_timewait_sock.c:295
     ops_exit_list net/core/net_namespace.c:174 [inline]
     cleanup_net+0x763/0xb60 net/core/net_namespace.c:601
     process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
     worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
     kthread+0x28d/0x320 kernel/kthread.c:376
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   INITIAL USE at:
     lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
     __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
     _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
     sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
     process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
     worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
     kthread+0x28d/0x320 kernel/kthread.c:376
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
 }
 ... key at: [] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
   sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
   bpf_prog_05fc780d7a5f93f9+0x42/0x46
   bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
   __bpf_prog_run include/linux/filter.h:603 [inline]
   bpf_prog_run include/linux/filter.h:610 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
   bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
   trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
   __pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
   pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
   queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
   queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
   do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
   _raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
   sync_rcu_exp_done_unlocked+0xe/0x140 kernel/rcu/tree_exp.h:168
   synchronize_rcu_expedited_wait_once kernel/rcu/tree_exp.h:580 [inline]
   synchronize_rcu_expedited_wait kernel/rcu/tree_exp.h:631 [inline]
   rcu_exp_wait_wake kernel/rcu/tree_exp.h:699 [inline]
   rcu_exp_sel_wait_wake+0x787/0x1d50 kernel/rcu/tree_exp.h:733
   process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
   worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
   kthread+0x28d/0x320 kernel/kthread.c:376
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

stack backtrace:
CPU: 0 PID: 4200 Comm: kworker/0:8 Not tainted 6.1.84-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: rcu_gp wait_rcu_exp_gp
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2604 [inline]
 check_irq_usage kernel/locking/lockdep.c:2843 [inline]
 check_prev_add kernel/locking/lockdep.c:3094 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x4d16/0x5950 kernel/locking/lockdep.c:3825
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
 sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
 bpf_prog_05fc780d7a5f93f9+0x42/0x46
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:603 [inline]
 bpf_prog_run include/linux/filter.h:610 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
 trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
 __pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
 queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
 sync_rcu_exp_done_unlocked+0xe/0x140 kernel/rcu/tree_exp.h:168
 synchronize_rcu_expedited_wait_once kernel/rcu/tree_exp.h:580 [inline]
 synchronize_rcu_expedited_wait kernel/rcu/tree_exp.h:631 [inline]
 rcu_exp_wait_wake kernel/rcu/tree_exp.h:699 [inline]
 rcu_exp_sel_wait_wake+0x787/0x1d50 kernel/rcu/tree_exp.h:733
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
------------[ cut here ]------------
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 0 PID: 4200 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
Modules linked in:
CPU: 0 PID: 4200 Comm: kworker/0:8 Not tainted 6.1.84-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: rcu_gp wait_rcu_exp_gp
RIP: 0010:warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
Code: 24 48 c7 c7 00 bc ea 8a e8 6c f5 fd ff 80 3d 2f 5b d5 03 00 74 01 c3 c6 05 25 5b d5 03 01 48 c7 c7 60 e6 eb 8a e8 23 64 c8 f6 <0f> 0b c3 41 56 53 48 83 ec 10 65 48 8b 04 25 28 00 00 00 48 89 44
RSP: 0018:ffffc90005c37a38 EFLAGS: 00010246
RAX: a438ce12eab5b400 RBX: 1ffff92000b86f4c RCX: ffff8880543bbb80
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90005c37ac8 R08: ffffffff81527eae R09: fffff52000b86ea9
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000b86f48 R14: ffffc90005c37a60 R15: 0000000000000246
FS:  0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000058de2000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
 _raw_spin_unlock_irqrestore+0x118/0x130 kernel/locking/spinlock.c:194
 sync_rcu_exp_done_unlocked+0xdb/0x140 kernel/rcu/tree_exp.h:170
 synchronize_rcu_expedited_wait_once kernel/rcu/tree_exp.h:580 [inline]
 synchronize_rcu_expedited_wait kernel/rcu/tree_exp.h:631 [inline]
 rcu_exp_wait_wake kernel/rcu/tree_exp.h:699 [inline]
 rcu_exp_sel_wait_wake+0x787/0x1d50 kernel/rcu/tree_exp.h:733
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
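
Note (not part of the kernel log above): the "acquired at:" stack shows the trigger shape. A BPF program is attached to the lock:contention_end tracepoint; when that tracepoint fires inside _raw_spin_lock_irqsave() on rcu_node_0 (a HARDIRQ-safe lock, held with IRQs disabled by the expedited RCU grace-period worker), the program deletes an element from a BPF_MAP_TYPE_SOCKHASH, and sock_hash_delete_elem() takes the per-bucket lock with spin_lock_bh() (HARDIRQ-unsafe). A minimal, hypothetical sketch of a BPF program with that shape is below; the map name, section name, and program body are reconstructed from the call stacks for illustration, not the actual fuzzer-generated program.

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical reconstruction of the trigger pattern seen in the report:
 * a raw tracepoint program on lock:contention_end deleting from a sockhash.
 * Names below (sock_hash, on_contention_end) are made up for illustration.
 */
#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} sock_hash SEC(".maps");

SEC("raw_tp/contention_end")
int BPF_PROG(on_contention_end, void *lock, int ret)
{
	__u32 key = 0;

	/* Runs sock_hash_delete_elem() in tracepoint context; if the
	 * tracepoint fired under an IRQ-disabled (HARDIRQ-safe) lock such
	 * as rcu_node_0, this creates the inverted dependency lockdep
	 * flags above. */
	bpf_map_delete_elem(&sock_hash, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Any tracepoint that can fire while a HARDIRQ-safe lock is held would do; contention_end is simply the one visible in this particular trace.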