=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
5.0.0-rc2-next-20190121 #16 Not tainted
-----------------------------------------------------
syz-executor3/27413 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
0000000082972572 (&ctx->fault_pending_wqh){+.+.}, at: spin_lock include/linux/spinlock.h:329 [inline]
0000000082972572 (&ctx->fault_pending_wqh){+.+.}, at: userfaultfd_ctx_read+0x690/0x2060 fs/userfaultfd.c:1040

and this task is already holding:
00000000faa732b0 (&ctx->fd_wqh){....}, at: spin_lock_irq include/linux/spinlock.h:354 [inline]
00000000faa732b0 (&ctx->fd_wqh){....}, at: userfaultfd_ctx_read+0x25e/0x2060 fs/userfaultfd.c:1036
which would create a new lock dependency:
 (&ctx->fd_wqh){....} -> (&ctx->fault_pending_wqh){+.+.}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&(&ctx->ctx_lock)->rlock){..-.}

... which became SOFTIRQ-irq-safe at:
  lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
  __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
  _raw_spin_lock_irq+0x60/0x80 kernel/locking/spinlock.c:160
  spin_lock_irq include/linux/spinlock.h:354 [inline]
  free_ioctx_users+0xa7/0x6e0 fs/aio.c:610
  percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
  percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
  percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
  percpu_ref_switch_to_atomic_rcu+0x50c/0x6b0 lib/percpu-refcount.c:158
  __rcu_reclaim kernel/rcu/rcu.h:240 [inline]
  rcu_do_batch kernel/rcu/tree.c:2486 [inline]
  invoke_rcu_callbacks kernel/rcu/tree.c:2799 [inline]
  rcu_core+0xc4a/0x1680 kernel/rcu/tree.c:2780
  __do_softirq+0x30b/0xb11 kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0x180/0x1d0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:536 [inline]
  smp_apic_timer_interrupt+0x1b7/0x760 arch/x86/kernel/apic/apic.c:1062
  apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:807
  arch_local_irq_enable arch/x86/include/asm/paravirt.h:776 [inline]
  __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:168 [inline]
  _raw_spin_unlock_irq+0x54/0x90 kernel/locking/spinlock.c:192
  finish_lock_switch kernel/sched/core.c:2584 [inline]
  finish_task_switch+0x1e9/0xac0 kernel/sched/core.c:2684
  context_switch kernel/sched/core.c:2837 [inline]
  __schedule+0x89f/0x1e60 kernel/sched/core.c:3475
  preempt_schedule_common+0x4f/0xe0 kernel/sched/core.c:3599
  preempt_schedule+0x4b/0x60 kernel/sched/core.c:3625
  ___preempt_schedule+0x16/0x18
  __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
  _raw_spin_unlock_irqrestore+0xbd/0xe0 kernel/locking/spinlock.c:184
  try_to_wake_up+0xf9/0x1480 kernel/sched/core.c:2061
  wake_up_process kernel/sched/core.c:2129 [inline]
  wake_up_q+0x99/0x100 kernel/sched/core.c:440
  futex_wake+0x638/0x7b0 kernel/futex.c:1621
  do_futex+0x371/0x2910 kernel/futex.c:3598
  __do_sys_futex kernel/futex.c:3654 [inline]
  __se_sys_futex kernel/futex.c:3622 [inline]
  __x64_sys_futex+0x459/0x670 kernel/futex.c:3622
  do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

to a SOFTIRQ-irq-unsafe lock:
 (&ctx->fault_pending_wqh){+.+.}

... which became SOFTIRQ-irq-unsafe at:
...
  lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
  spin_lock include/linux/spinlock.h:329 [inline]
  handle_userfault+0x901/0x2510 fs/userfaultfd.c:461
  do_anonymous_page mm/memory.c:2939 [inline]
  handle_pte_fault mm/memory.c:3803 [inline]
  __handle_mm_fault+0x45c5/0x5610 mm/memory.c:3929
  handle_mm_fault+0x4ec/0xc80 mm/memory.c:3966
  do_user_addr_fault arch/x86/mm/fault.c:1475 [inline]
  __do_page_fault+0x5ef/0xda0 arch/x86/mm/fault.c:1541
  do_page_fault+0xe6/0x7d8 arch/x86/mm/fault.c:1572
  page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1143
  __get_user_4+0x21/0x30 arch/x86/lib/getuser.S:76
  sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3016
  __sys_setsockopt+0x1b0/0x3a0 net/socket.c:1902
  __do_sys_setsockopt net/socket.c:1913 [inline]
  __se_sys_setsockopt net/socket.c:1910 [inline]
  __x64_sys_setsockopt+0xbe/0x150 net/socket.c:1910
  do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  &(&ctx->ctx_lock)->rlock --> &ctx->fd_wqh --> &ctx->fault_pending_wqh

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ctx->fault_pending_wqh);
                               local_irq_disable();
                               lock(&(&ctx->ctx_lock)->rlock);
                               lock(&ctx->fd_wqh);
  <Interrupt>
    lock(&(&ctx->ctx_lock)->rlock);

 *** DEADLOCK ***

1 lock held by syz-executor3/27413:
 #0: 00000000faa732b0 (&ctx->fd_wqh){....}, at: spin_lock_irq include/linux/spinlock.h:354 [inline]
 #0: 00000000faa732b0 (&ctx->fd_wqh){....}, at: userfaultfd_ctx_read+0x25e/0x2060 fs/userfaultfd.c:1036

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&(&ctx->ctx_lock)->rlock){..-.} {
   IN-SOFTIRQ-W at:
     lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
     _raw_spin_lock_irq+0x60/0x80 kernel/locking/spinlock.c:160
     spin_lock_irq include/linux/spinlock.h:354 [inline]
     free_ioctx_users+0xa7/0x6e0 fs/aio.c:610
     percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
     percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
     percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
     percpu_ref_switch_to_atomic_rcu+0x50c/0x6b0 lib/percpu-refcount.c:158
     __rcu_reclaim kernel/rcu/rcu.h:240 [inline]
     rcu_do_batch kernel/rcu/tree.c:2486 [inline]
     invoke_rcu_callbacks kernel/rcu/tree.c:2799 [inline]
     rcu_core+0xc4a/0x1680 kernel/rcu/tree.c:2780
     __do_softirq+0x30b/0xb11 kernel/softirq.c:292
     invoke_softirq kernel/softirq.c:373 [inline]
     irq_exit+0x180/0x1d0 kernel/softirq.c:413
     exiting_irq arch/x86/include/asm/apic.h:536 [inline]
     smp_apic_timer_interrupt+0x1b7/0x760 arch/x86/kernel/apic/apic.c:1062
     apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:807
     arch_local_irq_enable arch/x86/include/asm/paravirt.h:776 [inline]
     __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:168 [inline]
     _raw_spin_unlock_irq+0x54/0x90 kernel/locking/spinlock.c:192
     finish_lock_switch kernel/sched/core.c:2584 [inline]
     finish_task_switch+0x1e9/0xac0 kernel/sched/core.c:2684
     context_switch kernel/sched/core.c:2837 [inline]
     __schedule+0x89f/0x1e60 kernel/sched/core.c:3475
     preempt_schedule_common+0x4f/0xe0 kernel/sched/core.c:3599
     preempt_schedule+0x4b/0x60 kernel/sched/core.c:3625
     ___preempt_schedule+0x16/0x18
     __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
     _raw_spin_unlock_irqrestore+0xbd/0xe0 kernel/locking/spinlock.c:184
     try_to_wake_up+0xf9/0x1480 kernel/sched/core.c:2061
     wake_up_process kernel/sched/core.c:2129 [inline]
     wake_up_q+0x99/0x100 kernel/sched/core.c:440
     futex_wake+0x638/0x7b0 kernel/futex.c:1621
     do_futex+0x371/0x2910 kernel/futex.c:3598
     __do_sys_futex kernel/futex.c:3654 [inline]
     __se_sys_futex kernel/futex.c:3622 [inline]
     __x64_sys_futex+0x459/0x670 kernel/futex.c:3622
     do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
   INITIAL USE at:
     lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
     _raw_spin_lock_irq+0x60/0x80 kernel/locking/spinlock.c:160
     spin_lock_irq include/linux/spinlock.h:354 [inline]
     free_ioctx_users+0xa7/0x6e0 fs/aio.c:610
     percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
     percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
     percpu_ref_call_confirm_rcu lib/percpu-refcount.c:123 [inline]
     percpu_ref_switch_to_atomic_rcu+0x50c/0x6b0 lib/percpu-refcount.c:158
     __rcu_reclaim kernel/rcu/rcu.h:240 [inline]
     rcu_do_batch kernel/rcu/tree.c:2486 [inline]
     invoke_rcu_callbacks kernel/rcu/tree.c:2799 [inline]
     rcu_core+0xc4a/0x1680 kernel/rcu/tree.c:2780
     __do_softirq+0x30b/0xb11 kernel/softirq.c:292
     invoke_softirq kernel/softirq.c:373 [inline]
     irq_exit+0x180/0x1d0 kernel/softirq.c:413
     exiting_irq arch/x86/include/asm/apic.h:536 [inline]
     smp_apic_timer_interrupt+0x1b7/0x760 arch/x86/kernel/apic/apic.c:1062
     apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:807
     arch_local_irq_enable arch/x86/include/asm/paravirt.h:776 [inline]
     __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:168 [inline]
     _raw_spin_unlock_irq+0x54/0x90 kernel/locking/spinlock.c:192
     finish_lock_switch kernel/sched/core.c:2584 [inline]
     finish_task_switch+0x1e9/0xac0 kernel/sched/core.c:2684
     context_switch kernel/sched/core.c:2837 [inline]
     __schedule+0x89f/0x1e60 kernel/sched/core.c:3475
     preempt_schedule_common+0x4f/0xe0 kernel/sched/core.c:3599
     preempt_schedule+0x4b/0x60 kernel/sched/core.c:3625
     ___preempt_schedule+0x16/0x18
     __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
     _raw_spin_unlock_irqrestore+0xbd/0xe0 kernel/locking/spinlock.c:184
     try_to_wake_up+0xf9/0x1480 kernel/sched/core.c:2061
     wake_up_process kernel/sched/core.c:2129 [inline]
     wake_up_q+0x99/0x100 kernel/sched/core.c:440
     futex_wake+0x638/0x7b0 kernel/futex.c:1621
     do_futex+0x371/0x2910 kernel/futex.c:3598
     __do_sys_futex kernel/futex.c:3654 [inline]
     __se_sys_futex kernel/futex.c:3622 [inline]
     __x64_sys_futex+0x459/0x670 kernel/futex.c:3622
     do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
 }
 ... key at: [] __key.52266+0x0/0x40
 ... acquired at:
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
   spin_lock include/linux/spinlock.h:329 [inline]
   aio_poll+0x7b9/0x14e0 fs/aio.c:1772
   __io_submit_one fs/aio.c:1875 [inline]
   io_submit_one+0xc39/0x1050 fs/aio.c:1908
   __do_sys_io_submit fs/aio.c:1953 [inline]
   __se_sys_io_submit fs/aio.c:1923 [inline]
   __x64_sys_io_submit+0x1c4/0x5d0 fs/aio.c:1923
   do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> (&ctx->fd_wqh){....} {
   INITIAL USE at:
     lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
     __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
     _raw_spin_lock_irqsave+0x95/0xcd kernel/locking/spinlock.c:152
     __wake_up_common_lock+0x19b/0x390 kernel/sched/wait.c:120
     __wake_up+0xe/0x10 kernel/sched/wait.c:145
     handle_userfault+0xdfb/0x2510 fs/userfaultfd.c:487
     do_anonymous_page mm/memory.c:2939 [inline]
     handle_pte_fault mm/memory.c:3803 [inline]
     __handle_mm_fault+0x45c5/0x5610 mm/memory.c:3929
     handle_mm_fault+0x4ec/0xc80 mm/memory.c:3966
     do_user_addr_fault arch/x86/mm/fault.c:1475 [inline]
     __do_page_fault+0x5ef/0xda0 arch/x86/mm/fault.c:1541
     do_page_fault+0xe6/0x7d8 arch/x86/mm/fault.c:1572
     page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1143
     __get_user_4+0x21/0x30 arch/x86/lib/getuser.S:76
     sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3016
     __sys_setsockopt+0x1b0/0x3a0 net/socket.c:1902
     __do_sys_setsockopt net/socket.c:1913 [inline]
     __se_sys_setsockopt net/socket.c:1910 [inline]
     __x64_sys_setsockopt+0xbe/0x150 net/socket.c:1910
     do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
 }
 ... key at: [] __key.45193+0x0/0x40
 ... acquired at:
   lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
   spin_lock include/linux/spinlock.h:329 [inline]
   userfaultfd_ctx_read+0x690/0x2060 fs/userfaultfd.c:1040
   userfaultfd_read+0x1e0/0x2c0 fs/userfaultfd.c:1198
   __vfs_read+0x116/0xb20 fs/read_write.c:416
   vfs_read+0x194/0x3e0 fs/read_write.c:452
   ksys_read+0x105/0x260 fs/read_write.c:578
   __do_sys_read fs/read_write.c:588 [inline]
   __se_sys_read fs/read_write.c:586 [inline]
   __x64_sys_read+0x73/0xb0 fs/read_write.c:586
   do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock:
-> (&ctx->fault_pending_wqh){+.+.} {
   HARDIRQ-ON-W at:
     lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
     spin_lock include/linux/spinlock.h:329 [inline]
     handle_userfault+0x901/0x2510 fs/userfaultfd.c:461
     do_anonymous_page mm/memory.c:2939 [inline]
     handle_pte_fault mm/memory.c:3803 [inline]
     __handle_mm_fault+0x45c5/0x5610 mm/memory.c:3929
     handle_mm_fault+0x4ec/0xc80 mm/memory.c:3966
     do_user_addr_fault arch/x86/mm/fault.c:1475 [inline]
     __do_page_fault+0x5ef/0xda0 arch/x86/mm/fault.c:1541
     do_page_fault+0xe6/0x7d8 arch/x86/mm/fault.c:1572
     page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1143
     __get_user_4+0x21/0x30 arch/x86/lib/getuser.S:76
     sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3016
     __sys_setsockopt+0x1b0/0x3a0 net/socket.c:1902
     __do_sys_setsockopt net/socket.c:1913 [inline]
     __se_sys_setsockopt net/socket.c:1910 [inline]
     __x64_sys_setsockopt+0xbe/0x150 net/socket.c:1910
     do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
   SOFTIRQ-ON-W at:
     lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
     spin_lock include/linux/spinlock.h:329 [inline]
     handle_userfault+0x901/0x2510 fs/userfaultfd.c:461
     do_anonymous_page mm/memory.c:2939 [inline]
     handle_pte_fault mm/memory.c:3803 [inline]
     __handle_mm_fault+0x45c5/0x5610 mm/memory.c:3929
     handle_mm_fault+0x4ec/0xc80 mm/memory.c:3966
     do_user_addr_fault arch/x86/mm/fault.c:1475 [inline]
     __do_page_fault+0x5ef/0xda0 arch/x86/mm/fault.c:1541
     do_page_fault+0xe6/0x7d8 arch/x86/mm/fault.c:1572
     page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1143
     __get_user_4+0x21/0x30 arch/x86/lib/getuser.S:76
     sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3016
     __sys_setsockopt+0x1b0/0x3a0 net/socket.c:1902
     __do_sys_setsockopt net/socket.c:1913 [inline]
     __se_sys_setsockopt net/socket.c:1910 [inline]
     __x64_sys_setsockopt+0xbe/0x150 net/socket.c:1910
     do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
   INITIAL USE at:
     lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
     spin_lock include/linux/spinlock.h:329 [inline]
     handle_userfault+0x901/0x2510 fs/userfaultfd.c:461
     do_anonymous_page mm/memory.c:2939 [inline]
     handle_pte_fault mm/memory.c:3803 [inline]
     __handle_mm_fault+0x45c5/0x5610 mm/memory.c:3929
     handle_mm_fault+0x4ec/0xc80 mm/memory.c:3966
     do_user_addr_fault arch/x86/mm/fault.c:1475 [inline]
     __do_page_fault+0x5ef/0xda0 arch/x86/mm/fault.c:1541
     do_page_fault+0xe6/0x7d8 arch/x86/mm/fault.c:1572
     page_fault+0x1e/0x30 arch/x86/entry/entry_64.S:1143
     __get_user_4+0x21/0x30 arch/x86/lib/getuser.S:76
     sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3016
     __sys_setsockopt+0x1b0/0x3a0 net/socket.c:1902
     __do_sys_setsockopt net/socket.c:1913 [inline]
     __se_sys_setsockopt net/socket.c:1910 [inline]
     __x64_sys_setsockopt+0xbe/0x150 net/socket.c:1910
     do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
 }
 ... key at: [] __key.45190+0x0/0x40
 ... acquired at:
   lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
   spin_lock include/linux/spinlock.h:329 [inline]
   userfaultfd_ctx_read+0x690/0x2060 fs/userfaultfd.c:1040
   userfaultfd_read+0x1e0/0x2c0 fs/userfaultfd.c:1198
   __vfs_read+0x116/0xb20 fs/read_write.c:416
   vfs_read+0x194/0x3e0 fs/read_write.c:452
   ksys_read+0x105/0x260 fs/read_write.c:578
   __do_sys_read fs/read_write.c:588 [inline]
   __se_sys_read fs/read_write.c:586 [inline]
   __x64_sys_read+0x73/0xb0 fs/read_write.c:586
   do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

stack backtrace:
CPU: 0 PID: 27413 Comm: syz-executor3 Not tainted 5.0.0-rc2-next-20190121 #16
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1db/0x2d0 lib/dump_stack.c:113
 print_bad_irq_dependency kernel/locking/lockdep.c:1574 [inline]
 check_usage.cold+0x5e2/0x917 kernel/locking/lockdep.c:1606
 check_irq_usage kernel/locking/lockdep.c:1662 [inline]
 check_prev_add_irq kernel/locking/lockdep_states.h:8 [inline]
 check_prev_add kernel/locking/lockdep.c:1872 [inline]
 check_prevs_add kernel/locking/lockdep.c:1980 [inline]
 validate_chain kernel/locking/lockdep.c:2351 [inline]
 __lock_acquire+0x2142/0x4a10 kernel/locking/lockdep.c:3339
 lock_acquire+0x1db/0x570 kernel/locking/lockdep.c:3860
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:144
 spin_lock include/linux/spinlock.h:329 [inline]
 userfaultfd_ctx_read+0x690/0x2060 fs/userfaultfd.c:1040
 userfaultfd_read+0x1e0/0x2c0 fs/userfaultfd.c:1198
 __vfs_read+0x116/0xb20 fs/read_write.c:416
 vfs_read+0x194/0x3e0 fs/read_write.c:452
 ksys_read+0x105/0x260 fs/read_write.c:578
 __do_sys_read fs/read_write.c:588 [inline]
 __se_sys_read fs/read_write.c:586 [inline]
 __x64_sys_read+0x73/0xb0 fs/read_write.c:586
 do_syscall_64+0x1a3/0x800 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x458099
Code: 6d b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 3b b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f227af46c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000458099
RDX: 0000000000000064 RSI: 0000000020910000 RDI: 0000000000000003
RBP: 000000000073bfa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f227af476d4
R13: 00000000004c3b5d R14: 00000000004d83b0 R15: 00000000ffffffff