=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
5.2.0-rc7 #65 Not tainted
-----------------------------------------------------
syz-executor.0/16198 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
00000000f348d964 (&fiq->waitq){+.+.}, at: spin_lock include/linux/spinlock.h:338 [inline]
00000000f348d964 (&fiq->waitq){+.+.}, at: aio_poll fs/aio.c:1752 [inline]
00000000f348d964 (&fiq->waitq){+.+.}, at: __io_submit_one fs/aio.c:1826 [inline]
00000000f348d964 (&fiq->waitq){+.+.}, at: io_submit_one+0xefa/0x2ef0 fs/aio.c:1863

and this task is already holding:
00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq include/linux/spinlock.h:363 [inline]
00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll fs/aio.c:1750 [inline]
00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: __io_submit_one fs/aio.c:1826 [inline]
00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: io_submit_one+0xeb5/0x2ef0 fs/aio.c:1863
which would create a new lock dependency:
 (&(&ctx->ctx_lock)->rlock){..-.} -> (&fiq->waitq){+.+.}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&(&ctx->ctx_lock)->rlock){..-.}

... which became SOFTIRQ-irq-safe at:
  lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
  __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
  _raw_spin_lock_irq+0x60/0x80 kernel/locking/spinlock.c:167
  spin_lock_irq include/linux/spinlock.h:363 [inline]
  free_ioctx_users+0x2d/0x490 fs/aio.c:620
  percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
  percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
  percpu_ref_call_confirm_rcu lib/percpu-refcount.c:124 [inline]
  percpu_ref_switch_to_atomic_rcu+0x407/0x540 lib/percpu-refcount.c:159
  __rcu_reclaim kernel/rcu/rcu.h:222 [inline]
  rcu_do_batch kernel/rcu/tree.c:2092 [inline]
  invoke_rcu_callbacks kernel/rcu/tree.c:2310 [inline]
  rcu_core+0xba5/0x1500 kernel/rcu/tree.c:2291
  __do_softirq+0x25c/0x94c kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0x180/0x1d0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:536 [inline]
  smp_apic_timer_interrupt+0x13b/0x550 arch/x86/kernel/apic/apic.c:1068
  apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:806
  do_wait+0x15f/0x9d0 kernel/exit.c:1510
  kernel_wait4+0x171/0x290 kernel/exit.c:1669
  __do_sys_wait4+0x147/0x160 kernel/exit.c:1681
  __se_sys_wait4 kernel/exit.c:1677 [inline]
  __x64_sys_wait4+0x97/0xf0 kernel/exit.c:1677
  do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

to a SOFTIRQ-irq-unsafe lock:
 (&fiq->waitq){+.+.}

... which became SOFTIRQ-irq-unsafe at:
...
  lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
  spin_lock include/linux/spinlock.h:338 [inline]
  flush_bg_queue+0x1f3/0x3c0 fs/fuse/dev.c:415
  fuse_request_queue_background+0x2d1/0x580 fs/fuse/dev.c:676
  fuse_request_send_background+0x58/0x110 fs/fuse/dev.c:687
  fuse_send_init fs/fuse/inode.c:986 [inline]
  fuse_fill_super+0x13b4/0x1720 fs/fuse/inode.c:1211
  mount_nodev+0x66/0x110 fs/super.c:1392
  fuse_mount+0x2d/0x40 fs/fuse/inode.c:1236
  legacy_get_tree+0x108/0x220 fs/fs_context.c:661
  vfs_get_tree+0x8e/0x390 fs/super.c:1476
  do_new_mount fs/namespace.c:2791 [inline]
  do_mount+0x138c/0x1c00 fs/namespace.c:3111
  ksys_mount+0xdb/0x150 fs/namespace.c:3320
  __do_sys_mount fs/namespace.c:3334 [inline]
  __se_sys_mount fs/namespace.c:3331 [inline]
  __x64_sys_mount+0xbe/0x150 fs/namespace.c:3331
  do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
  entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&fiq->waitq);
                               local_irq_disable();
                               lock(&(&ctx->ctx_lock)->rlock);
                               lock(&fiq->waitq);
  <Interrupt>
    lock(&(&ctx->ctx_lock)->rlock);

 *** DEADLOCK ***

1 lock held by syz-executor.0/16198:
 #0: 00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: spin_lock_irq include/linux/spinlock.h:363 [inline]
 #0: 00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: aio_poll fs/aio.c:1750 [inline]
 #0: 00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: __io_submit_one fs/aio.c:1826 [inline]
 #0: 00000000ffd906b2 (&(&ctx->ctx_lock)->rlock){..-.}, at: io_submit_one+0xeb5/0x2ef0 fs/aio.c:1863

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&(&ctx->ctx_lock)->rlock){..-.} {
   IN-SOFTIRQ-W at:
     lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
     _raw_spin_lock_irq+0x60/0x80 kernel/locking/spinlock.c:167
     spin_lock_irq include/linux/spinlock.h:363 [inline]
     free_ioctx_users+0x2d/0x490 fs/aio.c:620
     percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
     percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
     percpu_ref_call_confirm_rcu lib/percpu-refcount.c:124 [inline]
     percpu_ref_switch_to_atomic_rcu+0x407/0x540 lib/percpu-refcount.c:159
     __rcu_reclaim kernel/rcu/rcu.h:222 [inline]
     rcu_do_batch kernel/rcu/tree.c:2092 [inline]
     invoke_rcu_callbacks kernel/rcu/tree.c:2310 [inline]
     rcu_core+0xba5/0x1500 kernel/rcu/tree.c:2291
     __do_softirq+0x25c/0x94c kernel/softirq.c:292
     invoke_softirq kernel/softirq.c:373 [inline]
     irq_exit+0x180/0x1d0 kernel/softirq.c:413
     exiting_irq arch/x86/include/asm/apic.h:536 [inline]
     smp_apic_timer_interrupt+0x13b/0x550 arch/x86/kernel/apic/apic.c:1068
     apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:806
     do_wait+0x15f/0x9d0 kernel/exit.c:1510
     kernel_wait4+0x171/0x290 kernel/exit.c:1669
     __do_sys_wait4+0x147/0x160 kernel/exit.c:1681
     __se_sys_wait4 kernel/exit.c:1677 [inline]
     __x64_sys_wait4+0x97/0xf0 kernel/exit.c:1677
     do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
   INITIAL USE at:
     lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
     __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
     _raw_spin_lock_irq+0x60/0x80 kernel/locking/spinlock.c:167
     spin_lock_irq include/linux/spinlock.h:363 [inline]
     free_ioctx_users+0x2d/0x490 fs/aio.c:620
     percpu_ref_put_many include/linux/percpu-refcount.h:285 [inline]
     percpu_ref_put include/linux/percpu-refcount.h:301 [inline]
     percpu_ref_call_confirm_rcu lib/percpu-refcount.c:124 [inline]
     percpu_ref_switch_to_atomic_rcu+0x407/0x540 lib/percpu-refcount.c:159
     __rcu_reclaim kernel/rcu/rcu.h:222 [inline]
     rcu_do_batch kernel/rcu/tree.c:2092 [inline]
     invoke_rcu_callbacks kernel/rcu/tree.c:2310 [inline]
     rcu_core+0xba5/0x1500 kernel/rcu/tree.c:2291
     __do_softirq+0x25c/0x94c kernel/softirq.c:292
     invoke_softirq kernel/softirq.c:373 [inline]
     irq_exit+0x180/0x1d0 kernel/softirq.c:413
     exiting_irq arch/x86/include/asm/apic.h:536 [inline]
     smp_apic_timer_interrupt+0x13b/0x550 arch/x86/kernel/apic/apic.c:1068
     apic_timer_interrupt+0xf/0x20 arch/x86/entry/entry_64.S:806
     do_wait+0x15f/0x9d0 kernel/exit.c:1510
     kernel_wait4+0x171/0x290 kernel/exit.c:1669
     __do_sys_wait4+0x147/0x160 kernel/exit.c:1681
     __se_sys_wait4 kernel/exit.c:1677 [inline]
     __x64_sys_wait4+0x97/0xf0 kernel/exit.c:1677
     do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
 }
 ... key at: [] __key.53436+0x0/0x40
 ... acquired at:
   lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:338 [inline]
   aio_poll fs/aio.c:1752 [inline]
   __io_submit_one fs/aio.c:1826 [inline]
   io_submit_one+0xefa/0x2ef0 fs/aio.c:1863
   __do_sys_io_submit fs/aio.c:1922 [inline]
   __se_sys_io_submit fs/aio.c:1892 [inline]
   __x64_sys_io_submit+0x1bd/0x570 fs/aio.c:1892
   do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock:
-> (&fiq->waitq){+.+.} {
   HARDIRQ-ON-W at:
     lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
     spin_lock include/linux/spinlock.h:338 [inline]
     flush_bg_queue+0x1f3/0x3c0 fs/fuse/dev.c:415
     fuse_request_queue_background+0x2d1/0x580 fs/fuse/dev.c:676
     fuse_request_send_background+0x58/0x110 fs/fuse/dev.c:687
     fuse_send_init fs/fuse/inode.c:986 [inline]
     fuse_fill_super+0x13b4/0x1720 fs/fuse/inode.c:1211
     mount_nodev+0x66/0x110 fs/super.c:1392
     fuse_mount+0x2d/0x40 fs/fuse/inode.c:1236
     legacy_get_tree+0x108/0x220 fs/fs_context.c:661
     vfs_get_tree+0x8e/0x390 fs/super.c:1476
     do_new_mount fs/namespace.c:2791 [inline]
     do_mount+0x138c/0x1c00 fs/namespace.c:3111
     ksys_mount+0xdb/0x150 fs/namespace.c:3320
     __do_sys_mount fs/namespace.c:3334 [inline]
     __se_sys_mount fs/namespace.c:3331 [inline]
     __x64_sys_mount+0xbe/0x150 fs/namespace.c:3331
     do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
   SOFTIRQ-ON-W at:
     lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
     spin_lock include/linux/spinlock.h:338 [inline]
     flush_bg_queue+0x1f3/0x3c0 fs/fuse/dev.c:415
     fuse_request_queue_background+0x2d1/0x580 fs/fuse/dev.c:676
     fuse_request_send_background+0x58/0x110 fs/fuse/dev.c:687
     fuse_send_init fs/fuse/inode.c:986 [inline]
     fuse_fill_super+0x13b4/0x1720 fs/fuse/inode.c:1211
     mount_nodev+0x66/0x110 fs/super.c:1392
     fuse_mount+0x2d/0x40 fs/fuse/inode.c:1236
     legacy_get_tree+0x108/0x220 fs/fs_context.c:661
     vfs_get_tree+0x8e/0x390 fs/super.c:1476
     do_new_mount fs/namespace.c:2791 [inline]
     do_mount+0x138c/0x1c00 fs/namespace.c:3111
     ksys_mount+0xdb/0x150 fs/namespace.c:3320
     __do_sys_mount fs/namespace.c:3334 [inline]
     __se_sys_mount fs/namespace.c:3331 [inline]
     __x64_sys_mount+0xbe/0x150 fs/namespace.c:3331
     do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
   INITIAL USE at:
     lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
     spin_lock include/linux/spinlock.h:338 [inline]
     flush_bg_queue+0x1f3/0x3c0 fs/fuse/dev.c:415
     fuse_request_queue_background+0x2d1/0x580 fs/fuse/dev.c:676
     fuse_request_send_background+0x58/0x110 fs/fuse/dev.c:687
     fuse_send_init fs/fuse/inode.c:986 [inline]
     fuse_fill_super+0x13b4/0x1720 fs/fuse/inode.c:1211
     mount_nodev+0x66/0x110 fs/super.c:1392
     fuse_mount+0x2d/0x40 fs/fuse/inode.c:1236
     legacy_get_tree+0x108/0x220 fs/fs_context.c:661
     vfs_get_tree+0x8e/0x390 fs/super.c:1476
     do_new_mount fs/namespace.c:2791 [inline]
     do_mount+0x138c/0x1c00 fs/namespace.c:3111
     ksys_mount+0xdb/0x150 fs/namespace.c:3320
     __do_sys_mount fs/namespace.c:3334 [inline]
     __se_sys_mount fs/namespace.c:3331 [inline]
     __x64_sys_mount+0xbe/0x150 fs/namespace.c:3331
     do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
     entry_SYSCALL_64_after_hwframe+0x49/0xbe
 }
 ... key at: [] __key.44051+0x0/0x40
 ... acquired at:
   lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:338 [inline]
   aio_poll fs/aio.c:1752 [inline]
   __io_submit_one fs/aio.c:1826 [inline]
   io_submit_one+0xefa/0x2ef0 fs/aio.c:1863
   __do_sys_io_submit fs/aio.c:1922 [inline]
   __se_sys_io_submit fs/aio.c:1892 [inline]
   __x64_sys_io_submit+0x1bd/0x570 fs/aio.c:1892
   do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

stack backtrace:
CPU: 1 PID: 16198 Comm: syz-executor.0 Not tainted 5.2.0-rc7 #65
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x172/0x1f0 lib/dump_stack.c:113
 print_bad_irq_dependency kernel/locking/lockdep.c:1920 [inline]
 check_irq_usage.cold+0x711/0xba0 kernel/locking/lockdep.c:2114
 check_prev_add kernel/locking/lockdep.c:2315 [inline]
 check_prevs_add kernel/locking/lockdep.c:2418 [inline]
 validate_chain kernel/locking/lockdep.c:2800 [inline]
 __lock_acquire+0x2469/0x5490 kernel/locking/lockdep.c:3793
 lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:4303
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2f/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:338 [inline]
 aio_poll fs/aio.c:1752 [inline]
 __io_submit_one fs/aio.c:1826 [inline]
 io_submit_one+0xefa/0x2ef0 fs/aio.c:1863
 __do_sys_io_submit fs/aio.c:1922 [inline]
 __se_sys_io_submit fs/aio.c:1892 [inline]
 __x64_sys_io_submit+0x1bd/0x570 fs/aio.c:1892
 do_syscall_64+0xfd/0x680 arch/x86/entry/common.c:301
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x459519
Code: fd b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 cb b7 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f9d3cacbc78 EFLAGS: 00000246 ORIG_RAX: 00000000000000d1
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000459519
RDX: 0000000020000040 RSI: 0000000000000001 RDI: 00007f9d3caab000
RBP: 000000000075bf20 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f9d3cacc6d4
R13: 00000000004c0898 R14: 00000000004d3548 R15: 00000000ffffffff
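
Note on the pattern being reported: &ctx->ctx_lock is SOFTIRQ-safe because free_ioctx_users() takes it from an RCU callback in softirq context, while the &fiq->waitq lock is SOFTIRQ-unsafe because flush_bg_queue() takes it with softirqs enabled; aio_poll() then acquires the unsafe lock while holding the safe one with interrupts disabled (fs/aio.c:1750 and :1752 above). Below is a minimal, hypothetical sketch of that same lock-order inversion, assuming a kernel built with CONFIG_PROVE_LOCKING. The module, the tasklet, and the names safe_lock/unsafe_lock are invented for illustration only; this is not the fs/aio.c or fs/fuse code and not a proposed fix.

/* lock_order_sketch.c - illustrative only, mirrors the scenario diagram above */
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(safe_lock);    /* stands in for ctx->ctx_lock     */
static DEFINE_SPINLOCK(unsafe_lock);  /* stands in for the fiq->waitq lock */

static struct tasklet_struct demo_tasklet;

/* Softirq context: this makes safe_lock SOFTIRQ-safe, like
 * free_ioctx_users() running from an RCU callback in the report. */
static void demo_tasklet_fn(unsigned long data)
{
	spin_lock(&safe_lock);
	spin_unlock(&safe_lock);
}

static int __init demo_init(void)
{
	tasklet_init(&demo_tasklet, demo_tasklet_fn, 0);
	tasklet_schedule(&demo_tasklet);

	/* Process context with softirqs enabled: this makes unsafe_lock
	 * SOFTIRQ-unsafe, like flush_bg_queue() in the report. */
	spin_lock(&unsafe_lock);
	spin_unlock(&unsafe_lock);

	/* Process context with interrupts disabled: this records the
	 * safe_lock -> unsafe_lock dependency, like aio_poll() taking the
	 * waitqueue lock while holding ctx_lock.  With CONFIG_PROVE_LOCKING
	 * this combination should produce the same class of
	 * "SOFTIRQ-safe -> SOFTIRQ-unsafe lock order" warning. */
	spin_lock_irq(&safe_lock);
	spin_lock(&unsafe_lock);
	spin_unlock(&unsafe_lock);
	spin_unlock_irq(&safe_lock);

	return 0;
}

static void __exit demo_exit(void)
{
	tasklet_kill(&demo_tasklet);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The deadlock lockdep is guarding against is the one in the scenario diagram: one CPU holds the unsafe lock with softirqs enabled and takes a softirq that needs the safe lock, while another CPU holds the safe lock with interrupts off and spins on the unsafe lock.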