=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.8.0-syzkaller #0 Not tainted
-----------------------------------------------------
io_wqe_worker-0/15004 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
ffff888028b7c420 (&fs->lock){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:354 [inline]
ffff888028b7c420 (&fs->lock){+.+.}-{2:2}, at: io_req_clean_work fs/io_uring.c:1126 [inline]
ffff888028b7c420 (&fs->lock){+.+.}-{2:2}, at: io_dismantle_req+0x3ec/0x9e0 fs/io_uring.c:1544

and this task is already holding:
ffff888026bb24d8 (&ctx->completion_lock){-...}-{2:2}, at: io_fail_links fs/io_uring.c:1674 [inline]
ffff888026bb24d8 (&ctx->completion_lock){-...}-{2:2}, at: __io_req_find_next+0x35d/0x460 fs/io_uring.c:1698
which would create a new lock dependency:
 (&ctx->completion_lock){-...}-{2:2} -> (&fs->lock){+.+.}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&ctx->completion_lock){-...}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0x8c/0xc0 kernel/locking/spinlock.c:159
  io_timeout_fn+0x6c/0x3f0 fs/io_uring.c:4999
  __run_hrtimer kernel/time/hrtimer.c:1520 [inline]
  __hrtimer_run_queues+0x6a9/0xfc0 kernel/time/hrtimer.c:1584
  hrtimer_interrupt+0x32a/0x930 kernel/time/hrtimer.c:1646
  local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1080 [inline]
  __sysvec_apic_timer_interrupt+0x142/0x5e0 arch/x86/kernel/apic/apic.c:1097
  asm_call_on_stack+0xf/0x20 arch/x86/entry/entry_64.S:706
  __run_on_irqstack arch/x86/include/asm/irq_stack.h:22 [inline]
  run_on_irqstack_cond arch/x86/include/asm/irq_stack.h:48 [inline]
  sysvec_apic_timer_interrupt+0xb2/0xf0 arch/x86/kernel/apic/apic.c:1091
  asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:581
  arch_local_irq_enable arch/x86/include/asm/paravirt.h:780 [inline]
  __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:168 [inline]
  _raw_spin_unlock_irq+0x4b/0x80 kernel/locking/spinlock.c:199
  spin_unlock_irq include/linux/spinlock.h:404 [inline]
  io_timeout fs/io_uring.c:5162 [inline]
  io_issue_sqe+0x2de6/0x60d0 fs/io_uring.c:5594
  __io_queue_sqe+0x284/0x1190 fs/io_uring.c:5981
  io_queue_sqe+0x73e/0x1130 fs/io_uring.c:6060
  io_queue_link_head fs/io_uring.c:6071 [inline]
  io_submit_sqes+0xe4b/0x2380 fs/io_uring.c:6338
  __do_sys_io_uring_enter+0xdc7/0x1ae0 fs/io_uring.c:8036
  do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
  entry_SYSCALL_64_after_hwframe+0x44/0xa9
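The HARDIRQ-safe half, decoded: io_timeout_fn() runs as an hrtimer callback, i.e. in hardirq context, and takes ctx->completion_lock with spin_lock_irqsave(). From the first time lockdep sees that, the lock is HARDIRQ-safe and must never sit above a lock that can be held with interrupts enabled. A minimal sketch of the pattern, with stand-in names (demo_ctx and demo_timeout_fn are illustrative, not the real fs/io_uring.c code):

#include <linux/hrtimer.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>

/* Stand-in for the relevant bits of struct io_ring_ctx. */
struct demo_ctx {
        spinlock_t completion_lock;
        struct hrtimer timer;
};

/*
 * hrtimer callbacks run in hardirq context, so this acquisition is
 * what marks completion_lock IN-HARDIRQ-W (HARDIRQ-safe) in the
 * report above.
 */
static enum hrtimer_restart demo_timeout_fn(struct hrtimer *timer)
{
        struct demo_ctx *ctx = container_of(timer, struct demo_ctx, timer);
        unsigned long flags;

        spin_lock_irqsave(&ctx->completion_lock, flags);
        /* ... post a completion for the timed-out request ... */
        spin_unlock_irqrestore(&ctx->completion_lock, flags);
        return HRTIMER_NORESTART;
}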
to a HARDIRQ-irq-unsafe lock:
 (&fs->lock){+.+.}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
  spin_lock include/linux/spinlock.h:354 [inline]
  set_fs_pwd+0x85/0x290 fs/fs_struct.c:39
  ksys_chdir+0x11f/0x1d0 fs/open.c:499
  devtmpfs_setup drivers/base/devtmpfs.c:391 [inline]
  devtmpfsd+0xd1/0x3e0 drivers/base/devtmpfs.c:401
  kthread+0x3b5/0x4a0 kernel/kthread.c:292
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&fs->lock);
                               local_irq_disable();
                               lock(&ctx->completion_lock);
                               lock(&fs->lock);
  <Interrupt>
    lock(&ctx->completion_lock);

 *** DEADLOCK ***

1 lock held by io_wqe_worker-0/15004:
 #0: ffff888026bb24d8 (&ctx->completion_lock){-...}-{2:2}, at: io_fail_links fs/io_uring.c:1674 [inline]
 #0: ffff888026bb24d8 (&ctx->completion_lock){-...}-{2:2}, at: __io_req_find_next+0x35d/0x460 fs/io_uring.c:1698
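The HARDIRQ-unsafe half: fs->lock is taken with a plain spin_lock(), interrupts left enabled; the set_fs_pwd() trace above is simply where lockdep first saw that. The scenario diagram then reads: CPU0 takes fs->lock with irqs on, CPU1 takes completion_lock with irqs off and spins on fs->lock, and the timer interrupt on CPU0 spins on completion_lock, so neither CPU can make progress. A sketch of this side, loosely modeled on set_fs_pwd() in fs/fs_struct.c (the real function also manages path refcounts and fs->seq; demo_set_pwd is illustrative):

#include <linux/fs_struct.h>
#include <linux/path.h>
#include <linux/spinlock.h>

/*
 * Plain spin_lock() with interrupts enabled: this is what makes
 * fs->lock HARDIRQ-ON-W (HARDIRQ-unsafe) in the report. A hardirq
 * can fire on this CPU while the lock is held.
 */
static void demo_set_pwd(struct fs_struct *fs, const struct path *path)
{
        spin_lock(&fs->lock);
        fs->pwd = *path;        /* illustrative body */
        spin_unlock(&fs->lock);
}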
the dependencies between HARDIRQ-irq-safe lock and the holding lock:

-> (&ctx->completion_lock){-...}-{2:2} {
   IN-HARDIRQ-W at:
     lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
     __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
     _raw_spin_lock_irqsave+0x8c/0xc0 kernel/locking/spinlock.c:159
     io_timeout_fn+0x6c/0x3f0 fs/io_uring.c:4999
     __run_hrtimer kernel/time/hrtimer.c:1520 [inline]
     __hrtimer_run_queues+0x6a9/0xfc0 kernel/time/hrtimer.c:1584
     hrtimer_interrupt+0x32a/0x930 kernel/time/hrtimer.c:1646
     local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1080 [inline]
     __sysvec_apic_timer_interrupt+0x142/0x5e0 arch/x86/kernel/apic/apic.c:1097
     asm_call_on_stack+0xf/0x20 arch/x86/entry/entry_64.S:706
     __run_on_irqstack arch/x86/include/asm/irq_stack.h:22 [inline]
     run_on_irqstack_cond arch/x86/include/asm/irq_stack.h:48 [inline]
     sysvec_apic_timer_interrupt+0xb2/0xf0 arch/x86/kernel/apic/apic.c:1091
     asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:581
     arch_local_irq_enable arch/x86/include/asm/paravirt.h:780 [inline]
     __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:168 [inline]
     _raw_spin_unlock_irq+0x4b/0x80 kernel/locking/spinlock.c:199
     spin_unlock_irq include/linux/spinlock.h:404 [inline]
     io_timeout fs/io_uring.c:5162 [inline]
     io_issue_sqe+0x2de6/0x60d0 fs/io_uring.c:5594
     __io_queue_sqe+0x284/0x1190 fs/io_uring.c:5981
     io_queue_sqe+0x73e/0x1130 fs/io_uring.c:6060
     io_queue_link_head fs/io_uring.c:6071 [inline]
     io_submit_sqes+0xe4b/0x2380 fs/io_uring.c:6338
     __do_sys_io_uring_enter+0xdc7/0x1ae0 fs/io_uring.c:8036
     do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
     entry_SYSCALL_64_after_hwframe+0x44/0xa9
   INITIAL USE at:
     lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
     __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
     _raw_spin_lock_irqsave+0x8c/0xc0 kernel/locking/spinlock.c:159
     io_cqring_add_event fs/io_uring.c:1419 [inline]
     __io_req_complete fs/io_uring.c:1458 [inline]
     __io_req_complete fs/io_uring.c:1454 [inline]
     io_req_complete fs/io_uring.c:1472 [inline]
     io_submit_sqes+0x192d/0x2380 fs/io_uring.c:6321
     __do_sys_io_uring_enter+0xdc7/0x1ae0 fs/io_uring.c:8036
     do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
     entry_SYSCALL_64_after_hwframe+0x44/0xa9
 }
 ... key at: [] __key.9+0x0/0x40
 ... acquired at:
   lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:354 [inline]
   io_req_clean_work fs/io_uring.c:1126 [inline]
   io_dismantle_req+0x3ec/0x9e0 fs/io_uring.c:1544
   __io_free_req+0x16/0x3c0 fs/io_uring.c:1562
   __io_double_put_req fs/io_uring.c:1909 [inline]
   __io_fail_links+0x433/0x5b0 fs/io_uring.c:1659
   io_fail_links fs/io_uring.c:1675 [inline]
   __io_req_find_next+0x368/0x460 fs/io_uring.c:1698
   io_req_find_next fs/io_uring.c:1706 [inline]
   io_steal_work fs/io_uring.c:1897 [inline]
   io_wq_submit_work+0x33c/0x3d0 fs/io_uring.c:5792
   io_worker_handle_work+0xa45/0x13f0 fs/io-wq.c:527
   io_wqe_worker+0xbf0/0x10e0 fs/io-wq.c:569
   kthread+0x3b5/0x4a0 kernel/kthread.c:292
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
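And this "acquired at" trace is the new edge itself: the io-wq worker fails a request's links while holding completion_lock, and the teardown underneath it (io_dismantle_req() -> io_req_clean_work()) takes fs->lock to drop the request's fs_struct reference. Condensed into one illustrative function (demo_fail_links is hypothetical and reuses struct demo_ctx from the first sketch; the call chain in the comment follows the trace above):

#include <linux/fs_struct.h>
#include <linux/spinlock.h>

/*
 * io_fail_links() -> __io_fail_links() -> __io_double_put_req() ->
 * __io_free_req() -> io_dismantle_req() -> io_req_clean_work(),
 * flattened: a HARDIRQ-unsafe lock is taken while a HARDIRQ-safe
 * one is held, which is exactly the dependency lockdep rejects.
 */
static void demo_fail_links(struct demo_ctx *ctx, struct fs_struct *fs)
{
        unsigned long flags;

        spin_lock_irqsave(&ctx->completion_lock, flags);
        spin_lock(&fs->lock);           /* completion_lock -> fs->lock */
        /* ... drop the request's fs_struct reference ... */
        spin_unlock(&fs->lock);
        spin_unlock_irqrestore(&ctx->completion_lock, flags);
}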
the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:
-> (&fs->lock){+.+.}-{2:2} {
   HARDIRQ-ON-W at:
     lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
     spin_lock include/linux/spinlock.h:354 [inline]
     set_fs_pwd+0x85/0x290 fs/fs_struct.c:39
     ksys_chdir+0x11f/0x1d0 fs/open.c:499
     devtmpfs_setup drivers/base/devtmpfs.c:391 [inline]
     devtmpfsd+0xd1/0x3e0 drivers/base/devtmpfs.c:401
     kthread+0x3b5/0x4a0 kernel/kthread.c:292
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
   SOFTIRQ-ON-W at:
     lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
     spin_lock include/linux/spinlock.h:354 [inline]
     set_fs_pwd+0x85/0x290 fs/fs_struct.c:39
     ksys_chdir+0x11f/0x1d0 fs/open.c:499
     devtmpfs_setup drivers/base/devtmpfs.c:391 [inline]
     devtmpfsd+0xd1/0x3e0 drivers/base/devtmpfs.c:401
     kthread+0x3b5/0x4a0 kernel/kthread.c:292
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
   INITIAL USE at:
     lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
     spin_lock include/linux/spinlock.h:354 [inline]
     set_fs_pwd+0x85/0x290 fs/fs_struct.c:39
     ksys_chdir+0x11f/0x1d0 fs/open.c:499
     devtmpfs_setup drivers/base/devtmpfs.c:391 [inline]
     devtmpfsd+0xd1/0x3e0 drivers/base/devtmpfs.c:401
     kthread+0x3b5/0x4a0 kernel/kthread.c:292
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
 }
 ... key at: [] __key.1+0x0/0x40
 ... acquired at:
   lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:354 [inline]
   io_req_clean_work fs/io_uring.c:1126 [inline]
   io_dismantle_req+0x3ec/0x9e0 fs/io_uring.c:1544
   __io_free_req+0x16/0x3c0 fs/io_uring.c:1562
   __io_double_put_req fs/io_uring.c:1909 [inline]
   __io_fail_links+0x433/0x5b0 fs/io_uring.c:1659
   io_fail_links fs/io_uring.c:1675 [inline]
   __io_req_find_next+0x368/0x460 fs/io_uring.c:1698
   io_req_find_next fs/io_uring.c:1706 [inline]
   io_steal_work fs/io_uring.c:1897 [inline]
   io_wq_submit_work+0x33c/0x3d0 fs/io_uring.c:5792
   io_worker_handle_work+0xa45/0x13f0 fs/io-wq.c:527
   io_wqe_worker+0xbf0/0x10e0 fs/io-wq.c:569
   kthread+0x3b5/0x4a0 kernel/kthread.c:292
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294

stack backtrace:
CPU: 2 PID: 15004 Comm: io_wqe_worker-0 Not tainted 5.8.0-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x18f/0x20d lib/dump_stack.c:118
 print_bad_irq_dependency kernel/locking/lockdep.c:2113 [inline]
 check_irq_usage.cold+0x4a5/0x5a1 kernel/locking/lockdep.c:2311
 check_prev_add kernel/locking/lockdep.c:2500 [inline]
 check_prevs_add kernel/locking/lockdep.c:2601 [inline]
 validate_chain kernel/locking/lockdep.c:3218 [inline]
 __lock_acquire+0x2a81/0x5640 kernel/locking/lockdep.c:4426
 lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
 __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:354 [inline]
 io_req_clean_work fs/io_uring.c:1126 [inline]
 io_dismantle_req+0x3ec/0x9e0 fs/io_uring.c:1544
 __io_free_req+0x16/0x3c0 fs/io_uring.c:1562
 __io_double_put_req fs/io_uring.c:1909 [inline]
 __io_fail_links+0x433/0x5b0 fs/io_uring.c:1659
 io_fail_links fs/io_uring.c:1675 [inline]
 __io_req_find_next+0x368/0x460 fs/io_uring.c:1698
 io_req_find_next fs/io_uring.c:1706 [inline]
 io_steal_work fs/io_uring.c:1897 [inline]
 io_wq_submit_work+0x33c/0x3d0 fs/io_uring.c:5792
 io_worker_handle_work+0xa45/0x13f0 fs/io-wq.c:527
 io_wqe_worker+0xbf0/0x10e0 fs/io-wq.c:569
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
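The usual ways out of this class of inversion are to make the unsafe lock irq-safe at every site, or to move the unsafe work out from under the irq-safe lock. Since fs->lock is used throughout the VFS, deferring the cleanup looks like the cheaper direction; the following sketches that shape only (an assumption for illustration, not necessarily the actual upstream fix):

#include <linux/fs_struct.h>
#include <linux/spinlock.h>

/*
 * Hypothetical rework of demo_fail_links(): note the cleanup while
 * locked, perform it after completion_lock is dropped, so fs->lock
 * is never nested inside a HARDIRQ-safe lock.
 */
static void demo_fail_links_deferred(struct demo_ctx *ctx, struct fs_struct *fs)
{
        unsigned long flags;
        bool drop_fs = false;

        spin_lock_irqsave(&ctx->completion_lock, flags);
        /* ... fail the links, but only note that cleanup is pending ... */
        drop_fs = true;
        spin_unlock_irqrestore(&ctx->completion_lock, flags);

        if (drop_fs) {
                spin_lock(&fs->lock);   /* no irq-safe lock held here */
                /* ... drop the request's fs_struct reference ... */
                spin_unlock(&fs->lock);
        }
}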