rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	0-....: (1 GPs behind) idle=85a/1/0x4000000000000002 softirq=81173/81174 fqs=5239
	(detected by 1, t=10502 jiffies, g=157289, q=210756)
============================================
WARNING: possible recursive locking detected
5.10.0-rc5-syzkaller #0 Not tainted
--------------------------------------------
kworker/u4:5/1042 is trying to acquire lock:
ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:328

but task is already holding lock:
ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:487 [inline]
ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3694 [inline]
ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xee8 kernel/rcu/tree.c:2567

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(rcu_node_0);
  lock(rcu_node_0);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by kworker/u4:5/1042:
 #0: ffff888147026138 ((wq_completion)krdsd){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888147026138 ((wq_completion)krdsd){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888147026138 ((wq_completion)krdsd){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888147026138 ((wq_completion)krdsd){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888147026138 ((wq_completion)krdsd){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888147026138 ((wq_completion)krdsd){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90003d1fda8 ((work_completion)(&cp->cp_down_w)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:487 [inline]
 #2: ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
 #2: ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3694 [inline]
 #2: ffffffff8b33f998 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xee8 kernel/rcu/tree.c:2567

stack backtrace:
CPU: 1 PID: 1042 Comm: kworker/u4:5 Not tainted 5.10.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: krdsd rds_shutdown_worker
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:118
 print_deadlock_bug kernel/locking/lockdep.c:2761 [inline]
 check_deadlock kernel/locking/lockdep.c:2804 [inline]
 validate_chain kernel/locking/lockdep.c:3595 [inline]
 __lock_acquire.cold+0x15e/0x3b0 kernel/locking/lockdep.c:4832
 lock_acquire kernel/locking/lockdep.c:5437 [inline]
 lock_acquire+0x29d/0x740 kernel/locking/lockdep.c:5402
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:159
 rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:328
 print_other_cpu_stall kernel/rcu/tree_stall.h:504 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
 rcu_pending kernel/rcu/tree.c:3694 [inline]
 rcu_sched_clock_irq.cold+0x6c8/0xee8 kernel/rcu/tree.c:2567
 update_process_times+0x77/0xd0 kernel/time/timer.c:1709
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:176
 tick_sched_timer+0x1d1/0x2a0 kernel/time/tick-sched.c:1328
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x1ce/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0x334/0x940 kernel/time/hrtimer.c:1645
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1080 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x540 arch/x86/kernel/apic/apic.c:1097
 asm_call_irq_on_stack+0xf/0x20
 __run_sysvec_on_irqstack arch/x86/include/asm/irq_stack.h:37 [inline]
 run_sysvec_on_irqstack_cond arch/x86/include/asm/irq_stack.h:89 [inline]
 sysvec_apic_timer_interrupt+0xbd/0x100 arch/x86/kernel/apic/apic.c:1091
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:631
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0x25/0x50 kernel/locking/spinlock.c:191
Code: f8 5d c3 66 90 55 48 89 fd 48 83 c7 18 53 48 89 f3 48 8b 74 24 10 e8 2a 3b 6e f8 48 89 ef e8 b2 ef 6e f8 f6 c7 02 75 1a 53 9d 01 00 00 00 e8 b1 5b 63 f8 65 8b 05 ba 05 1a 77 85 c0 74 0a 5b
RSP: 0018:ffffc90003d1f9e8 EFLAGS: 00000206
RAX: 00000000023653e7 RBX: 0000000000000206 RCX: ffffffff8155a947
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffffff8f15f100 R08: 0000000000000001 R09: ffffffff8ebad767
R10: fffffbfff1d75aec R11: 0000000000000000 R12: dffffc0000000000
R13: ffffffff894cb520 R14: 1ffff920007a3f44 R15: ffff888072187728
 debug_object_active_state lib/debugobjects.c:940 [inline]
 debug_object_active_state+0x260/0x350 lib/debugobjects.c:909
 debug_rcu_head_queue kernel/rcu/rcu.h:177 [inline]
 __call_rcu kernel/rcu/tree.c:2939 [inline]
 call_rcu+0x45/0x700 kernel/rcu/tree.c:3027
 destroy_inode+0x129/0x1b0 fs/inode.c:289
 iput_final fs/inode.c:1654 [inline]
 iput.part.0+0x3fe/0x820 fs/inode.c:1680
 iput+0x58/0x70 fs/inode.c:1670
 __sock_release net/socket.c:608 [inline]
 sock_release+0x15a/0x1b0 net/socket.c:624
 rds_tcp_conn_path_shutdown+0x1e5/0x3f0 net/rds/tcp_connect.c:216
 rds_conn_shutdown+0x23e/0x930 net/rds/connection.c:386
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
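
For readers less familiar with lockdep output: the "Possible unsafe locking scenario" above is the single-task case where one context tries to take rcu_node_0 while it already holds it. The sketch below is a minimal userspace analogue of that pattern, assuming nothing beyond standard pthreads; it is only an illustration of the double-acquisition shape lockdep is flagging, not the kernel code involved in this report.

/* Illustration only: same task acquires the same non-recursive lock twice,
 * mirroring "lock(rcu_node_0); lock(rcu_node_0);" from the lockdep report.
 * Compile with: cc -pthread demo.c (hypothetical file name). The second
 * pthread_mutex_lock() on a default mutex self-deadlocks.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void inner(void)
{
	/* Second acquisition by the same thread: never returns. */
	pthread_mutex_lock(&lock);
	puts("never reached");
	pthread_mutex_unlock(&lock);
}

static void outer(void)
{
	pthread_mutex_lock(&lock);   /* first acquisition */
	inner();                     /* re-acquires the lock it already holds */
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	outer();
	return 0;
}

In the report itself the two acquisitions are rcu_sched_clock_irq() -> print_other_cpu_stall() (lock #2 held) and rcu_dump_cpu_stacks() taking rcu_node_0 again; as the message notes, this may also be a missing lock-nesting annotation rather than a true self-deadlock.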