rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	0-...!: (0 ticks this GP) idle=04a/1/0x4000000000000000 softirq=28582/28582 fqs=35
	(detected by 1, t=10502 jiffies, g=48097, q=340)

============================================
WARNING: possible recursive locking detected
5.11.0-rc2-syzkaller #0 Not tainted
--------------------------------------------
systemd-udevd/4887 is trying to acquire lock:
ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:334

but task is already holding lock:
ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:493 [inline]
ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3751 [inline]
ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xec3 kernel/rcu/tree.c:2580

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(rcu_node_0);
  lock(rcu_node_0);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by systemd-udevd/4887:
 #0: ffff888144437160 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_reread_part block/ioctl.c:93 [inline]
 #0: ffff888144437160 (&bdev->bd_mutex){+.+.}-{3:3}, at: blkdev_common_ioctl+0x1292/0x16a0 block/ioctl.c:501
 #1: ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:493 [inline]
 #1: ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
 #1: ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3751 [inline]
 #1: ffffffff8b36b918 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xec3 kernel/rcu/tree.c:2580

stack backtrace:
CPU: 1 PID: 4887 Comm: systemd-udevd Not tainted 5.11.0-rc2-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 print_deadlock_bug kernel/locking/lockdep.c:2761 [inline]
 check_deadlock kernel/locking/lockdep.c:2804 [inline]
 validate_chain kernel/locking/lockdep.c:3595 [inline]
 __lock_acquire.cold+0x15e/0x3b0 kernel/locking/lockdep.c:4832
 lock_acquire kernel/locking/lockdep.c:5437 [inline]
 lock_acquire+0x29d/0x740 kernel/locking/lockdep.c:5402
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:159
 rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:334
 print_other_cpu_stall kernel/rcu/tree_stall.h:510 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
 rcu_pending kernel/rcu/tree.c:3751 [inline]
 rcu_sched_clock_irq.cold+0x6db/0xec3 kernel/rcu/tree.c:2580
 update_process_times+0x16d/0x200 kernel/time/timer.c:1782
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:226
 tick_sched_timer+0x1b0/0x2d0 kernel/time/tick-sched.c:1369
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x1ce/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0x334/0x940 kernel/time/hrtimer.c:1645
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x540 arch/x86/kernel/apic/apic.c:1102
 asm_call_irq_on_stack+0xf/0x20
 __run_sysvec_on_irqstack arch/x86/include/asm/irq_stack.h:37 [inline]
 run_sysvec_on_irqstack_cond arch/x86/include/asm/irq_stack.h:89 [inline]
 sysvec_apic_timer_interrupt+0xbd/0x100 arch/x86/kernel/apic/apic.c:1096
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:628
RIP: 0010:csd_lock_wait kernel/smp.c:227 [inline]
RIP: 0010:smp_call_function_single+0x1b0/0x4b0 kernel/smp.c:512
Code: 10 8b 7c 24 1c 48 8d 74 24 40 48 89 44 24 50 48 8b 44 24 08 48 89 44 24 58 e8 0c fb ff ff 41 89 c5 eb 07 e8 32 44 0b 00 f3 90 <44> 8b 64 24 48 31 ff 41 83 e4 01 44 89 e6 e8 3d 4a 0b 00 45 85 e4
RSP: 0018:ffffc9000134fa80 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 1ffff92000269f54 RCX: 0000000000000000
RDX: ffff88801cc72080 RSI: ffffffff816733ce RDI: 0000000000000003
RBP: ffffc9000134fb50 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff816733e3 R11: 0000000000000000 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000008
 smp_call_function_many_cond+0x25f/0x9d0 kernel/smp.c:648
 on_each_cpu_cond_mask+0x50/0x160 kernel/smp.c:899
 invalidate_bdev+0x91/0xd0 fs/block_dev.c:95
 blk_drop_partitions+0xba/0x190 block/partitions/core.c:549
 bdev_disk_changed+0x223/0x410 fs/block_dev.c:1226
 blkdev_reread_part block/ioctl.c:94 [inline]
 blkdev_common_ioctl+0x129c/0x16a0 block/ioctl.c:501
 blkdev_ioctl+0x1d4/0x6b0 block/ioctl.c:570
 block_ioctl+0xf9/0x140 fs/block_dev.c:1647
 vfs_ioctl fs/ioctl.c:48 [inline]
 __do_sys_ioctl fs/ioctl.c:753 [inline]
 __se_sys_ioctl fs/ioctl.c:739 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7ff7425bd017
Code: 00 00 00 48 8b 05 81 7e 2b 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 7e 2b 00 f7 d8 64 89 01 48
RSP: 002b:00007ffdcb38cf28 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007ffdcb38d010 RCX: 00007ff7425bd017
RDX: 0000000000000000 RSI: 000000000000125f RDI: 000000000000000e
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000010
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffdcb38d0d0
R13: 000055f9c8469a60 R14: 000055f9c85f9860 R15: 00007ffdcb38cfa0