syzbot


INFO: rcu detected stall in rds_connect_worker (2)

Status: auto-closed as invalid on 2021/03/28 19:53
Subsystems: net
First crash: 1223d, last: 1212d
Similar bugs (1)
  Kernel:   upstream
  Title:    INFO: rcu detected stall in rds_connect_worker [net]
  Count:    1
  Last:     1459d
  Reported: 1459d
  Patched:  0/26
  Status:   auto-closed as invalid on 2020/07/25 07:33

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	0-....: (1 GPs behind) idle=b46/1/0x4000000000000002 softirq=180178/180182 fqs=5240 
	(detected by 1, t=10502 jiffies, g=325977, q=120014)

============================================
WARNING: possible recursive locking detected
5.10.0-syzkaller #0 Not tainted
--------------------------------------------
kworker/u4:0/8 is trying to acquire lock:
ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0xcd/0x261 kernel/rcu/tree_stall.h:334

but task is already holding lock:
ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:493 [inline]
ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3751 [inline]
ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xfd/0x10f9 kernel/rcu/tree.c:2580

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(rcu_node_0);
  lock(rcu_node_0);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

6 locks held by kworker/u4:0/8:
 #0: ffff888022ab0938 ((wq_completion)krdsd){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888022ab0938 ((wq_completion)krdsd){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888022ab0938 ((wq_completion)krdsd){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888022ab0938 ((wq_completion)krdsd){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888022ab0938 ((wq_completion)krdsd){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888022ab0938 ((wq_completion)krdsd){+.+.}-{0:0}, at: process_one_work+0x750/0x15c0 kernel/workqueue.c:2246
 #1: ffffc90000cd7da8 ((work_completion)(&(&cp->cp_conn_w)->work)){+.+.}-{0:0}, at: process_one_work+0x783/0x15c0 kernel/workqueue.c:2250
 #2: ffff8880204f6088 (&tc->t_conn_path_lock){+.+.}-{3:3}, at: rds_tcp_conn_path_connect+0x174/0x880 net/rds/tcp_connect.c:107
 #3: ffff88801bd9bbe0 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1594 [inline]
 #3: ffff88801bd9bbe0 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: __inet_bind+0x827/0xbc0 net/ipv4/af_inet.c:514
 #4: ffffc9000757ab18 (&tcp_hashinfo.bhash[i].lock){+.-.}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:359 [inline]
 #4: ffffc9000757ab18 (&tcp_hashinfo.bhash[i].lock){+.-.}-{2:2}, at: inet_csk_find_open_port net/ipv4/inet_connection_sock.c:229 [inline]
 #4: ffffc9000757ab18 (&tcp_hashinfo.bhash[i].lock){+.-.}-{2:2}, at: inet_csk_get_port+0x9c6/0x17f0 net/ipv4/inet_connection_sock.c:367
 #5: ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:493 [inline]
 #5: ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
 #5: ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3751 [inline]
 #5: ffffffff8ba4d058 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xfd/0x10f9 kernel/rcu/tree.c:2580

stack backtrace:
CPU: 1 PID: 8 Comm: kworker/u4:0 Not tainted 5.10.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: krdsd rds_connect_worker
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 print_deadlock_bug kernel/locking/lockdep.c:2761 [inline]
 check_deadlock kernel/locking/lockdep.c:2804 [inline]
 validate_chain kernel/locking/lockdep.c:3595 [inline]
 __lock_acquire.cold+0x142/0x3e5 kernel/locking/lockdep.c:4832
 lock_acquire kernel/locking/lockdep.c:5437 [inline]
 lock_acquire+0x29d/0x780 kernel/locking/lockdep.c:5402
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:159
 rcu_dump_cpu_stacks+0xcd/0x261 kernel/rcu/tree_stall.h:334
 print_other_cpu_stall kernel/rcu/tree_stall.h:510 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:652 [inline]
 rcu_pending kernel/rcu/tree.c:3751 [inline]
 rcu_sched_clock_irq.cold+0x849/0x10f9 kernel/rcu/tree.c:2580
 update_process_times+0x18f/0x240 kernel/time/timer.c:1782
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:226
 tick_sched_timer+0x1b0/0x2d0 kernel/time/tick-sched.c:1376
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x1c0/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0x355/0x9a0 kernel/time/hrtimer.c:1645
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
 __sysvec_apic_timer_interrupt+0x168/0x590 arch/x86/kernel/apic/apic.c:1102
 asm_call_irq_on_stack+0xf/0x20
 </IRQ>
 __run_sysvec_on_irqstack arch/x86/include/asm/irq_stack.h:37 [inline]
 run_sysvec_on_irqstack_cond arch/x86/include/asm/irq_stack.h:89 [inline]
 sysvec_apic_timer_interrupt+0xbd/0x100 arch/x86/kernel/apic/apic.c:1096
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:628
RIP: 0010:__local_bh_enable_ip+0xa4/0x110 kernel/softirq.c:202
Code: e8 f1 3d 09 00 65 8b 05 1a e2 bb 7e a9 00 ff ff 00 74 45 bf 01 00 00 00 e8 d9 3d 09 00 e8 e4 1c 36 00 fb 65 8b 05 fc e1 bb 7e <85> c0 74 4a 5b 5d c3 65 8b 05 6a f0 bb 7e 85 c0 75 a6 0f 0b eb a2
RSP: 0018:ffffc90000cd7988 EFLAGS: 00000216
RAX: 0000000080000201 RBX: 0000000000000201 RCX: ffffffff815a9b57
RDX: 0000000000000000 RSI: 0000000000000201 RDI: 0000000000000000
RBP: ffffffff872c347e R08: 0000000000000001 R09: ffffffff8fe6e827
R10: fffffbfff1fcdd04 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000008335 R14: 0000000000000000 R15: ffff88805b6b1dc0
 sock_i_uid+0x8e/0xb0 net/core/sock.c:2175
 inet_csk_bind_conflict+0x58/0x510 net/ipv4/inet_connection_sock.c:140
 inet_csk_find_open_port net/ipv4/inet_connection_sock.c:233 [inline]
 inet_csk_get_port+0xad9/0x17f0 net/ipv4/inet_connection_sock.c:367
 __inet_bind+0x796/0xbc0 net/ipv4/af_inet.c:528
 inet_bind+0xf0/0x170 net/ipv4/af_inet.c:457
 rds_tcp_conn_path_connect+0x532/0x880 net/rds/tcp_connect.c:144
 rds_connect_worker+0x1a5/0x2c0 net/rds/threads.c:176
 process_one_work+0x868/0x15c0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296

Crashes (3):
  Time              Kernel  Commit        Syzkaller  Manager
  2020/12/28 19:50  bpf     a61daaf351da  8259d56c   ci-upstream-bpf-kasan-gce
  2020/12/25 22:23  bpf     a61daaf351da  b982b3ea   ci-upstream-bpf-kasan-gce
  2020/12/18 04:14  bpf     8bee68338408  04201c06   ci-upstream-bpf-kasan-gce