======================================================
WARNING: possible circular locking dependency detected
6.5.0-rc7-syzkaller-00071-gfe4469582053 #0 Not tainted
------------------------------------------------------
kworker/u4:3/40 is trying to acquire lock:
ffff0000de1f2cf0 (&rs->rs_recv_lock){....}-{2:2}, at: rds_wake_sk_sleep+0x34/0xc8 net/rds/af_rds.c:109

but task is already holding lock:
ffff0000dc2fa100 (&rm->m_rs_lock){....}-{2:2}, at: rds_send_remove_from_sock+0x134/0x78c net/rds/send.c:628

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&rm->m_rs_lock){....}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x5c/0x7c kernel/locking/spinlock.c:162
       rds_message_purge net/rds/message.c:138 [inline]
       rds_message_put+0x130/0xb30 net/rds/message.c:180
       rds_loop_inc_free+0x20/0x30 net/rds/loop.c:115
       rds_inc_put net/rds/recv.c:83 [inline]
       rds_clear_recv_queue+0x288/0x384 net/rds/recv.c:768
       rds_release+0xbc/0x2d0 net/rds/af_rds.c:73
       __sock_release net/socket.c:654 [inline]
       sock_close+0xb8/0x1fc net/socket.c:1386
       __fput+0x324/0x824 fs/file_table.c:384
       ____fput+0x20/0x30 fs/file_table.c:412
       task_work_run+0x230/0x2e0 kernel/task_work.c:179
       resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
       do_notify_resume+0x2180/0x3c90 arch/arm64/kernel/signal.c:1305
       exit_to_user_mode_prepare arch/arm64/kernel/entry-common.c:137 [inline]
       exit_to_user_mode arch/arm64/kernel/entry-common.c:144 [inline]
       el0_svc+0xa0/0x16c arch/arm64/kernel/entry-common.c:679
       el0t_64_sync_handler+0x84/0xfc arch/arm64/kernel/entry-common.c:696
       el0t_64_sync+0x190/0x194 arch/arm64/kernel/entry.S:591

-> #0 (&rs->rs_recv_lock){....}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3142 [inline]
       check_prevs_add kernel/locking/lockdep.c:3261 [inline]
       validate_chain kernel/locking/lockdep.c:3876 [inline]
       __lock_acquire+0x3370/0x75e8 kernel/locking/lockdep.c:5144
       lock_acquire+0x23c/0x71c kernel/locking/lockdep.c:5761
       __raw_read_lock_irqsave include/linux/rwlock_api_smp.h:160 [inline]
       _raw_read_lock_irqsave+0x6c/0x8c kernel/locking/spinlock.c:236
       rds_wake_sk_sleep+0x34/0xc8 net/rds/af_rds.c:109
       rds_send_remove_from_sock+0x1a4/0x78c net/rds/send.c:634
       rds_send_path_drop_acked+0x390/0x3f0 net/rds/send.c:710
       rds_tcp_write_space+0x1a8/0x594 net/rds/tcp_send.c:199
       tcp_new_space net/ipv4/tcp_input.c:5489 [inline]
       tcp_check_space+0x150/0x888 net/ipv4/tcp_input.c:5508
       tcp_data_snd_check net/ipv4/tcp_input.c:5517 [inline]
       tcp_rcv_established+0xe14/0x1fc4 net/ipv4/tcp_input.c:6027
       tcp_v4_do_rcv+0x3b0/0xe00 net/ipv4/tcp_ipv4.c:1728
       sk_backlog_rcv include/net/sock.h:1115 [inline]
       __release_sock+0x1a8/0x408 net/core/sock.c:2981
       release_sock+0x68/0x1b0 net/core/sock.c:3518
       tcp_sock_set_cork+0x100/0x188 net/ipv4/tcp.c:3235
       rds_tcp_xmit_path_complete+0x7c/0x8c net/rds/tcp_send.c:52
       rds_send_xmit+0x1978/0x22a0 net/rds/send.c:422
       rds_send_worker+0x84/0x36c net/rds/threads.c:200
       process_one_work+0x800/0x1480 kernel/workqueue.c:2600
       worker_thread+0x8e0/0xfe8 kernel/workqueue.c:2751
       kthread+0x288/0x310 kernel/kthread.c:389
       ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:853

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&rm->m_rs_lock);
                               lock(&rs->rs_recv_lock);
                               lock(&rm->m_rs_lock);
  rlock(&rs->rs_recv_lock);

 *** DEADLOCK ***

5 locks held by kworker/u4:3/40:
 #0: ffff0000d492c138 ((wq_completion)krdsd){+.+.}-{0:0}, at: process_one_work+0x6b4/0x1480 kernel/workqueue.c:2572
 #1: ffff800092f77c20 ((work_completion)(&(&cp->cp_send_w)->work)){+.+.}-{0:0}, at: process_one_work+0x6f0/0x1480 kernel/workqueue.c:2574
 #2: ffff0000c1f96f70 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1714 [inline]
 #2: ffff0000c1f96f70 (k-sk_lock-AF_INET){+.+.}-{0:0}, at: tcp_sock_set_cork+0x38/0x188 net/ipv4/tcp.c:3233
 #3: ffff0000c1f971f8 (k-clock-AF_INET){++.-}-{2:2}, at: rds_tcp_write_space+0x38/0x594 net/rds/tcp_send.c:185
 #4: ffff0000dc2fa100 (&rm->m_rs_lock){....}-{2:2}, at: rds_send_remove_from_sock+0x134/0x78c net/rds/send.c:628

stack backtrace:
CPU: 1 PID: 40 Comm: kworker/u4:3 Not tainted 6.5.0-rc7-syzkaller-00071-gfe4469582053 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
Workqueue: krdsd rds_send_worker
Call trace:
 dump_backtrace+0x1b8/0x1e4 arch/arm64/kernel/stacktrace.c:233
 show_stack+0x2c/0x44 arch/arm64/kernel/stacktrace.c:240
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd0/0x124 lib/dump_stack.c:106
 dump_stack+0x1c/0x28 lib/dump_stack.c:113
 print_circular_bug+0x150/0x1b8 kernel/locking/lockdep.c:2068
 check_noncircular+0x310/0x404 kernel/locking/lockdep.c:2195
 check_prev_add kernel/locking/lockdep.c:3142 [inline]
 check_prevs_add kernel/locking/lockdep.c:3261 [inline]
 validate_chain kernel/locking/lockdep.c:3876 [inline]
 __lock_acquire+0x3370/0x75e8 kernel/locking/lockdep.c:5144
 lock_acquire+0x23c/0x71c kernel/locking/lockdep.c:5761
 __raw_read_lock_irqsave include/linux/rwlock_api_smp.h:160 [inline]
 _raw_read_lock_irqsave+0x6c/0x8c kernel/locking/spinlock.c:236
 rds_wake_sk_sleep+0x34/0xc8 net/rds/af_rds.c:109
 rds_send_remove_from_sock+0x1a4/0x78c net/rds/send.c:634
 rds_send_path_drop_acked+0x390/0x3f0 net/rds/send.c:710
 rds_tcp_write_space+0x1a8/0x594 net/rds/tcp_send.c:199
 tcp_new_space net/ipv4/tcp_input.c:5489 [inline]
 tcp_check_space+0x150/0x888 net/ipv4/tcp_input.c:5508
 tcp_data_snd_check net/ipv4/tcp_input.c:5517 [inline]
 tcp_rcv_established+0xe14/0x1fc4 net/ipv4/tcp_input.c:6027
 tcp_v4_do_rcv+0x3b0/0xe00 net/ipv4/tcp_ipv4.c:1728
 sk_backlog_rcv include/net/sock.h:1115 [inline]
 __release_sock+0x1a8/0x408 net/core/sock.c:2981
 release_sock+0x68/0x1b0 net/core/sock.c:3518
 tcp_sock_set_cork+0x100/0x188 net/ipv4/tcp.c:3235
 rds_tcp_xmit_path_complete+0x7c/0x8c net/rds/tcp_send.c:52
 rds_send_xmit+0x1978/0x22a0 net/rds/send.c:422
 rds_send_worker+0x84/0x36c net/rds/threads.c:200
 process_one_work+0x800/0x1480 kernel/workqueue.c:2600
 worker_thread+0x8e0/0xfe8 kernel/workqueue.c:2751
 kthread+0x288/0x310 kernel/kthread.c:389
 ret_from_fork+0x10/0x20 arch/arm64/kernel/entry.S:853