syzbot


possible deadlock in sock_hash_delete_elem (2)

Status: upstream: reported on 2024/03/19 18:11
Subsystems: bpf net
Reported-by: syzbot+ec941d6e24f633a59172@syzkaller.appspotmail.com
First crash: 30d, last: 1h22m
Discussions (2)
Title Replies (including bot) Last reply
Re: [syzbot] [bpf?] [net?] possible deadlock in sock_hash_delete_elem (2) 1 (1) 2024/03/21 08:42
[syzbot] [bpf?] [net?] possible deadlock in sock_hash_delete_elem (2) 0 (1) 2024/03/19 18:11
Similar bugs (2)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.1 possible deadlock in sock_hash_delete_elem C 30 8h05m 22d 0/3 upstream: reported C repro on 2024/03/24 22:39
linux-5.15 possible deadlock in sock_hash_delete_elem C 22 7h11m 31d 0/3 upstream: reported C repro on 2024/03/16 12:55

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.8.0-syzkaller-08951-gfe46a7dd189e #0 Not tainted
------------------------------------------------------
syz-executor.4/8497 is trying to acquire lock:
ffff88805581e020 (&htab->buckets[i].lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffff88805581e020 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xcb/0x260 net/core/sock_map.c:939

but task is already holding lock:
ffff88807af1a1f8 (&trie->lock){....}-{2:2}, at: trie_update_elem+0xc8/0xdd0 kernel/bpf/lpm_trie.c:324

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #5 (&trie->lock){....}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
       trie_delete_elem+0xb0/0x7e0 kernel/bpf/lpm_trie.c:451
       ___bpf_prog_run+0x3e51/0xae80 kernel/bpf/core.c:1997
       __bpf_prog_run32+0xc1/0x100 kernel/bpf/core.c:2236
       bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
       __bpf_prog_run include/linux/filter.h:657 [inline]
       bpf_prog_run include/linux/filter.h:664 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
       bpf_trace_run4+0x176/0x460 kernel/trace/bpf_trace.c:2422
       __bpf_trace_sched_switch+0x13e/0x190 include/trace/events/sched.h:222
       __traceiter_sched_switch+0x6c/0xc0 include/trace/events/sched.h:222
       trace_sched_switch include/trace/events/sched.h:222 [inline]
       __schedule+0x2266/0x5c70 kernel/sched/core.c:6733
       preempt_schedule_common+0x44/0xc0 kernel/sched/core.c:6915
       preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:12
       class_preempt_destructor include/linux/preempt.h:480 [inline]
       try_to_wake_up+0xc08/0x13e0 kernel/sched/core.c:4233
       wake_up_process kernel/sched/core.c:4510 [inline]
       wake_up_q+0x91/0x140 kernel/sched/core.c:1029
       futex_wake+0x43e/0x4e0 kernel/futex/waitwake.c:199
       do_futex+0x1e5/0x350 kernel/futex/syscalls.c:107
       __do_sys_futex kernel/futex/syscalls.c:179 [inline]
       __se_sys_futex kernel/futex/syscalls.c:160 [inline]
       __x64_sys_futex+0x1e1/0x4c0 kernel/futex/syscalls.c:160
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xd2/0x260 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

-> #4 (&rq->__lock){-.-.}-{2:2}:
       _raw_spin_lock_nested+0x31/0x40 kernel/locking/spinlock.c:378
       raw_spin_rq_lock_nested+0x29/0x130 kernel/sched/core.c:559
       raw_spin_rq_lock kernel/sched/sched.h:1385 [inline]
       rq_lock kernel/sched/sched.h:1699 [inline]
       task_fork_fair+0x70/0x240 kernel/sched/fair.c:12629
       sched_cgroup_fork+0x3cf/0x510 kernel/sched/core.c:4845
       copy_process+0x4106/0x9160 kernel/fork.c:2498
       kernel_clone+0xfd/0x940 kernel/fork.c:2796
       user_mode_thread+0xb4/0xf0 kernel/fork.c:2874
       rest_init+0x27/0x2b0 init/main.c:695
       arch_call_rest_init+0x13/0x40 init/main.c:831
       start_kernel+0x3a3/0x490 init/main.c:1077
       x86_64_start_reservations+0x18/0x30 arch/x86/kernel/head64.c:509
       x86_64_start_kernel+0xb2/0xc0 arch/x86/kernel/head64.c:490
       common_startup_64+0x13e/0x148

-> #3 (&p->pi_lock){-.-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
       class_raw_spinlock_irqsave_constructor include/linux/spinlock.h:553 [inline]
       try_to_wake_up+0x9a/0x13e0 kernel/sched/core.c:4262
       create_worker+0x4d1/0x7c0 kernel/workqueue.c:2841
       workqueue_init+0x4b4/0xb70 kernel/workqueue.c:7750
       kernel_init_freeable+0x32f/0xc40 init/main.c:1534
       kernel_init+0x1c/0x2a0 init/main.c:1439
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243

-> #2 (&pool->lock){-.-.}-{2:2}:
       __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
       _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
       __queue_work+0x39e/0x1170 kernel/workqueue.c:2360
       __queue_delayed_work+0x21b/0x2e0 kernel/workqueue.c:2551
       queue_delayed_work_on+0x10e/0x130 kernel/workqueue.c:2595
       queue_delayed_work include/linux/workqueue.h:620 [inline]
       rds_tcp_write_space+0x59d/0x6d0 net/rds/tcp_send.c:204
       tcp_new_space net/ipv4/tcp_input.c:5626 [inline]
       tcp_check_space+0x73c/0x900 net/ipv4/tcp_input.c:5645
       tcp_write_xmit+0x1003/0x7ee0 net/ipv4/tcp_output.c:2799
       __tcp_push_pending_frames+0xaf/0x390 net/ipv4/tcp_output.c:2977
       tcp_push+0x221/0x6f0 net/ipv4/tcp.c:738
       tcp_sendmsg_locked+0x27b8/0x3480 net/ipv4/tcp.c:1310
       tcp_sendmsg+0x2e/0x50 net/ipv4/tcp.c:1342
       inet6_sendmsg+0xb9/0x140 net/ipv6/af_inet6.c:661
       sock_sendmsg_nosec net/socket.c:730 [inline]
       __sock_sendmsg net/socket.c:745 [inline]
       sock_sendmsg+0x2b5/0x470 net/socket.c:768
       rds_tcp_xmit+0x34e/0xc50 net/rds/tcp_send.c:125
       rds_send_xmit+0xf36/0x24a0 net/rds/send.c:367
       rds_send_worker+0x8f/0x2e0 net/rds/threads.c:200
       process_one_work+0x9a9/0x1a60 kernel/workqueue.c:3254
       process_scheduled_works kernel/workqueue.c:3335 [inline]
       worker_thread+0x6c8/0xf70 kernel/workqueue.c:3416
       kthread+0x2c1/0x3a0 kernel/kthread.c:388
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243

-> #1 (clock-AF_INET6){++--}-{2:2}:
       __raw_write_lock_bh include/linux/rwlock_api_smp.h:202 [inline]
       _raw_write_lock_bh+0x33/0x40 kernel/locking/spinlock.c:334
       sk_psock_drop+0x24/0x390 net/core/skmsg.c:837
       sk_psock_put include/linux/skmsg.h:459 [inline]
       sock_map_unref+0x4ee/0x6e0 net/core/sock_map.c:181
       sock_hash_delete_elem+0x1c1/0x260 net/core/sock_map.c:943
       map_delete_elem kernel/bpf/syscall.c:1696 [inline]
       __sys_bpf+0x3940/0x4b40 kernel/bpf/syscall.c:5622
       __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
       __x64_sys_bpf+0x78/0xc0 kernel/bpf/syscall.c:5736
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xd2/0x260 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

-> #0 (&htab->buckets[i].lock){+...}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
       lock_acquire kernel/locking/lockdep.c:5754 [inline]
       lock_acquire+0x1b1/0x540 kernel/locking/lockdep.c:5719
       __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
       _raw_spin_lock_bh+0x33/0x40 kernel/locking/spinlock.c:178
       spin_lock_bh include/linux/spinlock.h:356 [inline]
       sock_hash_delete_elem+0xcb/0x260 net/core/sock_map.c:939
       ___bpf_prog_run+0x3e51/0xae80 kernel/bpf/core.c:1997
       __bpf_prog_run32+0xc1/0x100 kernel/bpf/core.c:2236
       bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
       __bpf_prog_run include/linux/filter.h:657 [inline]
       bpf_prog_run include/linux/filter.h:664 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
       bpf_trace_run2+0x151/0x420 kernel/trace/bpf_trace.c:2420
       trace_kfree include/trace/events/kmem.h:94 [inline]
       kfree+0x225/0x370 mm/slub.c:4377
       trie_update_elem+0x5fb/0xdd0 kernel/bpf/lpm_trie.c:427
       bpf_map_update_value+0x2c1/0x6c0 kernel/bpf/syscall.c:203
       generic_map_update_batch+0x454/0x5f0 kernel/bpf/syscall.c:1876
       bpf_map_do_batch+0x64a/0x720 kernel/bpf/syscall.c:5145
       __sys_bpf+0x1939/0x4b40 kernel/bpf/syscall.c:5695
       __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
       __x64_sys_bpf+0x78/0xc0 kernel/bpf/syscall.c:5736
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xd2/0x260 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

other info that might help us debug this:

Chain exists of:
  &htab->buckets[i].lock --> &rq->__lock --> &trie->lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&trie->lock);
                               lock(&rq->__lock);
                               lock(&trie->lock);
  lock(&htab->buckets[i].lock);

 *** DEADLOCK ***

3 locks held by syz-executor.4/8497:
 #0: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #0: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #0: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x24b/0x6c0 kernel/bpf/syscall.c:202
 #1: ffff88807af1a1f8 (&trie->lock){....}-{2:2}, at: trie_update_elem+0xc8/0xdd0 kernel/bpf/lpm_trie.c:324
 #2: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #2: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #2: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
 #2: ffffffff8d7b08e0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xe4/0x420 kernel/trace/bpf_trace.c:2420

stack backtrace:
CPU: 0 PID: 8497 Comm: syz-executor.4 Not tainted 6.8.0-syzkaller-08951-gfe46a7dd189e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1b1/0x540 kernel/locking/lockdep.c:5719
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x33/0x40 kernel/locking/spinlock.c:178
 spin_lock_bh include/linux/spinlock.h:356 [inline]
 sock_hash_delete_elem+0xcb/0x260 net/core/sock_map.c:939
 ___bpf_prog_run+0x3e51/0xae80 kernel/bpf/core.c:1997
 __bpf_prog_run32+0xc1/0x100 kernel/bpf/core.c:2236
 bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
 __bpf_prog_run include/linux/filter.h:657 [inline]
 bpf_prog_run include/linux/filter.h:664 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
 bpf_trace_run2+0x151/0x420 kernel/trace/bpf_trace.c:2420
 trace_kfree include/trace/events/kmem.h:94 [inline]
 kfree+0x225/0x370 mm/slub.c:4377
 trie_update_elem+0x5fb/0xdd0 kernel/bpf/lpm_trie.c:427
 bpf_map_update_value+0x2c1/0x6c0 kernel/bpf/syscall.c:203
 generic_map_update_batch+0x454/0x5f0 kernel/bpf/syscall.c:1876
 bpf_map_do_batch+0x64a/0x720 kernel/bpf/syscall.c:5145
 __sys_bpf+0x1939/0x4b40 kernel/bpf/syscall.c:5695
 __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
 __x64_sys_bpf+0x78/0xc0 kernel/bpf/syscall.c:5736
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xd2/0x260 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fcb6467de69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fcb653050c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fcb647abf80 RCX: 00007fcb6467de69
RDX: 0000000000000038 RSI: 0000000020000240 RDI: 000000000000001a
RBP: 00007fcb646ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fcb647abf80 R15: 00007ffe83ffbc28
 </TASK>
------------[ cut here ]------------
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 0 PID: 8497 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x29/0x30 kernel/locking/irqflag-debug.c:10
Modules linked in:
CPU: 0 PID: 8497 Comm: syz-executor.4 Not tainted 6.8.0-syzkaller-08951-gfe46a7dd189e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:warn_bogus_irq_restore+0x29/0x30 kernel/locking/irqflag-debug.c:10
Code: 90 f3 0f 1e fa 90 80 3d 72 d0 b5 04 00 74 06 90 c3 cc cc cc cc c6 05 63 d0 b5 04 01 90 48 c7 c7 c0 b1 0c 8b e8 78 6b 7d f6 90 <0f> 0b 90 90 eb df 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0018:ffffc90003b4fa68 EFLAGS: 00010286

RAX: 0000000000000000 RBX: ffff88807af1a1e0 RCX: ffffc9000a6ec000
RDX: 0000000000040000 RSI: ffffffff814faff6 RDI: 0000000000000001
RBP: 0000000000000287 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000001 R12: ffff8880610941e0
R13: 0000000000000000 R14: 0000000000000003 R15: ffff88807af1a018
FS:  00007fcb653056c0(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f87b6abf000 CR3: 0000000066e5e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
 _raw_spin_unlock_irqrestore+0x74/0x80 kernel/locking/spinlock.c:194
 spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
 trie_update_elem+0x616/0xdd0 kernel/bpf/lpm_trie.c:431
 bpf_map_update_value+0x2c1/0x6c0 kernel/bpf/syscall.c:203
 generic_map_update_batch+0x454/0x5f0 kernel/bpf/syscall.c:1876
 bpf_map_do_batch+0x64a/0x720 kernel/bpf/syscall.c:5145
 __sys_bpf+0x1939/0x4b40 kernel/bpf/syscall.c:5695
 __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
 __x64_sys_bpf+0x78/0xc0 kernel/bpf/syscall.c:5736
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xd2/0x260 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fcb6467de69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fcb653050c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fcb647abf80 RCX: 00007fcb6467de69
RDX: 0000000000000038 RSI: 0000000020000240 RDI: 000000000000001a
RBP: 00007fcb646ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fcb647abf80 R15: 00007ffe83ffbc28
 </TASK>

Crashes (337):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2024/04/15 18:24 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in sock_hash_delete_elem
2024/04/15 05:48 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in sock_hash_delete_elem
2024/04/14 20:26 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in sock_hash_delete_elem
2024/04/14 08:26 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in sock_hash_delete_elem
2024/04/09 08:48 upstream fe46a7dd189e 53df08b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/08 10:18 upstream fe46a7dd189e ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in sock_hash_delete_elem
2024/04/16 19:51 net f99c5f563c17 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/16 14:20 bpf 443574b03387 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/16 09:49 net f99c5f563c17 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/16 07:58 net f99c5f563c17 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/15 15:57 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/15 14:03 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/15 10:00 net f99c5f563c17 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/15 08:32 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/14 18:49 net f99c5f563c17 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/14 12:58 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/13 19:19 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/13 08:21 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/13 06:27 bpf 443574b03387 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/13 04:26 net f99c5f563c17 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/13 03:00 net f99c5f563c17 c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/12 18:39 bpf 443574b03387 27de0a5c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/12 11:47 bpf 443574b03387 27de0a5c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/11 23:47 net f99c5f563c17 478efa7f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/11 19:11 net f99c5f563c17 478efa7f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/11 08:04 bpf 443574b03387 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/11 02:31 net f99c5f563c17 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/11 00:32 bpf 443574b03387 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/10 23:09 net f99c5f563c17 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/10 11:21 net f99c5f563c17 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/10 08:36 net f99c5f563c17 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce possible deadlock in sock_hash_delete_elem
2024/03/19 11:10 bpf 0740b6427e90 baa80228 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/08 11:39 bpf-next 14bb1e8c8d4a ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-next-kasan-gce possible deadlock in sock_hash_delete_elem
2024/03/27 00:21 net-next 237bb5f7f7f5 454571b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce possible deadlock in sock_hash_delete_elem
2024/03/17 04:50 net-next 237bb5f7f7f5 d615901c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce possible deadlock in sock_hash_delete_elem
2024/04/16 17:06 linux-next 66e4190e92ce 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/15 20:49 linux-next 6bd343537461 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/15 19:27 linux-next 6bd343537461 0d592ce4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/15 02:45 linux-next 9ed46da14b9b c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/14 22:43 linux-next 9ed46da14b9b c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/13 17:14 linux-next 9ed46da14b9b c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/13 09:11 linux-next 9ed46da14b9b c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/12 23:45 linux-next 9ed46da14b9b c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/12 09:03 linux-next 4118d9533ff3 27de0a5c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/12 06:25 linux-next 4118d9533ff3 27de0a5c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/12 04:23 linux-next 4118d9533ff3 478efa7f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/11 22:17 linux-next 4118d9533ff3 478efa7f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/11 15:54 linux-next 4118d9533ff3 478efa7f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/11 06:27 linux-next 6ebf211bb11d 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/11 03:59 linux-next 6ebf211bb11d 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/10 11:00 linux-next a053fd3ca5d1 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/10 09:38 linux-next a053fd3ca5d1 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
2024/04/10 06:46 linux-next a053fd3ca5d1 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in sock_hash_delete_elem
* Struck through repros no longer work on HEAD.