syzbot

possible deadlock in sch_direct_xmit

Status: auto-obsoleted due to no activity on 2023/08/23 09:10
Reported-by: syzbot+890e1fadc31778d96497@syzkaller.appspotmail.com
First crash: 606d, last: 567d
Similar bugs (12)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | possible deadlock in sch_direct_xmit (2) origin:lts-only | C | done | | 19 | 3d05h | 316d | 0/3 | upstream: reported C repro on 2024/01/09 18:28
android-44 | possible deadlock in sch_direct_xmit | C | | | 240 | 1814d | 2051d | 0/2 | public: reported C repro on 2019/04/11 08:44
upstream | possible deadlock in sch_direct_xmit (2) net | C | done | unreliable | 109 | 493d | 1667d | 0/28 | auto-obsoleted due to no activity on 2024/01/14 06:05
linux-4.19 | possible deadlock in sch_direct_xmit (2) | C | error | | 15 | 635d | 1152d | 0/1 | upstream: reported C repro on 2021/09/26 01:30
upstream | possible deadlock in sch_direct_xmit net | C | done | done | 1548 | 1822d | 2500d | 15/28 | fixed on 2020/04/17 19:57
linux-5.15 | possible deadlock in sch_direct_xmit (2) origin:lts-only | C | error | | 13 | 24d | 272d | 0/3 | upstream: reported C repro on 2024/02/22 19:25
linux-4.14 | possible deadlock in sch_direct_xmit | | | | 1 | 1998d | 1998d | 0/1 | auto-closed as invalid on 2019/10/25 08:40
upstream | possible deadlock in sch_direct_xmit (4) net | | | | 1 | 207d | 207d | 25/28 | fixed on 2024/06/05 13:52
linux-4.14 | possible deadlock in sch_direct_xmit (2) | | | | 1 | 1831d | 1831d | 0/1 | auto-closed as invalid on 2020/03/15 19:58
linux-4.19 | possible deadlock in sch_direct_xmit | | | | 1 | 1999d | 1999d | 0/1 | auto-closed as invalid on 2019/10/25 08:50
linux-5.15 | possible deadlock in sch_direct_xmit | | | | 1 | 560d | 560d | 0/3 | auto-obsoleted due to no activity on 2023/08/23 09:09
upstream | possible deadlock in sch_direct_xmit (3) net | | | | 1 | 283d | 283d | 25/28 | fixed on 2024/04/10 16:40

Sample crash report:
============================================
WARNING: possible recursive locking detected
6.1.21-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.3/10994 is trying to acquire lock:
ffff88807c6e1458 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:350 [inline]
ffff88807c6e1458 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4300 [inline]
ffff88807c6e1458 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x1c0/0x5e0 net/sched/sch_generic.c:340

but task is already holding lock:
ffff888026ca58d8 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:350 [inline]
ffff888026ca58d8 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4300 [inline]
ffff888026ca58d8 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x1c0/0x5e0 net/sched/sch_generic.c:340

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(_xmit_ETHER#2);
  lock(_xmit_ETHER#2);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

9 locks held by syz-executor.3/10994:
 #0: ffffffff8cf266a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:306
 #1: ffffffff8cf26700 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:306
 #2: ffffffff8cf26700 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:306
 #3: ffffffff8cf26700 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:306
 #4: ffff888083121258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:360 [inline]
 #4: ffff888083121258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:187 [inline]
 #4: ffff888083121258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3806 [inline]
 #4: ffff888083121258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x13f1/0x3c20 net/core/dev.c:4224
 #5: ffff888026ca58d8 (_xmit_ETHER#2){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:350 [inline]
 #5: ffff888026ca58d8 (_xmit_ETHER#2){+.-.}-{2:2}, at: __netif_tx_lock include/linux/netdevice.h:4300 [inline]
 #5: ffff888026ca58d8 (_xmit_ETHER#2){+.-.}-{2:2}, at: sch_direct_xmit+0x1c0/0x5e0 net/sched/sch_generic.c:340
 #6: ffffffff8cf26700 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:306
 #7: ffffffff8cf26700 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x9/0x30 include/linux/rcupdate.h:306
 #8: ffff88807b81c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: spin_trylock include/linux/spinlock.h:360 [inline]
 #8: ffff88807b81c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: qdisc_run_begin include/net/sch_generic.h:187 [inline]
 #8: ffff88807b81c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3806 [inline]
 #8: ffff88807b81c258 (dev->qdisc_tx_busylock ?: &qdisc_tx_busylock){+...}-{2:2}, at: __dev_queue_xmit+0x13f1/0x3c20 net/core/dev.c:4224

stack backtrace:
CPU: 1 PID: 10994 Comm: syz-executor.3 Not tainted 6.1.21-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_deadlock_bug kernel/locking/lockdep.c:2991 [inline]
 check_deadlock kernel/locking/lockdep.c:3034 [inline]
 validate_chain+0x4726/0x58e0 kernel/locking/lockdep.c:3819
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
 __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:350 [inline]
 __netif_tx_lock include/linux/netdevice.h:4300 [inline]
 sch_direct_xmit+0x1c0/0x5e0 net/sched/sch_generic.c:340
 __dev_xmit_skb net/core/dev.c:3819 [inline]
 __dev_queue_xmit+0x1ab4/0x3c20 net/core/dev.c:4224
 neigh_output include/net/neighbour.h:546 [inline]
 ip_finish_output2+0xdd1/0x1280 net/ipv4/ip_output.c:228
 iptunnel_xmit+0x50c/0x930 net/ipv4/ip_tunnel_core.c:82
 ip_tunnel_xmit+0x2296/0x2c70 net/ipv4/ip_tunnel.c:813
 __gre_xmit net/ipv4/ip_gre.c:469 [inline]
 erspan_xmit+0xaa4/0x17a0 net/ipv4/ip_gre.c:715
 __netdev_start_xmit include/linux/netdevice.h:4849 [inline]
 netdev_start_xmit include/linux/netdevice.h:4863 [inline]
 xmit_one net/core/dev.c:3592 [inline]
 dev_hard_start_xmit+0x261/0x8c0 net/core/dev.c:3608
 sch_direct_xmit+0x2b2/0x5e0 net/sched/sch_generic.c:342
 __dev_xmit_skb net/core/dev.c:3819 [inline]
 __dev_queue_xmit+0x1ab4/0x3c20 net/core/dev.c:4224
 neigh_output include/net/neighbour.h:546 [inline]
 ip_finish_output2+0xdd1/0x1280 net/ipv4/ip_output.c:228
 iptunnel_xmit+0x50c/0x930 net/ipv4/ip_tunnel_core.c:82
 ip_tunnel_xmit+0x2296/0x2c70 net/ipv4/ip_tunnel.c:813
 __gre_xmit net/ipv4/ip_gre.c:469 [inline]
 ipgre_xmit+0x759/0xa60 net/ipv4/ip_gre.c:661
 __netdev_start_xmit include/linux/netdevice.h:4849 [inline]
 netdev_start_xmit include/linux/netdevice.h:4863 [inline]
 xmit_one net/core/dev.c:3592 [inline]
 dev_hard_start_xmit+0x261/0x8c0 net/core/dev.c:3608
 __dev_queue_xmit+0x1b97/0x3c20 net/core/dev.c:4258
 dev_queue_xmit include/linux/netdevice.h:3017 [inline]
 __bpf_tx_skb net/core/filter.c:2117 [inline]
 __bpf_redirect_no_mac net/core/filter.c:2151 [inline]
 __bpf_redirect+0x9f3/0x1120 net/core/filter.c:2174
 ____bpf_clone_redirect net/core/filter.c:2441 [inline]
 bpf_clone_redirect+0x249/0x360 net/core/filter.c:2413
 bpf_prog_bebbfe2050753572+0x56/0x5b
 bpf_dispatcher_nop_func include/linux/bpf.h:975 [inline]
 __bpf_prog_run include/linux/filter.h:600 [inline]
 bpf_prog_run include/linux/filter.h:607 [inline]
 bpf_test_run+0x40f/0x8b0 net/bpf/test_run.c:402
 bpf_prog_test_run_skb+0xaf1/0x13a0 net/bpf/test_run.c:1180
 bpf_prog_test_run+0x32f/0x3a0 kernel/bpf/syscall.c:3628
 __sys_bpf+0x3eb/0x6c0 kernel/bpf/syscall.c:4981
 __do_sys_bpf kernel/bpf/syscall.c:5067 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5065 [inline]
 __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5065
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fbd64c8c0f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fbd6592d168 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fbd64dabf80 RCX: 00007fbd64c8c0f9
RDX: 000000000000002c RSI: 0000000020000080 RDI: 000000000000000a
RBP: 00007fbd64ce7b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffcb9c5a77f R14: 00007fbd6592d300 R15: 0000000000022000
 </TASK>
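
What the splat records: a BPF test run cloned a packet to a GRE device (ipgre_xmit); the GRE-encapsulated packet was routed into an ERSPAN device (erspan_xmit), whose TX-queue lock (ffff888026ca58d8, lock #5) was taken by the outer sch_direct_xmit; the ERSPAN-encapsulated packet was then routed to yet another Ethernet-type device, and the nested sch_direct_xmit tried to take that device's TX lock (ffff88807c6e1458). The two spinlocks are distinct instances, but per-device TX locks are keyed to one lockdep class per link type (ARPHRD_ETHER maps to _xmit_ETHER), so lockdep reports the nesting as possibly recursive; that is what the "missing lock nesting notation" hint refers to. A simplified sketch of the helper both frames sit in (condensed from include/linux/netdevice.h as cited at line 4300 above):

    /* Both nested calls run this on different netdev_queue instances whose
     * _xmit_lock spinlocks share the _xmit_ETHER lockdep class. */
    static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
    {
            spin_lock(&txq->_xmit_lock);  /* second same-class acquisition trips lockdep */
            /* Pairs with the READ_ONCE() recursion check in __dev_queue_xmit() */
            WRITE_ONCE(txq->xmit_lock_owner, cpu);
    }

Whether this nesting can truly deadlock depends on whether the tunnel stacking can loop back to a device whose lock is already held; lockdep cannot distinguish that from the benign two-device nesting shown here.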

Crashes (2):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/03/26 07:30 | linux-6.1.y | e3a87a10f259 | fbf0499a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | possible deadlock in sch_direct_xmit
2023/05/03 16:36 | linux-6.1.y | ca48fc16c493 | b5918830 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-arm64 | possible deadlock in sch_direct_xmit
* Struck through repros no longer work on HEAD.
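
No reproducer is attached to either crash above, but the syscall side of the trace (__sys_bpf -> bpf_prog_test_run_skb -> bpf_clone_redirect) suggests the shape one would take. A minimal sketch of the BPF program only, assuming a pre-configured chain of same-type tunnels (gre -> erspan -> another Ethernet-class device) and a hypothetical TUNNEL_IFINDEX; this is an illustration, not the program (bpf_prog_bebbfe2050753572) from the report:

    /* tc program that clones every test packet to the first tunnel,
     * entering the nested-transmit path shown in the call trace. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define TUNNEL_IFINDEX 10  /* hypothetical: ifindex of the GRE device */

    SEC("tc")
    int clone_to_tunnel(struct __sk_buff *skb)
    {
            return bpf_clone_redirect(skb, TUNNEL_IFINDEX, 0);
    }

    char LICENSE[] SEC("license") = "GPL";

Driving it through BPF_PROG_TEST_RUN (as bpf_prog_test_run in the trace does) injects the packet without any real traffic, which is why the whole recursion happens under a single syscall.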