syzbot

INFO: task hung in sock_ioctl

Status: premoderation: reported on 2025/06/23 11:47
Reported-by: syzbot+b55f23da1deea9bae844@syzkaller.appspotmail.com
First crash: 22d, last: 22d
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in sock_ioctl (2) [net] | 1 | | | | 1 | 1392d | 1392d | 0/29 | auto-closed as invalid on 2021/12/21 20:14
upstream | INFO: task hung in rtnetlink_rcv_msg [net] | 1 | C | inconclusive | inconclusive | 1970 | 372d | 2335d | 26/29 | fixed on 2024/07/09 19:14
linux-4.14 | INFO: task hung in sock_ioctl | 1 | | | | 1 | 1920d | 1920d | 0/1 | auto-closed as invalid on 2020/08/10 12:43
upstream | INFO: task hung in sock_ioctl (3) [bridge] | 1 | | | | 2 | 273d | 283d | 0/29 | auto-obsoleted due to no activity on 2025/01/13 17:42
upstream | INFO: task hung in sock_ioctl [bridge] | 1 | | | | 9 | 2374d | 2731d | 0/29 | auto-closed as invalid on 2019/07/13 13:17
linux-4.14 | INFO: task hung in sock_ioctl (2) | 1 | | | | 1 | 1621d | 1621d | 0/1 | auto-closed as invalid on 2021/06/05 18:05

Sample crash report:
INFO: task syz.1.2281:6896 blocked for more than 124 seconds.
      Not tainted 6.12.30-syzkaller-g5bf4b91e3333 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2281      state:D stack:0     pid:6896  tgid:6892  ppid:291    flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5924 [inline]
 __schedule+0x145b/0x1f10 kernel/sched/core.c:7750
 __schedule_loop kernel/sched/core.c:7831 [inline]
 schedule+0xc6/0x240 kernel/sched/core.c:7846
 schedule_preempt_disabled+0x14/0x30 kernel/sched/core.c:7903
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0x836/0x1b60 kernel/locking/mutex.c:786
 __mutex_lock_slowpath+0xe/0x20 kernel/locking/mutex.c:1117
 mutex_lock+0x102/0x1c0 kernel/locking/mutex.c:270
 br_ioctl_call net/socket.c:1199 [inline]
 sock_ioctl+0x3d0/0x7b0 net/socket.c:1301
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl+0x135/0x1b0 fs/ioctl.c:893
 __x64_sys_ioctl+0x7f/0xa0 fs/ioctl.c:893
 x64_sys_call+0x1878/0x2ee0 arch/x86/include/generated/asm/syscalls_64.h:17
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x58/0xf0 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fc317f8e929
RSP: 002b:00007fc318e35038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fc3181b6080 RCX: 00007fc317f8e929
RDX: 0000000000000000 RSI: 0000000000008940 RDI: 0000000000000003
RBP: 00007fc318010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fc3181b6080 R15: 00007ffe58993628
 </TASK>
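The task above is in uninterruptible sleep inside mutex_lock() reached from br_ioctl_call() (net/socket.c:1199 in this build), i.e. it is waiting for the global bridge-ioctl mutex, and whatever currently holds that mutex did not release it within the hung-task timeout. The snippet below is a paraphrase of the br_ioctl_call() wrapper from recent mainline net/socket.c, shown only to illustrate which lock the task is blocked on; the exact code in the android16-6.12 tree may differ in detail.

/*
 * Paraphrased from mainline net/socket.c; illustration only, the
 * android16-6.12 sources may differ in detail.
 */
static DEFINE_MUTEX(br_ioctl_mutex);
static int (*br_ioctl_hook)(struct net *net, struct net_bridge *br,
			    unsigned int cmd, struct ifreq *ifr,
			    void __user *uarg);

static int br_ioctl_call(struct net *net, struct net_bridge *br,
			 unsigned int cmd, struct ifreq *ifr,
			 void __user *uarg)
{
	int err = -ENOPKG;

	if (!br_ioctl_hook)
		request_module("bridge");

	/* The hung task (syz.1.2281, state D) is parked on this lock. */
	mutex_lock(&br_ioctl_mutex);
	if (br_ioctl_hook)
		err = br_ioctl_hook(net, br, cmd, ifr, uarg);
	mutex_unlock(&br_ioctl_mutex);

	return err;
}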
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 37 Comm: khungtaskd Not tainted 6.12.30-syzkaller-g5bf4b91e3333 #0 38ee2089744292f67dc407ed27f6a777b522fef8
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack+0x21/0x30 lib/dump_stack.c:94
 dump_stack_lvl+0x10c/0x190 lib/dump_stack.c:120
 dump_stack+0x19/0x20 lib/dump_stack.c:129
 nmi_cpu_backtrace+0x2bf/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x142/0x2c0 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:41
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:229 [inline]
 watchdog+0xd8f/0xed0 kernel/hung_task.c:385
 kthread+0x2ca/0x370 kernel/kthread.c:389
 ret_from_fork+0x64/0xa0 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 33 Comm: rcuop/1 Not tainted 6.12.30-syzkaller-g5bf4b91e3333 #0 38ee2089744292f67dc407ed27f6a777b522fef8
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:netdev_start_xmit include/linux/netdevice.h:4981 [inline]
RIP: 0010:xmit_one net/core/dev.c:3662 [inline]
RIP: 0010:dev_hard_start_xmit+0x1c4/0x770 net/core/dev.c:3678
Code: c6 01 00 00 41 8b 45 00 89 45 ac 0f 1f 44 00 00 e8 31 18 1e fd 4c 8b 6d c8 0f 1f 44 00 00 e8 23 18 1e fd 48 8b 85 40 ff ff ff <80> 3c 18 00 74 08 4c 89 ef e8 1e 0b 74 fd 4d 85 ff 0f 95 c0 4d 8b
RSP: 0018:ffffc900002301b0 EFLAGS: 00000246
RAX: 1ffff11025fba001 RBX: dffffc0000000000 RCX: ffff88810367b900
RDX: 0000000000000100 RSI: 1ffff11025fba034 RDI: ffff88816f451a00
RBP: ffffc90000230270 R08: ffff88810367b900 R09: 0000000000000002
R10: 000000000000a888 R11: 0000000000000100 R12: 1ffffffff0e41b52
R13: ffff88812fdd0008 R14: ffff88816f451a00 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881f6f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f6ef20e7d60 CR3: 000000010b710000 CR4: 00000000003526b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 __dev_queue_xmit+0x19cc/0x3790 net/core/dev.c:4514
 dev_queue_xmit include/linux/netdevice.h:3141 [inline]
 br_dev_queue_push_xmit+0x553/0x6d0 net/bridge/br_forward.c:53
 NF_HOOK include/linux/netfilter.h:317 [inline]
 br_forward_finish net/bridge/br_forward.c:66 [inline]
 NF_HOOK include/linux/netfilter.h:317 [inline]
 __br_forward+0x25c/0x390 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 br_flood+0x67e/0x730 net/bridge/br_forward.c:245
 br_handle_frame_finish+0x12bb/0x1720 net/bridge/br_input.c:215
 nf_hook_bridge_pre net/bridge/br_input.c:301 [inline]
 br_handle_frame+0x5a6/0xba0 net/bridge/br_input.c:424
 __netif_receive_skb_core+0xf4b/0x3940 net/core/dev.c:5651
 __netif_receive_skb_one_core net/core/dev.c:5755 [inline]
 __netif_receive_skb net/core/dev.c:5870 [inline]
 process_backlog+0x3e5/0xae0 net/core/dev.c:6202
 __napi_poll+0xd3/0x610 net/core/dev.c:6853
 napi_poll net/core/dev.c:6922 [inline]
 net_rx_action+0x584/0xce0 net/core/dev.c:7044
 handle_softirqs+0x1ae/0x630 kernel/softirq.c:603
 __do_softirq+0xf/0x16 kernel/softirq.c:641
 do_softirq+0xa6/0x100 kernel/softirq.c:485
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x74/0x80 kernel/softirq.c:412
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 rcu_do_batch+0x5c6/0xd20 kernel/rcu/tree.c:2586
 nocb_cb_wait kernel/rcu/tree_nocb.h:923 [inline]
 rcu_nocb_cb_kthread+0x4dc/0xac0 kernel/rcu/tree_nocb.h:957
 kthread+0x2ca/0x370 kernel/kthread.c:389
 ret_from_fork+0x64/0xa0 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
net_ratelimit: 95839 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:ae:7c:ab:53:10:ad, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
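For reference, the saved user-space registers of the blocked task (RDI: 3, RSI: 0x8940, RDX: 0) describe an ioctl(fd, 0x8940, NULL) call; 0x8940 is SIOCGIFBR from <linux/sockios.h>, one of the legacy bridge ioctls that sock_ioctl() routes through br_ioctl_call(). Meanwhile the CPU 1 backtrace and the rate-limited "own address as source address" messages above show the machine busy in the bridge flood path in softirq context. The program below is only a hand-written sketch of that ioctl shape for orientation; it is not the syzkaller reproducer (this report has no repro).

/* Hand-written sketch of the ioctl seen in the register dump
 * (fd = 3, cmd = 0x8940 = SIOCGIFBR, arg = 0).  NOT the syzkaller repro. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/sockios.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	/* Legacy bridge ioctl: handled by sock_ioctl() -> br_ioctl_call(),
	 * which takes br_ioctl_mutex in the kernel. */
	if (ioctl(fd, SIOCGIFBR, NULL) < 0)
		perror("ioctl(SIOCGIFBR)");
	return 0;
}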

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/06/23 11:46 | android16-6.12 | 5bf4b91e3333 | d6cdfb8a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-android-6-12-rust | INFO: task hung in sock_ioctl