==================================================================
BUG: KCSAN: data-race in virtqueue_disable_cb / vring_interrupt

write to 0xffff888103349352 of 1 bytes by interrupt on cpu 0:
 vring_interrupt+0x128/0x170 drivers/virtio/virtio_ring.c:2497
 __handle_irq_event_percpu+0x91/0x490 kernel/irq/handle.c:158
 handle_irq_event_percpu kernel/irq/handle.c:193 [inline]
 handle_irq_event+0x64/0xf0 kernel/irq/handle.c:210
 handle_edge_irq+0x167/0x590 kernel/irq/chip.c:834
 generic_handle_irq_desc include/linux/irqdesc.h:161 [inline]
 handle_irq arch/x86/kernel/irq.c:238 [inline]
 __common_interrupt+0x3c/0xb0 arch/x86/kernel/irq.c:257
 common_interrupt+0x7a/0x90 arch/x86/kernel/irq.c:247
 asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:636
 _compound_head include/linux/page-flags.h:245 [inline]
 virt_to_folio include/linux/mm.h:1194 [inline]
 virt_to_slab mm/slab.h:213 [inline]
 memcg_slab_post_alloc_hook mm/slab.h:527 [inline]
 slab_post_alloc_hook+0x12c/0x340 mm/slab.h:770
 slab_alloc_node mm/slub.c:3470 [inline]
 slab_alloc mm/slub.c:3478 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3485 [inline]
 kmem_cache_alloc+0x10f/0x220 mm/slub.c:3494
 anon_vma_alloc mm/rmap.c:94 [inline]
 anon_vma_fork+0xac/0x2c0 mm/rmap.c:361
 dup_mmap kernel/fork.c:727 [inline]
 dup_mm kernel/fork.c:1689 [inline]
 copy_mm+0x7b6/0xff0 kernel/fork.c:1738
 copy_process+0x1008/0x2180 kernel/fork.c:2504
 kernel_clone+0x169/0x560 kernel/fork.c:2912
 __do_sys_clone kernel/fork.c:3055 [inline]
 __se_sys_clone kernel/fork.c:3039 [inline]
 __x64_sys_clone+0xe8/0x120 kernel/fork.c:3039
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

read to 0xffff888103349352 of 1 bytes by task 3281 on cpu 1:
 virtqueue_disable_cb_split drivers/virtio/virtio_ring.c:862 [inline]
 virtqueue_disable_cb+0x92/0x180 drivers/virtio/virtio_ring.c:2352
 start_xmit+0xf0/0xae0 drivers/net/virtio_net.c:2149
 __netdev_start_xmit include/linux/netdevice.h:4910 [inline]
 netdev_start_xmit include/linux/netdevice.h:4924 [inline]
 xmit_one net/core/dev.c:3537 [inline]
 dev_hard_start_xmit+0x11b/0x3f0 net/core/dev.c:3553
 sch_direct_xmit+0x1b0/0x570 net/sched/sch_generic.c:342
 __dev_xmit_skb net/core/dev.c:3764 [inline]
 __dev_queue_xmit+0xe5f/0x1d10 net/core/dev.c:4169
 dev_queue_xmit include/linux/netdevice.h:3088 [inline]
 neigh_hh_output include/net/neighbour.h:528 [inline]
 neigh_output include/net/neighbour.h:542 [inline]
 ip_finish_output2+0x700/0x840 net/ipv4/ip_output.c:230
 ip_finish_output+0xf4/0x240 net/ipv4/ip_output.c:318
 NF_HOOK_COND include/linux/netfilter.h:292 [inline]
 ip_output+0xe5/0x1b0 net/ipv4/ip_output.c:432
 dst_output include/net/dst.h:458 [inline]
 ip_local_out net/ipv4/ip_output.c:127 [inline]
 __ip_queue_xmit+0xa4d/0xa70 net/ipv4/ip_output.c:534
 ip_queue_xmit+0x38/0x40 net/ipv4/ip_output.c:548
 __tcp_transmit_skb+0x1231/0x1710 net/ipv4/tcp_output.c:1401
 __tcp_send_ack+0x1de/0x2d0 net/ipv4/tcp_output.c:4072
 tcp_send_ack+0x27/0x30 net/ipv4/tcp_output.c:4078
 __tcp_cleanup_rbuf+0x149/0x260 net/ipv4/tcp.c:1483
 tcp_cleanup_rbuf net/ipv4/tcp.c:1494 [inline]
 tcp_recvmsg_locked+0x109d/0x1540 net/ipv4/tcp.c:2536
 tcp_recvmsg+0x13b/0x490 net/ipv4/tcp.c:2566
 inet_recvmsg+0xa2/0x210 net/ipv4/af_inet.c:862
 sock_recvmsg_nosec net/socket.c:1020 [inline]
 sock_recvmsg net/socket.c:1041 [inline]
 sock_read_iter+0x1a0/0x210 net/socket.c:1107
 call_read_iter include/linux/fs.h:1865 [inline]
 new_sync_read fs/read_write.c:389 [inline]
 vfs_read+0x3da/0x5c0 fs/read_write.c:470
 ksys_read+0xeb/0x1a0 fs/read_write.c:613
 __do_sys_read fs/read_write.c:623 [inline]
 __se_sys_read fs/read_write.c:621 [inline]
 __x64_sys_read+0x42/0x50 fs/read_write.c:621
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

value changed: 0x00 -> 0x01

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 3281 Comm: syz-fuzzer Tainted: G W 6.5.0-rc1-syzkaller-00033-geb26cbb1a754 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023
==================================================================
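The report shows a hard-IRQ writer (vring_interrupt, virtio_ring.c:2497) flipping a one-byte flag from 0x00 to 0x01 while a task-context reader (virtqueue_disable_cb_split via virtqueue_disable_cb on the start_xmit path) loads it with a plain access. Below is a minimal, illustrative sketch of the usual KCSAN remedy for this kind of benign flag race, assuming the racy byte is vq->event_triggered; this is not necessarily the patch that landed upstream, and the surrounding code is simplified from drivers/virtio/virtio_ring.c.

	/*
	 * Sketch only: mark both sides of the racy access with
	 * READ_ONCE()/WRITE_ONCE() (from <linux/compiler.h>) so the
	 * compiler cannot tear or fuse the accesses and KCSAN treats
	 * the concurrent access as intentional.
	 */

	/* writer side, hard-IRQ context (vring_interrupt) */
	if (vq->event)
		WRITE_ONCE(vq->event_triggered, true);

	/* reader side, task context (virtqueue_disable_cb path):
	 * if an event already fired, the device won't signal again,
	 * so there is no need to touch the avail flags.
	 */
	if (READ_ONCE(vq->event_triggered))
		return;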