syzbot


possible deadlock in drm_handle_vblank

Status: upstream: reported on 2024/03/20 14:25
Subsystems: bpf net
Reported-by: syzbot+bc922f476bd65abbd466@syzkaller.appspotmail.com
Fix commit: ff9105993240 bpf, sockmap: Prevent lock inversion deadlock in map delete elem
Patched on: [ci-qemu-upstream ci-qemu-upstream-386 ci-qemu2-arm32 ci-qemu2-arm64 ci-qemu2-arm64-compat ci-qemu2-arm64-mte ci-upstream-bpf-kasan-gce ci-upstream-gce-arm64 ci-upstream-gce-leak ci-upstream-kasan-badwrites-root ci-upstream-kasan-gce ci-upstream-kasan-gce-386 ci-upstream-kasan-gce-root ci-upstream-kasan-gce-selinux-root ci-upstream-kasan-gce-smack-root ci-upstream-kmsan-gce-386-root ci-upstream-kmsan-gce-root ci-upstream-linux-next-kasan-gce-root ci-upstream-net-kasan-gce ci-upstream-net-this-kasan-gce ci2-upstream-fs ci2-upstream-kcsan-gce ci2-upstream-net-next-test-gce ci2-upstream-usb], missing on: [ci-qemu2-riscv64 ci-upstream-bpf-next-kasan-gce]
First crash: 41d, last: 12d
Discussions (2)
Title | Replies (including bot) | Last reply
[PATCH bpf] bpf, sockmap: Prevent lock inversion deadlock in map delete elem | 3 (3) | 2024/04/02 14:40
[syzbot] [bpf?] [net?] possible deadlock in drm_handle_vblank | 0 (1) | 2024/03/20 14:25
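
The lockdep splat below reduces to a HARDIRQ-safe -> HARDIRQ-unsafe lock inversion: drm's dev->event_lock is taken with spin_lock_irqsave() from hard-IRQ context (the vkms vblank hrtimer), while sockmap's per-bucket htab->buckets[i].lock is only ever taken with spin_lock_bh(). A BPF program attached to the kfree tracepoint runs while vkms_crtc_atomic_flush() holds dev->event_lock, and deleting a sockhash element from that program takes the bh-only bucket lock in that IRQ-disabled context. The fix commit's title ("Prevent lock inversion deadlock in map delete elem") implies the delete path now refuses to run there; what follows is a minimal sketch of such a guard, assuming the bail-out approach. Helper and struct names follow net/core/sock_map.c, but the body is a reconstruction for illustration, not the verbatim patch:

static long sock_hash_delete_elem(struct bpf_map *map, void *key)
{
	struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
	u32 hash, key_size = map->key_size;
	struct bpf_shtab_bucket *bucket;
	struct bpf_shtab_elem *elem;
	int ret = -ENOENT;

	/* Assumed guard: the bucket lock below is only taken with
	 * spin_lock_bh(), so it is HARDIRQ-unsafe. Taking it while a
	 * tracing prog runs under an IRQ-safe lock such as dev->event_lock
	 * creates exactly the dependency lockdep flags below. Bail out
	 * when IRQs are disabled instead of making every bucket lock
	 * IRQ-safe.
	 */
	if (irqs_disabled())
		return -EBUSY;

	hash = sock_hash_bucket_hash(key, key_size);
	bucket = sock_hash_select_bucket(htab, hash);

	spin_lock_bh(&bucket->lock);
	elem = sock_hash_lookup_elem_raw(&bucket->head, hash, key, key_size);
	if (elem) {
		hlist_del_rcu(&elem->node);
		sock_map_unref(elem->sk, elem);
		sock_hash_free_elem(htab, elem);
		ret = 0;
	}
	spin_unlock_bh(&bucket->lock);
	return ret;
}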

Sample crash report:
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.8.0-syzkaller-08951-gfe46a7dd189e #0 Not tainted
-----------------------------------------------------
syz-executor.0/8995 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88807ae35220 (&htab->buckets[i].lock){+...}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffff88807ae35220 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xb0/0x300 net/core/sock_map.c:939

and this task is already holding:
ffff88801f6d43f0 (&dev->event_lock){-.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline]
ffff88801f6d43f0 (&dev->event_lock){-.-.}-{2:2}, at: vkms_crtc_atomic_flush+0x8d/0x1c0 drivers/gpu/drm/vkms/vkms_crtc.c:253
which would create a new lock dependency:
 (&dev->event_lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&dev->event_lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
  drm_handle_vblank+0xc8/0x4c0 drivers/gpu/drm/drm_vblank.c:1885
  vkms_vblank_simulate+0xd6/0x360 drivers/gpu/drm/vkms/vkms_crtc.c:29
  __run_hrtimer kernel/time/hrtimer.c:1692 [inline]
  __hrtimer_run_queues+0x597/0xd00 kernel/time/hrtimer.c:1756
  hrtimer_interrupt+0x396/0x990 kernel/time/hrtimer.c:1818
  local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1032 [inline]
  __sysvec_apic_timer_interrupt+0x109/0x3a0 arch/x86/kernel/apic/apic.c:1049
  instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
  sysvec_apic_timer_interrupt+0xa1/0xc0 arch/x86/kernel/apic/apic.c:1043
  asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
  native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
  arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
  acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:112
  acpi_idle_enter+0xe4/0x140 drivers/acpi/processor_idle.c:707
  cpuidle_enter_state+0x11a/0x490 drivers/cpuidle/cpuidle.c:267
  cpuidle_enter+0x5d/0xa0 drivers/cpuidle/cpuidle.c:388
  call_cpuidle kernel/sched/idle.c:155 [inline]
  cpuidle_idle_call kernel/sched/idle.c:236 [inline]
  do_idle+0x375/0x5d0 kernel/sched/idle.c:332
  cpu_startup_entry+0x42/0x60 kernel/sched/idle.c:430
  rest_init+0x2e0/0x300 init/main.c:730
  arch_call_rest_init+0xe/0x10 init/main.c:831
  start_kernel+0x47a/0x500 init/main.c:1077
  x86_64_start_reservations+0x2a/0x30 arch/x86/kernel/head64.c:509
  x86_64_start_kernel+0x99/0xa0 arch/x86/kernel/head64.c:490
  common_startup_64+0x13e/0x147

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
  _raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
  spin_lock_bh include/linux/spinlock.h:356 [inline]
  sock_hash_free+0x164/0x820 net/core/sock_map.c:1154
  bpf_map_free_deferred+0xe8/0x110 kernel/bpf/syscall.c:734
  process_one_work kernel/workqueue.c:3254 [inline]
  process_scheduled_works+0xa02/0x1770 kernel/workqueue.c:3335
  worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
  kthread+0x2f2/0x390 kernel/kthread.c:388
  ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
  ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&dev->event_lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&dev->event_lock);

 *** DEADLOCK ***

9 locks held by syz-executor.0/8995:
 #0: ffff88801f6d42f8 (&dev->clientlist_mutex){+.+.}-{3:3}, at: drm_client_dev_restore+0xae/0x270 drivers/gpu/drm/drm_client.c:242
 #1: ffff888019b0ea80 (&helper->lock){+.+.}-{3:3}, at: __drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:242 [inline]
 #1: ffff888019b0ea80 (&helper->lock){+.+.}-{3:3}, at: drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:278 [inline]
 #1: ffff888019b0ea80 (&helper->lock){+.+.}-{3:3}, at: drm_fb_helper_lastclose+0xb3/0x180 drivers/gpu/drm/drm_fb_helper.c:2005
 #2: ffff88801f6d41b0 (&dev->master_mutex){+.+.}-{3:3}, at: drm_master_internal_acquire+0x20/0x70 drivers/gpu/drm/drm_auth.c:452
 #3: ffff888019b0e898 (&client->modeset_mutex){+.+.}-{3:3}, at: drm_client_modeset_commit_locked+0x50/0x520 drivers/gpu/drm/drm_client_modeset.c:1152
 #4: ffffc9000b8a79b0 (crtc_ww_class_acquire){+.+.}-{0:0}, at: drm_client_modeset_commit_atomic+0xd5/0x7e0 drivers/gpu/drm/drm_client_modeset.c:990
 #5: ffff88801f6d4db8 (crtc_ww_class_mutex){+.+.}-{3:3}, at: ww_mutex_lock_slow include/linux/ww_mutex.h:299 [inline]
 #5: ffff88801f6d4db8 (crtc_ww_class_mutex){+.+.}-{3:3}, at: modeset_lock+0x301/0x650 drivers/gpu/drm/drm_modeset_lock.c:311
 #6: ffff88801f6d6860 (&vkms_out->lock){-.-.}-{2:2}, at: drm_atomic_helper_commit_planes+0x1d3/0xe00 drivers/gpu/drm/drm_atomic_helper.c:2757
 #7: ffff88801f6d43f0 (&dev->event_lock){-.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:351 [inline]
 #7: ffff88801f6d43f0 (&dev->event_lock){-.-.}-{2:2}, at: vkms_crtc_atomic_flush+0x8d/0x1c0 drivers/gpu/drm/vkms/vkms_crtc.c:253
 #8: ffffffff8e132020 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #8: ffffffff8e132020 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #8: ffffffff8e132020 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2380 [inline]
 #8: ffffffff8e132020 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x114/0x420 kernel/trace/bpf_trace.c:2420

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&dev->event_lock){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
                    lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
                    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                    _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
                    drm_handle_vblank+0xc8/0x4c0 drivers/gpu/drm/drm_vblank.c:1885
                    vkms_vblank_simulate+0xd6/0x360 drivers/gpu/drm/vkms/vkms_crtc.c:29
                    __run_hrtimer kernel/time/hrtimer.c:1692 [inline]
                    __hrtimer_run_queues+0x597/0xd00 kernel/time/hrtimer.c:1756
                    hrtimer_interrupt+0x396/0x990 kernel/time/hrtimer.c:1818
                    local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1032 [inline]
                    __sysvec_apic_timer_interrupt+0x109/0x3a0 arch/x86/kernel/apic/apic.c:1049
                    instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
                    sysvec_apic_timer_interrupt+0xa1/0xc0 arch/x86/kernel/apic/apic.c:1043
                    asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
                    native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
                    arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
                    acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:112
                    acpi_idle_enter+0xe4/0x140 drivers/acpi/processor_idle.c:707
                    cpuidle_enter_state+0x11a/0x490 drivers/cpuidle/cpuidle.c:267
                    cpuidle_enter+0x5d/0xa0 drivers/cpuidle/cpuidle.c:388
                    call_cpuidle kernel/sched/idle.c:155 [inline]
                    cpuidle_idle_call kernel/sched/idle.c:236 [inline]
                    do_idle+0x375/0x5d0 kernel/sched/idle.c:332
                    cpu_startup_entry+0x42/0x60 kernel/sched/idle.c:430
                    rest_init+0x2e0/0x300 init/main.c:730
                    arch_call_rest_init+0xe/0x10 init/main.c:831
                    start_kernel+0x47a/0x500 init/main.c:1077
                    x86_64_start_reservations+0x2a/0x30 arch/x86/kernel/head64.c:509
                    x86_64_start_kernel+0x99/0xa0 arch/x86/kernel/head64.c:490
                    common_startup_64+0x13e/0x147
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
                    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                    _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
                    drm_handle_vblank+0xc8/0x4c0 drivers/gpu/drm/drm_vblank.c:1885
                    vkms_vblank_simulate+0xd6/0x360 drivers/gpu/drm/vkms/vkms_crtc.c:29
                    __run_hrtimer kernel/time/hrtimer.c:1692 [inline]
                    __hrtimer_run_queues+0x597/0xd00 kernel/time/hrtimer.c:1756
                    hrtimer_interrupt+0x396/0x990 kernel/time/hrtimer.c:1818
                    local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1032 [inline]
                    __sysvec_apic_timer_interrupt+0x109/0x3a0 arch/x86/kernel/apic/apic.c:1049
                    instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1043 [inline]
                    sysvec_apic_timer_interrupt+0xa1/0xc0 arch/x86/kernel/apic/apic.c:1043
                    asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
                    call_rcu+0x7a8/0xa70 kernel/rcu/tree.c:2839
                    exit_creds+0x187/0x200
                    __put_task_struct+0x101/0x290 kernel/fork.c:977
                    put_task_struct include/linux/sched/task.h:138 [inline]
                    delayed_put_task_struct+0x115/0x2d0 kernel/exit.c:229
                    rcu_do_batch kernel/rcu/tree.c:2196 [inline]
                    rcu_core+0xaff/0x1830 kernel/rcu/tree.c:2471
                    __do_softirq+0x2be/0x943 kernel/softirq.c:554
                    run_ksoftirqd+0xc5/0x130 kernel/softirq.c:924
                    smpboot_thread_fn+0x546/0xa30 kernel/smpboot.c:164
                    kthread+0x2f2/0x390 kernel/kthread.c:388
                    ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                    ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
   INITIAL USE at:
                   lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
                   __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
                   _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
                   spin_lock include/linux/spinlock.h:351 [inline]
                   vkms_crtc_atomic_flush+0x8d/0x1c0 drivers/gpu/drm/vkms/vkms_crtc.c:253
                   drm_atomic_helper_commit_planes+0xaf3/0xe00 drivers/gpu/drm/drm_atomic_helper.c:2820
                   vkms_atomic_commit_tail+0x5d/0x200 drivers/gpu/drm/vkms/vkms_drv.c:73
                   commit_tail+0x2ab/0x3c0 drivers/gpu/drm/drm_atomic_helper.c:1832
                   drm_atomic_helper_commit+0x953/0x9f0 drivers/gpu/drm/drm_atomic_helper.c:2072
                   drm_atomic_commit+0x2ae/0x310 drivers/gpu/drm/drm_atomic.c:1514
                   drm_client_modeset_commit_atomic+0x676/0x7e0 drivers/gpu/drm/drm_client_modeset.c:1051
                   drm_client_modeset_commit_locked+0xe0/0x520 drivers/gpu/drm/drm_client_modeset.c:1154
                   drm_client_modeset_commit+0x4a/0x70 drivers/gpu/drm/drm_client_modeset.c:1180
                   __drm_fb_helper_restore_fbdev_mode_unlocked+0xc3/0x170 drivers/gpu/drm/drm_fb_helper.c:251
                   drm_fb_helper_set_par+0xaf/0x100 drivers/gpu/drm/drm_fb_helper.c:1344
                   fbcon_init+0x112b/0x2190 drivers/video/fbdev/core/fbcon.c:1094
                   visual_init+0x2e8/0x650 drivers/tty/vt/vt.c:1023
                   do_bind_con_driver+0x863/0xf60 drivers/tty/vt/vt.c:3643
                   do_take_over_console+0x5e7/0x750 drivers/tty/vt/vt.c:4222
                   do_fbcon_takeover+0x11a/0x200 drivers/video/fbdev/core/fbcon.c:532
                   do_fb_registered drivers/video/fbdev/core/fbcon.c:3000 [inline]
                   fbcon_fb_registered+0x352/0x600 drivers/video/fbdev/core/fbcon.c:3020
                   do_register_framebuffer drivers/video/fbdev/core/fbmem.c:449 [inline]
                   register_framebuffer+0x6b2/0x8d0 drivers/video/fbdev/core/fbmem.c:515
                   __drm_fb_helper_initial_config_and_unlock+0x172d/0x1e30 drivers/gpu/drm/drm_fb_helper.c:1871
                   drm_fbdev_generic_client_hotplug+0x16e/0x230 drivers/gpu/drm/drm_fbdev_generic.c:279
                   drm_client_register+0x181/0x210 drivers/gpu/drm/drm_client.c:141
                   vkms_create drivers/gpu/drm/vkms/vkms_drv.c:226 [inline]
                   vkms_init+0x5f5/0x730 drivers/gpu/drm/vkms/vkms_drv.c:252
                   do_one_initcall+0x23a/0x830 init/main.c:1241
                   do_initcall_level+0x157/0x210 init/main.c:1303
                   do_initcalls+0x3f/0x80 init/main.c:1319
                   kernel_init_freeable+0x435/0x5d0 init/main.c:1550
                   kernel_init+0x1d/0x2a0 init/main.c:1439
                   ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                   ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
 }
 ... key      at: [<ffffffff94815400>] drm_dev_init.__key.17+0x0/0x20

the dependencies between the lock to be acquired
 and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
                    __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                    _raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
                    spin_lock_bh include/linux/spinlock.h:356 [inline]
                    sock_hash_free+0x164/0x820 net/core/sock_map.c:1154
                    bpf_map_free_deferred+0xe8/0x110 kernel/bpf/syscall.c:734
                    process_one_work kernel/workqueue.c:3254 [inline]
                    process_scheduled_works+0xa02/0x1770 kernel/workqueue.c:3335
                    worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
                    kthread+0x2f2/0x390 kernel/kthread.c:388
                    ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                    ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
   INITIAL USE at:
                   lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
                   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                   _raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
                   spin_lock_bh include/linux/spinlock.h:356 [inline]
                   sock_hash_free+0x164/0x820 net/core/sock_map.c:1154
                   bpf_map_free_deferred+0xe8/0x110 kernel/bpf/syscall.c:734
                   process_one_work kernel/workqueue.c:3254 [inline]
                   process_scheduled_works+0xa02/0x1770 kernel/workqueue.c:3335
                   worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
                   kthread+0x2f2/0x390 kernel/kthread.c:388
                   ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                   ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
 }
 ... key      at: [<ffffffff948a0540>] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
   _raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
   spin_lock_bh include/linux/spinlock.h:356 [inline]
   sock_hash_delete_elem+0xb0/0x300 net/core/sock_map.c:939
   bpf_prog_2c29ac5cdc6b1842+0x42/0x4a
   bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
   __bpf_prog_run include/linux/filter.h:657 [inline]
   bpf_prog_run include/linux/filter.h:664 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
   bpf_trace_run2+0x206/0x420 kernel/trace/bpf_trace.c:2420
   trace_kfree include/trace/events/kmem.h:94 [inline]
   kfree+0x291/0x380 mm/slub.c:4377
   drm_crtc_send_vblank_event+0x196/0x240 drivers/gpu/drm/drm_vblank.c:1129
   vkms_crtc_atomic_flush+0xe7/0x1c0 drivers/gpu/drm/vkms/vkms_crtc.c:256
   drm_atomic_helper_commit_planes+0xaf3/0xe00 drivers/gpu/drm/drm_atomic_helper.c:2820
   vkms_atomic_commit_tail+0x5d/0x200 drivers/gpu/drm/vkms/vkms_drv.c:73
   commit_tail+0x2ab/0x3c0 drivers/gpu/drm/drm_atomic_helper.c:1832
   drm_atomic_helper_commit+0x953/0x9f0 drivers/gpu/drm/drm_atomic_helper.c:2072
   drm_atomic_commit+0x2ae/0x310 drivers/gpu/drm/drm_atomic.c:1514
   drm_client_modeset_commit_atomic+0x676/0x7e0 drivers/gpu/drm/drm_client_modeset.c:1051
   drm_client_modeset_commit_locked+0xe0/0x520 drivers/gpu/drm/drm_client_modeset.c:1154
   drm_client_modeset_commit+0x4a/0x70 drivers/gpu/drm/drm_client_modeset.c:1180
   __drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:251 [inline]
   drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:278 [inline]
   drm_fb_helper_lastclose+0xbb/0x180 drivers/gpu/drm/drm_fb_helper.c:2005
   drm_fbdev_generic_client_restore+0x34/0x40 drivers/gpu/drm/drm_fbdev_generic.c:258
   drm_client_dev_restore+0x134/0x270 drivers/gpu/drm/drm_client.c:247
   drm_lastclose drivers/gpu/drm/drm_file.c:406 [inline]
   drm_release+0x47c/0x560 drivers/gpu/drm/drm_file.c:437
   __fput+0x42b/0x8a0 fs/file_table.c:422
   task_work_run+0x251/0x310 kernel/task_work.c:180
   exit_task_work include/linux/task_work.h:38 [inline]
   do_exit+0xa1b/0x27e0 kernel/exit.c:878
   __do_sys_exit kernel/exit.c:994 [inline]
   __se_sys_exit kernel/exit.c:992 [inline]
   __pfx___ia32_sys_exit+0x0/0x10 kernel/exit.c:992
   do_syscall_64+0xfd/0x240
   entry_SYSCALL_64_after_hwframe+0x6d/0x75


stack backtrace:
CPU: 0 PID: 8995 Comm: syz-executor.0 Not tainted 6.8.0-syzkaller-08951-gfe46a7dd189e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 print_bad_irq_dependency kernel/locking/lockdep.c:2626 [inline]
 check_irq_usage kernel/locking/lockdep.c:2865 [inline]
 check_prev_add kernel/locking/lockdep.c:3138 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x4dc7/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
 spin_lock_bh include/linux/spinlock.h:356 [inline]
 sock_hash_delete_elem+0xb0/0x300 net/core/sock_map.c:939
 bpf_prog_2c29ac5cdc6b1842+0x42/0x4a
 bpf_dispatcher_nop_func include/linux/bpf.h:1234 [inline]
 __bpf_prog_run include/linux/filter.h:657 [inline]
 bpf_prog_run include/linux/filter.h:664 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2381 [inline]
 bpf_trace_run2+0x206/0x420 kernel/trace/bpf_trace.c:2420
 trace_kfree include/trace/events/kmem.h:94 [inline]
 kfree+0x291/0x380 mm/slub.c:4377
 drm_crtc_send_vblank_event+0x196/0x240 drivers/gpu/drm/drm_vblank.c:1129
 vkms_crtc_atomic_flush+0xe7/0x1c0 drivers/gpu/drm/vkms/vkms_crtc.c:256
 drm_atomic_helper_commit_planes+0xaf3/0xe00 drivers/gpu/drm/drm_atomic_helper.c:2820
 vkms_atomic_commit_tail+0x5d/0x200 drivers/gpu/drm/vkms/vkms_drv.c:73
 commit_tail+0x2ab/0x3c0 drivers/gpu/drm/drm_atomic_helper.c:1832
 drm_atomic_helper_commit+0x953/0x9f0 drivers/gpu/drm/drm_atomic_helper.c:2072
 drm_atomic_commit+0x2ae/0x310 drivers/gpu/drm/drm_atomic.c:1514
 drm_client_modeset_commit_atomic+0x676/0x7e0 drivers/gpu/drm/drm_client_modeset.c:1051
 drm_client_modeset_commit_locked+0xe0/0x520 drivers/gpu/drm/drm_client_modeset.c:1154
 drm_client_modeset_commit+0x4a/0x70 drivers/gpu/drm/drm_client_modeset.c:1180
 __drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:251 [inline]
 drm_fb_helper_restore_fbdev_mode_unlocked drivers/gpu/drm/drm_fb_helper.c:278 [inline]
 drm_fb_helper_lastclose+0xbb/0x180 drivers/gpu/drm/drm_fb_helper.c:2005
 drm_fbdev_generic_client_restore+0x34/0x40 drivers/gpu/drm/drm_fbdev_generic.c:258
 drm_client_dev_restore+0x134/0x270 drivers/gpu/drm/drm_client.c:247
 drm_lastclose drivers/gpu/drm/drm_file.c:406 [inline]
 drm_release+0x47c/0x560 drivers/gpu/drm/drm_file.c:437
 __fput+0x42b/0x8a0 fs/file_table.c:422
 task_work_run+0x251/0x310 kernel/task_work.c:180
 exit_task_work include/linux/task_work.h:38 [inline]
 do_exit+0xa1b/0x27e0 kernel/exit.c:878
 __do_sys_exit kernel/exit.c:994 [inline]
 __se_sys_exit kernel/exit.c:992 [inline]
 __x64_sys_exit+0x40/0x40 kernel/exit.c:992
 do_syscall_64+0xfd/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7f95d8e7de69
Code: Unable to access opcode bytes at 0x7f95d8e7de3f.
RSP: 002b:00007f95d9c75078 EFLAGS: 00000246 ORIG_RAX: 000000000000003c
RAX: ffffffffffffffda RBX: 00007f95d8fabf80 RCX: 00007f95d8e7de69
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007f95d8eca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f95d8fabf80 R15: 00007ffcad1c9fb8
 </TASK>
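
No syz or C reproducer is listed for this bug, but the call trace above pins down the trigger shape: a tracing program on the kmem:kfree tracepoint (entered via bpf_trace_run2) deletes an element from a BPF_MAP_TYPE_SOCKHASH while drm_crtc_send_vblank_event() frees the vblank event under dev->event_lock. A minimal libbpf-style sketch of that pattern follows; the map and program names are illustrative and not taken from the actual syzkaller program:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} sock_hash SEC(".maps");

SEC("tracepoint/kmem/kfree")
int on_kfree(void *ctx)
{
	__u32 key = 0;

	/* Fires on every kfree(), including the one issued under
	 * dev->event_lock in drm_crtc_send_vblank_event(). The delete
	 * takes htab->buckets[i].lock with spin_lock_bh(), which is the
	 * HARDIRQ-unsafe half of the inversion reported above.
	 */
	bpf_map_delete_elem(&sock_hash, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";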

Crashes (30):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/04/15 01:44 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in drm_handle_vblank
2024/04/12 12:46 upstream fe46a7dd189e 27de0a5c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in drm_handle_vblank
2024/04/12 04:47 upstream fe46a7dd189e 478efa7f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in drm_handle_vblank
2024/04/09 21:49 upstream fe46a7dd189e 56086b24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in drm_handle_vblank
2024/04/09 08:31 upstream fe46a7dd189e 53df08b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in drm_handle_vblank
2024/04/08 21:57 upstream fe46a7dd189e 53df08b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in drm_handle_vblank
2024/04/06 22:24 upstream fe46a7dd189e ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in drm_handle_vblank
2024/04/04 10:45 upstream fe46a7dd189e 51c4dcff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in drm_handle_vblank
2024/04/03 11:11 upstream fe46a7dd189e 7925100d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in drm_handle_vblank
2024/03/27 14:07 upstream fe46a7dd189e 454571b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in drm_handle_vblank
2024/03/27 02:44 upstream fe46a7dd189e 454571b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in drm_handle_vblank
2024/04/04 12:28 upstream c85af715cac0 0ee3535e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/04/02 07:44 upstream 026e680b0a08 6baf5069 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/29 15:15 upstream 317c7bc0ef03 c52bcb23 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/29 07:33 upstream 317c7bc0ef03 c52bcb23 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/28 12:08 upstream 8d025e2092e2 120789fd .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/28 11:24 upstream 8d025e2092e2 120789fd .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/26 12:39 upstream 928a87efa423 bcd9b39f .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/26 05:24 upstream 928a87efa423 bcd9b39f .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/23 05:46 upstream 4f55aa85a874 0ea90952 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/20 16:56 upstream a4145ce1e7bc 5b7d42ae .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/20 13:05 upstream a4145ce1e7bc 5b7d42ae .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/03/16 14:13 upstream 66a27abac311 d615901c .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in drm_handle_vblank
2024/04/04 05:11 upstream c85af715cac0 51c4dcff .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank
2024/04/04 05:11 upstream c85af715cac0 51c4dcff .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank
2024/04/02 10:43 upstream 026e680b0a08 f861ecca .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank
2024/04/01 06:55 upstream 39cd87c4eb2b 6baf5069 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank
2024/03/31 15:27 upstream 712e14250dd2 6baf5069 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank
2024/03/28 00:03 upstream 962490525cff 120789fd .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank
2024/03/27 09:22 upstream 7033999ecd7b 454571b6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in drm_handle_vblank