syzbot


possible deadlock in shmem_uncharge

Status: fixed on 2020/06/18 13:57
Subsystems: mm
Reported-by: syzbot+c8a8197c8852f566b9d9@syzkaller.appspotmail.com
Fix commit: ea0dfeb4209b shmem: fix possible deadlocks on shmlock_user_lock
First crash: 1472d, last: 1429d
Cause bisection: introduced by (bisect log):
commit 71725ed10c40696dc6bdccf8e225815dcef24dba
Author: Hugh Dickins <hughd@google.com>
Date: Tue Apr 7 03:07:57 2020 +0000

  mm: huge tmpfs: try to split_huge_page() when punching hole

Crash: possible deadlock in shmem_uncharge (log)
Repro: C syz .config
  
Discussions (8)
Title Replies (including bot) Last reply
[PATCH 4.19 00/80] 4.19.124-rc1 review 103 (103) 2020/06/05 01:12
[PATCH 5.4 000/147] 5.4.42-rc1 review 152 (152) 2020/05/19 16:29
[PATCH 4.14 000/114] 4.14.181-rc1 review 119 (119) 2020/05/19 16:28
[PATCH 4.9 00/90] 4.9.224-rc1 review 95 (95) 2020/05/19 16:27
[PATCH 5.6 000/194] 5.6.14-rc1 review 203 (203) 2020/05/19 14:44
[patch 12/15] shmem: fix possible deadlocks on shmlock_user_lock 1 (1) 2020/04/21 01:14
[PATCH] shmem: fix possible deadlocks on shmlock_user_lock 2 (2) 2020/04/17 03:00
possible deadlock in shmem_uncharge 6 (7) 2020/04/17 00:19
Similar bugs (1)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream possible deadlock in shmem_uncharge (2) mm C done 29 278d 276d 23/26 fixed on 2023/10/12 12:48
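
The lockdep chain in the sample report below is &xa->xa_lock#4 --> &info->lock --> shmlock_user_lock. The bisected commit 71725ed10c40 ("mm: huge tmpfs: try to split_huge_page() when punching hole") lets truncation reach split_huge_page(), whose __split_huge_page() path calls shmem_uncharge() and so takes info->lock while the softirq-safe xa_lock is still held; shmem_lock(), meanwhile, calls user_shm_lock() and takes the softirq-unsafe shmlock_user_lock under info->lock. The following is a minimal, hypothetical kernel-C sketch of just those acquisition orders; the lock names mirror the report, but the functions and globals are invented for illustration and are not the real mm/shmem.c or mm/huge_memory.c code.

#include <linux/spinlock.h>

/* Stand-ins for the three lock classes in the report (hypothetical globals). */
static DEFINE_SPINLOCK(sketch_xa_lock);            /* &xa->xa_lock#4: also taken in softirq */
static DEFINE_SPINLOCK(sketch_info_lock);          /* &info->lock */
static DEFINE_SPINLOCK(sketch_shmlock_user_lock);  /* shmlock_user_lock: softirq-unsafe */

/* ftruncate() path: split_huge_page_to_list() holds xa_lock when
 * __split_huge_page() calls shmem_uncharge(), which takes info->lock.
 * This records the xa_lock -> info->lock dependency. */
static void sketch_truncate_path(void)
{
	unsigned long flags;

	spin_lock(&sketch_xa_lock);                  /* plain spin_lock(), as in the trace */
	spin_lock_irqsave(&sketch_info_lock, flags); /* shmem_uncharge() */
	spin_unlock_irqrestore(&sketch_info_lock, flags);
	spin_unlock(&sketch_xa_lock);
}

/* shmctl(SHM_LOCK) path: 5.6's shmem_lock() calls user_shm_lock() while
 * holding info->lock. This records info->lock -> shmlock_user_lock. */
static void sketch_shm_lock_path(void)
{
	spin_lock_irq(&sketch_info_lock);
	spin_lock(&sketch_shmlock_user_lock);        /* user_shm_lock() */
	spin_unlock(&sketch_shmlock_user_lock);
	spin_unlock_irq(&sketch_info_lock);
}

/* shmget() path: hugetlb_file_setup() -> user_shm_lock() takes
 * shmlock_user_lock with interrupts enabled, so a softirq that needs
 * xa_lock can fire while it is held: the inversion in the scenario below. */
static void sketch_shmget_path(void)
{
	spin_lock(&sketch_shmlock_user_lock);
	spin_unlock(&sketch_shmlock_user_lock);
}

Because xa_lock is also taken from the block-completion softirq (the test_clear_page_writeback() stack in the report), lockdep flags this chain as a possible interrupt-driven deadlock.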

Sample crash report:
=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
5.6.0-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor428/8337 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
ffff8880a851c778 (&info->lock){....}-{2:2}, at: shmem_uncharge+0x24/0x270 mm/shmem.c:341

and this task is already holding:
ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864
which would create a new lock dependency:
 (&xa->xa_lock#4){..-.}-{2:2} -> (&info->lock){....}-{2:2}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&xa->xa_lock#4){..-.}-{2:2}

... which became SOFTIRQ-irq-safe at:
  lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
  test_clear_page_writeback+0x1d7/0x11e0 mm/page-writeback.c:2728
  end_page_writeback+0x239/0x520 mm/filemap.c:1317
  end_buffer_async_write+0x442/0x5c0 fs/buffer.c:384
  end_bio_bh_io_sync+0xe2/0x140 fs/buffer.c:3012
  bio_endio+0x473/0x820 block/bio.c:1422
  req_bio_endio block/blk-core.c:245 [inline]
  blk_update_request+0x3e1/0xdc0 block/blk-core.c:1472
  scsi_end_request+0x80/0x7b0 drivers/scsi/scsi_lib.c:575
  scsi_io_completion+0x1e7/0x1300 drivers/scsi/scsi_lib.c:959
  scsi_softirq_done+0x31e/0x3b0 drivers/scsi/scsi_lib.c:1454
  blk_done_softirq+0x2db/0x440 block/blk-softirq.c:37
  __do_softirq+0x26c/0x9f7 kernel/softirq.c:292
  invoke_softirq kernel/softirq.c:373 [inline]
  irq_exit+0x192/0x1d0 kernel/softirq.c:413
  exiting_irq arch/x86/include/asm/apic.h:546 [inline]
  do_IRQ+0xda/0x270 arch/x86/kernel/irq.c:263
  ret_from_intr+0x0/0x2b
  arch_local_irq_restore arch/x86/include/asm/paravirt.h:759 [inline]
  lock_acquire+0x267/0x8f0 kernel/locking/lockdep.c:4926
  down_write+0x8d/0x150 kernel/locking/rwsem.c:1531
  inode_lock include/linux/fs.h:797 [inline]
  process_measurement+0x68a/0x1750 security/integrity/ima/ima_main.c:228
  ima_file_check+0xb9/0x100 security/integrity/ima/ima_main.c:440
  do_open fs/namei.c:3231 [inline]
  path_openat+0x1997/0x27d0 fs/namei.c:3346
  do_filp_open+0x192/0x260 fs/namei.c:3373
  do_sys_openat2+0x585/0x7d0 fs/open.c:1148
  do_sys_open+0xc3/0x140 fs/open.c:1164
  do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
  entry_SYSCALL_64_after_hwframe+0x49/0xb3

to a SOFTIRQ-irq-unsafe lock:
 (shmlock_user_lock){+.+.}-{2:2}

... which became SOFTIRQ-irq-unsafe at:
...
  lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
  spin_lock include/linux/spinlock.h:353 [inline]
  user_shm_lock+0xab/0x230 mm/mlock.c:855
  hugetlb_file_setup+0x4e1/0x677 fs/hugetlbfs/inode.c:1416
  newseg+0x460/0xe60 ipc/shm.c:652
  ipcget_new ipc/util.c:344 [inline]
  ipcget+0xf0/0xcb0 ipc/util.c:643
  ksys_shmget ipc/shm.c:742 [inline]
  __do_sys_shmget ipc/shm.c:747 [inline]
  __se_sys_shmget ipc/shm.c:745 [inline]
  __x64_sys_shmget+0x139/0x1a0 ipc/shm.c:745
  do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
  entry_SYSCALL_64_after_hwframe+0x49/0xb3

other info that might help us debug this:

Chain exists of:
  &xa->xa_lock#4 --> &info->lock --> shmlock_user_lock

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(shmlock_user_lock);
                               local_irq_disable();
                               lock(&xa->xa_lock#4);
                               lock(&info->lock);
  <Interrupt>
    lock(&xa->xa_lock#4);

 *** DEADLOCK ***

5 locks held by syz-executor428/8337:
 #0: ffff8880a7948450 (sb_writers#7){.+.+}-{0:0}, at: sb_start_write include/linux/fs.h:1655 [inline]
 #0: ffff8880a7948450 (sb_writers#7){.+.+}-{0:0}, at: do_sys_ftruncate+0x29f/0x570 fs/open.c:190
 #1: ffff8880a851c9d0 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:797 [inline]
 #1: ffff8880a851c9d0 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}, at: do_truncate+0x125/0x1f0 fs/open.c:62
 #2: ffff8880a851cb90 (&mapping->i_mmap_rwsem){++++}-{3:3}, at: i_mmap_lock_read include/linux/fs.h:541 [inline]
 #2: ffff8880a851cb90 (&mapping->i_mmap_rwsem){++++}-{3:3}, at: split_huge_page_to_list+0x4c3/0x33b0 mm/huge_memory.c:2825
 #3: ffff88812ffffcd8 (&pgdat->lru_lock){....}-{2:2}, at: split_huge_page_to_list+0x8da/0x33b0 mm/huge_memory.c:2855
 #4: ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: spin_lock include/linux/spinlock.h:353 [inline]
 #4: ffff8880a851cac8 (&xa->xa_lock#4){..-.}-{2:2}, at: split_huge_page_to_list+0xad0/0x33b0 mm/huge_memory.c:2864

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&xa->xa_lock#4){..-.}-{2:2} {
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
                    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                    _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
                    test_clear_page_writeback+0x1d7/0x11e0 mm/page-writeback.c:2728
                    end_page_writeback+0x239/0x520 mm/filemap.c:1317
                    end_buffer_async_write+0x442/0x5c0 fs/buffer.c:384
                    end_bio_bh_io_sync+0xe2/0x140 fs/buffer.c:3012
                    bio_endio+0x473/0x820 block/bio.c:1422
                    req_bio_endio block/blk-core.c:245 [inline]
                    blk_update_request+0x3e1/0xdc0 block/blk-core.c:1472
                    scsi_end_request+0x80/0x7b0 drivers/scsi/scsi_lib.c:575
                    scsi_io_completion+0x1e7/0x1300 drivers/scsi/scsi_lib.c:959
                    scsi_softirq_done+0x31e/0x3b0 drivers/scsi/scsi_lib.c:1454
                    blk_done_softirq+0x2db/0x440 block/blk-softirq.c:37
                    __do_softirq+0x26c/0x9f7 kernel/softirq.c:292
                    invoke_softirq kernel/softirq.c:373 [inline]
                    irq_exit+0x192/0x1d0 kernel/softirq.c:413
                    exiting_irq arch/x86/include/asm/apic.h:546 [inline]
                    do_IRQ+0xda/0x270 arch/x86/kernel/irq.c:263
                    ret_from_intr+0x0/0x2b
                    arch_local_irq_restore arch/x86/include/asm/paravirt.h:759 [inline]
                    lock_acquire+0x267/0x8f0 kernel/locking/lockdep.c:4926
                    down_write+0x8d/0x150 kernel/locking/rwsem.c:1531
                    inode_lock include/linux/fs.h:797 [inline]
                    process_measurement+0x68a/0x1750 security/integrity/ima/ima_main.c:228
                    ima_file_check+0xb9/0x100 security/integrity/ima/ima_main.c:440
                    do_open fs/namei.c:3231 [inline]
                    path_openat+0x1997/0x27d0 fs/namei.c:3346
                    do_filp_open+0x192/0x260 fs/namei.c:3373
                    do_sys_openat2+0x585/0x7d0 fs/open.c:1148
                    do_sys_open+0xc3/0x140 fs/open.c:1164
                    do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
                    entry_SYSCALL_64_after_hwframe+0x49/0xb3
   INITIAL USE at:
                   lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
                   __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                   _raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
                   spin_lock_irq include/linux/spinlock.h:378 [inline]
                   __add_to_page_cache_locked+0x607/0xe00 mm/filemap.c:855
                   add_to_page_cache_lru+0x1aa/0x700 mm/filemap.c:921
                   do_read_cache_page+0x9ab/0x1810 mm/filemap.c:2755
                   read_mapping_page include/linux/pagemap.h:397 [inline]
                   read_part_sector+0xf6/0x600 block/partitions/core.c:643
                   adfspart_check_ICS+0x9d/0xc80 block/partitions/acorn.c:360
                   check_partition block/partitions/core.c:140 [inline]
                   blk_add_partitions+0x474/0xe50 block/partitions/core.c:571
                   bdev_disk_changed+0x1fb/0x380 fs/block_dev.c:1544
                   __blkdev_get+0xb15/0x1530 fs/block_dev.c:1647
                   blkdev_get+0x41/0x2b0 fs/block_dev.c:1749
                   register_disk block/genhd.c:763 [inline]
                   __device_add_disk+0xa4f/0x1170 block/genhd.c:853
                   add_disk include/linux/genhd.h:294 [inline]
                   brd_init+0x297/0x463 drivers/block/brd.c:533
                   do_one_initcall+0x10a/0x7d0 init/main.c:1158
                   do_initcall_level init/main.c:1231 [inline]
                   do_initcalls init/main.c:1247 [inline]
                   do_basic_setup init/main.c:1267 [inline]
                   kernel_init_freeable+0x501/0x5ae init/main.c:1451
                   kernel_init+0xd/0x1bb init/main.c:1358
                   ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
 }
 ... key      at: [<ffffffff8c67b1e0>] __key.18007+0x0/0x40
 ... acquired at:
   lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
   _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
   shmem_uncharge+0x24/0x270 mm/shmem.c:341
   __split_huge_page mm/huge_memory.c:2613 [inline]
   split_huge_page_to_list+0x274b/0x33b0 mm/huge_memory.c:2886
   split_huge_page include/linux/huge_mm.h:204 [inline]
   shmem_punch_compound+0x13e/0x1e0 mm/shmem.c:814
   shmem_undo_range+0x5f1/0x1b80 mm/shmem.c:870
   shmem_truncate_range+0x27/0xa0 mm/shmem.c:980
   shmem_setattr+0x8b6/0xc80 mm/shmem.c:1039
   notify_change+0xb6d/0x1020 fs/attr.c:336
   do_truncate+0x134/0x1f0 fs/open.c:64
   do_sys_ftruncate+0x4a5/0x570 fs/open.c:195
   do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
   entry_SYSCALL_64_after_hwframe+0x49/0xb3


the dependencies between the lock to be acquired
 and SOFTIRQ-irq-unsafe lock:
 -> (shmlock_user_lock){+.+.}-{2:2} {
    HARDIRQ-ON-W at:
                      lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
                      __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                      _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                      spin_lock include/linux/spinlock.h:353 [inline]
                      user_shm_lock+0xab/0x230 mm/mlock.c:855
                      hugetlb_file_setup+0x4e1/0x677 fs/hugetlbfs/inode.c:1416
                      newseg+0x460/0xe60 ipc/shm.c:652
                      ipcget_new ipc/util.c:344 [inline]
                      ipcget+0xf0/0xcb0 ipc/util.c:643
                      ksys_shmget ipc/shm.c:742 [inline]
                      __do_sys_shmget ipc/shm.c:747 [inline]
                      __se_sys_shmget ipc/shm.c:745 [inline]
                      __x64_sys_shmget+0x139/0x1a0 ipc/shm.c:745
                      do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
                      entry_SYSCALL_64_after_hwframe+0x49/0xb3
    SOFTIRQ-ON-W at:
                      lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
                      __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                      _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                      spin_lock include/linux/spinlock.h:353 [inline]
                      user_shm_lock+0xab/0x230 mm/mlock.c:855
                      hugetlb_file_setup+0x4e1/0x677 fs/hugetlbfs/inode.c:1416
                      newseg+0x460/0xe60 ipc/shm.c:652
                      ipcget_new ipc/util.c:344 [inline]
                      ipcget+0xf0/0xcb0 ipc/util.c:643
                      ksys_shmget ipc/shm.c:742 [inline]
                      __do_sys_shmget ipc/shm.c:747 [inline]
                      __se_sys_shmget ipc/shm.c:745 [inline]
                      __x64_sys_shmget+0x139/0x1a0 ipc/shm.c:745
                      do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
                      entry_SYSCALL_64_after_hwframe+0x49/0xb3
    INITIAL USE at:
                     lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
                     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
                     spin_lock include/linux/spinlock.h:353 [inline]
                     user_shm_lock+0xab/0x230 mm/mlock.c:855
                     shmem_lock+0x1dd/0x2d0 mm/shmem.c:2184
                     shmctl_do_lock+0x73f/0x8f0 ipc/shm.c:1111
                     ksys_shmctl.constprop.0+0x203/0x350 ipc/shm.c:1188
                     do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
                     entry_SYSCALL_64_after_hwframe+0x49/0xb3
  }
  ... key      at: [<ffffffff89a5e858>] shmlock_user_lock+0x18/0x5c0
  ... acquired at:
   __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
   _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
   spin_lock include/linux/spinlock.h:353 [inline]
   user_shm_lock+0xab/0x230 mm/mlock.c:855
   shmem_lock+0x1dd/0x2d0 mm/shmem.c:2184
   shmctl_do_lock+0x73f/0x8f0 ipc/shm.c:1111
   ksys_shmctl.constprop.0+0x203/0x350 ipc/shm.c:1188
   do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
   entry_SYSCALL_64_after_hwframe+0x49/0xb3

-> (&info->lock){....}-{2:2} {
   INITIAL USE at:
                   lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
                   __raw_spin_lock_irq include/linux/spinlock_api_smp.h:128 [inline]
                   _raw_spin_lock_irq+0x5b/0x80 kernel/locking/spinlock.c:167
                   spin_lock_irq include/linux/spinlock.h:378 [inline]
                   shmem_getpage_gfp+0x937/0x2a10 mm/shmem.c:1882
                   shmem_getpage mm/shmem.c:154 [inline]
                   shmem_write_begin+0x102/0x1e0 mm/shmem.c:2483
                   generic_perform_write+0x20a/0x4e0 mm/filemap.c:3302
                   __generic_file_write_iter+0x24c/0x610 mm/filemap.c:3431
                   generic_file_write_iter+0x3f3/0x630 mm/filemap.c:3463
                   call_write_iter include/linux/fs.h:1907 [inline]
                   new_sync_write+0x4a2/0x700 fs/read_write.c:483
                   __vfs_write+0xc9/0x100 fs/read_write.c:496
                   vfs_write+0x268/0x5d0 fs/read_write.c:558
                   ksys_write+0x12d/0x250 fs/read_write.c:611
                   do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
                   entry_SYSCALL_64_after_hwframe+0x49/0xb3
 }
 ... key      at: [<ffffffff8c667e80>] __key.56422+0x0/0x40
 ... acquired at:
   lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
   _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
   shmem_uncharge+0x24/0x270 mm/shmem.c:341
   __split_huge_page mm/huge_memory.c:2613 [inline]
   split_huge_page_to_list+0x274b/0x33b0 mm/huge_memory.c:2886
   split_huge_page include/linux/huge_mm.h:204 [inline]
   shmem_punch_compound+0x13e/0x1e0 mm/shmem.c:814
   shmem_undo_range+0x5f1/0x1b80 mm/shmem.c:870
   shmem_truncate_range+0x27/0xa0 mm/shmem.c:980
   shmem_setattr+0x8b6/0xc80 mm/shmem.c:1039
   notify_change+0xb6d/0x1020 fs/attr.c:336
   do_truncate+0x134/0x1f0 fs/open.c:64
   do_sys_ftruncate+0x4a5/0x570 fs/open.c:195
   do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
   entry_SYSCALL_64_after_hwframe+0x49/0xb3


stack backtrace:
CPU: 0 PID: 8337 Comm: syz-executor428 Not tainted 5.6.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x188/0x20d lib/dump_stack.c:118
 print_bad_irq_dependency kernel/locking/lockdep.c:2132 [inline]
 check_irq_usage.cold+0x566/0x6de kernel/locking/lockdep.c:2330
 check_prev_add kernel/locking/lockdep.c:2519 [inline]
 check_prevs_add kernel/locking/lockdep.c:2620 [inline]
 validate_chain kernel/locking/lockdep.c:3237 [inline]
 __lock_acquire+0x2c39/0x4e00 kernel/locking/lockdep.c:4344
 lock_acquire+0x1f2/0x8f0 kernel/locking/lockdep.c:4923
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x8c/0xbf kernel/locking/spinlock.c:159
 shmem_uncharge+0x24/0x270 mm/shmem.c:341
 __split_huge_page mm/huge_memory.c:2613 [inline]
 split_huge_page_to_list+0x274b/0x33b0 mm/huge_memory.c:2886
 split_huge_page include/linux/huge_mm.h:204 [inline]
 shmem_punch_compound+0x13e/0x1e0 mm/shmem.c:814
 shmem_undo_range+0x5f1/0x1b80 mm/shmem.c:870
 shmem_truncate_range+0x27/0xa0 mm/shmem.c:980
 shmem_setattr+0x8b6/0xc80 mm/shmem.c:1039
 notify_change+0xb6d/0x1020 fs/attr.c:336
 do_truncate+0x134/0x1f0 fs/open.c:64
 do_sys_ftruncate+0x4a5/0x570 fs/open.c:195
 do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:295
 entry_SYSCALL_64_after_hwframe+0x49/0xb3
RIP: 0033:0x44e769
Code: 4d c9 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 1b c9 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fe511b3fce8 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
RAX: ffffffffffffffda RBX: 00000000006e1c68 RCX: 000000000044e769
RDX: 000000000044e769 RSI: 00000000000001ff RDI: 0000000000000006
RBP: 00000000006e1c60 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000006e1c6c
R13: 00007ffce699f92f R14: 00007fe511b409c0 R15: 0000000000000000
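
The fix commit, ea0dfeb4209b ("shmem: fix possible deadlocks on shmlock_user_lock"), addresses the report above. One way to break the chain, consistent with that title, is to stop nesting the softirq-unsafe shmlock_user_lock under info->lock in the shmem_lock() path. The sketch below shows only that shape and is not the verbatim upstream patch; it assumes (my assumption, not stated in the report) that a caller-level lock such as the IPC object lock serializes info->flags, so info->lock need not be held here at all.

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/sched/user.h>
#include <linux/shmem_fs.h>

/*
 * Hypothetical sketch, not the verbatim upstream change: call the
 * softirq-unsafe user_shm_lock()/user_shm_unlock() without holding
 * info->lock, so the info->lock -> shmlock_user_lock edge from the
 * report never forms. info->flags is assumed to be serialized by the
 * caller (e.g. the IPC object lock in shmctl_do_lock()).
 */
static int sketch_shmem_lock(struct file *file, int lock,
			     struct user_struct *user)
{
	struct inode *inode = file_inode(file);
	struct shmem_inode_info *info = SHMEM_I(inode);

	if (lock && !(info->flags & VM_LOCKED)) {
		if (!user_shm_lock(inode->i_size, user))
			return -ENOMEM;
		info->flags |= VM_LOCKED;
		mapping_set_unevictable(file->f_mapping);
	}
	if (!lock && (info->flags & VM_LOCKED) && user) {
		user_shm_unlock(inode->i_size, user);
		info->flags &= ~VM_LOCKED;
		mapping_clear_unevictable(file->f_mapping);
	}
	return 0;
}

In this shape, shmlock_user_lock is never acquired while a lock that nests under the softirq-safe xa_lock is held, which removes the &info->lock --> shmlock_user_lock dependency that lockdep complains about.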

Crashes (101):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2020/04/09 04:14 upstream ae46d2aa6a7f db9bcd4b .config console log report syz C ci-upstream-kasan-gce-root
2020/05/02 06:16 linux-next ac935d227366 bc734e7a .config console log report syz ci-upstream-linux-next-kasan-gce-root
2020/04/21 21:28 upstream 189522da8b3a 2e44d63e .config console log report ci-upstream-kasan-gce-root
2020/04/21 20:16 upstream ae83d0b416db 2e44d63e .config console log report ci-upstream-kasan-gce-selinux-root
2020/04/21 12:28 upstream ae83d0b416db 2e44d63e .config console log report ci-upstream-kasan-gce-selinux-root
2020/04/20 23:56 upstream ae83d0b416db 98a9f9e6 .config console log report ci-upstream-kasan-gce-root
2020/04/19 03:42 upstream 50cc09c18985 365fba24 .config console log report ci-upstream-kasan-gce-smack-root
2020/04/18 13:36 upstream 90280eaa88ac 365fba24 .config console log report ci-upstream-kasan-gce-selinux-root
2020/04/18 09:23 upstream 90280eaa88ac 435c6d53 .config console log report ci-upstream-kasan-gce-smack-root
2020/04/17 20:41 upstream 95988fbc7c31 435c6d53 .config console log report ci-upstream-kasan-gce-selinux-root
2020/04/17 07:45 upstream 7a56db0299f9 18397578 .config console log report ci-upstream-kasan-gce-smack-root
2020/04/17 06:41 upstream 7a56db0299f9 18397578 .config console log report ci-upstream-kasan-gce-smack-root
2020/04/16 22:06 upstream 9786cab67457 c743fcb3 .config console log report ci-upstream-kasan-gce-root
2020/04/08 10:03 upstream f5e94d10e4c4 db9bcd4b .config console log report ci-upstream-kasan-gce-selinux-root
2020/05/21 01:00 linux-next ac935d227366 c61086ab .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/20 12:51 linux-next ac935d227366 1255f02a .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/19 12:58 linux-next ac935d227366 6d882fd2 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/18 13:28 linux-next ac935d227366 684d3606 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/18 06:52 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/18 02:39 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/17 21:16 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/17 19:42 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/17 17:11 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/17 10:42 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/16 01:42 linux-next ac935d227366 37bccd4e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/15 13:03 linux-next ac935d227366 d7f9fffa .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/13 11:26 linux-next ac935d227366 9a6d42fb .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/10 12:18 linux-next ac935d227366 8742a2b9 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/08 11:55 linux-next ac935d227366 2b98fdbc .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/07 10:43 linux-next ac935d227366 98cbd87b .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/06 13:58 linux-next ac935d227366 4618eb2d .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/06 01:29 linux-next ac935d227366 35b8eb30 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/04 20:15 linux-next ac935d227366 9941337c .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/03 20:51 linux-next ac935d227366 58ae5e18 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/03 00:20 linux-next ac935d227366 5457883a .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/02 12:23 linux-next ac935d227366 58da4c35 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/01 18:38 linux-next ac935d227366 bc734e7a .config console log report ci-upstream-linux-next-kasan-gce-root
2020/05/01 06:38 linux-next ac935d227366 a4d01b80 .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/30 14:36 linux-next ac935d227366 3698959a .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/29 14:26 linux-next ac935d227366 496a08ae .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/28 14:11 linux-next ac935d227366 e3ecea2e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/28 12:42 linux-next ac935d227366 e3ecea2e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/28 01:20 linux-next ac935d227366 0ce7569e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/27 22:09 linux-next ac935d227366 0ce7569e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/26 19:13 linux-next ac935d227366 0ce7569e .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/26 04:16 linux-next ac935d227366 99b258dd .config console log report ci-upstream-linux-next-kasan-gce-root
2020/04/22 03:17 linux-next ac935d227366 2e44d63e .config console log report ci-upstream-linux-next-kasan-gce-root
* Struck through repros no longer work on HEAD.