syzbot


possible deadlock in pie_timer (2)

Status: fixed on 2023/09/28 17:51
Subsystems: net
Fix commit: 11b73313c124 sch_netem: fix issues in netem_change() vs get_dist_table()
First crash: 535d, last: 535d
Similar bugs (4)
Kernel     | Title                                | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | possible deadlock in pie_timer (net) | C     | done         |            | 2     | 1898d | 1899d    | 13/28   | fixed on 2019/10/15 23:40
linux-6.1  | possible deadlock in pie_timer       |       |              |            | 1     | 495d  | 495d     | 0/3     | auto-obsoleted due to no activity on 2023/11/10 02:26
linux-4.19 | possible deadlock in pie_timer       | C     | error        |            | 1     | 1040d | 1040d    | 0/1     | upstream: reported C repro on 2022/02/03 09:38
linux-4.14 | possible deadlock in pie_timer       | C     | inconclusive |            | 3     | 1029d | 1890d    | 0/1     | upstream: reported C repro on 2019/10/06 16:17
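The lockdep report below boils down to one pattern: netem_change() calls get_dist_table(), which allocates with kvmalloc(GFP_KERNEL) (a sleeping allocation that can enter fs_reclaim) while holding sch_tree_lock(), i.e. &sch->q.lock with BHs disabled; the same qdisc lock is taken from timer-softirq context in pie_timer(). That makes a SOFTIRQ-safe lock depend on a SOFTIRQ-unsafe one. An illustrative, non-buildable sketch of the buggy shape and of the direction the fix commit's title suggests (alloc_dist_table() and swap_in_dist_table() are hypothetical helper names, not the actual patch):

```
/* BUGGY pattern, per the lockdep trace: */
static int netem_change(struct Qdisc *sch, struct nlattr **tb /* ... */)
{
        sch_tree_lock(sch);        /* takes &sch->q.lock, BHs disabled  */
        /* ... */
        get_dist_table(/* ... */); /* kvmalloc(GFP_KERNEL): may sleep,
                                    * enters fs_reclaim under a
                                    * SOFTIRQ-safe spinlock             */
        sch_tree_unlock(sch);
        return 0;
}

/* Sketch of the fix direction (hedged; see commit 11b73313c124):
 * do the sleeping allocation before taking the lock, and only swap
 * pointers while it is held. Helper names are illustrative. */
static int netem_change_fixed(struct Qdisc *sch, struct nlattr **tb)
{
        struct disttable *d = alloc_dist_table(tb); /* may sleep, no lock */

        if (IS_ERR(d))
                return PTR_ERR(d);

        sch_tree_lock(sch);
        swap_in_dist_table(sch, d); /* pointer swap only: safe in
                                     * atomic context                  */
        sch_tree_unlock(sch);
        return 0;
}
```

With the allocation moved out from under &sch->q.lock, no sleeping call remains inside the critical section that pie_timer() can contend on from softirq context, and the SOFTIRQ-safe -> SOFTIRQ-unsafe dependency disappears.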

Sample crash report:
ip6tnl0: Caught tx_queue_len zero misconfig
=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
6.4.0-rc7-syzkaller-00194-g8a28a0b6f1a1 #0 Not tainted
-----------------------------------------------------
syz-executor.3/18000 [HC0[0]:SC0[2]:HE1:SE0] is trying to acquire:
ffffffff8c8e9a80 (fs_reclaim){+.+.}-{0:0}, at: might_alloc include/linux/sched/mm.h:303 [inline]
ffffffff8c8e9a80 (fs_reclaim){+.+.}-{0:0}, at: slab_pre_alloc_hook mm/slab.h:670 [inline]
ffffffff8c8e9a80 (fs_reclaim){+.+.}-{0:0}, at: slab_alloc_node mm/slab.c:3240 [inline]
ffffffff8c8e9a80 (fs_reclaim){+.+.}-{0:0}, at: __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540

and this task is already holding:
ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:355 [inline]
ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: sch_tree_lock include/net/sch_generic.h:576 [inline]
ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: sch_tree_lock include/net/sch_generic.h:571 [inline]
ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: netem_change+0x1520/0x1f70 net/sched/sch_netem.c:969
which would create a new lock dependency:
 (&sch->q.lock){+.-.}-{2:2} -> (fs_reclaim){+.+.}-{0:0}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&sch->q.lock){+.-.}-{2:2}

... which became SOFTIRQ-irq-safe at:
  lock_acquire kernel/locking/lockdep.c:5705 [inline]
  lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
  __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
  _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
  spin_lock include/linux/spinlock.h:350 [inline]
  pie_timer+0xf5/0x3f0 net/sched/sch_pie.c:428
  call_timer_fn+0x1a0/0x580 kernel/time/timer.c:1700
  expire_timers+0x29b/0x4b0 kernel/time/timer.c:1751
  __run_timers kernel/time/timer.c:2022 [inline]
  __run_timers kernel/time/timer.c:1995 [inline]
  run_timer_softirq+0x326/0x910 kernel/time/timer.c:2035
  __do_softirq+0x1d4/0x905 kernel/softirq.c:571
  invoke_softirq kernel/softirq.c:445 [inline]
  __irq_exit_rcu+0x114/0x190 kernel/softirq.c:650
  irq_exit_rcu+0x9/0x20 kernel/softirq.c:662
  sysvec_apic_timer_interrupt+0x97/0xc0 arch/x86/kernel/apic/apic.c:1106
  asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
  __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
  _raw_spin_unlock_irq+0x29/0x50 kernel/locking/spinlock.c:202
  spin_unlock_irq include/linux/spinlock.h:400 [inline]
  loop_queue_work drivers/block/loop.c:898 [inline]
  loop_queue_rq+0x627/0x1260 drivers/block/loop.c:1855
  __blk_mq_issue_directly+0xd1/0x260 block/blk-mq.c:2572
  blk_mq_try_issue_directly+0x187/0x360 block/blk-mq.c:2631
  blk_mq_submit_bio+0x1671/0x1f50 block/blk-mq.c:2989
  __submit_bio+0xfc/0x310 block/blk-core.c:594
  __submit_bio_noacct_mq block/blk-core.c:673 [inline]
  submit_bio_noacct_nocheck+0x7f9/0xb40 block/blk-core.c:702
  submit_bio_noacct+0x945/0x19f0 block/blk-core.c:801
  submit_bh fs/buffer.c:2782 [inline]
  __sync_dirty_buffer+0x174/0x380 fs/buffer.c:2820
  __ext4_handle_dirty_metadata+0x2b7/0x8e0 fs/ext4/ext4_jbd2.c:387
  ext4_xattr_inode_write fs/ext4/xattr.c:1437 [inline]
  ext4_xattr_inode_lookup_create fs/ext4/xattr.c:1594 [inline]
  ext4_xattr_set_entry+0x2bd5/0x3810 fs/ext4/xattr.c:1719
  ext4_xattr_block_set+0xcb7/0x2fd0 fs/ext4/xattr.c:2025
  ext4_xattr_set_handle+0xd8a/0x1510 fs/ext4/xattr.c:2442
  ext4_xattr_set+0x144/0x360 fs/ext4/xattr.c:2544
  __vfs_setxattr+0x173/0x1e0 fs/xattr.c:201
  __vfs_setxattr_noperm+0x129/0x5f0 fs/xattr.c:235
  __vfs_setxattr_locked+0x1d3/0x260 fs/xattr.c:296
  vfs_setxattr+0x143/0x340 fs/xattr.c:322
  do_setxattr+0x147/0x190 fs/xattr.c:630
  setxattr+0x146/0x160 fs/xattr.c:653
  path_setxattr+0x197/0x1c0 fs/xattr.c:672
  __do_sys_setxattr fs/xattr.c:688 [inline]
  __se_sys_setxattr fs/xattr.c:684 [inline]
  __x64_sys_setxattr+0xc4/0x160 fs/xattr.c:684
  do_syscall_x64 arch/x86/entry/common.c:50 [inline]
  do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
  entry_SYSCALL_64_after_hwframe+0x63/0xcd

to a SOFTIRQ-irq-unsafe lock:
 (fs_reclaim){+.+.}-{0:0}

... which became SOFTIRQ-irq-unsafe at:
...
  lock_acquire kernel/locking/lockdep.c:5705 [inline]
  lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
  __fs_reclaim_acquire mm/page_alloc.c:3893 [inline]
  fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:3907
  might_alloc include/linux/sched/mm.h:303 [inline]
  slab_pre_alloc_hook mm/slab.h:670 [inline]
  slab_alloc_node mm/slab.c:3240 [inline]
  __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540
  kmalloc_trace+0x26/0xe0 mm/slab_common.c:1057
  kmalloc include/linux/slab.h:559 [inline]
  kzalloc include/linux/slab.h:680 [inline]
  alloc_workqueue_attrs kernel/workqueue.c:3510 [inline]
  wq_numa_init kernel/workqueue.c:6216 [inline]
  workqueue_init+0xf5/0xd40 kernel/workqueue.c:6343
  kernel_init_freeable+0x34c/0xba0 init/main.c:1557
  kernel_init+0x1e/0x2c0 init/main.c:1462
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               local_irq_disable();
                               lock(&sch->q.lock);
                               lock(fs_reclaim);
  <Interrupt>
    lock(&sch->q.lock);

 *** DEADLOCK ***

2 locks held by syz-executor.3/18000:
 #0: ffffffff8e10bbe8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e10bbe8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6414
 #1: ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: spin_lock_bh include/linux/spinlock.h:355 [inline]
 #1: ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: sch_tree_lock include/net/sch_generic.h:576 [inline]
 #1: ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: sch_tree_lock include/net/sch_generic.h:571 [inline]
 #1: ffff88807ae21908 (&sch->q.lock){+.-.}-{2:2}, at: netem_change+0x1520/0x1f70 net/sched/sch_netem.c:969

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&sch->q.lock){+.-.}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire kernel/locking/lockdep.c:5705 [inline]
                    lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
                    __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                    _raw_spin_lock_bh+0x33/0x40 kernel/locking/spinlock.c:178
                    spin_lock_bh include/linux/spinlock.h:355 [inline]
                    dev_reset_queue+0xab/0x1d0 net/sched/sch_generic.c:1291
                    netdev_for_each_tx_queue include/linux/netdevice.h:2517 [inline]
                    dev_deactivate_many+0x36d/0xb00 net/sched/sch_generic.c:1359
                    dev_deactivate+0xed/0x1b0 net/sched/sch_generic.c:1382
                    linkwatch_do_dev+0x101/0x150 net/core/link_watch.c:180
                    __linkwatch_run_queue+0x23f/0x6a0 net/core/link_watch.c:235
                    linkwatch_event+0x4e/0x70 net/core/link_watch.c:278
                    process_one_work+0x99a/0x15e0 kernel/workqueue.c:2405
                    worker_thread+0x67d/0x10c0 kernel/workqueue.c:2552
                    kthread+0x344/0x440 kernel/kthread.c:379
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
   IN-SOFTIRQ-W at:
                    lock_acquire kernel/locking/lockdep.c:5705 [inline]
                    lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
                    __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
                    _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
                    spin_lock include/linux/spinlock.h:350 [inline]
                    pie_timer+0xf5/0x3f0 net/sched/sch_pie.c:428
                    call_timer_fn+0x1a0/0x580 kernel/time/timer.c:1700
                    expire_timers+0x29b/0x4b0 kernel/time/timer.c:1751
                    __run_timers kernel/time/timer.c:2022 [inline]
                    __run_timers kernel/time/timer.c:1995 [inline]
                    run_timer_softirq+0x326/0x910 kernel/time/timer.c:2035
                    __do_softirq+0x1d4/0x905 kernel/softirq.c:571
                    invoke_softirq kernel/softirq.c:445 [inline]
                    __irq_exit_rcu+0x114/0x190 kernel/softirq.c:650
                    irq_exit_rcu+0x9/0x20 kernel/softirq.c:662
                    sysvec_apic_timer_interrupt+0x97/0xc0 arch/x86/kernel/apic/apic.c:1106
                    asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
                    __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
                    _raw_spin_unlock_irq+0x29/0x50 kernel/locking/spinlock.c:202
                    spin_unlock_irq include/linux/spinlock.h:400 [inline]
                    loop_queue_work drivers/block/loop.c:898 [inline]
                    loop_queue_rq+0x627/0x1260 drivers/block/loop.c:1855
                    __blk_mq_issue_directly+0xd1/0x260 block/blk-mq.c:2572
                    blk_mq_try_issue_directly+0x187/0x360 block/blk-mq.c:2631
                    blk_mq_submit_bio+0x1671/0x1f50 block/blk-mq.c:2989
                    __submit_bio+0xfc/0x310 block/blk-core.c:594
                    __submit_bio_noacct_mq block/blk-core.c:673 [inline]
                    submit_bio_noacct_nocheck+0x7f9/0xb40 block/blk-core.c:702
                    submit_bio_noacct+0x945/0x19f0 block/blk-core.c:801
                    submit_bh fs/buffer.c:2782 [inline]
                    __sync_dirty_buffer+0x174/0x380 fs/buffer.c:2820
                    __ext4_handle_dirty_metadata+0x2b7/0x8e0 fs/ext4/ext4_jbd2.c:387
                    ext4_xattr_inode_write fs/ext4/xattr.c:1437 [inline]
                    ext4_xattr_inode_lookup_create fs/ext4/xattr.c:1594 [inline]
                    ext4_xattr_set_entry+0x2bd5/0x3810 fs/ext4/xattr.c:1719
                    ext4_xattr_block_set+0xcb7/0x2fd0 fs/ext4/xattr.c:2025
                    ext4_xattr_set_handle+0xd8a/0x1510 fs/ext4/xattr.c:2442
                    ext4_xattr_set+0x144/0x360 fs/ext4/xattr.c:2544
                    __vfs_setxattr+0x173/0x1e0 fs/xattr.c:201
                    __vfs_setxattr_noperm+0x129/0x5f0 fs/xattr.c:235
                    __vfs_setxattr_locked+0x1d3/0x260 fs/xattr.c:296
                    vfs_setxattr+0x143/0x340 fs/xattr.c:322
                    do_setxattr+0x147/0x190 fs/xattr.c:630
                    setxattr+0x146/0x160 fs/xattr.c:653
                    path_setxattr+0x197/0x1c0 fs/xattr.c:672
                    __do_sys_setxattr fs/xattr.c:688 [inline]
                    __se_sys_setxattr fs/xattr.c:684 [inline]
                    __x64_sys_setxattr+0xc4/0x160 fs/xattr.c:684
                    do_syscall_x64 arch/x86/entry/common.c:50 [inline]
                    do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
                    entry_SYSCALL_64_after_hwframe+0x63/0xcd
   INITIAL USE at:
                   lock_acquire kernel/locking/lockdep.c:5705 [inline]
                   lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
                   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                   _raw_spin_lock_bh+0x33/0x40 kernel/locking/spinlock.c:178
                   spin_lock_bh include/linux/spinlock.h:355 [inline]
                   dev_reset_queue+0xab/0x1d0 net/sched/sch_generic.c:1291
                   netdev_for_each_tx_queue include/linux/netdevice.h:2517 [inline]
                   dev_deactivate_many+0x36d/0xb00 net/sched/sch_generic.c:1359
                   dev_deactivate+0xed/0x1b0 net/sched/sch_generic.c:1382
                   linkwatch_do_dev+0x101/0x150 net/core/link_watch.c:180
                   __linkwatch_run_queue+0x23f/0x6a0 net/core/link_watch.c:235
                   linkwatch_event+0x4e/0x70 net/core/link_watch.c:278
                   process_one_work+0x99a/0x15e0 kernel/workqueue.c:2405
                   worker_thread+0x67d/0x10c0 kernel/workqueue.c:2552
                   kthread+0x344/0x440 kernel/kthread.c:379
                   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 }
 ... key      at: [<ffffffff9217c5e0>] __key.4+0x0/0x40

the dependencies between the lock to be acquired
 and SOFTIRQ-irq-unsafe lock:
-> (fs_reclaim){+.+.}-{0:0} {
   HARDIRQ-ON-W at:
                    lock_acquire kernel/locking/lockdep.c:5705 [inline]
                    lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
                    __fs_reclaim_acquire mm/page_alloc.c:3893 [inline]
                    fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:3907
                    might_alloc include/linux/sched/mm.h:303 [inline]
                    slab_pre_alloc_hook mm/slab.h:670 [inline]
                    slab_alloc_node mm/slab.c:3240 [inline]
                    __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540
                    kmalloc_trace+0x26/0xe0 mm/slab_common.c:1057
                    kmalloc include/linux/slab.h:559 [inline]
                    kzalloc include/linux/slab.h:680 [inline]
                    alloc_workqueue_attrs kernel/workqueue.c:3510 [inline]
                    wq_numa_init kernel/workqueue.c:6216 [inline]
                    workqueue_init+0xf5/0xd40 kernel/workqueue.c:6343
                    kernel_init_freeable+0x34c/0xba0 init/main.c:1557
                    kernel_init+0x1e/0x2c0 init/main.c:1462
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
   SOFTIRQ-ON-W at:
                    lock_acquire kernel/locking/lockdep.c:5705 [inline]
                    lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
                    __fs_reclaim_acquire mm/page_alloc.c:3893 [inline]
                    fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:3907
                    might_alloc include/linux/sched/mm.h:303 [inline]
                    slab_pre_alloc_hook mm/slab.h:670 [inline]
                    slab_alloc_node mm/slab.c:3240 [inline]
                    __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540
                    kmalloc_trace+0x26/0xe0 mm/slab_common.c:1057
                    kmalloc include/linux/slab.h:559 [inline]
                    kzalloc include/linux/slab.h:680 [inline]
                    alloc_workqueue_attrs kernel/workqueue.c:3510 [inline]
                    wq_numa_init kernel/workqueue.c:6216 [inline]
                    workqueue_init+0xf5/0xd40 kernel/workqueue.c:6343
                    kernel_init_freeable+0x34c/0xba0 init/main.c:1557
                    kernel_init+0x1e/0x2c0 init/main.c:1462
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
   INITIAL USE at:
                   lock_acquire kernel/locking/lockdep.c:5705 [inline]
                   lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
                   __fs_reclaim_acquire mm/page_alloc.c:3893 [inline]
                   fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:3907
                   might_alloc include/linux/sched/mm.h:303 [inline]
                   slab_pre_alloc_hook mm/slab.h:670 [inline]
                   slab_alloc_node mm/slab.c:3240 [inline]
                   __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540
                   kmalloc_trace+0x26/0xe0 mm/slab_common.c:1057
                   kmalloc include/linux/slab.h:559 [inline]
                   kzalloc include/linux/slab.h:680 [inline]
                   alloc_workqueue_attrs kernel/workqueue.c:3510 [inline]
                   wq_numa_init kernel/workqueue.c:6216 [inline]
                   workqueue_init+0xf5/0xd40 kernel/workqueue.c:6343
                   kernel_init_freeable+0x34c/0xba0 init/main.c:1557
                   kernel_init+0x1e/0x2c0 init/main.c:1462
                   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 }
 ... key      at: [<ffffffff8c8e9a80>] __fs_reclaim_map+0x0/0xe0
 ... acquired at:
   lock_acquire kernel/locking/lockdep.c:5705 [inline]
   lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
   __fs_reclaim_acquire mm/page_alloc.c:3893 [inline]
   fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:3907
   might_alloc include/linux/sched/mm.h:303 [inline]
   slab_pre_alloc_hook mm/slab.h:670 [inline]
   slab_alloc_node mm/slab.c:3240 [inline]
   __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540
   __do_kmalloc_node mm/slab_common.c:965 [inline]
   __kmalloc_node+0x51/0x1a0 mm/slab_common.c:973
   kmalloc_node include/linux/slab.h:579 [inline]
   kvmalloc_node+0xa2/0x1a0 mm/util.c:604
   kvmalloc include/linux/slab.h:697 [inline]
   get_dist_table+0x8e/0x3a0 net/sched/sch_netem.c:788
   netem_change+0x57c/0x1f70 net/sched/sch_netem.c:985
   netem_init+0x70/0xc0 net/sched/sch_netem.c:1072
   qdisc_create+0x4d1/0x10c0 net/sched/sch_api.c:1326
   tc_modify_qdisc+0x488/0x1c30 net/sched/sch_api.c:1720
   rtnetlink_rcv_msg+0x43d/0xd50 net/core/rtnetlink.c:6417
   netlink_rcv_skb+0x165/0x440 net/netlink/af_netlink.c:2546
   netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline]
   netlink_unicast+0x547/0x7f0 net/netlink/af_netlink.c:1365
   netlink_sendmsg+0x925/0xe30 net/netlink/af_netlink.c:1913
   sock_sendmsg_nosec net/socket.c:724 [inline]
   sock_sendmsg+0xde/0x190 net/socket.c:747
   ____sys_sendmsg+0x71c/0x900 net/socket.c:2503
   ___sys_sendmsg+0x110/0x1b0 net/socket.c:2557
   __sys_sendmsg+0xf7/0x1c0 net/socket.c:2586
   do_syscall_x64 arch/x86/entry/common.c:50 [inline]
   do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
   entry_SYSCALL_64_after_hwframe+0x63/0xcd


stack backtrace:
CPU: 0 PID: 18000 Comm: syz-executor.3 Not tainted 6.4.0-rc7-syzkaller-00194-g8a28a0b6f1a1 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2627 [inline]
 check_irq_usage+0x114e/0x1a40 kernel/locking/lockdep.c:2866
 check_prev_add kernel/locking/lockdep.c:3117 [inline]
 check_prevs_add kernel/locking/lockdep.c:3232 [inline]
 validate_chain kernel/locking/lockdep.c:3847 [inline]
 __lock_acquire+0x2fe5/0x5f30 kernel/locking/lockdep.c:5088
 lock_acquire kernel/locking/lockdep.c:5705 [inline]
 lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
 __fs_reclaim_acquire mm/page_alloc.c:3893 [inline]
 fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:3907
 might_alloc include/linux/sched/mm.h:303 [inline]
 slab_pre_alloc_hook mm/slab.h:670 [inline]
 slab_alloc_node mm/slab.c:3240 [inline]
 __kmem_cache_alloc_node+0x3b/0x3f0 mm/slab.c:3540
 __do_kmalloc_node mm/slab_common.c:965 [inline]
 __kmalloc_node+0x51/0x1a0 mm/slab_common.c:973
 kmalloc_node include/linux/slab.h:579 [inline]
 kvmalloc_node+0xa2/0x1a0 mm/util.c:604
 kvmalloc include/linux/slab.h:697 [inline]
 get_dist_table+0x8e/0x3a0 net/sched/sch_netem.c:788
 netem_change+0x57c/0x1f70 net/sched/sch_netem.c:985
 netem_init+0x70/0xc0 net/sched/sch_netem.c:1072
 qdisc_create+0x4d1/0x10c0 net/sched/sch_api.c:1326
 tc_modify_qdisc+0x488/0x1c30 net/sched/sch_api.c:1720
 rtnetlink_rcv_msg+0x43d/0xd50 net/core/rtnetlink.c:6417
 netlink_rcv_skb+0x165/0x440 net/netlink/af_netlink.c:2546
 netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline]
 netlink_unicast+0x547/0x7f0 net/netlink/af_netlink.c:1365
 netlink_sendmsg+0x925/0xe30 net/netlink/af_netlink.c:1913
 sock_sendmsg_nosec net/socket.c:724 [inline]
 sock_sendmsg+0xde/0x190 net/socket.c:747
 ____sys_sendmsg+0x71c/0x900 net/socket.c:2503
 ___sys_sendmsg+0x110/0x1b0 net/socket.c:2557
 __sys_sendmsg+0xf7/0x1c0 net/socket.c:2586
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f31c528c389
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f31c6033168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f31c53ac050 RCX: 00007f31c528c389
RDX: 0000000000000000 RSI: 00000000200007c0 RDI: 0000000000000004
RBP: 00007f31c52d7493 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffdeff5f98f R14: 00007f31c6033300 R15: 0000000000022000
 </TASK>
BUG: sleeping function called from invalid context at include/linux/sched/mm.h:306
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 18000, name: syz-executor.3
preempt_count: 201, expected: 0
RCU nest depth: 0, expected: 0
INFO: lockdep is turned off.
Preemption disabled at:
[<0000000000000000>] 0x0
CPU: 0 PID: 18000 Comm: syz-executor.3 Not tainted 6.4.0-rc7-syzkaller-00194-g8a28a0b6f1a1 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x136/0x150 lib/dump_stack.c:106
 __might_resched+0x358/0x580 kernel/sched/core.c:10153
 might_alloc include/linux/sched/mm.h:306 [inline]
 slab_pre_alloc_hook mm/slab.h:670 [inline]
 slab_alloc_node mm/slab.c:3240 [inline]
 __kmem_cache_alloc_node+0x235/0x3f0 mm/slab.c:3540
 __do_kmalloc_node mm/slab_common.c:965 [inline]
 __kmalloc_node+0x51/0x1a0 mm/slab_common.c:973
 kmalloc_node include/linux/slab.h:579 [inline]
 kvmalloc_node+0xa2/0x1a0 mm/util.c:604
 kvmalloc include/linux/slab.h:697 [inline]
 get_dist_table+0x8e/0x3a0 net/sched/sch_netem.c:788
 netem_change+0x57c/0x1f70 net/sched/sch_netem.c:985
 netem_init+0x70/0xc0 net/sched/sch_netem.c:1072
 qdisc_create+0x4d1/0x10c0 net/sched/sch_api.c:1326
 tc_modify_qdisc+0x488/0x1c30 net/sched/sch_api.c:1720
 rtnetlink_rcv_msg+0x43d/0xd50 net/core/rtnetlink.c:6417
 netlink_rcv_skb+0x165/0x440 net/netlink/af_netlink.c:2546
 netlink_unicast_kernel net/netlink/af_netlink.c:1339 [inline]
 netlink_unicast+0x547/0x7f0 net/netlink/af_netlink.c:1365
 netlink_sendmsg+0x925/0xe30 net/netlink/af_netlink.c:1913
 sock_sendmsg_nosec net/socket.c:724 [inline]
 sock_sendmsg+0xde/0x190 net/socket.c:747
 ____sys_sendmsg+0x71c/0x900 net/socket.c:2503
 ___sys_sendmsg+0x110/0x1b0 net/socket.c:2557
 __sys_sendmsg+0xf7/0x1c0 net/socket.c:2586
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f31c528c389
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f31c6033168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f31c53ac050 RCX: 00007f31c528c389
RDX: 0000000000000000 RSI: 00000000200007c0 RDI: 0000000000000004
RBP: 00007f31c52d7493 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffdeff5f98f R14: 00007f31c6033300 R15: 0000000000022000
 </TASK>

Crashes (2):
Time             | Kernel   | Commit       | Syzkaller | Manager                            | Title
2023/06/23 02:03 | upstream | 8a28a0b6f1a1 | 09ffe269  | ci-upstream-kasan-gce-selinux-root | possible deadlock in pie_timer
2023/06/22 16:15 | net      | 2ba7e7ebb6a7 | 09ffe269  | ci-upstream-net-this-kasan-gce     | possible deadlock in pie_timer
* Struck through repros no longer work on HEAD.