syzbot


INFO: task hung in memcg_create_kmem_cache

Status: closed as invalid on 2018/03/27 11:14
Subsystems: cgroups mm
First crash: 2552d, last: 2552d

Sample crash report:
rdma_op 00000000608e50e2 conn xmit_rdma           (null)
INFO: task kworker/0:0:3 blocked for more than 120 seconds.
      Not tainted 4.16.0-rc7+ #367
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/0:0     D21016     3      2 0x80000000
Workqueue: memcg_kmem_cache memcg_kmem_cache_create_func
Call Trace:
 context_switch kernel/sched/core.c:2862 [inline]
 __schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
 schedule+0xf5/0x430 kernel/sched/core.c:3499
 schedule_preempt_disabled+0x10/0x20 kernel/sched/core.c:3557
 __mutex_lock_common kernel/locking/mutex.c:833 [inline]
 __mutex_lock+0xaad/0x1a80 kernel/locking/mutex.c:893
 mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
 memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
 memcg_kmem_cache_create_func+0x57/0xc0 mm/memcontrol.c:2181
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406

Showing all locks held in the system:
4 locks held by kworker/0:0/3:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<00000000fc645d03>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000ac8ea7bc>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000ac8ea7bc>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000fe87f610>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
2 locks held by khungtaskd/799:
 #0:  (rcu_read_lock){....}, at: [<00000000276c6ca9>] check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0:  (rcu_read_lock){....}, at: [<00000000276c6ca9>] watchdog+0x1c5/0xd60 kernel/hung_task.c:249
 #1:  (tasklist_lock){.+.+}, at: [<0000000041a13fa7>] debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
6 locks held by kworker/u4:3/1690:
 #0:  ((wq_completion)"writeback"){+.+.}, at: [<0000000059ebb435>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"writeback"){+.+.}, at: [<0000000059ebb435>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"writeback"){+.+.}, at: [<0000000059ebb435>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"writeback"){+.+.}, at: [<0000000059ebb435>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&(&wb->dwork)->work)){+.+.}, at: [<00000000fc645d03>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (&type->s_umount_key#28){++++}, at: [<00000000c7951191>] trylock_super+0x20/0x100 fs/super.c:395
 #3:  (&sbi->s_journal_flag_rwsem){.+.+}, at: [<0000000004d3c95f>] do_writepages+0xff/0x170 mm/page-writeback.c:2340
 #4:  (jbd2_handle){++++}, at: [<000000004bc175a9>] start_this_handle+0x488/0x1080 fs/jbd2/transaction.c:385
 #5:  (&ei->i_data_sem){++++}, at: [<00000000146d589c>] ext4_map_blocks+0x377/0x1830 fs/ext4/inode.c:629
2 locks held by getty/4159:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4160:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4161:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4162:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4163:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4164:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4165:
 #0:  (&tty->ldisc_sem){++++}, at: [<000000004da7f5c7>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000003f097779>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
1 lock held by syz-executor0/4223:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor7/4224:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor1/4225:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor4/4226:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor5/4227:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor6/4229:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor3/4230:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor2/4232:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
4 locks held by syz-executor1/4233:
 #0:  (sb_writers#11){.+.+}, at: [<000000004c4e57e6>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#11){.+.+}, at: [<000000004c4e57e6>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<00000000c2cd64ee>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<00000000c2cd64ee>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
 #2:  (&type->i_mutex_dir_key#4){++++}, at: [<00000000046e8058>] inode_lock include/linux/fs.h:713 [inline]
 #2:  (&type->i_mutex_dir_key#4){++++}, at: [<00000000046e8058>] vfs_rmdir+0xd6/0x410 fs/namei.c:3848
 #3:  (cgroup_mutex){+.+.}, at: [<000000007090acc6>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535
4 locks held by kworker/1:2/4402:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000059ebb435>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000059ebb435>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000059ebb435>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000059ebb435>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)#3){+.+.}, at: [<00000000fc645d03>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000008093ffb>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000008093ffb>] memcg_destroy_kmem_caches+0xf/0x80 mm/slab_common.c:771
 #3:  (slab_mutex){+.+.}, at: [<00000000a5cc4e5b>] memcg_destroy_kmem_caches+0x24/0x80 mm/slab_common.c:774
3 locks held by kworker/0:5/6783:
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000059ebb435>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000059ebb435>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000059ebb435>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000059ebb435>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  (slab_caches_to_rcu_destroy_work){+.+.}, at: [<00000000fc645d03>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (slab_mutex){+.+.}, at: [<00000000baa07502>] slab_caches_to_rcu_destroy_workfn+0x25/0xc0 mm/slab_common.c:556
4 locks held by kworker/1:6/7245:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000059ebb435>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<00000000fc645d03>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000ac8ea7bc>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000ac8ea7bc>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000fe87f610>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
1 lock held by syz-executor3/11014:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] copy_process kernel/fork.c:1606 [inline]
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] _do_fork+0x1f7/0xf70 kernel/fork.c:2087
1 lock held by syz-executor4/11015:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] copy_process kernel/fork.c:1606 [inline]
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] _do_fork+0x1f7/0xf70 kernel/fork.c:2087
1 lock held by syz-executor4/11025:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor2/11016:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] copy_process kernel/fork.c:1606 [inline]
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] _do_fork+0x1f7/0xf70 kernel/fork.c:2087
1 lock held by syz-executor2/11020:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000931e48ff>] do_exit+0x2f5/0x1ad0 kernel/exit.c:811
1 lock held by syz-executor0/11017:
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] copy_process kernel/fork.c:1606 [inline]
 #0:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000eeceb6c1>] _do_fork+0x1f7/0xf70 kernel/fork.c:2087
3 locks held by syz-executor0/11021:
 #0:  (&sig->cred_guard_mutex){+.+.}, at: [<00000000e609ce47>] SYSC_perf_event_open+0x12ca/0x2e00 kernel/events/core.c:9990
 #1:  (&pmus_srcu){....}, at: [<0000000001cd5fe6>] perf_event_alloc+0xf55/0x2b00 kernel/events/core.c:9551
 #2:  (event_mutex){+.+.}, at: [<00000000147dc730>] perf_trace_init+0x58/0xab0 kernel/trace/trace_event_perf.c:216
5 locks held by syz-executor7/11018:
 #0:  (sb_writers#11){.+.+}, at: [<0000000075e789f3>] file_start_write include/linux/fs.h:2709 [inline]
 #0:  (sb_writers#11){.+.+}, at: [<0000000075e789f3>] vfs_write+0x407/0x510 fs/read_write.c:543
 #1:  (&of->mutex){+.+.}, at: [<00000000d40618f4>] kernfs_fop_write+0x208/0x440 fs/kernfs/file.c:307
 #2:  (cgroup_mutex){+.+.}, at: [<000000007090acc6>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535
 #3:  (&cgroup_threadgroup_rwsem){++++}, at: [<00000000bc540e3e>] percpu_down_write+0xa3/0x500 kernel/locking/percpu-rwsem.c:145
 #4:  (cpuset_mutex){+.+.}, at: [<000000006e923423>] cpuset_can_attach+0x15a/0x450 kernel/cgroup/cpuset.c:1469
1 lock held by syz-executor6/11019:
 #0:  (event_mutex){+.+.}, at: [<00000000c6f84180>] perf_trace_destroy+0x28/0x100 kernel/trace/trace_event_perf.c:234
3 locks held by syz-executor5/11022:
 #0:  (sb_writers#10){.+.+}, at: [<0000000075e789f3>] file_start_write include/linux/fs.h:2709 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<0000000075e789f3>] vfs_write+0x407/0x510 fs/read_write.c:543
 #1:  (&of->mutex){+.+.}, at: [<00000000d40618f4>] kernfs_fop_write+0x208/0x440 fs/kernfs/file.c:307
 #2:  (cgroup_mutex){+.+.}, at: [<000000007090acc6>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 799 Comm: khungtaskd Not tainted 4.16.0-rc7+ #367
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
 check_hung_task kernel/hung_task.c:132 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
 watchdog+0x90c/0xd60 kernel/hung_task.c:249
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 11024 Comm: syz-executor3 Not tainted 4.16.0-rc7+ #367
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:__sanitizer_cov_trace_pc+0x2b/0x50 kernel/kcov.c:101
RSP: 0018:ffff8801b7a17680 EFLAGS: 00000246
RAX: ffff8801b2804580 RBX: ffffffffffffff00 RCX: ffffffff825ae6ff
RDX: 0000000000000002 RSI: 0000000000000008 RDI: ffff8801b3a88da0
RBP: ffff8801b7a17680 R08: ffffed00367511b5 R09: ffff8801b3a88da0
R10: 0000000000000001 R11: ffffed00367511b4 R12: ffffffffffffffff
R13: 0000000000000000 R14: ffff8801b3a88da0 R15: 0000000000000000
FS:  00007efedc285700(0000) GS:ffff8801db200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000930008 CR3: 00000001b5a92006 CR4: 00000000001606f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 __bitmap_set+0x9f/0x110 lib/bitmap.c:270
 bitmap_set include/linux/bitmap.h:365 [inline]
 __bitmap_parselist+0x310/0x4b0 lib/bitmap.c:616
 bitmap_parselist+0x3a/0x50 lib/bitmap.c:628
 cpulist_parse include/linux/cpumask.h:639 [inline]
 update_cpumask kernel/cgroup/cpuset.c:974 [inline]
 cpuset_write_resmask+0x1694/0x2850 kernel/cgroup/cpuset.c:1724
 cgroup_file_write+0x2ae/0x710 kernel/cgroup/cgroup.c:3429
 kernfs_fop_write+0x2bc/0x440 fs/kernfs/file.c:316
 __vfs_write+0xef/0x970 fs/read_write.c:480
 vfs_write+0x189/0x510 fs/read_write.c:544
 SYSC_write fs/read_write.c:589 [inline]
 SyS_write+0xef/0x220 fs/read_write.c:581
 do_syscall_64+0x281/0x940 arch/x86/entry/common.c:287
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x454879
RSP: 002b:00007efedc284c68 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007efedc2856d4 RCX: 0000000000454879
RDX: 0000000000000002 RSI: 0000000020000040 RDI: 0000000000000014
RBP: 000000000072bea0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 000000000000067c R14: 00000000006fac40 R15: 0000000000000000
Code: 55 65 48 8b 04 25 c0 ed 01 00 48 89 e5 65 8b 15 bc d0 90 7e 81 e2 00 01 1f 00 48 8b 4d 08 75 2b 8b 90 b8 12 00 00 83 fa 02 75 20 <48> 8b b0 c0 12 00 00 8b 80 bc 12 00 00 48 8b 16 48 83 c2 01 48 
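A note on the hang signature above: the blocked worker is inside memcg_create_kmem_cache, which (per the lockdep output referencing mm/slab_common.c:619 and :622 in this build) takes cpu_hotplug_lock via get_online_cpus() and then sleeps waiting for slab_mutex. The same mutex also appears in the lock lists of memcg_destroy_kmem_caches (kworker/1:2) and slab_caches_to_rcu_destroy_workfn (kworker/0:5), so the picture is consistent with prolonged contention on slab_mutex rather than an obvious self-deadlock. Below is a rough sketch of that acquisition order, reconstructed from the trace rather than quoted from the kernel source; the function name is made up for illustration.

/*
 * Hypothetical sketch of the locking order visible in the trace;
 * not the verbatim mm/slab_common.c source. slab_mutex is internal
 * to mm/ (declared in mm/slab.h).
 */
static void memcg_create_kmem_cache_sketch(void)
{
	get_online_cpus();		/* lock #2: cpu_hotplug_lock.rw_sem (slab_common.c:619) */
	mutex_lock(&slab_mutex);	/* lock #3 (slab_common.c:622): kworker/0:0 sleeps here
					 * for >120s, triggering the hung-task watchdog */
	/* ... create and register the per-memcg kmem cache ... */
	mutex_unlock(&slab_mutex);
	put_online_cpus();
}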

Crashes (1):
Time              Kernel    Commit        Syzkaller  Config   Log          Report  Manager
2018/03/26 08:22  upstream  3eb2ce825ea1  e033c1f1   .config  console log  report  ci-upstream-kasan-gce