syzbot


INFO: task hung in htab_map_free

Status: auto-closed as invalid on 2019/08/29 11:50
Reported-by: syzbot+a365d0f8b70dc6404fc1@syzkaller.appspotmail.com
First crash: 2092d, last: 2048d

Sample crash report:
   Free memory is -14360kB above reserved
lowmemorykiller: Killing 'syz-executor.5' (20318) (tgid 20317), adj 1000,
   to free 34996kB on behalf of 'cron' (1966) because
   cache 212kB is below limit 6144kB for oom_score_adj 0
   Free memory is -14332kB above reserved
INFO: task kworker/1:2:2247 blocked for more than 140 seconds.
      Not tainted 4.9.141+ #23
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/1:2     D26552  2247      2 0x80000000
Workqueue: events bpf_map_free_deferred
 ffff8801d18c17c0 ffff8801c6be4780 ffff8801c6be4780 ffff8801d248c740
 ffff8801db721018 ffff8801d06277b0 ffffffff828075c2 ffffffff82e33920
 ffffffff83c7a7d0 ffffffff82e33920 0000000000004c64 ffff8801db7218f0
Call Trace:
 [<ffffffff82808aef>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
 [<ffffffff828142d5>] schedule_timeout+0x735/0xe20 kernel/time/timer.c:1771
 [<ffffffff8280a63f>] do_wait_for_common kernel/sched/completion.c:75 [inline]
 [<ffffffff8280a63f>] __wait_for_common kernel/sched/completion.c:93 [inline]
 [<ffffffff8280a63f>] wait_for_common+0x3ef/0x5d0 kernel/sched/completion.c:101
 [<ffffffff8280a838>] wait_for_completion+0x18/0x20 kernel/sched/completion.c:122
 [<ffffffff81243b37>] __wait_rcu_gp+0x137/0x1b0 kernel/rcu/update.c:369
 [<ffffffff8124c21a>] synchronize_rcu.part.55+0xfa/0x110 kernel/rcu/tree_plugin.h:684
 [<ffffffff8124c257>] synchronize_rcu+0x27/0x90 kernel/rcu/tree_plugin.h:685
 [<ffffffff813b2a0e>] htab_map_free+0x1e/0x440 kernel/bpf/hashtab.c:704
 [<ffffffff81398436>] bpf_map_free_deferred+0xb6/0xf0 kernel/bpf/syscall.c:126
 [<ffffffff81131001>] process_one_work+0x831/0x15f0 kernel/workqueue.c:2092
 [<ffffffff81131e96>] worker_thread+0xd6/0x1140 kernel/workqueue.c:2226
 [<ffffffff81142c3d>] kthread+0x26d/0x300 kernel/kthread.c:211
 [<ffffffff82817a5c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373

Showing all locks held in the system:
2 locks held by kworker/0:0/4:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&rew.rew_work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:0/18:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by khungtaskd/24:
 #0:  (rcu_read_lock){......}, at: [<ffffffff8131c0cc>] check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
 #0:  (rcu_read_lock){......}, at: [<ffffffff8131c0cc>] watchdog+0x11c/0xa20 kernel/hung_task.c:239
 #1:  (tasklist_lock){.+.?..}, at: [<ffffffff813fe63f>] debug_show_all_locks+0x79/0x218 kernel/locking/lockdep.c:4336
1 lock held by rsyslogd/1915:
 #0:  (&f->f_pos_lock){+.+.+.}, at: [<ffffffff8156cc7c>] __fdget_pos+0xac/0xd0 fs/file.c:781
2 locks held by getty/2042:
 #0:  (&tty->ldisc_sem){++++++}, at: [<ffffffff82815952>] ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
 #1:  (&ldata->atomic_read_lock){+.+.+.}, at: [<ffffffff81d37362>] n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
2 locks held by kworker/1:2/2247:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&map->work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
1 lock held by syz-executor.5/5729:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
2 locks held by syz-executor.5/5782:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
 #1:  (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a749>] exp_funnel_lock kernel/rcu/tree_exp.h:256 [inline]
 #1:  (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a749>] _synchronize_rcu_expedited+0x339/0x840 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor.1/8654:
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] inode_lock include/linux/fs.h:766 [inline]
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] __sock_release+0x8b/0x260 net/socket.c:604
1 lock held by syz-executor.1/9588:
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] inode_lock include/linux/fs.h:766 [inline]
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] __sock_release+0x8b/0x260 net/socket.c:604
2 locks held by syz-executor.4/14821:
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] inode_lock include/linux/fs.h:766 [inline]
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] __sock_release+0x8b/0x260 net/socket.c:604
 #1:  (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a7b7>] exp_funnel_lock kernel/rcu/tree_exp.h:289 [inline]
 #1:  (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a7b7>] _synchronize_rcu_expedited+0x3a7/0x840 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor.3/14992:
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] inode_lock include/linux/fs.h:766 [inline]
 #0:  (&sb->s_type->i_mutex_key#8){+.+.+.}, at: [<ffffffff8229bd8b>] __sock_release+0x8b/0x260 net/socket.c:604
2 locks held by kworker/0:1/29049:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
1 lock held by syz-executor.1/29336:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.1/29366:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
2 locks held by kworker/1:1/7860:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&map->work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
3 locks held by kworker/1:3/10999:
 #0:  ("%s"("ipv6_addrconf")){.+.+..}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((addr_chk_work).work){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
 #2:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
3 locks held by kworker/u4:0/16198:
 #0:  ("%s""netns"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  (net_cleanup_work){+.+.+.}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
 #2:  (net_mutex){+.+.+.}, at: [<ffffffff822e681f>] cleanup_net+0x13f/0x8b0 net/core/net_namespace.c:439
2 locks held by kworker/0:2/20372:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&map->work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/0:4/20373:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&map->work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
2 locks held by kworker/1:4/20374:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&ns->proc_work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.9.141+ #23
 ffff8801d9907d08 ffffffff81b42e79 0000000000000000 0000000000000001
 0000000000000001 0000000000000001 ffffffff810983b0 ffff8801d9907d40
 ffffffff81b4df89 0000000000000001 0000000000000000 0000000000000003
Call Trace:
 [<ffffffff81b42e79>] __dump_stack lib/dump_stack.c:15 [inline]
 [<ffffffff81b42e79>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
 [<ffffffff81b4df89>] nmi_cpu_backtrace.cold.0+0x48/0x87 lib/nmi_backtrace.c:99
 [<ffffffff81b4df1c>] nmi_trigger_cpumask_backtrace+0x12c/0x151 lib/nmi_backtrace.c:60
 [<ffffffff810984b4>] arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:37
 [<ffffffff8131c65d>] trigger_all_cpu_backtrace include/linux/nmi.h:58 [inline]
 [<ffffffff8131c65d>] check_hung_task kernel/hung_task.c:125 [inline]
 [<ffffffff8131c65d>] check_hung_uninterruptible_tasks kernel/hung_task.c:182 [inline]
 [<ffffffff8131c65d>] watchdog+0x6ad/0xa20 kernel/hung_task.c:239
 [<ffffffff81142c3d>] kthread+0x26d/0x300 kernel/kthread.c:211
 [<ffffffff82817a5c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 1966 Comm: cron Not tainted 4.9.141+ #23
task: ffff8801d28f17c0 task.stack: ffff8801d2a10000
RIP: 0010:[<ffffffff8131ba86>]  [<ffffffff8131ba86>] __sanitizer_cov_trace_pc+0x26/0x50 kernel/kcov.c:100
RSP: 0000:ffff8801d2a172a8  EFLAGS: 00000246
RAX: ffff8801d28f17c0 RBX: ffff8801ca288000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffffffff81ba7d5c RDI: ffffffff82b447e0
RBP: ffff8801d2a172a8 R08: ffff8801d28f2130 R09: a4dc5a3e6d98f24d
R10: ffff8801d28f17c0 R11: 0000000000000001 R12: ffffffff82b447e0
R13: ffffffff82b447a0 R14: 0000000000000000 R15: 0000000000000600
FS:  00007fe9aca6a7a0(0000) GS:ffff8801db600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000043246a CR3: 00000001d256d000 CR4: 00000000001606b0
Stack:
 ffff8801d2a172e8 ffffffff81ba7d5c ffffffff81419f70 ffff8801ca288000
 ffff8801ca288000 ffff8801ca288418 0000000000000000 0000000000000600
 ffff8801d2a172f8 ffffffff81ba7f5c ffff8801d2a17318 ffffffff81247305
Call Trace:
 [<ffffffff81ba7d5c>] check_preemption_disabled+0x1c/0x200 lib/smp_processor_id.c:13
 [<ffffffff81ba7f5c>] debug_smp_processor_id+0x1c/0x20 lib/smp_processor_id.c:56
 [<ffffffff81247305>] __rcu_is_watching kernel/rcu/tree.c:1053 [inline]
 [<ffffffff81247305>] rcu_is_watching+0x15/0xa0 kernel/rcu/tree.c:1067
 [<ffffffff8141a11b>] rcu_read_lock include/linux/rcupdate.h:876 [inline]
 [<ffffffff8141a11b>] find_lock_task_mm+0x1ab/0x270 mm/oom_kill.c:112
 [<ffffffff821effdf>] lowmem_scan+0x34f/0xaf0 drivers/staging/android/lowmemorykiller.c:134
 [<ffffffff81449cc6>] do_shrink_slab mm/vmscan.c:398 [inline]
 [<ffffffff81449cc6>] shrink_slab.part.8+0x3c6/0xa00 mm/vmscan.c:501
 [<ffffffff814557fd>] shrink_slab mm/vmscan.c:465 [inline]
 [<ffffffff814557fd>] shrink_node+0x1ed/0x740 mm/vmscan.c:2602
 [<ffffffff814560c7>] shrink_zones mm/vmscan.c:2749 [inline]
 [<ffffffff814560c7>] do_try_to_free_pages mm/vmscan.c:2791 [inline]
 [<ffffffff814560c7>] try_to_free_pages+0x377/0xb80 mm/vmscan.c:3002
 [<ffffffff81428a01>] __perform_reclaim mm/page_alloc.c:3324 [inline]
 [<ffffffff81428a01>] __alloc_pages_direct_reclaim mm/page_alloc.c:3345 [inline]
 [<ffffffff81428a01>] __alloc_pages_slowpath mm/page_alloc.c:3697 [inline]
 [<ffffffff81428a01>] __alloc_pages_nodemask+0x981/0x1bd0 mm/page_alloc.c:3862
 [<ffffffff8143564a>] __alloc_pages include/linux/gfp.h:433 [inline]
 [<ffffffff8143564a>] __alloc_pages_node include/linux/gfp.h:446 [inline]
 [<ffffffff8143564a>] alloc_pages_node include/linux/gfp.h:460 [inline]
 [<ffffffff8143564a>] __page_cache_alloc include/linux/pagemap.h:208 [inline]
 [<ffffffff8143564a>] __do_page_cache_readahead+0x21a/0x8b0 mm/readahead.c:183
 [<ffffffff81415534>] ra_submit mm/internal.h:59 [inline]
 [<ffffffff81415534>] do_sync_mmap_readahead mm/filemap.c:2066 [inline]
 [<ffffffff81415534>] filemap_fault+0x924/0x1110 mm/filemap.c:2143
 [<ffffffff816e7721>] ext4_filemap_fault+0x71/0xa0 fs/ext4/inode.c:5853
 [<ffffffff81492ef3>] __do_fault+0x223/0x500 mm/memory.c:2833
 [<ffffffff814a3696>] do_read_fault mm/memory.c:3180 [inline]
 [<ffffffff814a3696>] do_fault mm/memory.c:3315 [inline]
 [<ffffffff814a3696>] handle_pte_fault mm/memory.c:3516 [inline]
 [<ffffffff814a3696>] __handle_mm_fault mm/memory.c:3603 [inline]
 [<ffffffff814a3696>] handle_mm_fault+0x1326/0x2350 mm/memory.c:3640
 [<ffffffff810b2b33>] __do_page_fault+0x403/0xa60 arch/x86/mm/fault.c:1406
 [<ffffffff810b31e7>] do_page_fault+0x27/0x30 arch/x86/mm/fault.c:1469
 [<ffffffff828188b5>] page_fault+0x25/0x30 arch/x86/entry/entry_64.S:951
Code: ff ff 0f 1f 00 55 48 89 e5 48 8b 75 08 65 48 8b 04 25 00 7e 01 00 65 8b 15 18 c3 cf 7e 81 e2 00 01 1f 00 75 2b 8b 90 38 12 00 00 <83> fa 02 75 20 48 8b 88 40 12 00 00 8b 80 3c 12 00 00 48 8b 11

Crashes (2):
Time              Kernel                                                       Commit        Syzkaller  Manager
2019/03/02 11:49  https://android.googlesource.com/kernel/common android-4.9   8fe428403e30  1c0e457a   ci-android-49-kasan-gce-386
2019/01/18 06:29  https://android.googlesource.com/kernel/common android-4.9   8fe428403e30  5bf17c30   ci-android-49-kasan-gce-386