syzbot


INFO: task hung in worker_thread (3)

Status: auto-obsoleted due to no activity on 2023/05/20 16:01
Subsystems: serial
First crash: 437d, last: 437d
Similar bugs (4)
Kernel      Title                                 Subsystems  Count  Last   Reported  Patched  Status
linux-4.19  INFO: task hung in worker_thread      -           1      844d   844d      0/1      auto-closed as invalid on 2022/05/08 06:12
upstream    INFO: task hung in worker_thread (4)  kernel      1      122d   122d      0/26     auto-obsoleted due to no activity on 2024/03/31 02:07
upstream    INFO: task hung in worker_thread (2)  fs          1      1197d  1197d     0/26     auto-closed as invalid on 2021/05/17 11:26
upstream    INFO: task hung in worker_thread      fs          1      1362d  1362d     0/26     auto-closed as invalid on 2020/11/06 03:23

Sample crash report:
INFO: task kworker/1:18:14574 blocked for more than 143 seconds.
      Not tainted 6.2.0-rc8-syzkaller-00151-g925cf0457d7e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:18    state:I stack:21728 pid:14574 ppid:2      flags:0x00004000
Workqueue:  0x0 (events)

Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5296 [inline]
 __schedule+0x1409/0x43f0 kernel/sched/core.c:6609
 schedule+0xc3/0x190 kernel/sched/core.c:6685
 worker_thread+0xec1/0x1210 kernel/workqueue.c:2457
 kthread+0x270/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
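The hung-task watchdog that produced the message above is tunable at runtime. A minimal dry-run sketch of the relevant knobs (the `kernel.hung_task_*` sysctls are standard Linux, but need root and CONFIG_DETECT_HUNG_TASK; the 300 s value is illustrative, not from this report):

```shell
# Dry run: each command is printed, not executed, so this is safe anywhere.
run() { echo "+ $*"; }   # swap for: run() { "$@"; } to actually apply

run sysctl -w kernel.hung_task_timeout_secs=300   # raise the timeout the report tripped
run sysctl -w kernel.hung_task_panic=1            # panic instead of warning (useful when fuzzing)
run sh -c 'echo 0 > /proc/sys/kernel/hung_task_timeout_secs'  # disable, per the hint in the log
```

Syzbot instances typically leave the detector enabled so stalls like this one surface as reports.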

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cf25950 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xce0 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cf26150 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xce0 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/28:
 #0: ffffffff8cf25780 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/0:2/897:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77f/0x1370
 #1: ffffc900045bfd20 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x7c6/0x1370 kernel/workqueue.c:2264
2 locks held by getty/4741:
 #0: ffff88814a0f5098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900015a02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6ab/0x1db0 drivers/tty/n_tty.c:2177
4 locks held by kworker/u4:14/6769:
 #0: ffff8880b983b1d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:537
 #1: ffff8880b9828748 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x679/0xa30 kernel/sched/psi.c:976
 #2: ffff8880b9829558 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x120/0x260 kernel/time/timer.c:999
 #3: ffffffff91c22778 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x9a/0x6d0 lib/debugobjects.c:665
2 locks held by kworker/0:16/13504:
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x77f/0x1370
 #1: ffffc9000329fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7c6/0x1370 kernel/workqueue.c:2264
3 locks held by kworker/u4:23/29612:
 #0: ffff888012613138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77f/0x1370
 #1: ffffc900049b7d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7c6/0x1370 kernel/workqueue.c:2264
 #2: ffffffff8cf2ac40 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x5f0 kernel/rcu/tree.c:3997
3 locks held by kworker/1:2/30128:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77f/0x1370
 #1: ffffc900043afd20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7c6/0x1370 kernel/workqueue.c:2264
 #2: ffffffff8e07e9c8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:277
2 locks held by kworker/u4:29/30206:
3 locks held by kworker/1:15/4275:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77f/0x1370
 #1: ffffc9000b67fd20 (fqdir_free_work){+.+.}-{0:0}, at: process_one_work+0x7c6/0x1370 kernel/workqueue.c:2264
 #2: ffffffff8cf2ac40 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x5f0 kernel/rcu/tree.c:3997
4 locks held by syz-executor.5/13271:
 #0: ffff88801fde1028 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close net/bluetooth/hci_core.c:552 [inline]
 #0: ffff88801fde1028 (&hdev->req_lock){+.+.}-{3:3}, at: hci_unregister_dev+0x1c2/0x480 net/bluetooth/hci_core.c:2702
 #1: ffff88801fde0078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x449/0x1020 net/bluetooth/hci_sync.c:4837
 #2: ffffffff8e1d8ac8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1790 [inline]
 #2: ffffffff8e1d8ac8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xbc/0x210 net/bluetooth/hci_conn.c:2437
 #3: ffffffff8cf2ad78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #3: ffffffff8cf2ad78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x474/0x890 kernel/rcu/tree_exp.h:946
3 locks held by syz-executor.5/13332:
 #0: ffff88802b001028 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close net/bluetooth/hci_core.c:552 [inline]
 #0: ffff88802b001028 (&hdev->req_lock){+.+.}-{3:3}, at: hci_unregister_dev+0x1c2/0x480 net/bluetooth/hci_core.c:2702
 #1: ffff88802b000078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x449/0x1020 net/bluetooth/hci_sync.c:4837
 #2: ffffffff8e1d8ac8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1790 [inline]
 #2: ffffffff8e1d8ac8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xbc/0x210 net/bluetooth/hci_conn.c:2437
2 locks held by syz-executor.5/13338:
 #0: ffffffff8e07e9c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:75 [inline]
 #0: ffffffff8e07e9c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x770/0xe90 net/core/rtnetlink.c:6138
 #1: ffffffff8cf2ad78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8cf2ad78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x474/0x890 kernel/rcu/tree_exp.h:946
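Lock listings like the one above can be tallied to see which lock classes recur across stuck tasks (here, `rtnl_mutex`, `rcu_state.exp_mutex`, and `rcu_state.barrier_mutex` keep reappearing). A small shell sketch; the file path is hypothetical and the sample lines are copied from the dump above:

```shell
# Tally lock classes in a saved lockdep "locks held" dump.
# Sample input copied from the report above (hypothetical path /tmp/report.txt):
cat > /tmp/report.txt <<'EOF'
 #0: ffffffff8e07e9c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:75 [inline]
 #1: ffffffff8cf2ad78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #2: ffffffff8cf2ac40 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x5f0 kernel/rcu/tree.c:3997
 #0: ffffffff8e07e9c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x770/0xe90 net/core/rtnetlink.c:6138
EOF
# Extract the parenthesized lock-class names and count occurrences:
grep -oE '\([A-Za-z_.]+\)' /tmp/report.txt | sort | uniq -c | sort -rn
```

On the full report this quickly points at the RCU grace-period mutexes as the common thread behind the blocked workers.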

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.2.0-rc8-syzkaller-00151-g925cf0457d7e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/21/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e5/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b4/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf70/0xfb0 kernel/hung_task.c:377
 kthread+0x270/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 30225 Comm: kworker/u4:31 Not tainted 6.2.0-rc8-syzkaller-00151-g925cf0457d7e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/21/2023
Workqueue: phy7 ieee80211_iface_work
RIP: 0010:variable_test_bit arch/x86/include/asm/bitops.h:233 [inline]
RIP: 0010:arch_test_bit arch/x86/include/asm/bitops.h:240 [inline]
RIP: 0010:_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:142 [inline]
RIP: 0010:_ieee802_11_parse_elems_full+0xa7c/0x3d90 net/mac80211/util.c:1073
Code: 3c c1 be 08 00 00 00 e8 e2 22 b1 f7 31 db 4c 0f a3 b4 24 20 03 00 00 41 0f 92 c4 0f 92 c3 bf 02 00 00 00 89 de e8 64 ea 5b f7 <31> ff 89 de e8 5b ea 5b f7 45 84 e4 74 42 e8 c1 e7 5b f7 48 8b bc
RSP: 0018:ffffc90004b07280 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff8a2fec0e
RDX: ffff888082bcba80 RSI: 0000000000000000 RDI: 0000000000000002
RBP: ffffc90004b07650 R08: ffffffff8a2fec2c R09: fffff52000960eb5
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 0000000000000008 R14: 0000000000000000 R15: ffff88803cfa0074
FS:  0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000557d6251fbc0 CR3: 00000000a4b20000 CR4: 00000000003506f0
Call Trace:
 <TASK>
 ieee802_11_parse_elems_full+0xdbd/0x2a40 net/mac80211/util.c:1638
 ieee802_11_parse_elems_crc net/mac80211/ieee80211_i.h:2260 [inline]
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2267 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1605 [inline]
 ieee80211_ibss_rx_queued_mgmt+0x51a/0x2e00 net/mac80211/ibss.c:1638
 ieee80211_iface_process_skb net/mac80211/iface.c:1583 [inline]
 ieee80211_iface_work+0x7bd/0xd00 net/mac80211/iface.c:1637
 process_one_work+0x8fa/0x1370 kernel/workqueue.c:2289
 worker_thread+0xa63/0x1210 kernel/workqueue.c:2436
 kthread+0x270/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>

Crashes (1):
Time:      2023/02/19 15:51
Kernel:    upstream
Commit:    925cf0457d7e
Syzkaller: bcdf85f8
Assets:    disk image, vmlinux, kernel image
Manager:   ci-upstream-kasan-gce-smack-root
Title:     INFO: task hung in worker_thread