syzbot


INFO: task hung in tioclinux

Status: auto-obsoleted due to no activity on 2024/11/30 23:35
Subsystems: serial
First crash: 115d, last: 115d

Sample crash report:
INFO: task syz.4.4220:14396 blocked for more than 143 seconds.
      Not tainted 6.11.0-rc6-syzkaller-00017-gc9f016e72b5c #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.4220      state:D stack:28752 pid:14396 tgid:14394 ppid:5383   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5188 [inline]
 __schedule+0xe37/0x5490 kernel/sched/core.c:6529
 __schedule_loop kernel/sched/core.c:6606 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6621
 schedule_timeout+0x258/0x2a0 kernel/time/timer.c:2557
 ___down_common kernel/locking/semaphore.c:225 [inline]
 __down_common+0x32d/0x730 kernel/locking/semaphore.c:246
 down+0x74/0xa0 kernel/locking/semaphore.c:63
 console_lock+0x5b/0xa0 kernel/printk/printk.c:2735
 tioclinux+0x49f/0x5f0 drivers/tty/vt/vt.c:3412
 vt_ioctl+0x2eb5/0x2f80 drivers/tty/vt/vt_ioctl.c:761
 tty_ioctl+0x65d/0x15f0 drivers/tty/tty_io.c:2803
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl fs/ioctl.c:893 [inline]
 __x64_sys_ioctl+0x193/0x220 fs/ioctl.c:893
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f454eb79eb9
RSP: 002b:00007f454f983038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f454ed16058 RCX: 00007f454eb79eb9
RDX: 0000000020001040 RSI: 000000000000541c RDI: 0000000000000003
RBP: 00007f454ebe793e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f454ed16058 R15: 00007ffe34413468
 </TASK>
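
The task above is blocked on the console semaphore: an ioctl(TIOCLINUX) on a virtual-console fd reaches tioclinux(), which calls console_lock() and never acquires it (RSI 0x541c in the register dump is TIOCLINUX, RDI the fd, RDX the pointer argument). The sketch below only illustrates that syscall path; it is not the syzbot reproducer (none was generated), and the device path, fd and TIOCL_BLANKSCREEN subcommand are assumptions.

/*
 * Illustration of the ioctl(TIOCLINUX) path from the trace above.
 * Not a reproducer: /dev/tty0 and the TIOCL_BLANKSCREEN subcommand
 * are assumptions; the report does not show which TIOCL_* value
 * the fuzzer passed.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>      /* TIOCLINUX */
#include <unistd.h>
#include <linux/tiocl.h>    /* TIOCL_* subcommand values */

int main(void)
{
	int fd = open("/dev/tty0", O_RDWR);   /* needs a VT and sufficient privileges */
	if (fd < 0) {
		perror("open /dev/tty0");
		return 1;
	}

	/* The first byte of the argument selects the subcommand handled by
	 * tioclinux(); several TIOCL_* branches take console_lock(), which
	 * is where the reported task sits in D state. */
	char arg[64] = { TIOCL_BLANKSCREEN };

	if (ioctl(fd, TIOCLINUX, arg))
		perror("ioctl(TIOCLINUX)");

	close(fd);
	return 0;
}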

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8ddb9fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline]
 #0: ffffffff8ddb9fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #0: ffffffff8ddb9fe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x75/0x340 kernel/locking/lockdep.c:6626
6 locks held by kworker/u8:6/1303:
 #0: ffff88801bae3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1277/0x1b40 kernel/workqueue.c:3206
 #1: ffffc900049f7d80 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x921/0x1b40 kernel/workqueue.c:3207
 #2: ffffffff8fa1fbd0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xbb/0xbb0 net/core/net_namespace.c:594
 #3: ffff888061dd30e8 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:1009 [inline]
 #3: ffff888061dd30e8 (&dev->mutex){....}-{3:3}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff888061dd30e8 (&dev->mutex){....}-{3:3}, at: devlink_pernet_pre_exit+0x12d/0x2b0 net/devlink/core.c:506
 #4: ffff888061dd7250 (&devlink->lock_key#5){+.+.}-{3:3}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff888061dd7250 (&devlink->lock_key#5){+.+.}-{3:3}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888061dd7250 (&devlink->lock_key#5){+.+.}-{3:3}, at: devlink_pernet_pre_exit+0x137/0x2b0 net/devlink/core.c:506
 #5: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: nsim_destroy+0x6f/0x6a0 drivers/net/netdevsim/netdev.c:773
3 locks held by dhcpcd/4893:
2 locks held by getty/4983:
 #0: ffff888030bc30a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xfc8/0x1490 drivers/tty/n_tty.c:2211
3 locks held by syz-executor/5217:
1 lock held by syz-executor/5379:
 #0: ffffffff8ddc5778 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock+0x1a4/0x3b0 kernel/rcu/tree_exp.h:328
2 locks held by syz-executor/5383:
1 lock held by syz-executor/14343:
 #0: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: __tun_chr_ioctl+0x4fc/0x4770 drivers/net/tun.c:3120
2 locks held by syz-executor/14348:
 #0: ffffffff8fa1fbd0 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x2d6/0x700 net/core/net_namespace.c:504
 #1: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x218/0x780 net/ipv4/ip_tunnel.c:1158
6 locks held by syz.4.4220/14395:
1 lock held by syz.1.4221/14400:
 #0: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3e/0x230 drivers/net/tun.c:3510
3 locks held by kworker/u8:2/14408:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1277/0x1b40 kernel/workqueue.c:3206
 #1: ffffc90009397d80 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x921/0x1b40 kernel/workqueue.c:3207
 #2: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0x51/0xc0 net/core/link_watch.c:276
3 locks held by kworker/u8:5/14410:
 #0: ffff8880300a1148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x1277/0x1b40 kernel/workqueue.c:3206
 #1: ffffc9000a37fd80 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1b40 kernel/workqueue.c:3207
 #2: ffffffff8fa355e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4734
3 locks held by syz-executor/14413:
3 locks held by kworker/1:2/14414:
3 locks held by kworker/u8:8/14415:
2 locks held by dhcpcd/14417:
 #0: ffff888026014258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1607 [inline]
 #0: ffff888026014258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2c/0xf60 net/packet/af_packet.c:3266
 #1: ffffffff8ddc5778 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock+0x1a4/0x3b0 kernel/rcu/tree_exp.h:328
1 lock held by dhcpcd/14421:
 #0: ffff88807ed7a258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1607 [inline]
 #0: ffff88807ed7a258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2c/0xf60 net/packet/af_packet.c:3266

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.11.0-rc6-syzkaller-00017-gc9f016e72b5c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:93 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:119
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xf0c/0x1240 kernel/hung_task.c:379
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 14415 Comm: kworker/u8:8 Not tainted 6.11.0-rc6-syzkaller-00017-gc9f016e72b5c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Workqueue: bat_events batadv_tt_purge
RIP: 0010:orc_find arch/x86/kernel/unwind_orc.c:218 [inline]
RIP: 0010:unwind_next_frame+0x2a0/0x23a0 arch/x86/kernel/unwind_orc.c:494
Code: 7c 09 84 d2 74 05 e8 4f a1 ad 00 42 8b 04 b5 5c 6c 5d 91 41 83 c5 01 4a 8d 3c ad 5c 6c 5d 91 48 89 fa 48 c1 ea 03 89 44 24 28 <48> b8 00 00 00 00 00 fc ff df 0f b6 14 02 48 89 f8 83 e0 07 83 c0
RSP: 0018:ffffc90000a179f0 EFLAGS: 00000a07
RAX: 00000000001b17f2 RBX: ffffc90000a17a70 RCX: ffffffff813ce13d
RDX: 1ffffffff23023fc RSI: ffffffff813ce14b RDI: ffffffff91811fe4
RBP: 0000000000000002 R08: 0000000000000004 R09: 000000000008ece1
R10: 00000000000a4000 R11: dffffc0000000000 R12: ffffffff89ece1b2
R13: 000000000008ece2 R14: 000000000008ece1 R15: ffffc90000a17aa5
FS:  0000000000000000(0000) GS:ffff8880b8900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055c6fda77498 CR3: 000000000db7c000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 arch_stack_walk+0x100/0x170 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x95/0xd0 kernel/stacktrace.c:122
 save_stack+0x162/0x1f0 mm/page_owner.c:156
 __set_page_owner+0x8b/0x560 mm/page_owner.c:320
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x2d1/0x350 mm/page_alloc.c:1493
 prep_new_page mm/page_alloc.c:1501 [inline]
 get_page_from_freelist+0x1351/0x2e50 mm/page_alloc.c:3439
 __alloc_pages_noprof+0x22b/0x2460 mm/page_alloc.c:4695
 __alloc_pages_node_noprof include/linux/gfp.h:269 [inline]
 alloc_pages_node_noprof include/linux/gfp.h:296 [inline]
 alloc_slab_page+0x4e/0xf0 mm/slub.c:2321
 allocate_slab mm/slub.c:2484 [inline]
 new_slab+0x84/0x260 mm/slub.c:2537
 ___slab_alloc+0xdac/0x1870 mm/slub.c:3723
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3813
 __slab_alloc_node mm/slub.c:3866 [inline]
 slab_alloc_node mm/slub.c:4025 [inline]
 kmem_cache_alloc_noprof+0x2a7/0x2f0 mm/slub.c:4044
 skb_clone+0x190/0x3f0 net/core/skbuff.c:2071
 deliver_clone+0x3f/0xa0 net/bridge/br_forward.c:125
 maybe_deliver+0xa7/0x120 net/bridge/br_forward.c:190
 br_flood+0x17e/0x5c0 net/bridge/br_forward.c:236
 br_handle_frame_finish+0xda5/0x1c80 net/bridge/br_input.c:215
 br_nf_hook_thresh+0x303/0x410 net/bridge/br_netfilter_hooks.c:1189
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x3ce/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:531
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x9eb/0x1490 net/bridge/br_input.c:424
 __netif_receive_skb_core.constprop.0+0xa3d/0x4330 net/core/dev.c:5555
 __netif_receive_skb_one_core+0xb1/0x1e0 net/core/dev.c:5659
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:5775
 process_backlog+0x443/0x15f0 net/core/dev.c:6108
 __napi_poll.constprop.0+0xb7/0x550 net/core/dev.c:6772
 napi_poll net/core/dev.c:6841 [inline]
 net_rx_action+0xa92/0x1010 net/core/dev.c:6963
 handle_softirqs+0x216/0x8f0 kernel/softirq.c:554
 do_softirq kernel/softirq.c:455 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:442
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:382
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_tt_local_purge+0x21c/0x3c0 net/batman-adv/translation-table.c:1356
 batadv_tt_purge+0x86/0xb90 net/batman-adv/translation-table.c:3560
 process_one_work+0x9c5/0x1b40 kernel/workqueue.c:3231
 process_scheduled_works kernel/workqueue.c:3312 [inline]
 worker_thread+0x6c8/0xed0 kernel/workqueue.c:3389
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (1):
Time:       2024/09/01 23:26
Kernel:     upstream
Commit:     c9f016e72b5c
Syzkaller:  1eda0d14
Config:     .config
Log/Report: console log, report
VM info:    info
Assets:     disk image, vmlinux, kernel image
Manager:    ci-upstream-kasan-gce-selinux-root
Title:      INFO: task hung in tioclinux