syzbot


INFO: task hung in tun_chr_close (5)

Status: upstream: reported syz repro on 2024/09/15 19:59
Subsystems: net
Reported-by: syzbot+b0ae8f1abf7d891e0426@syzkaller.appspotmail.com
First crash: 305d, last: 4d18h
Cause bisection: introduced by (bisect log):
commit 5a781ccbd19e4664babcbe4b4ead7aa2b9283d22
Author: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Date: Sat Sep 29 00:59:43 2018 +0000

  tc: Add support for configuring the taprio scheduler

Crash: BUG: unable to handle kernel NULL pointer dereference in taprio_dequeue (log)
Repro: syz .config
  
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [wireguard?] INFO: task hung in tun_chr_close (5) | 0 (3) | 2025/01/05 15:45
Similar bugs (13)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in tun_chr_close | | | | 40 | 177d | 419d | 0/3 | auto-obsoleted due to no activity on 2025/02/09 21:18
linux-4.19 | INFO: task hung in tun_chr_close | | | | 1 | 1734d | 1734d | 0/1 | auto-closed as invalid on 2020/12/09 18:55
upstream | INFO: task hung in tun_chr_close (4) net | syz | unreliable | error | 14 | 1275d | 1336d | 0/28 | auto-closed as invalid on 2022/09/18 21:51
upstream | INFO: task hung in tun_chr_close net | | | | 5 | 2060d | 2641d | 0/28 | closed as dup on 2018/02/16 08:24
linux-4.19 | INFO: task hung in tun_chr_close (3) | | | | 1 | 1011d | 1011d | 0/1 | auto-obsoleted due to no activity on 2022/12/03 04:48
linux-6.1 | INFO: task hung in tun_chr_close | | | | 31 | 289d | 356d | 0/3 | auto-obsoleted due to no activity on 2024/10/04 23:47
upstream | INFO: task hung in rtnetlink_rcv_msg net | C | inconclusive | inconclusive | 1970 | 307d | 2270d | 26/28 | fixed on 2024/07/09 19:14
android-49 | INFO: task hung in tun_chr_close | | | | 1 | 2556d | 2556d | 0/3 | auto-closed as invalid on 2019/02/22 14:33
linux-4.19 | INFO: task hung in tun_chr_close (4) | | | | 3 | 817d | 845d | 0/1 | upstream: reported on 2023/01/18 07:05
linux-4.19 | INFO: task hung in tun_chr_close (2) | | | | 6 | 1223d | 1317d | 0/1 | auto-closed as invalid on 2022/05/04 09:03
upstream | INFO: task hung in tun_chr_close (3) net | | | | 1 | 1447d | 1447d | 0/28 | auto-closed as invalid on 2021/08/23 13:06
android-44 | INFO: task hung in tun_chr_close | | | | 1 | 2565d | 2565d | 0/2 | auto-closed as invalid on 2019/02/22 15:23
upstream | INFO: task hung in tun_chr_close (2) net | | | | 7 | 1556d | 1777d | 0/28 | auto-closed as invalid on 2021/05/17 11:47
Last patch testing requests (4)
Created | Duration | User | Patch | Repo | Result
2025/04/18 23:22 | 20m | retest repro | | net | report log
2025/04/04 10:07 | 17m | retest repro | | upstream | report log
2025/04/04 10:07 | 20m | retest repro | | upstream | report log
2025/04/04 10:07 | 19m | retest repro | | upstream | report log

Sample crash report:
INFO: task syz-executor:5429 blocked for more than 145 seconds.
      Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0
      Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:21992 pid:5429  tgid:5429  ppid:1      task_flags:0x40054c flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x16e2/0x4cd0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x724/0xe80 kernel/locking/mutex.c:746
 tun_detach drivers/net/tun.c:633 [inline]
 tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3390
 __fput+0x449/0xa70 fs/file_table.c:465
 task_work_run+0x1d1/0x260 kernel/task_work.c:227
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x8d6/0x2550 kernel/exit.c:953
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1102
 get_signal+0x125e/0x1310 kernel/signal.c:3034
 arch_do_signal_or_restart+0x95/0x780 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x8b/0x120 kernel/entry/common.c:218
 do_syscall_64+0x103/0x210 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe65b3906aa
RSP: 002b:00007ffcb7888be8 EFLAGS: 00000202 ORIG_RAX: 0000000000000037
RAX: 0000000000000000 RBX: 00007fe65b57d960 RCX: 00007fe65b3906aa
RDX: 0000000000000081 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000003 R08: 00007ffcb7888c0c R09: 00007ffcb7889017
R10: 00007ffcb7888c10 R11: 0000000000000202 R12: 00007ffcb7888c10
R13: 00007ffcb7888c0c R14: 0000000000000001 R15: 0000000000000000
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/0:0/9:
2 locks held by kworker/0:1/10:
4 locks held by kworker/u4:0/12:
 #0: ffff888011e66948 ((wq_completion)wg-kex-wg2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff888011e66948 ((wq_completion)wg-kex-wg2){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc900001e7c60 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900001e7c60 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffff8880009b9308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x10a/0x7e0 drivers/net/wireguard/noise.c:529
 #3: ffff888043c82ad8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x11b/0x7e0 drivers/net/wireguard/noise.c:530
3 locks held by kworker/u4:1/13:
1 lock held by khungtaskd/26:
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6764
3 locks held by kworker/u4:2/31:
3 locks held by kworker/u4:3/42:
3 locks held by kworker/u5:0/49:
3 locks held by kworker/0:2/57:
 #0: ffff88801a075d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a075d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000104fc60 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000104fc60 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x95/0xf00 net/wireless/reg.c:2483
4 locks held by kworker/u4:4/74:
3 locks held by kworker/u4:5/181:
4 locks held by kworker/u4:6/1034:
3 locks held by kworker/u4:7/1037:
3 locks held by kworker/u4:8/1043:
2 locks held by kworker/u4:9/1086:
3 locks held by kworker/u4:10/1096:
2 locks held by kworker/0:3/1362:
3 locks held by kworker/u4:11/3068:
2 locks held by udevd/4723:
1 lock held by dhcpcd/5017:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_newaddr+0x5b7/0xd20 net/ipv6/addrconf.c:5028
2 locks held by getty/5101:
 #0: ffff88801d3d90a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000018e2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
2 locks held by syz-execprog/5314:
1 lock held by syz-execprog/5315:
3 locks held by kworker/0:4/5366:
4 locks held by kworker/0:5/5419:
 #0: ffff888011b08548 ((wq_completion)wg-kex-wg0#4){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff888011b08548 ((wq_completion)wg-kex-wg0#4){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90002317c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90002317c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffff888011985308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x150/0x900 drivers/net/wireguard/noise.c:598
 #3: ffff8880533d0338 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x4de/0x900 drivers/net/wireguard/noise.c:632
4 locks held by kworker/0:6/5424:
1 lock held by syz-executor/5429:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:633 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3390
1 lock held by syz-executor/5430:
1 lock held by syz-executor/5440:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:633 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3390
1 lock held by syz-executor/5441:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:633 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3390
1 lock held by syz-executor/5442:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:633 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3390
3 locks held by kworker/u5:4/5445:
1 lock held by syz-executor/5447:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
4 locks held by kworker/0:7/5589:
 #0: ffff88804d6cc148 ((wq_completion)wg-kex-wg2#6){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88804d6cc148 ((wq_completion)wg-kex-wg2#6){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90002d0fc60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90002d0fc60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffff888011fb5308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x150/0x900 drivers/net/wireguard/noise.c:598
 #3: ffff888043c86648 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x4de/0x900 drivers/net/wireguard/noise.c:632
3 locks held by kworker/0:8/5678:
3 locks held by kworker/u4:12/5679:
2 locks held by kworker/0:9/5680:
3 locks held by kworker/u4:13/5681:
4 locks held by kworker/u4:14/5682:
3 locks held by kworker/u4:15/5683:
 #0: ffff88801a079148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a079148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000ce37c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000ce37c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
4 locks held by kworker/u4:16/5685:
3 locks held by kworker/u4:17/5686:
 #0: ffff88803e544148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88803e544148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000cdc7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000cdc7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x112/0x14b0 net/ipv6/addrconf.c:4195
2 locks held by kworker/u4:19/5688:
 #0: ffff88801a079148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a079148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000ce67c60 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000ce67c60 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
4 locks held by kworker/u4:20/5689:
4 locks held by kworker/u4:21/5691:
3 locks held by kworker/u4:23/5695:
3 locks held by kworker/0:13/5698:
4 locks held by kworker/u4:24/5709:
4 locks held by kworker/u4:26/5712:
1 lock held by syz-executor/5717:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
3 locks held by kworker/0:19/5720:
 #0: ffff8880423c5948 ((wq_completion)wg-kex-wg2#4){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff8880423c5948 ((wq_completion)wg-kex-wg2#4){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000cea7c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000cea7c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker)))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffff8880533d2ad8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_begin_session+0x36/0xbd0 drivers/net/wireguard/noise.c:822
1 lock held by syz-executor/5727:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
3 locks held by kworker/u4:27/5736:
1 lock held by syz-executor/5740:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
2 locks held by kworker/0:22/5743:
4 locks held by kworker/0:24/5746:
1 lock held by syz-executor/5748:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/5751:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/5754:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
2 locks held by kworker/0:27/5758:
2 locks held by kworker/0:28/5759:
4 locks held by kworker/u4:28/5764:
1 lock held by syz-executor/5780:
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2f47c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
3 locks held by kworker/0:33/5788:
4 locks held by kworker/u4:30/5789:
4 locks held by kworker/u4:31/5791:
3 locks held by kworker/u4:32/5795:
4 locks held by kworker/u4:33/5798:
4 locks held by kworker/u4:34/5803:
3 locks held by kworker/u4:35/5804:
1 lock held by syz-executor/5809:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 26 Comm: khungtaskd Not tainted 6.15.0-rc5-syzkaller-00032-g0d8d44db295c #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:437
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
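The pattern in the report above is that tun_chr_close waits on rtnl_mutex while long-running holders (here, wireguard handshake workers and rtnetlink users) keep the mutex contended long enough to trip the hung-task watchdog. A minimal userspace sketch of that dynamic, with hypothetical task names and much shorter timings than the kernel's 120s default, might look like:

```python
import threading
import time

# Hypothetical model: one long-running holder of a shared mutex (standing
# in for rtnl_mutex) keeps several "closer" tasks (standing in for
# tun_chr_close) queued behind it, while a watchdog thread flags any task
# blocked longer than a threshold, mimicking the kernel's hung-task
# detector. Names and timings are illustrative, not kernel code.

HUNG_TIMEOUT = 0.5          # seconds; the kernel default is 120s

rtnl_mutex = threading.Lock()
blocked_since = {}          # task name -> time it started waiting
state_lock = threading.Lock()
hung_reports = []

def holder():
    # Holds the mutex for longer than the watchdog threshold.
    with rtnl_mutex:
        time.sleep(1.5)

def closer(name):
    with state_lock:
        blocked_since[name] = time.monotonic()
    with rtnl_mutex:        # blocks until holder() releases
        with state_lock:
            del blocked_since[name]

def watchdog():
    # Scan every 100 ms for tasks blocked longer than HUNG_TIMEOUT.
    for _ in range(12):
        time.sleep(0.1)
        now = time.monotonic()
        with state_lock:
            for name, since in blocked_since.items():
                if now - since > HUNG_TIMEOUT and name not in hung_reports:
                    hung_reports.append(name)
                    print(f"INFO: task {name} blocked for more than "
                          f"{HUNG_TIMEOUT}s")

h = threading.Thread(target=holder)
h.start()
time.sleep(0.05)            # ensure the holder wins the mutex first
closers = [threading.Thread(target=closer, args=(f"closer-{i}",))
           for i in range(3)]
for t in closers:
    t.start()
w = threading.Thread(target=watchdog)
w.start()
for t in [h, *closers, w]:
    t.join()
```

The kernel's detector works analogously: khungtaskd periodically scans for tasks in uninterruptible sleep whose wait exceeds hung_task_timeout_secs, then prints the blocked task's stack and the held-locks dump seen above.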

Crashes (583):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/05/07 12:22 upstream 0d8d44db295c 350f4ffc .config console log report syz / log [disk image (non-bootable)] [vmlinux] [kernel image] ci-snapshot-upstream-root INFO: task hung in tun_chr_close
2025/03/08 21:35 upstream 2a520073e74f 7e3bd60d .config console log report syz / log [disk image (non-bootable)] [vmlinux] [kernel image] ci-snapshot-upstream-root INFO: task hung in tun_chr_close
2025/03/08 19:03 upstream 2a520073e74f 7e3bd60d .config console log report syz / log [disk image (non-bootable)] [vmlinux] [kernel image] ci-snapshot-upstream-root INFO: task hung in tun_chr_close
2024/11/19 01:31 upstream 9fb2cfa4635a 571351cb .config console log report syz / log [disk image (non-bootable)] [vmlinux] [kernel image] ci-snapshot-upstream-root INFO: task hung in tun_chr_close
2024/12/17 17:58 net 7ed2d9158877 c8c15bb2 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/04/27 10:16 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in tun_chr_close
2025/04/24 11:24 upstream a79be02bba5c 9c80ffa0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in tun_chr_close
2025/04/21 21:00 upstream 9d7a0577c9db 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in tun_chr_close
2025/04/04 19:40 upstream e48e99b6edf4 1c4febdb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in tun_chr_close
2025/03/10 06:00 upstream 80e54e84911a 163f510d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in tun_chr_close
2025/03/09 06:08 upstream 2a520073e74f 163f510d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in tun_chr_close
2025/02/25 22:52 upstream 2a1944bff549 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in tun_chr_close
2025/02/06 08:16 upstream 92514ef226f5 577d049b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in tun_chr_close
2025/01/19 16:50 upstream fda5e3f28400 f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in tun_chr_close
2025/01/19 10:09 upstream fda5e3f28400 f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in tun_chr_close
2025/01/13 12:54 upstream 5bc55a333a2f 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in tun_chr_close
2025/01/11 02:10 upstream e0daef7de1ac 67d7ec0a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in tun_chr_close
2025/01/06 08:12 upstream ab75170520d4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in tun_chr_close
2025/01/04 01:11 upstream 0bc21e701a6f f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in tun_chr_close
2025/01/02 03:50 upstream 56e6a3499e14 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in tun_chr_close
2025/01/01 14:18 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in tun_chr_close
2024/12/31 07:40 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in tun_chr_close
2024/12/16 23:47 upstream 78d4f34e2115 f93b2b55 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in tun_chr_close
2024/12/13 22:10 upstream f932fb9b4074 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in tun_chr_close
2024/09/11 19:42 upstream 7c6a3a65ace7 9326a104 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in tun_chr_close
2024/09/01 23:27 upstream c9f016e72b5c 1eda0d14 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in tun_chr_close
2024/10/17 00:04 upstream c964ced77262 666f77ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in tun_chr_close
2025/03/17 01:47 net 4003c9e78778 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/03/14 06:33 net 2409fa66e29a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/03/12 15:14 net d2b9d97e89c7 ee70e6db .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/03/11 21:53 net 77b2ab31fc65 f2eee6b3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/03/05 01:39 net 3c6a041b317a c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/25 18:12 net bc50682128bd d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/24 08:44 net 28b04731a38c d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/16 19:49 net 071ed42cff4f 40a34ec9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/16 09:18 net 071ed42cff4f 40a34ec9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/07 13:18 net 1438f5d07b9a a4f327c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/06 01:48 net a1300691aed9 577d049b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/02/03 12:51 net 235174b2bed8 a21a8419 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/01/29 23:44 net 9e6c4e6b605c afe4eff5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/01/28 05:39 net 05d91cdb1f91 18070896 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/01/28 00:13 net 05d91cdb1f91 18070896 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/01/25 14:29 net 15a901361ec3 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2024/12/12 03:59 net 3dd002f20098 ff949d25 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in tun_chr_close
2025/03/21 07:35 net-next 6f13bec53a48 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/03/21 05:32 net-next 6f13bec53a48 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/03/15 09:40 net-next bfc6c67ec2d6 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/03/10 10:12 net-next 8ef890df4031 163f510d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/03/03 11:00 net-next f77f12010f67 c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/02/27 14:00 net-next 0493f7a54e5b 6a8fcbc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/02/21 10:58 net-next 5d6ba5ab8582 0808a665 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/02/11 10:18 net-next 907dd32b4a8a 43f51a00 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/02/09 13:50 net-next acdefab0dcbc ef44b750 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2025/01/27 11:08 net-next 0ad9617c78ac 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2024/12/21 23:14 net-next ae418e95dd93 d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2024/12/12 23:48 net-next 96b6fcc0ee41 941924eb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in tun_chr_close
2024/10/14 19:39 linux-next 7f773fd61baa 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in tun_chr_close
* Struck through repros no longer work on HEAD.