syzbot

INFO: task hung in ip_tunnel_delete_nets

Status: auto-obsoleted due to no activity on 2023/08/23 09:02
Reported-by: syzbot+79bf35c3a2cc8a770410@syzkaller.appspotmail.com
First crash: 364d, last: 364d
Similar bugs (7)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in ip_tunnel_delete_nets (3) [net] | - | - | - | 1 | 1332d | 1332d | 0/26 | auto-closed as invalid on 2020/12/07 06:22
linux-4.19 | INFO: task hung in ip_tunnel_delete_nets | - | - | - | 1 | 884d | 884d | 0/1 | auto-closed as invalid on 2022/03/29 13:03
upstream | INFO: task hung in ip_tunnel_delete_nets (2) [net] | - | - | - | 2 | 1444d | 1456d | 0/26 | auto-closed as invalid on 2020/08/16 16:51
upstream | INFO: task hung in ip_tunnel_delete_nets [net] | - | - | - | 2 | 1660d | 1664d | 0/26 | closed as invalid on 2019/10/23 07:54
upstream | INFO: task hung in ip_tunnel_delete_nets (4) [net] | - | - | - | 2 | 673d | 673d | 0/26 | auto-closed as invalid on 2022/09/26 14:02
linux-4.19 | INFO: task hung in ip_tunnel_delete_nets (2) | - | - | - | 1 | 499d | 499d | 0/1 | auto-obsoleted due to no activity on 2023/04/18 11:45
upstream | INFO: task hung in ip_tunnel_delete_nets (5) [net] | - | - | - | 15 | 237d | 543d | 0/26 | auto-obsoleted due to no activity on 2023/12/06 23:00

Sample crash report:
INFO: task kworker/u4:4:102 blocked for more than 143 seconds.
      Not tainted 6.1.27-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:4    state:D stack:21368 pid:102   ppid:2      flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x132c/0x4330 kernel/sched/core.c:6554
 schedule+0xbf/0x180 kernel/sched/core.c:6630
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6689
 __mutex_lock_common+0xe2b/0x2520 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
 ip_tunnel_delete_nets+0xc9/0x330 net/ipv4/ip_tunnel.c:1121
 ops_exit_list net/core/net_namespace.c:174 [inline]
 cleanup_net+0x763/0xb60 net/core/net_namespace.c:601
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cf273f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cf27bf0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
 #0: ffffffff8cf27220 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
4 locks held by kworker/u4:4/102:
 #0: ffff888012606938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc900015d7d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e085510 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
 #3: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_delete_nets+0xc9/0x330 net/ipv4/ip_tunnel.c:1121
2 locks held by getty/3302:
 #0: ffff88814b253098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
3 locks held by kworker/0:17/5511:
 #0: ffff88814b451138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000b85fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
3 locks held by kworker/1:0/12070:
 #0: ffff88814b451138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc90003b9fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
3 locks held by kworker/1:5/12078:
 #0: ffff888012465d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc90003d3fd20 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x90/0xe40 net/wireless/reg.c:2493
3 locks held by syz-executor.1/15875:
1 lock held by syz-executor.1/15945:
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088
1 lock held by syz-executor.1/15954:
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf18/0xf60 kernel/hung_task.c:377
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 15875 Comm: syz-executor.1 Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
RIP: 0010:constant_test_bit arch/x86/include/asm/bitops.h:207 [inline]
RIP: 0010:arch_test_bit arch/x86/include/asm/bitops.h:239 [inline]
RIP: 0010:_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:142 [inline]
RIP: 0010:folio_test_dirty include/linux/page-flags.h:479 [inline]
RIP: 0010:shrink_folio_list+0x26e0/0x9290 mm/vmscan.c:1891
Code: 1e 68 00 00 0f 1f 44 00 00 e8 9c 46 cc ff 4c 89 e7 be 08 00 00 00 e8 2f 8e 22 00 48 b8 00 00 00 00 00 fc ff df 48 8b 4c 24 20 <80> 3c 01 00 74 08 4c 89 e7 e8 82 8c 22 00 49 8b 1c 24 48 89 de 48
RSP: 0018:ffffc9000544dd60 EFLAGS: 00000256
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 1ffffd40002c7a98
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffea000163d4c0
RBP: ffffc9000544e1d0 R08: dffffc0000000000 R09: fffff940002c7a99
R10: 0000000000000000 R11: dffffc0000000001 R12: ffffea000163d4c0
R13: 1ffffd40002c7a9b R14: 0000000000000001 R15: ffffea000163d4d8
FS:  00007f6e17d13700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c0006a0000 CR3: 0000000048e01000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 evict_folios+0xb42/0x2810 mm/vmscan.c:5017
 lru_gen_shrink_lruvec mm/vmscan.c:5201 [inline]
 shrink_lruvec+0xdbf/0x4650 mm/vmscan.c:5896
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/05/04 07:18 | linux-6.1.y | ca48fc16c493 | 5b7ff9dd | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in ip_tunnel_delete_nets