syzbot


INFO: task hung in addrconf_verify_work

Status: auto-obsoleted due to no activity on 2023/08/23 09:09
Reported-by: syzbot+8e57f6d97c8b7cc83ec9@syzkaller.appspotmail.com
First crash: 409d, last: 358d
Similar bugs (20)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
android-49 | INFO: task hung in addrconf_verify_work | | | | 18 | 1667d | 1839d | 0/3 | auto-closed as invalid on 2020/01/31 12:44
linux-6.1 | INFO: task hung in addrconf_verify_work (2) | | | | 3 | 42d | 88d | 0/3 | upstream: reported on 2024/01/29 22:05
linux-4.19 | INFO: task hung in addrconf_verify_work (5) | | | | 3 | 653d | 693d | 0/1 | auto-obsoleted due to no activity on 2022/11/10 09:18
linux-4.19 | INFO: task hung in addrconf_verify_work (6) | C | error | | 4 | 435d | 494d | 0/1 | upstream: reported C repro on 2022/12/19 15:22
linux-4.19 | INFO: task hung in addrconf_verify_work (3) | | | | 1 | 1140d | 1140d | 0/1 | auto-closed as invalid on 2021/07/11 07:19
linux-4.19 | INFO: task hung in addrconf_verify_work (4) | | | | 6 | 835d | 921d | 0/1 | auto-closed as invalid on 2022/05/13 00:19
linux-4.14 | INFO: task hung in addrconf_verify_work (2) | C | error | | 7 | 433d | 1276d | 0/1 | upstream: reported C repro on 2020/10/28 05:47
upstream | INFO: task hung in addrconf_verify_work (2) net | C | | | 22 | 1657d | 1657d | 13/26 | fixed on 2019/11/04 14:50
linux-4.19 | INFO: task hung in addrconf_verify_work (2) | | | | 2 | 1280d | 1370d | 0/1 | auto-closed as invalid on 2021/02/21 08:05
android-414 | INFO: task hung in addrconf_verify_work | C | | | 6 | 1657d | 1842d | 0/1 | public: reported C repro on 2019/04/12 00:01
upstream | INFO: task hung in addrconf_verify_work (8) net | C | error | | 64 | 19d | 130d | 0/26 | upstream: reported C repro on 2023/12/18 14:44
android-44 | INFO: task hung in addrconf_verify_work | | | | 3 | 2173d | 2202d | 0/2 | auto-closed as invalid on 2019/02/22 14:29
linux-4.19 | INFO: task hung in addrconf_verify_work | | | | 1 | 1500d | 1500d | 0/1 | auto-closed as invalid on 2020/07/16 23:17
upstream | INFO: task hung in addrconf_verify_work (3) | C | done | | 75 | 1270d | 1305d | 15/26 | fixed on 2020/11/16 12:12
upstream | INFO: task hung in addrconf_verify_work (5) net | C | done | done | 68 | 848d | 940d | 0/26 | closed as invalid on 2022/02/01 17:39
upstream | INFO: task hung in addrconf_verify_work (7) netfilter | C | error | | 64 | 149d | 296d | 0/26 | closed as invalid on 2023/12/01 14:19
linux-4.14 | INFO: task hung in addrconf_verify_work | | | | 4 | 1426d | 1497d | 0/1 | auto-closed as invalid on 2020/09/29 04:19
upstream | INFO: task hung in addrconf_verify_work net | C | | | 2 | 2223d | 2223d | 0/26 | closed as invalid on 2018/03/27 11:14
upstream | INFO: task hung in addrconf_verify_work (4) | C | done | | 132 | 1161d | 1249d | 20/26 | fixed on 2021/04/09 19:46
upstream | INFO: task hung in addrconf_verify_work (6) | C | done | | 86 | 431d | 668d | 22/26 | fixed on 2023/02/24 13:51

Sample crash report:
INFO: task kworker/0:11:21868 blocked for more than 143 seconds.
      Not tainted 6.1.27-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:11    state:D stack:19768 pid:21868 ppid:2      flags:0x00004000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x132c/0x4330 kernel/sched/core.c:6554
 schedule+0xbf/0x180 kernel/sched/core.c:6630
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6689
 __mutex_lock_common+0xe2b/0x2520 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
 addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
INFO: task kworker/u4:7:23832 blocked for more than 143 seconds.
      Not tainted 6.1.27-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:7    state:D stack:21880 pid:23832 ppid:2      flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x132c/0x4330 kernel/sched/core.c:6554
 schedule+0xbf/0x180 kernel/sched/core.c:6630
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6689
 __mutex_lock_common+0xe2b/0x2520 kernel/locking/mutex.c:679
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
 cangw_pernet_exit_batch+0x1c/0x90 net/can/gw.c:1250
 ops_exit_list net/core/net_namespace.c:174 [inline]
 cleanup_net+0x763/0xb60 net/core/net_namespace.c:601
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cf273f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cf27bf0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
 #0: ffffffff8cf27220 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/3305:
 #0: ffff8880293c4098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
3 locks held by kworker/u4:10/3948:
3 locks held by kworker/0:11/21868:
 #0: ffff88814ada8138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc90004de7d20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
4 locks held by kworker/u4:7/23832:
 #0: ffff888012606938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000340fd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e085510 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
 #3: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: cangw_pernet_exit_batch+0x1c/0x90 net/can/gw.c:1250
3 locks held by kworker/1:15/26621:
 #0: ffff88814ada8138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000634fd20 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x15/0x30 net/ipv6/addrconf.c:4629
4 locks held by syz-executor.1/31682:
1 lock held by syz-executor.1/31757:
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088
1 lock held by syz-executor.1/31767:
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8e0918c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x720/0xf00 net/core/rtnetlink.c:6088

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf18/0xf60 kernel/hung_task.c:377
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 33 Comm: kworker/u4:2 Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Workqueue: bat_events batadv_nc_worker
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:35 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:106 [inline]
RIP: 0010:lock_release+0x17f/0xa20 kernel/locking/lockdep.c:5685
Code: 3b 00 74 08 4c 89 f7 e8 1f 1b 76 00 4c 89 6c 24 48 48 c7 84 24 b0 00 00 00 00 00 00 00 9c 8f 84 24 b0 00 00 00 42 80 3c 3b 00 <74> 08 4c 89 f7 e8 77 1a 76 00 4c 8b ac 24 b0 00 00 00 fa 48 c7 c7
RSP: 0018:ffffc90000a9fac0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 1ffff92000153f6e RCX: ffffffff8169f397
RDX: 0000000000000000 RSI: ffffffff8b3cbf80 RDI: ffffffff8b3cbf40
RBP: ffffc90000a9fbe8 R08: dffffc0000000000 R09: fffffbfff1ca5da6
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff92000153f64
R13: ffffffff8a5372b5 R14: ffffc90000a9fb70 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c001ca12c8 CR3: 00000000332c6000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 rcu_read_unlock include/linux/rcupdate.h:780 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:412 [inline]
 batadv_nc_worker+0x239/0x5b0 net/batman-adv/network-coding.c:719
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
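
Note on the traces above: both blocked workers are waiting for the same lock. The ipv6_addrconf worker parks in mutex_lock_nested() at net/ipv6/addrconf.c:4629, and the netns cleanup worker parks on the same rtnl_mutex in cangw_pernet_exit_batch() (net/can/gw.c:1250) while already holding pernet_ops_rwsem. The sketch below is a simplified paraphrase of the two call sites, not the verbatim linux-6.1.y source; it only shows that each work item's first step is an unbounded wait for RTNL, so any task that keeps rtnl_mutex held past the hung-task timeout (143 seconds here) produces exactly this report.

/* Simplified paraphrase of the two blocked call sites in linux-6.1.y;
 * the line numbers refer to the traces above, the bodies are condensed. */

/* ipv6_addrconf workqueue item: periodic IPv6 address lifetime check. */
static void addrconf_verify_work(struct work_struct *w)
{
	struct net *net = container_of(to_delayed_work(w), struct net,
				       ipv6.addr_chk_work);

	rtnl_lock();			/* net/ipv6/addrconf.c:4629 - blocks here */
	addrconf_verify_rtnl(net);	/* walk and expire addresses under RTNL */
	rtnl_unlock();
}

/* netns cleanup path: CAN gateway pernet exit, called from cleanup_net()
 * with pernet_ops_rwsem already held (lock #2 of kworker/u4:7 above). */
static void __net_exit cangw_pernet_exit_batch(struct list_head *net_list)
{
	struct net *net;

	rtnl_lock();			/* net/can/gw.c:1250 - second waiter on rtnl_mutex */
	list_for_each_entry(net, net_list, exit_list)
		cgw_remove_all_jobs(net);
	rtnl_unlock();
}

The lock dump does not show which task owns rtnl_mutex at this point (syz-executor.1/31682 holds four locks that are not listed), so the report only establishes that RTNL is held long enough to stall both workqueues, not which task is responsible.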

Crashes (2):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/05/05 01:32 | linux-6.1.y | ca48fc16c493 | 518a39a6 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in addrconf_verify_work
2023/03/14 22:36 | linux-6.1.y | 6449a0ba6843 | 0d5c4377 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in addrconf_verify_work
* Struck through repros no longer work on HEAD.