syzbot


INFO: task hung in addrconf_verify_work (2)

Status: upstream: reported on 2026/02/07 23:02
Reported-by: syzbot+519aa486c1e2b442d20d@syzkaller.appspotmail.com
First crash: 2d12h, last: 2d12h
Similar bugs (24)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
android-49 | INFO: task hung in addrconf_verify_work | 1 | | | | 18 | 2321d | 2494d | 0/3 | auto-closed as invalid on 2020/01/31 12:44
linux-6.1 | INFO: task hung in addrconf_verify_work (2) | 1 | | | | 46 | 405d | 742d | 0/3 | auto-obsoleted due to no activity on 2025/04/10 16:42
linux-4.19 | INFO: task hung in addrconf_verify_work (5) | 1 | | | | 3 | 1308d | 1347d | 0/1 | auto-obsoleted due to no activity on 2022/11/10 09:18
linux-4.19 | INFO: task hung in addrconf_verify_work (6) | 1 | C | error | | 4 | 1089d | 1148d | 0/1 | upstream: reported C repro on 2022/12/19 15:22
linux-4.19 | INFO: task hung in addrconf_verify_work (3) | 1 | | | | 1 | 1795d | 1795d | 0/1 | auto-closed as invalid on 2021/07/11 07:19
linux-4.19 | INFO: task hung in addrconf_verify_work (4) | 1 | | | | 6 | 1489d | 1575d | 0/1 | auto-closed as invalid on 2022/05/13 00:19
linux-4.14 | INFO: task hung in addrconf_verify_work (2) | 1 | C | error | | 7 | 1087d | 1931d | 0/1 | upstream: reported C repro on 2020/10/28 05:47
upstream | INFO: task hung in addrconf_verify_work (2) net | 1 | C | | | 22 | 2311d | 2311d | 13/29 | fixed on 2019/11/04 14:50
linux-4.19 | INFO: task hung in addrconf_verify_work (2) | 1 | | | | 2 | 1935d | 2024d | 0/1 | auto-closed as invalid on 2021/02/21 08:05
linux-5.15 | INFO: task hung in addrconf_verify_work missing-backport | 1 | C | done | | 53 | 588d | 630d | 0/3 | auto-obsoleted due to no activity on 2024/10/23 03:15
android-414 | INFO: task hung in addrconf_verify_work | 1 | C | | | 6 | 2311d | 2496d | 0/1 | public: reported C repro on 2019/04/12 00:01
upstream | INFO: task hung in addrconf_verify_work (8) net | 1 | C | error | | 1294 | 580d | 784d | 26/29 | fixed on 2024/07/09 19:14
android-44 | INFO: task hung in addrconf_verify_work | 1 | | | | 3 | 2828d | 2857d | 0/2 | auto-closed as invalid on 2019/02/22 14:29
linux-4.19 | INFO: task hung in addrconf_verify_work | 1 | | | | 1 | 2154d | 2154d | 0/1 | auto-closed as invalid on 2020/07/16 23:17
linux-6.1 | INFO: task hung in addrconf_verify_work (3) | 1 | | | | 3 | 35d | 94d | 0/3 | upstream: reported on 2025/11/08 00:01
upstream | INFO: task hung in addrconf_verify_work (3) | 1 | C | done | | 75 | 1925d | 1959d | 15/29 | fixed on 2020/11/16 12:12
upstream | INFO: task hung in addrconf_verify_work (5) net | 1 | C | done | done | 68 | 1503d | 1595d | 0/29 | closed as invalid on 2022/02/01 17:39
upstream | INFO: task hung in addrconf_verify_work (7) netfilter | 1 | C | error | | 64 | 803d | 950d | 0/29 | closed as invalid on 2023/12/01 14:19
linux-6.1 | INFO: task hung in addrconf_verify_work | 1 | | | | 2 | 1012d | 1063d | 0/3 | auto-obsoleted due to no activity on 2023/08/23 09:09
linux-4.14 | INFO: task hung in addrconf_verify_work | 1 | | | | 4 | 2080d | 2152d | 0/1 | auto-closed as invalid on 2020/09/29 04:19
upstream | INFO: task hung in addrconf_verify_work net | 1 | C | | | 2 | 2877d | 2877d | 0/29 | closed as invalid on 2018/03/27 11:14
upstream | INFO: task hung in addrconf_verify_work (4) | 1 | C | done | | 132 | 1816d | 1903d | 20/29 | fixed on 2021/04/09 19:46
upstream | INFO: task hung in addrconf_verify_work (6) | 1 | C | done | | 86 | 1085d | 1322d | 22/29 | fixed on 2023/02/24 13:51
linux-6.6 | INFO: task hung in addrconf_verify_work | 1 | | | | 2 | 5d19h | 52d | 0/2 | upstream: reported on 2025/12/19 17:57

Sample crash report:
INFO: task kworker/1:11:8436 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:11    state:D stack:25272 pid: 8436 ppid:     2 flags:0x00004000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5049 [inline]
 __schedule+0x11ef/0x43c0 kernel/sched/core.c:6395
 schedule+0x11b/0x1e0 kernel/sched/core.c:6478
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6537
 __mutex_lock_common+0xcfc/0x2400 kernel/locking/mutex.c:669
 __mutex_lock kernel/locking/mutex.c:729 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
 addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4654
 process_one_work+0x85f/0x1010 kernel/workqueue.c:2310
 worker_thread+0xaa6/0x1290 kernel/workqueue.c:2457
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/0:0/7:
 #0: ffff888016c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc90000cc7d00 ((work_completion)(&(&vi->refill)->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
 #2: ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by kworker/u4:0/9:
 #0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc90000ce7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
1 lock held by khungtaskd/27:
 #0: ffffffff8c31eaa0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
8 locks held by kworker/u4:3/155:
 #0: ffff888016dcd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc90001f27d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
 #2: ffffffff8d430850 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x148/0xba0 net/core/net_namespace.c:589
 #3: ffffffff8d461da8 (devlink_mutex){+.+.}-{3:3}, at: devlink_pernet_pre_exit+0xa4/0x310 net/core/devlink.c:11534
 #4: ffff8880613e3658 (&nsim_bus_dev->nsim_bus_reload_lock){+.+.}-{3:3}, at: nsim_dev_reload_up+0xc5/0x820 drivers/net/netdevsim/dev.c:897
 #5: ffffffff8d43c748 (rtnl_mutex){+.+.}-{3:3}, at: nsim_init_netdevsim drivers/net/netdevsim/netdev.c:310 [inline]
 #5: ffffffff8d43c748 (rtnl_mutex){+.+.}-{3:3}, at: nsim_create+0x2ef/0x3e0 drivers/net/netdevsim/netdev.c:365
 #6: ffff8880760d1080 (&sb->s_type->i_mutex_key#3){++++}-{3:3}, at: inode_lock include/linux/fs.h:787 [inline]
 #6: ffff8880760d1080 (&sb->s_type->i_mutex_key#3){++++}-{3:3}, at: start_creating+0x129/0x310 fs/debugfs/inode.c:350
 #7: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #7: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #7: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by kswapd0/255:
1 lock held by jbd2/sda1-8/3522:
 #0: ffffffff8c3afc48 (oom_lock){+.+.}-{3:3}, at: __alloc_pages_may_oom mm/page_alloc.c:4308 [inline]
 #0: ffffffff8c3afc48 (oom_lock){+.+.}-{3:3}, at: __alloc_pages_slowpath+0x1cf2/0x2890 mm/page_alloc.c:5163
2 locks held by klogd/3549:
 #0: ffff88802cabdc28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88802cabdc28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by udevd/3560:
 #0: ffff88807df07828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88807df07828 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by crond/3927:
 #0: ffff88807c516328 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88807c516328 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by getty/3948:
 #0: ffff88814cbce098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
 #1: ffffc90002cf62e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5df/0x1a70 drivers/tty/n_tty.c:2158
2 locks held by sshd-session/4170:
 #0: ffff88807af5dc28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88807af5dc28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/4171:
 #0: ffff88807af5ea28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88807af5ea28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/4186:
 #0: ffff888079452b28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff888079452b28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/4195:
 #0: ffff888079453928 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff888079453928 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by kworker/u4:7/4307:
 #0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc9000341fd00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
2 locks held by kworker/u4:8/4308:
 #0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc9000343fd00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
2 locks held by kworker/u4:14/8269:
 #0: ffff888016c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc900045afd00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
3 locks held by kworker/1:11/8436:
 #0: ffff88802b5dd138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x761/0x1010 kernel/workqueue.c:-1
 #1: ffffc900034ffd00 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x79f/0x1010 kernel/workqueue.c:2285
 #2: ffffffff8d43c748 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4654
1 lock held by syz.0.1765/8910:
 #0: ffff888024d26120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1744 [inline]
 #0: ffff888024d26120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_setsockopt+0x7f3/0x1af0 net/packet/af_packet.c:3829
2 locks held by syz-executor/9300:
 #0: ffff88807f192b28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88807f192b28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz-executor/10539:
 #0: ffff888025f7a698 (nlk_cb_mutex-GENERIC){+.+.}-{3:3}, at: netlink_dump+0xec/0xcf0 net/netlink/af_netlink.c:2226
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by udevd/10544:
 #0: ffff888077541628 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff888077541628 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz.6.2253/10643:
 #0: ffff888074457338 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:842 [inline]
 #0: ffff888074457338 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x83b/0x1370 mm/filemap.c:3096
 #1: ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
2 locks held by syz.6.2253/10650:
 #0: ffff88801ef7f828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88801ef7f828 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by syz.4.2255/10647:
 #0: ffff88802cabf828 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
 #0: ffff88802cabf828 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x16c/0x2d0 mm/util.c:549
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10656:
 #0: ffff88802cb34028 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88802cb34028 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10657:
 #0: ffff88802cab9d28 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88802cab9d28 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10658:
 #0: ffff88801d64df48 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:842 [inline]
 #0: ffff88801d64df48 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x83b/0x1370 mm/filemap.c:3096
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by dhcpcd-run-hook/10659:
 #0: ffff88802cab9628 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88802cab9628 (&mm->mmap_lock){++++}-{3:3}, at: do_user_addr_fault+0x2b9/0xc80 arch/x86/mm/fault.c:1296
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4654 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 #1: ffffffff8c3de9c0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath+0x10ce/0x2890 mm/page_alloc.c:5114
2 locks held by modprobe/10663:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0x188/0x250 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3a2/0x3d0 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xe0f/0xe50 kernel/hung_task.c:369
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 10544 Comm: udevd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
RIP: 0010:__lock_acquire+0x5a8d/0x7d10 kernel/locking/lockdep.c:5039
Code: 20 01 00 00 0e 36 e0 45 48 8b 84 24 d0 00 00 00 4a c7 04 00 00 00 00 00 4a c7 44 00 08 00 00 00 00 4a c7 44 00 10 00 00 00 00 <42> c7 44 00 18 00 00 00 00 65 48 8b 04 25 28 00 00 00 48 3b 84 24
RSP: 0000:ffffc90003be6640 EFLAGS: 00000087
RAX: 1ffff9200077ccec RBX: ffff888026d44668 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff901d20c0
RBP: ffffc90003be6890 R08: dffffc0000000000 R09: 1ffffffff203a418
R10: dffffc0000000000 R11: fffffbfff203a419 R12: 56fb501eaf5cf180
R13: ffff888026d43b80 R14: ffff888026d44660 R15: ffff888026d44708
FS:  00007f0f1c512880(0000) GS:ffff8880b9000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055fa30a9c186 CR3: 000000005c8c8000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 lock_acquire+0x19e/0x400 kernel/locking/lockdep.c:5623
 rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:313
 rcu_read_lock include/linux/rcupdate.h:740 [inline]
 list_lru_count_one+0x49/0x310 mm/list_lru.c:181
 list_lru_shrink_count include/linux/list_lru.h:123 [inline]
 super_cache_count+0x187/0x290 fs/super.c:148
 do_shrink_slab+0x8d/0xd00 mm/vmscan.c:712
 shrink_slab_memcg mm/vmscan.c:834 [inline]
 shrink_slab+0x450/0x7a0 mm/vmscan.c:913
 shrink_node_memcgs mm/vmscan.c:2958 [inline]
 shrink_node+0x110c/0x2610 mm/vmscan.c:3079
 shrink_zones mm/vmscan.c:3285 [inline]
 do_try_to_free_pages+0x606/0x1600 mm/vmscan.c:3340
 try_to_free_pages+0x9a1/0xea0 mm/vmscan.c:3575
 __perform_reclaim mm/page_alloc.c:4657 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:4678 [inline]
 __alloc_pages_slowpath+0x1150/0x2890 mm/page_alloc.c:5114
 __alloc_pages+0x340/0x480 mm/page_alloc.c:5500
 alloc_pages_vma+0x393/0x7c0 mm/mempolicy.c:2146
 __read_swap_cache_async+0x1b5/0xa70 mm/swap_state.c:459
 read_swap_cache_async mm/swap_state.c:525 [inline]
 swap_cluster_readahead+0x6a3/0x7c0 mm/swap_state.c:661
 swapin_readahead+0xf1/0xac0 mm/swap_state.c:854
 do_swap_page+0x4b6/0x1f40 mm/memory.c:3622
 handle_pte_fault mm/memory.c:4654 [inline]
 __handle_mm_fault mm/memory.c:4785 [inline]
 handle_mm_fault+0x1b16/0x4410 mm/memory.c:4883
 do_user_addr_fault+0x489/0xc80 arch/x86/mm/fault.c:1355
 handle_page_fault arch/x86/mm/fault.c:1443 [inline]
 exc_page_fault+0x60/0x100 arch/x86/mm/fault.c:1496
 asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0010:__get_user_8+0x18/0x30 arch/x86/lib/getuser.S:100
Code: 31 c0 0f 01 ca c3 90 90 90 90 90 90 90 90 90 90 90 90 48 ba f9 ef ff ff ff 7f 00 00 48 39 d0 73 64 48 19 d2 48 21 d0 0f 01 cb <48> 8b 10 31 c0 0f 01 ca c3 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0000:ffffc90003be7db8 EFLAGS: 00050202
RAX: 00007f0f1c5126a8 RBX: 00007f0f1c5126a8 RCX: 7d0785bb894bc000
RDX: ffffffffffffffff RSI: ffffffff8a2b3a20 RDI: ffffffff8a79f740
RBP: ffffc90003be7ec8 R08: ffffffff8d89db2f R09: 1ffffffff1b13b65
R10: dffffc0000000000 R11: fffffbfff1b13b66 R12: ffffc90003be7fd8
R13: 1ffff9200077cfc4 R14: ffff888026d450f8 R15: dffffc0000000000
 rseq_get_rseq_cs_ptr_val kernel/rseq.c:131 [inline]
 rseq_get_rseq_cs kernel/rseq.c:153 [inline]
 rseq_ip_fixup kernel/rseq.c:266 [inline]
 __rseq_handle_notify_resume+0x150/0xf80 kernel/rseq.c:314
 rseq_handle_notify_resume include/linux/sched.h:2203 [inline]
 tracehook_notify_resume include/linux/tracehook.h:201 [inline]
 exit_to_user_mode_loop+0xe5/0x130 kernel/entry/common.c:181
 exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:214
 irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:320
 exc_page_fault+0x88/0x100 arch/x86/mm/fault.c:1499
 asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:606
RIP: 0033:0x562998af0890
Code: Unable to access opcode bytes at RIP 0x562998af0866.
RSP: 002b:00007ffe6982b958 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 00005629b8052f10 RCX: 0000562cda9ea7ca
RDX: 0000000000000000 RSI: 000000000000002f RDI: 00005629b80529c4
RBP: 0000000000000000 R08: 00000000000001e0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000297 R12: 000000000aba9500
R13: 0000000003938700 R14: 0000562998b38100 R15: 0000562998b38140
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2026/02/07 23:01 | linux-5.15.y | 7b232985052f | 4c131dc4 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-5-15-kasan | INFO: task hung in addrconf_verify_work
* Struck through repros no longer work on HEAD.