syzbot


INFO: task hung in devinet_ioctl (7)

Status: closed as invalid on 2025/04/18 16:40
Subsystems: net
First crash: 224d, last: 92d
Similar bugs (14)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.1 INFO: task hung in devinet_ioctl 1 808d 808d 0/3 auto-obsoleted due to no activity on 2023/08/07 17:54
linux-6.1 INFO: task hung in devinet_ioctl (2) 1 638d 638d 0/3 auto-obsoleted due to no activity on 2024/01/05 10:55
linux-6.1 INFO: task hung in devinet_ioctl (3) 3 327d 397d 0/3 auto-obsoleted due to no activity on 2024/11/11 06:30
upstream INFO: task hung in devinet_ioctl (2) net 27 1236d 1366d 0/29 closed as invalid on 2022/02/07 19:19
linux-6.1 INFO: task hung in devinet_ioctl (5) 1 31d 31d 0/3 upstream: reported on 2025/05/26 12:59
linux-6.1 INFO: task hung in devinet_ioctl (4) 2 168d 171d 0/3 auto-obsoleted due to no activity on 2025/04/18 15:16
upstream INFO: task hung in devinet_ioctl net 1 2440d 2440d 0/29 auto-closed as invalid on 2019/04/18 15:55
linux-5.15 INFO: task hung in devinet_ioctl (2) 3 493d 506d 0/3 auto-obsoleted due to no activity on 2024/05/28 21:09
upstream INFO: task hung in devinet_ioctl (3) net 825 471d 1121d 0/29 closed as invalid on 2024/03/11 20:24
linux-5.15 INFO: task hung in devinet_ioctl (3) 3 380d 393d 0/3 auto-obsoleted due to no activity on 2024/09/19 07:29
upstream INFO: task hung in devinet_ioctl (4) net 5 470d 471d 25/29 fixed on 2024/04/12 18:02
linux-5.15 INFO: task hung in devinet_ioctl (4) 1 193d 193d 0/3 auto-obsoleted due to no activity on 2025/03/24 18:20
upstream INFO: task hung in devinet_ioctl (5) net 59 351d 377d 26/29 fixed on 2024/07/09 19:14
linux-5.15 INFO: task hung in devinet_ioctl 1 718d 718d 0/3 auto-obsoleted due to no activity on 2023/10/17 09:29

Sample crash report:
INFO: task dhcpcd:5502 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:dhcpcd          state:D stack:20176 pid:5502  tgid:5502  ppid:5501   task_flags:0x400140 flags:0x00000002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1b18/0x50e0 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 __mutex_lock_common kernel/locking/mutex.c:664 [inline]
 __mutex_lock+0x7fa/0x1000 kernel/locking/mutex.c:732
 rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 devinet_ioctl+0x34e/0x1d80 net/ipv4/devinet.c:1129
 inet_ioctl+0x3d9/0x4f0 net/ipv4/af_inet.c:1001
 sock_do_ioctl+0x15a/0x490 net/socket.c:1199
 sock_ioctl+0x64a/0x900 net/socket.c:1318
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f93fee5bd49
RSP: 002b:00007ffedb381ac8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f93fed8d6c0 RCX: 00007f93fee5bd49
RDX: 00007ffedb391cb8 RSI: 0000000000008914 RDI: 0000000000000010
RBP: 00007ffedb3a1e78 R08: 00007ffedb391c78 R09: 00007ffedb391c28
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffedb391cb8 R14: 0000000000000028 R15: 0000000000008914
 </TASK>
INFO: task kworker/u8:4:9721 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:4    state:D stack:21384 pid:9721  tgid:9721  ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: events_unbound linkwatch_event
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1b18/0x50e0 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 __mutex_lock_common kernel/locking/mutex.c:664 [inline]
 __mutex_lock+0x7fa/0x1000 kernel/locking/mutex.c:732
 linkwatch_event+0xe/0x60 net/core/link_watch.c:285
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac3/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:16670 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:22032 pid:16670 tgid:16670 ppid:1      task_flags:0x400140 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1b18/0x50e0 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 __mutex_lock_common kernel/locking/mutex.c:664 [inline]
 __mutex_lock+0x7fa/0x1000 kernel/locking/mutex.c:732
 rtnl_lock net/core/rtnetlink.c:79 [inline]
 rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 rtnl_newlink+0xd6a/0x1f60 net/core/rtnetlink.c:4021
 rtnetlink_rcv_msg+0x80f/0xd70 net/core/rtnetlink.c:6912
 netlink_rcv_skb+0x208/0x480 net/netlink/af_netlink.c:2533
 netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline]
 netlink_unicast+0x7f8/0x9a0 net/netlink/af_netlink.c:1338
 netlink_sendmsg+0x8e8/0xce0 net/netlink/af_netlink.c:1882
 sock_sendmsg_nosec net/socket.c:718 [inline]
 __sock_sendmsg+0x221/0x270 net/socket.c:733
 __sys_sendto+0x365/0x4c0 net/socket.c:2187
 __do_sys_sendto net/socket.c:2194 [inline]
 __se_sys_sendto net/socket.c:2190 [inline]
 __x64_sys_sendto+0xde/0x100 net/socket.c:2190
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb5ef98effc
RSP: 002b:00007fb5efccf670 EFLAGS: 00000293 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007fb5f06d4620 RCX: 00007fb5ef98effc
RDX: 0000000000000050 RSI: 00007fb5f06d4670 RDI: 0000000000000003
RBP: 0000000000000000 R08: 00007fb5efccf6c4 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007fb5f06d4670 R15: 0000000000000000
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x30/0x180 kernel/locking/lockdep.c:6761
3 locks held by kworker/u8:3/52:
 #0: ffff88814d472948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88814d472948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90000bc7c60 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000bc7c60 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4730
3 locks held by kworker/1:1/58:
 #0: ffff88801ac80d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac80d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000102fc60 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000102fc60 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
5 locks held by kworker/u8:5/288:
 #0: ffff88801baf6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801baf6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90002fdfc60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90002fdfc60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17c/0xd60 net/core/net_namespace.c:606
 #3: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xde/0x880 net/core/dev.c:12420
 #4: ffff88805ecb0d28 (&dev->lock){+.+.}-{4:4}, at: netdev_lock include/linux/netdevice.h:2706 [inline]
 #4: ffff88805ecb0d28 (&dev->lock){+.+.}-{4:4}, at: napi_disable+0x4d/0x80 net/core/dev.c:7097
3 locks held by kworker/0:2/976:
 #0: ffff88801ac81d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac81d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90003b0fc60 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003b0fc60 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x9b/0xfc0 net/wireless/reg.c:2481
1 lock held by dhcpcd/5502:
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x34e/0x1d80 net/ipv4/devinet.c:1129
2 locks held by getty/5585:
 #0: ffff88814d5a60a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002ff62f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x53d/0x16b0 drivers/tty/n_tty.c:2211
6 locks held by kworker/1:5/5888:
 #0: ffff8880222b2148 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff8880222b2148 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90004447c60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90004447c60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffff888144be9190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #2: ffff888144be9190 (&dev->mutex){....}-{4:4}, at: hub_event+0x200/0x50f0 drivers/usb/core/hub.c:5861
 #3: ffff888011c57190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #3: ffff888011c57190 (&dev->mutex){....}-{4:4}, at: __device_attach+0x90/0x530 drivers/base/dd.c:1005
 #4: ffff88807bf94160 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #4: ffff88807bf94160 (&dev->mutex){....}-{4:4}, at: __device_attach+0x90/0x530 drivers/base/dd.c:1005
 #5: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: wpan_phy_register+0x22/0x110 net/ieee802154/core.c:145
3 locks held by kworker/u8:4/9721:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000be07c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000be07c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
6 locks held by kworker/0:4/12935:
 #0: ffff8880222b2148 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff8880222b2148 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9001432fc60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9001432fc60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffff888028acb190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #2: ffff888028acb190 (&dev->mutex){....}-{4:4}, at: hub_event+0x200/0x50f0 drivers/usb/core/hub.c:5861
 #3: ffff88802979d190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #3: ffff88802979d190 (&dev->mutex){....}-{4:4}, at: __device_attach+0x90/0x530 drivers/base/dd.c:1005
 #4: ffff88802979e160 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #4: ffff88802979e160 (&dev->mutex){....}-{4:4}, at: __device_attach+0x90/0x530 drivers/base/dd.c:1005
 #5: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: wpan_phy_register+0x22/0x110 net/ieee802154/core.c:145
2 locks held by syz-executor/16670:
 #0: ffffffff8f653b40 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f653b40 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8f653b40 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x22/0x250 net/core/rtnetlink.c:564
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xd6a/0x1f60 net/core/rtnetlink.c:4021
1 lock held by syz.1.3091/16727:
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: do_ip_getsockopt+0x11ae/0x2ba0 net/ipv4/ip_sockglue.c:1702
4 locks held by udevd/16723:
 #0: ffff888033a82668 (&p->lock){+.+.}-{4:4}, at: seq_read_iter+0xb4/0xda0 fs/seq_file.c:182
 #1: ffff88807cfe9488 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
 #2: ffff8880576aa788 (kn->active#5){++++}-{0:0}, at: kernfs_seq_start+0x72/0x3b0 fs/kernfs/file.c:155
 #3: ffff88802979d190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #3: ffff88802979d190 (&dev->mutex){....}-{4:4}, at: uevent_show+0x17d/0x340 drivers/base/core.c:2730
4 locks held by udevd/16731:
 #0: ffff88802bb59668 (&p->lock){+.+.}-{4:4}, at: seq_read_iter+0xb4/0xda0 fs/seq_file.c:182
 #1: ffff8880131f9888 (&of->mutex#2){+.+.}-{4:4}, at: kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
 #2: ffff888024f364b8 (kn->active#5){++++}-{0:0}, at: kernfs_seq_start+0x72/0x3b0 fs/kernfs/file.c:155
 #3: ffff888011c57190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #3: ffff888011c57190 (&dev->mutex){....}-{4:4}, at: uevent_show+0x17d/0x340 drivers/base/core.c:2730
1 lock held by syz.2.3102/16757:
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: do_ip_getsockopt+0x11ae/0x2ba0 net/ipv4/ip_sockglue.c:1702
1 lock held by syz.0.3104/16766:
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: do_ip_getsockopt+0x11ae/0x2ba0 net/ipv4/ip_sockglue.c:1702
2 locks held by syz.4.3106/16775:
 #0: ffffffff8f6c6428 (ppp_mutex){+.+.}-{4:4}, at: ppp_ioctl+0x11e/0x1d20 drivers/net/ppp/ppp_generic.c:740
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: ppp_create_interface drivers/net/ppp/ppp_generic.c:3356 [inline]
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: ppp_unattached_ioctl drivers/net/ppp/ppp_generic.c:1071 [inline]
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: ppp_ioctl+0x7a3/0x1d20 drivers/net/ppp/ppp_generic.c:744
1 lock held by syz.4.3106/16776:
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: arp_ioctl+0x361/0x540 net/ipv4/arp.c:1303
1 lock held by syz.4.3106/16777:
 #0: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: xsk_bind+0x151/0xfe0 net/xdp/xsk.c:1168
2 locks held by syz-executor/16781:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16784:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16787:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16790:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16794:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16798:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16801:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16804:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16807:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878
2 locks held by syz-executor/16810:
 #0: ffffffff8fec98d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fed6108 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x87/0x270 net/ipv4/nexthop.c:3878

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x4ab/0x4e0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:106 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:111
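
Note (not part of the report): the dhcpcd register dump above shows ORIG_RAX 0x10 (ioctl) with RSI/R15 0x8914 (SIOCSIFFLAGS), i.e. the task hung while setting interface flags. The sketch below is a minimal user-space illustration of that call path, with a placeholder interface name; per the trace, the kernel dispatches it through inet_ioctl() to devinet_ioctl(), which blocks waiting for rtnl_mutex in rtnl_net_lock().

/* Minimal sketch (assumption: interface name "lo" is a placeholder).
 * Mirrors the ioctl the hung dhcpcd task was issuing; it needs
 * CAP_NET_ADMIN and will block if rtnl_mutex is held elsewhere. */
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <unistd.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any AF_INET socket works */

	if (fd < 0)
		return 1;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "lo", IFNAMSIZ - 1); /* placeholder ifname */

	if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0)     /* read current flags */
		return 1;
	ifr.ifr_flags |= IFF_UP;
	/* SIOCSIFFLAGS (0x8914) is handled by devinet_ioctl(), which takes
	 * rtnl_mutex via rtnl_net_lock() -- the lock all tasks above wait on. */
	if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0)
		return 1;
	close(fd);
	return 0;
}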

Crashes (31):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/03/25 19:43 upstream 2df0c02dab82 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in devinet_ioctl
2025/03/18 09:50 upstream fc444ada1310 ce3352cd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in devinet_ioctl
2025/01/28 19:26 upstream 805ba04cb7cc ac37c1f8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in devinet_ioctl
2025/01/28 16:56 upstream 805ba04cb7cc ac37c1f8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in devinet_ioctl
2025/01/26 19:18 upstream aa22f4da2a46 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in devinet_ioctl
2025/01/26 19:12 upstream aa22f4da2a46 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in devinet_ioctl
2025/01/26 15:59 upstream aa22f4da2a46 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in devinet_ioctl
2025/01/25 06:13 upstream 0afd22092df4 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in devinet_ioctl
2025/01/24 11:40 upstream 8883957b3c9d 521b0ce3 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in devinet_ioctl
2025/01/24 10:31 upstream 21266b8df522 521b0ce3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in devinet_ioctl
2024/12/27 15:02 upstream d6ef8b40d075 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in devinet_ioctl
2024/12/17 21:05 upstream 59dbb9d81adf c8c15bb2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in devinet_ioctl
2024/12/17 14:58 upstream f44d154d6e3d c8c15bb2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in devinet_ioctl
2024/12/14 00:20 upstream f932fb9b4074 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in devinet_ioctl
2024/12/11 14:43 upstream f92f4749861b ff949d25 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in devinet_ioctl
2024/11/24 02:05 upstream 228a1157fb9f 68da6d95 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in devinet_ioctl
2024/11/16 23:19 upstream e8bdb3c8be08 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in devinet_ioctl
2024/11/15 02:40 upstream cfaaa7d010d1 77f3eeb7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in devinet_ioctl
2024/11/14 02:35 upstream f1b785f4c787 a8c99394 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in devinet_ioctl
2025/03/07 07:05 upstream f315296c92fd 831e3629 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in devinet_ioctl
2025/01/26 00:21 upstream 0f8e26b38d7a 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in devinet_ioctl
2024/12/01 11:16 upstream c4bb3a2d641c 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in devinet_ioctl
2025/03/10 07:33 net 505ead7ab77f 163f510d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/02/18 03:43 net 07b598c0e6f0 9be4ace3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/01/30 18:39 net f7bf624b1fed 9c8ab845 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/01/29 04:21 net b2aec4efe834 f5427d7c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/01/27 05:01 net 15a901361ec3 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/01/26 18:18 net 15a901361ec3 9fbd772e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/01/23 13:09 net 0ad9617c78ac 9d4f14f8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/01/20 22:38 net 4395a44acb15 6e87cfa2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in devinet_ioctl
2025/02/17 02:25 linux-next 0ae0fa3bf0b4 40a34ec9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in devinet_ioctl