syzbot


INFO: task hung in devtmpfs_create_node (3)

Status: auto-obsoleted due to no activity on 2025/07/24 09:38
Subsystems: kernel
First crash: 132d, last: 127d
Similar bugs (2)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in devtmpfs_create_node [kernel] | 1 | - | - | - | 1 | 2119d | 2119d | 0/29 | auto-closed as invalid on 2020/02/09 23:37
upstream | INFO: task hung in devtmpfs_create_node (2) [kernel] | 1 | - | - | - | 2 | 256d | 299d | 0/29 | auto-obsoleted due to no activity on 2025/03/17 18:36

Sample crash report:
INFO: task syz-executor:18991 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc3-syzkaller-00094-g02ddfb981de8 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:21176 pid:18991 tgid:18991 ppid:1      task_flags:0x400140 flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x16e2/0x4cd0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:148
 devtmpfs_submit_req drivers/base/devtmpfs.c:122 [inline]
 devtmpfs_create_node+0x1d5/0x240 drivers/base/devtmpfs.c:153
 device_add+0x9db/0xb50 drivers/base/core.c:3640
 add_disk_fwnode+0x59d/0x10e0 block/genhd.c:473
 add_disk include/linux/blkdev.h:757 [inline]
 loop_add+0x7f9/0xae0 drivers/block/loop.c:2033
 blk_probe_dev block/genhd.c:823 [inline]
 blk_request_module+0x27d/0x2a0 block/genhd.c:-1
 blkdev_get_no_open+0x37/0xd0 block/bdev.c:787
 bdev_statx+0x80/0x590 block/bdev.c:1288
 vfs_getattr_nosec+0x36d/0x430 fs/stat.c:224
 vfs_getattr fs/stat.c:259 [inline]
 vfs_statx_path fs/stat.c:296 [inline]
 vfs_statx+0x180/0x550 fs/stat.c:353
 vfs_fstatat+0xe4/0x160 fs/stat.c:372
 __do_sys_newfstatat fs/stat.c:536 [inline]
 __se_sys_newfstatat fs/stat.c:530 [inline]
 __x64_sys_newfstatat+0x11c/0x1a0 fs/stat.c:530
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7faeaf78d17a
RSP: 002b:00007ffd4a8fcc38 EFLAGS: 00000286 ORIG_RAX: 0000000000000106
RAX: ffffffffffffffda RBX: 00007faeaf81089d RCX: 00007faeaf78d17a
RDX: 00007ffd4a8fcc60 RSI: 00007ffd4a8fccf0 RDI: 00000000ffffff9c
RBP: 00007ffd4a8fccf0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000100 R11: 0000000000000286 R12: 00007ffd4a8fdd80
R13: 00007faeaf81089d R14: 00000000001811e8 R15: 00007ffd4a8fddc0
 </TASK>
INFO: task udevd:21561 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc3-syzkaller-00094-g02ddfb981de8 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:25248 pid:21561 tgid:21561 ppid:5209   task_flags:0x400140 flags:0x00000002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x16e2/0x4cd0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x724/0xe80 kernel/locking/mutex.c:746
 bdev_release+0x1a9/0x650 block/bdev.c:1090
 blkdev_release+0x15/0x20 block/fops.c:660
 __fput+0x44c/0xa70 fs/file_table.c:465
 fput_close_sync+0x169/0x200 fs/file_table.c:570
 __do_sys_close fs/open.c:1581 [inline]
 __se_sys_close fs/open.c:1566 [inline]
 __x64_sys_close+0x7f/0x110 fs/open.c:1566
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe898d170a8
RSP: 002b:00007ffd53a8d1d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
RAX: ffffffffffffffda RBX: 00007fe8990a9ae0 RCX: 00007fe898d170a8
RDX: 00007fe898df1b00 RSI: 000055bf484da010 RDI: 0000000000000009
RBP: 000055bf484fccd0 R08: 0000000000000007 R09: bb4f86cd8d617bb4
R10: 000055bf48503700 R11: 0000000000000246 R12: 0000000000000000
R13: 000055bf484f0a80 R14: 000055bf484e94e0 R15: 00007fe899142e8c
 </TASK>
INFO: task syz.7.3722:22279 blocked for more than 144 seconds.
      Not tainted 6.15.0-rc3-syzkaller-00094-g02ddfb981de8 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.7.3722      state:D stack:23832 pid:22279 tgid:22277 ppid:18605  task_flags:0x480140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x16e2/0x4cd0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:148
 devtmpfs_submit_req drivers/base/devtmpfs.c:122 [inline]
 devtmpfs_delete_node+0x15a/0x1d0 drivers/base/devtmpfs.c:171
 device_del+0x2cf/0x8e0 drivers/base/core.c:3834
 drop_partition+0x11b/0x180 block/partitions/core.c:278
 bdev_disk_changed+0x28c/0x14b0 block/partitions/core.c:674
 loop_reread_partitions drivers/block/loop.c:436 [inline]
 loop_set_status+0x7f3/0xaf0 drivers/block/loop.c:1243
 lo_ioctl+0xb41/0x22e0 drivers/block/loop.c:-1
 blkdev_ioctl+0x5a8/0x6d0 block/ioctl.c:698
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xfc/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf6/0x210 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3880d8e56b
RSP: 002b:00007f3881b78d60 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f3881b78df0 RCX: 00007f3880d8e56b
RDX: 00007f3881b78f00 RSI: 0000000000004c04 RDI: 0000000000000008
RBP: 0000000000000008 R08: 0000000000000000 R09: 0000000000000607
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f3881b78f00
R13: 0000000000000000 R14: 00007f3880fb5fa0 R15: 00007ffc6ee81a68
 </TASK>

Showing all locks held in the system:
2 locks held by kdevtmpfs/26:
 #0: ffff88801b6d8950 (&type->i_mutex_dir_key/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:902 [inline]
 #0: ffff88801b6d8950 (&type->i_mutex_dir_key/1){+.+.}-{4:4}, at: __kern_path_locked+0x14a/0x2d0 fs/namei.c:2765
 #1: ffffffff8e889528 (major_names_lock){+.+.}-{4:4}, at: blk_probe_dev block/genhd.c:820 [inline]
 #1: ffffffff8e889528 (major_names_lock){+.+.}-{4:4}, at: blk_request_module+0x35/0x2a0 block/genhd.c:836
1 lock held by khungtaskd/31:
 #0: ffffffff8e13b860 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13b860 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13b860 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6764
3 locks held by kworker/0:2/970:
 #0: ffff88801a478d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a478d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90003997c60 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003997c60 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
3 locks held by kworker/u8:9/3542:
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000cb97c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000cb97c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
1 lock held by dhcpcd/5503:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_deladdr+0x198/0x740 net/ipv4/devinet.c:671
2 locks held by getty/5591:
 #0: ffff8880335d00a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000334b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
3 locks held by kworker/u8:18/10237:
 #0: ffff888030149948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff888030149948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc900033c7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900033c7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x112/0x14b0 net/ipv6/addrconf.c:4195
2 locks held by syz-executor/18610:
 #0: ffff88814049e420 (sb_writers){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:556
 #1: ffff88801b6d8950 (&type->i_mutex_dir_key#2){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:877 [inline]
 #1: ffff88801b6d8950 (&type->i_mutex_dir_key#2){++++}-{4:4}, at: open_last_lookups fs/namei.c:3799 [inline]
 #1: ffff88801b6d8950 (&type->i_mutex_dir_key#2){++++}-{4:4}, at: path_openat+0x8cb/0x3830 fs/namei.c:4036
1 lock held by syz-executor/18991:
 #0: ffffffff8e889528 (major_names_lock){+.+.}-{4:4}, at: blk_probe_dev block/genhd.c:820 [inline]
 #0: ffffffff8e889528 (major_names_lock){+.+.}-{4:4}, at: blk_request_module+0x35/0x2a0 block/genhd.c:836
1 lock held by udevd/21561:
 #0: ffff888024e70358 (&disk->open_mutex){+.+.}-{4:4}, at: bdev_release+0x1a9/0x650 block/bdev.c:1090
1 lock held by syz.7.3722/22279:
 #0: ffff888024e70358 (&disk->open_mutex){+.+.}-{4:4}, at: loop_reread_partitions drivers/block/loop.c:435 [inline]
 #0: ffff888024e70358 (&disk->open_mutex){+.+.}-{4:4}, at: loop_set_status+0x7da/0xaf0 drivers/block/loop.c:1243
2 locks held by syz-executor/22562:
 #0: ffffffff8f9fbef8 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f9fbef8 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8f9fbef8 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4064
2 locks held by syz-executor/22589:
 #0: ffffffff8ec8cd60 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8ec8cd60 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8ec8cd60 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4064
2 locks held by syz-executor/22623:
 #0: ffffffff8fa18538 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8fa18538 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8fa18538 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4064
2 locks held by syz.9.3816/22711:
 #0: ffffffff8f55bd70 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: ethnl_tunnel_info_doit+0x16a/0xc10 net/ethtool/tunnels.c:181
1 lock held by syz.8.3824/22736:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: __tun_chr_ioctl+0x37a/0x1d90 drivers/net/tun.c:3038
1 lock held by syz.8.3824/22740:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x71c/0xb70 net/core/rtnetlink.c:6961
5 locks held by syz.3.3844/22794:
 #0: ffff88803571e420 (sb_writers#7){.+.+}-{0:0}, at: direct_splice_actor+0x49/0x160 fs/splice.c:1157
 #1: ffff88802946f088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1e0/0x4f0 fs/kernfs/file.c:325
 #2: ffff88801d693c38 (kn->active#70){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x203/0x4f0 fs/kernfs/file.c:326
 #3: ffffffff8dfeb7e8 (system_transition_mutex){+.+.}-{4:4}, at: software_resume+0x45/0x3c0 kernel/power/hibernate.c:998
 #4: ffffffff8e889528 (major_names_lock){+.+.}-{4:4}, at: blk_probe_dev block/genhd.c:820 [inline]
 #4: ffffffff8e889528 (major_names_lock){+.+.}-{4:4}, at: blk_request_module+0x35/0x2a0 block/genhd.c:836
1 lock held by syz-executor/22797:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22800:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22803:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22806:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22809:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22812:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22815:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22818:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22821:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22825:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22827:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979
1 lock held by syz-executor/22831:
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f4f7808 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:979

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.15.0-rc3-syzkaller-00094-g02ddfb981de8 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:437
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4e/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Not tainted 6.15.0-rc3-syzkaller-00094-g02ddfb981de8 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
RIP: 0010:pv_native_safe_halt+0x13/0x20 arch/x86/kernel/paravirt.c:81
Code: ee dd b3 f5 cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa eb 07 0f 00 2d 43 ab 2b 00 f3 0f 1e fa fb f4 <e9> c3 dd b3 f5 cc cc cc cc cc cc cc cc 90 90 90 90 90 90 90 90 90
RSP: 0018:ffffc90000197de0 EFLAGS: 00000286
RAX: d0e021d186822e00 RBX: ffffffff81974d78 RCX: d0e021d186822e00
RDX: 0000000000000001 RSI: ffffffff8d956b7f RDI: ffffffff8be1ba80
RBP: ffffc90000197f20 R08: ffff8880b8732b5b R09: 1ffff110170e656b
R10: dffffc0000000000 R11: ffffed10170e656c R12: ffffffff8f9f8370
R13: 0000000000000001 R14: 0000000000000001 R15: 1ffff1100395db40
FS:  0000000000000000(0000) GS:ffff888125da3000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000056186b04c250 CR3: 000000000df36000 CR4: 0000000000350ef0
Call Trace:
 <TASK>
 arch_safe_halt arch/x86/include/asm/paravirt.h:107 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:748
 default_idle_call+0x74/0xb0 kernel/sched/idle.c:117
 cpuidle_idle_call kernel/sched/idle.c:185 [inline]
 do_idle+0x1e8/0x510 kernel/sched/idle.c:325
 cpu_startup_entry+0x44/0x60 kernel/sched/idle.c:423
 start_secondary+0x101/0x110 arch/x86/kernel/smpboot.c:315
 common_startup_64+0x13e/0x147
 </TASK>

Crashes (4):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/04/25 09:37 | upstream | 02ddfb981de8 | e3715315 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-root | INFO: task hung in devtmpfs_create_node
2025/04/21 07:18 | upstream | ac71fabf1567 | 2a20f901 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-root | INFO: task hung in devtmpfs_create_node
2025/04/20 16:48 | upstream | 6fea5fabd332 | 2a20f901 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-root | INFO: task hung in devtmpfs_create_node
2025/04/20 08:47 | upstream | 6fea5fabd332 | 2a20f901 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-root | INFO: task hung in devtmpfs_create_node