syzbot


INFO: task hung in remove_one

Status: upstream: reported syz repro on 2025/01/06 11:11
Subsystems: kernel
Reported-by: syzbot+3147c5de186107ffc7a1@syzkaller.appspotmail.com
First crash: 258d, last: 1d17h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [kernel?] INFO: task hung in remove_one | 0 (1) | 2025/01/06 11:11
Last patch testing requests (10)
Created | Duration | User | Patch | Repo | Result
2025/05/22 11:16 | 21m | retest repro | | upstream | report log
2025/05/22 11:16 | 21m | retest repro | | upstream | OK log
2025/04/09 01:34 | 28m | retest repro | | upstream | report log
2025/04/09 01:34 | 27m | retest repro | | upstream | report log
2025/04/09 01:34 | 27m | retest repro | | upstream | report log
2025/04/09 01:34 | 20m | retest repro | | upstream | report log
2025/04/09 00:08 | 16m | retest repro | | upstream | report log
2025/04/09 00:08 | 16m | retest repro | | upstream | report log
2025/04/09 00:08 | 15m | retest repro | | upstream | report log
2025/04/09 00:08 | 16m | retest repro | | upstream | report log

Sample crash report:
INFO: task kworker/u8:2:36 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:2    state:D stack:24744 pid:36    tgid:36    ppid:2      task_flags:0x4208160 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121
 __debugfs_file_removed fs/debugfs/inode.c:769 [inline]
 remove_one+0x312/0x420 fs/debugfs/inode.c:776
 __simple_recursive_removal+0x158/0x610 fs/libfs.c:631
 debugfs_remove+0x5d/0x80 fs/debugfs/inode.c:799
 nsim_dev_health_exit+0x3b/0xe0 drivers/net/netdevsim/health.c:227
 nsim_dev_reload_destroy+0x144/0x4d0 drivers/net/netdevsim/dev.c:1710
 nsim_dev_reload_down+0x6e/0xd0 drivers/net/netdevsim/dev.c:983
 devlink_reload+0x1a1/0x7c0 net/devlink/dev.c:461
 devlink_pernet_pre_exit+0x1a0/0x2b0 net/devlink/core.c:509
 ops_pre_exit_list net/core/net_namespace.c:160 [inline]
 ops_undo_list+0x187/0xab0 net/core/net_namespace.c:233
 cleanup_net+0x408/0x890 net/core/net_namespace.c:682
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d7/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:8475 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24176 pid:8475  tgid:8475  ppid:1      task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7115
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x81b/0x1060 kernel/locking/mutex.c:760
 device_lock include/linux/device.h:911 [inline]
 device_del+0xa0/0x9f0 drivers/base/core.c:3840
 device_unregister+0x1d/0xc0 drivers/base/core.c:3919
 nsim_bus_dev_del drivers/net/netdevsim/bus.c:483 [inline]
 del_device_store+0x355/0x4a0 drivers/net/netdevsim/bus.c:244
 bus_attr_store+0x71/0xb0 drivers/base/bus.c:172
 sysfs_kf_write+0xef/0x150 fs/sysfs/file.c:145
 kernfs_fop_write_iter+0x351/0x510 fs/kernfs/file.c:334
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x7d3/0x11d0 fs/read_write.c:686
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1726b8d69f
RSP: 002b:00007ffcb05e80c0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f1726b8d69f
RDX: 0000000000000001 RSI: 00007ffcb05e8110 RDI: 0000000000000005
RBP: 00007f1726c130c1 R08: 0000000000000000 R09: 00007ffcb05e7f17
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007ffcb05e8110 R14: 00007f17278f4620 R15: 0000000000000003
 </TASK>
INFO: task syz.3.2387:8492 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2387      state:D stack:27160 pid:8492  tgid:8492  ppid:6016   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7115
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x81b/0x1060 kernel/locking/mutex.c:760
 devlink_health_report+0x3ba/0x9c0 net/devlink/health.c:627
 nsim_dev_health_break_write+0x166/0x210 drivers/net/netdevsim/health.c:162
 full_proxy_write+0x12e/0x1a0 fs/debugfs/file.c:388
 do_loop_readv_writev fs/read_write.c:850 [inline]
 do_loop_readv_writev fs/read_write.c:835 [inline]
 vfs_writev+0x5df/0xde0 fs/read_write.c:1059
 do_writev+0x132/0x340 fs/read_write.c:1103
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe45bb8ebe9
RSP: 002b:00007fffdcfa6a78 EFLAGS: 00000246 ORIG_RAX: 0000000000000014
RAX: ffffffffffffffda RBX: 00007fe45bdc5fa0 RCX: 00007fe45bb8ebe9
RDX: 000000000000000b RSI: 0000200000000000 RDI: 0000000000000000
RBP: 00007fe45bc11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe45bdc5fa0 R14: 00007fe45bdc5fa0 R15: 0000000000000003
 </TASK>
INFO: task syz.0.2413:8521 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.2413      state:D stack:28216 pid:8521  tgid:8521  ppid:6006   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7115
 rwsem_down_read_slowpath+0x64e/0xbf0 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:884 [inline]
 open_last_lookups fs/namei.c:3806 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4043
 do_filp_open+0x20b/0x470 fs/namei.c:4073
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1435
 do_sys_open fs/open.c:1450 [inline]
 __do_sys_openat fs/open.c:1466 [inline]
 __se_sys_openat fs/open.c:1461 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1461
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7eff8598ebe9
RSP: 002b:00007ffcfaa5e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007eff85bc5fa0 RCX: 00007eff8598ebe9
RDX: 0000000000000101 RSI: 00002000000000c0 RDI: ffffffffffffff9c
RBP: 00007eff85a11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007eff85bc5fa0 R14: 00007eff85bc5fa0 R15: 0000000000000004
 </TASK>
INFO: task syz.2.2415:8523 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.2415      state:D stack:28216 pid:8523  tgid:8523  ppid:6015   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7115
 rwsem_down_read_slowpath+0x64e/0xbf0 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:884 [inline]
 open_last_lookups fs/namei.c:3806 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4043
 do_filp_open+0x20b/0x470 fs/namei.c:4073
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1435
 do_sys_open fs/open.c:1450 [inline]
 __do_sys_openat fs/open.c:1466 [inline]
 __se_sys_openat fs/open.c:1461 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1461
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5813d8ebe9
RSP: 002b:00007ffe9d601fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f5813fc5fa0 RCX: 00007f5813d8ebe9
RDX: 0000000000000101 RSI: 00002000000000c0 RDI: ffffffffffffff9c
RBP: 00007f5813e11e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f5813fc5fa0 R14: 00007f5813fc5fa0 R15: 0000000000000004
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:1/13:
1 lock held by rcu_exp_gp_kthr/18:
 #0: ffff8880b843a458 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x29/0x130 kernel/sched/core.c:636
1 lock held by khungtaskd/31:
 #0: ffffffff8e5c10e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e5c10e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e5c10e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
6 locks held by kworker/u8:2/36:
 #0: ffff88801c6fe948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3211
 #1: ffffc90000ac7d10 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3212
 #2: ffffffff90371510 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x890 net/core/net_namespace.c:658
 #3: ffff888032f3b0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:911 [inline]
 #3: ffff888032f3b0e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff888032f3b0e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x12c/0x2b0 net/devlink/core.c:506
 #4: ffff888032f3c250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff888032f3c250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888032f3c250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x136/0x2b0 net/devlink/core.c:506
 #5: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3/2){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #5: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3/2){+.+.}-{4:4}, at: __simple_recursive_removal+0x354/0x610 fs/libfs.c:627
2 locks held by getty/5629:
 #0: ffff88803138f0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
5 locks held by syz-executor/8475:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88807fbd6088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
 #4: ffff888032f3b0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:911 [inline]
 #4: ffff888032f3b0e8 (&dev->mutex){....}-{4:4}, at: device_del+0xa0/0x9f0 drivers/base/core.c:3840
2 locks held by syz.3.2387/8492:
 #0: ffff88801fad8428 (sb_writers#8){.+.+}-{0:0}, at: do_writev+0x132/0x340 fs/read_write.c:1103
 #1: ffff888032f3c250 (&devlink->lock_key#2){+.+.}-{4:4}, at: devlink_health_report+0x3ba/0x9c0 net/devlink/health.c:627
2 locks held by syz.0.2413/8521:
 #0: ffff88801fad8428 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3796 [inline]
 #0: ffff88801fad8428 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4043
 #1: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #1: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3806 [inline]
 #1: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4043
2 locks held by syz.2.2415/8523:
 #0: ffff88801fad8428 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3796 [inline]
 #0: ffff88801fad8428 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4043
 #1: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #1: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3806 [inline]
 #1: ffff88805f3de4f0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4043
4 locks held by syz-executor/8536:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888075eed488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8543:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888031673488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8546:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88807e39e488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8574:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88807e64ec88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8585:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88802a2d7c88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8588:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805ba80088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8591:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805b8b6888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8619:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805d9f7088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8632:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888060621888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8634:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888060198088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8637:
 #0: ffff88807dbfe428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888060199088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b40a58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8f6748 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
2 locks held by dhcpcd/8663:
 #0: ffff888057c8a258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1667 [inline]
 #0: ffff888057c8a258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2c/0xf60 net/packet/af_packet.c:3251
 #1: ffffffff8e5cc678 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x284/0x3c0 kernel/rcu/tree_exp.h:311
2 locks held by dhcpcd/8664:
 #0: ffff88807ab84258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1667 [inline]
 #0: ffff88807ab84258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2c/0xf60 net/packet/af_packet.c:3251
 #1: ffffffff8e5cc678 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:343

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:328 [inline]
 watchdog+0xf0e/0x1260 kernel/hung_task.c:491
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d7/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:82
Code: bc 64 02 c3 cc cc cc cc 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d e3 11 19 00 fb f4 <e9> 7c 09 03 00 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
RSP: 0018:ffffc90000197df8 EFLAGS: 000002c6
RAX: 000000000016486d RBX: 0000000000000001 RCX: ffffffff8b90fbf9
RDX: 0000000000000000 RSI: ffffffff8de4dc69 RDI: ffffffff8c162f00
RBP: ffffed1003c55b40 R08: 0000000000000001 R09: ffffed10170a6655
R10: ffff8880b85332ab R11: 0000000000000000 R12: 0000000000000001
R13: ffff88801e2ada00 R14: ffffffff90ab9290 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881247c0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055d870c04168 CR3: 000000002a51e000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 arch_safe_halt arch/x86/include/asm/paravirt.h:107 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:757
 default_idle_call+0x6d/0xb0 kernel/sched/idle.c:122
 cpuidle_idle_call kernel/sched/idle.c:190 [inline]
 do_idle+0x391/0x510 kernel/sched/idle.c:330
 cpu_startup_entry+0x4f/0x60 kernel/sched/idle.c:428
 start_secondary+0x21d/0x2b0 arch/x86/kernel/smpboot.c:315
 common_startup_64+0x13e/0x148
 </TASK>

Crashes (67):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/09/03 01:35 upstream e6b9dce0aeeb 96a211bc .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 09:56 upstream dfd4b508c8c6 1804e95e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 20:20 upstream c30a13538d9f 32a0e5ed .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/02 13:03 upstream a6923c06a3b2 7368264b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/16 00:32 upstream 155a3c003e55 03fcfc4b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 10:43 upstream 733923397fd9 f4e5e155 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 23:37 upstream ee88bddf7f2f 1ae8177e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 21:11 upstream 24770983ccfe ed3e87f7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 19:14 upstream e04c78d86a96 d1716036 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/14 01:31 upstream 27605c8c0f69 0e8da31f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 04:42 upstream 7f9039c524a3 a30356b7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/06 06:41 upstream 01f95500a162 ae98e6b9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 17:07 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 13:30 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 09:08 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 11:55 upstream 7eb172143d55 c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 06:01 upstream 5cf80612d3f7 d34966d1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 19:23 upstream 6537cfb395f3 cbd8edab .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/17 00:28 upstream ba643b6d8440 40a34ec9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 22:20 upstream 128c8f96eb86 fe17639f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/02 11:04 upstream 56e6a3499e14 d3ccff63 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/02 22:14 upstream e6b9dce0aeeb 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/29 06:59 upstream 07d9df80082b d401b9d7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/19 20:03 upstream b19a97d57c15 523f460e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 05:26 upstream dfd4b508c8c6 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 14:32 upstream 8f5ae30d69d7 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 07:56 upstream 8f5ae30d69d7 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 01:41 upstream 2b38afce25c4 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/10 07:55 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 17:21 upstream c30a13538d9f 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/05 17:51 upstream 5998f2bca43e 37880f40 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/02 08:08 upstream a6923c06a3b2 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/27 17:35 upstream ec2df4364666 fb8f743d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/26 13:20 upstream 5f33ebd2018c fb8f743d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/23 09:36 upstream 89be9a83ccf1 e1dd4f22 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 06:52 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/30 05:07 upstream afa9a6f4f574 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 13:46 upstream ee88bddf7f2f 1ae8177e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/23 14:54 upstream 86731a2a651e d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 17:34 upstream 24770983ccfe ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 16:08 upstream e04c78d86a96 d1716036 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/13 22:55 upstream 27605c8c0f69 0e8da31f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/07 08:34 upstream c0c9379f235d 4826c28e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 01:33 upstream 7f9039c524a3 a30356b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/08 11:15 upstream d76bb1ebb558 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/04 18:16 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/01 22:55 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/28 08:28 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/19 16:36 upstream 8560697b23dc 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 05:17 upstream b5c6891b2c5b 2a20f901 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 00:04 upstream b5c6891b2c5b 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/10 15:19 upstream 2eb959eeecc6 1ef3ab4d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/26 00:06 upstream 2df0c02dab82 89d30d73 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/21 02:08 upstream 5fc319360819 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/16 22:28 upstream cb82ca153949 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/12 20:56 upstream 0fed89a961ea 1a5d9317 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 10:19 upstream 7eb172143d55 c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/01 11:57 upstream 276f98efb64a 67cf5345 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/26 01:08 upstream 2a1944bff549 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 02:01 upstream 5cf80612d3f7 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 15:35 upstream 6537cfb395f3 cbd8edab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 20:19 upstream 128c8f96eb86 fe17639f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/31 12:49 upstream 2a9f04bde07a 4c6ac32f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/16 20:46 upstream ce69b4019001 f9e07a6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/23 02:17 upstream bcde95ce32b6 b4fbdbd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/22 04:11 upstream 48f506ad0b68 d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/19 21:16 upstream eabcdba3ad40 1d58202c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
* Struck through repros no longer work on HEAD.