syzbot


INFO: task hung in remove_one

Status: upstream: reported syz repro on 2025/01/06 11:11
Subsystems: kernel
Reported-by: syzbot+3147c5de186107ffc7a1@syzkaller.appspotmail.com
First crash: 202d, last: 1d07h
Discussions (1)
Title                                             Replies (including bot)  Last reply
[syzbot] [kernel?] INFO: task hung in remove_one  0 (1)                    2025/01/06 11:11
Last patch testing requests (10)
Created Duration User Patch Repo Result
2025/05/22 11:16 21m retest repro upstream report log
2025/05/22 11:16 21m retest repro upstream OK log
2025/04/09 01:34 28m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 20m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
2025/04/09 00:08 15m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
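The retest entries above are syzbot's periodic automated re-runs of the reproducer against the upstream tree. A patch test against the same reproducer can also be requested manually by replying to the report thread with a "#syz test" command; a minimal sketch of such a reply follows (the tree URL and branch are illustrative, and the proposed fix goes inline or attached after the command):

    #syz test: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

    <inline patch with the proposed fix>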

Sample crash report:
INFO: task kworker/u8:4:66 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:4    state:D stack:25816 pid:66    tgid:66    ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6883
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common+0x2ff/0x4e0 kernel/sched/completion.c:116
 __debugfs_file_removed fs/debugfs/inode.c:775 [inline]
 remove_one+0x312/0x420 fs/debugfs/inode.c:782
 simple_recursive_removal+0x21b/0x690 fs/libfs.c:636
 debugfs_remove+0x5d/0x80 fs/debugfs/inode.c:805
 nsim_dev_health_exit+0x3b/0xe0 drivers/net/netdevsim/health.c:227
 nsim_dev_reload_destroy+0x144/0x4d0 drivers/net/netdevsim/dev.c:1664
 nsim_dev_reload_down+0x6e/0xd0 drivers/net/netdevsim/dev.c:968
 devlink_reload+0x1a1/0x7c0 net/devlink/dev.c:461
 devlink_pernet_pre_exit+0x1a0/0x2b0 net/devlink/core.c:509
 ops_pre_exit_list net/core/net_namespace.c:162 [inline]
 ops_undo_list+0x187/0xab0 net/core/net_namespace.c:235
 cleanup_net+0x408/0x890 net/core/net_namespace.c:686
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d7/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:9076 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24936 pid:9076  tgid:9076  ppid:1      task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6940
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:747
 device_lock include/linux/device.h:884 [inline]
 device_del+0xa0/0x9f0 drivers/base/core.c:3843
 device_unregister+0x1d/0xc0 drivers/base/core.c:3922
 nsim_bus_dev_del drivers/net/netdevsim/bus.c:462 [inline]
 del_device_store+0x355/0x4a0 drivers/net/netdevsim/bus.c:226
 bus_attr_store+0x74/0xb0 drivers/base/bus.c:172
 sysfs_kf_write+0xf2/0x150 fs/sysfs/file.c:145
 kernfs_fop_write_iter+0x354/0x510 fs/kernfs/file.c:334
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x6c4/0x1150 fs/read_write.c:686
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa52d98d3df
RSP: 002b:00007ffdc4f10170 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007fa52d98d3df
RDX: 0000000000000001 RSI: 00007ffdc4f101c0 RDI: 0000000000000005
RBP: 00007fa52da11d8d R08: 0000000000000000 R09: 00007ffdc4f0ffc7
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007ffdc4f101c0 R14: 00007fa52e6e4620 R15: 0000000000000003
 </TASK>
INFO: task syz.0.2878:9092 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.2878      state:D stack:28216 pid:9092  tgid:9092  ppid:8492   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6940
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:747
 devlink_health_report+0x3ba/0x9c0 net/devlink/health.c:627
 nsim_dev_health_break_write+0x166/0x210 drivers/net/netdevsim/health.c:162
 full_proxy_write+0x13c/0x200 fs/debugfs/file.c:398
 vfs_write+0x2a0/0x1150 fs/read_write.c:684
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fbb7f18e929
RSP: 002b:00007ffe760d1d18 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007fbb7f3b5fa0 RCX: 00007fbb7f18e929
RDX: 00000000000001ff RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007fbb7f210b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fbb7f3b5fa0 R14: 00007fbb7f3b5fa0 R15: 0000000000000003
 </TASK>
INFO: task syz.1.2900:9123 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2900      state:D stack:28216 pid:9123  tgid:9123  ppid:5982   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6940
 rwsem_down_read_slowpath+0x62f/0xb60 kernel/locking/rwsem.c:1084
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:884 [inline]
 open_last_lookups fs/namei.c:3815 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4052
 do_filp_open+0x20b/0x470 fs/namei.c:4082
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f8598f8e929
RSP: 002b:00007fff4f3cd048 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f85991b5fa0 RCX: 00007f8598f8e929
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f8599010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f85991b5fa0 R14: 00007f85991b5fa0 R15: 0000000000000004
 </TASK>
INFO: task syz.3.2901:9124 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2901      state:D stack:28216 pid:9124  tgid:9124  ppid:5988   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6940
 rwsem_down_read_slowpath+0x62f/0xb60 kernel/locking/rwsem.c:1084
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:884 [inline]
 open_last_lookups fs/namei.c:3815 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4052
 do_filp_open+0x20b/0x470 fs/namei.c:4082
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f9ead38e929
RSP: 002b:00007ffe7d3b90e8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f9ead5b5fa0 RCX: 00007f9ead38e929
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f9ead410b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f9ead5b5fa0 R14: 00007f9ead5b5fa0 R15: 0000000000000004
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e5c4d00 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e5c4d00 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e5c4d00 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6770
6 locks held by kworker/u8:4/66:
 #0: ffff88801c6fe148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000210fd10 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff9034e110 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x890 net/core/net_namespace.c:662
 #3: ffff8880360aa0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #3: ffff8880360aa0e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff8880360aa0e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x12c/0x2b0 net/devlink/core.c:506
 #4: ffff8880360ab250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff8880360ab250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff8880360ab250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x136/0x2b0 net/devlink/core.c:506
 #5: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #5: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: simple_recursive_removal+0x2c5/0x690 fs/libfs.c:628
2 locks held by getty/5609:
 #0: ffff8880366f20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000333b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
5 locks held by syz-executor/9076:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888034f3b088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
 #4: ffff8880360aa0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:884 [inline]
 #4: ffff8880360aa0e8 (&dev->mutex){....}-{4:4}, at: device_del+0xa0/0x9f0 drivers/base/core.c:3843
2 locks held by syz.0.2878/9092:
 #0: ffff88801fa98428 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880360ab250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devlink_health_report+0x3ba/0x9c0 net/devlink/health.c:627
2 locks held by syz.1.2900/9123:
 #0: ffff88801fa98428 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3805 [inline]
 #0: ffff88801fa98428 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4052
 #1: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #1: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3815 [inline]
 #1: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4052
2 locks held by syz.3.2901/9124:
 #0: ffff88801fa98428 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3805 [inline]
 #0: ffff88801fa98428 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4052
 #1: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:884 [inline]
 #1: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3815 [inline]
 #1: ffff88805e1115a8 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4052
4 locks held by syz-executor/9151:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805ee91088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9162:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88814d160888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9164:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888142747888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9189:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888062c20488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9199:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880330da488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9211:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880365c2c88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9212:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88807c9ab888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9235:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888034c0e488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9247:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805d9b8488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9258:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880595a9488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216
4 locks held by syz-executor/9260:
 #0: ffff888031a1e428 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880595f0888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffff888140b5da58 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2b2/0x510 fs/kernfs/file.c:326
 #3: ffffffff8f8ebb28 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:216

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xf70/0x12c0 kernel/hung_task.c:470
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d7/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:82
Code: 4b 6f 02 e9 93 fb 02 00 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d 93 c7 25 00 fb f4 <c3> cc cc cc cc 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
RSP: 0018:ffffc90000197df8 EFLAGS: 000002c6
RAX: 0000000000112db5 RBX: 0000000000000001 RCX: ffffffff8b844c49
RDX: 0000000000000000 RSI: ffffffff8de2be6b RDI: ffffffff8c1578e0
RBP: ffffed1003cd8b40 R08: 0000000000000001 R09: ffffed10170a6645
R10: ffff8880b853322b R11: 0000000000000001 R12: 0000000000000001
R13: ffff88801e6c5a00 R14: ffffffff90a99a50 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888124821000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007feb59d9f000 CR3: 000000000e382000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 arch_safe_halt arch/x86/include/asm/paravirt.h:107 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:749
 default_idle_call+0x6d/0xb0 kernel/sched/idle.c:117
 cpuidle_idle_call kernel/sched/idle.c:185 [inline]
 do_idle+0x391/0x510 kernel/sched/idle.c:325
 cpu_startup_entry+0x4f/0x60 kernel/sched/idle.c:423
 start_secondary+0x21d/0x2b0 arch/x86/kernel/smpboot.c:315
 common_startup_64+0x13e/0x148
 </TASK>
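Read together, the traces and lock dump above show the hang pattern: the netns cleanup worker holds the devlink instance lock and waits inside debugfs_remove() (__debugfs_file_removed) for users of the netdevsim health debugfs file to drain, while the task writing that file sits in devlink_health_report() waiting for the same devlink lock; del_device_store() and the openat() callers then queue up behind them on the device lock and the parent inode lock. A rough manual outline of the colliding operations follows; it is only a sketch, and the debugfs file name ("break_health") and the exact sequence needed to hit the window are assumptions — the syz reproducer linked in the crash table below is the authoritative trigger:

    # create a netdevsim device (id 1, 1 port)
    echo "1 1" > /sys/bus/netdevsim/new_device
    # move its devlink instance into a separate network namespace
    ip netns add ns0
    devlink dev reload netdevsim/netdevsim1 netns ns0
    # writer side: nsim_dev_health_break_write() -> devlink_health_report(),
    # which takes the devlink instance lock while still inside the debugfs file op
    while :; do echo 1 > /sys/kernel/debug/netdevsim/netdevsim1/health/break_health; done &
    # teardown side: destroying the namespace runs devlink_pernet_pre_exit() ->
    # devlink_reload() -> nsim_dev_health_exit() -> debugfs_remove(), which waits
    # for the writer to leave the file while the devlink lock is held
    ip netns del ns0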

Crashes (48):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/07/09 10:43 upstream 733923397fd9 f4e5e155 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 23:37 upstream ee88bddf7f2f 1ae8177e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 21:11 upstream 24770983ccfe ed3e87f7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 19:14 upstream e04c78d86a96 d1716036 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/14 01:31 upstream 27605c8c0f69 0e8da31f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 04:42 upstream 7f9039c524a3 a30356b7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/06 06:41 upstream 01f95500a162 ae98e6b9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 17:07 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 13:30 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 09:08 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 11:55 upstream 7eb172143d55 c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 06:01 upstream 5cf80612d3f7 d34966d1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 19:23 upstream 6537cfb395f3 cbd8edab .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/17 00:28 upstream ba643b6d8440 40a34ec9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 22:20 upstream 128c8f96eb86 fe17639f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/02 11:04 upstream 56e6a3499e14 d3ccff63 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 06:52 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/30 05:07 upstream afa9a6f4f574 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 13:46 upstream ee88bddf7f2f 1ae8177e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/23 14:54 upstream 86731a2a651e d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 17:34 upstream 24770983ccfe ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 16:08 upstream e04c78d86a96 d1716036 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/13 22:55 upstream 27605c8c0f69 0e8da31f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/07 08:34 upstream c0c9379f235d 4826c28e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 01:33 upstream 7f9039c524a3 a30356b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/08 11:15 upstream d76bb1ebb558 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/04 18:16 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/01 22:55 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/28 08:28 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/19 16:36 upstream 8560697b23dc 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 05:17 upstream b5c6891b2c5b 2a20f901 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 00:04 upstream b5c6891b2c5b 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/10 15:19 upstream 2eb959eeecc6 1ef3ab4d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/26 00:06 upstream 2df0c02dab82 89d30d73 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/21 02:08 upstream 5fc319360819 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/16 22:28 upstream cb82ca153949 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/12 20:56 upstream 0fed89a961ea 1a5d9317 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 10:19 upstream 7eb172143d55 c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/01 11:57 upstream 276f98efb64a 67cf5345 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/26 01:08 upstream 2a1944bff549 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 02:01 upstream 5cf80612d3f7 d34966d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 15:35 upstream 6537cfb395f3 cbd8edab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 20:19 upstream 128c8f96eb86 fe17639f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/31 12:49 upstream 2a9f04bde07a 4c6ac32f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/16 20:46 upstream ce69b4019001 f9e07a6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/23 02:17 upstream bcde95ce32b6 b4fbdbd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/22 04:11 upstream 48f506ad0b68 d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/19 21:16 upstream eabcdba3ad40 1d58202c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
* Struck through repros no longer work on HEAD.