syzbot


INFO: task hung in remove_one

Status: upstream: reported syz repro on 2025/01/06 11:11
Subsystems: kernel
Reported-by: syzbot+3147c5de186107ffc7a1@syzkaller.appspotmail.com
First crash: 354d, last: 4d09h
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [kernel?] INFO: task hung in remove_one 0 (1) 2025/01/06 11:11
Last patch testing requests (10)
Created Duration User Patch Repo Result
2025/11/01 12:43 19m retest repro upstream report log
2025/11/01 12:43 18m retest repro upstream report log
2025/11/01 12:43 18m retest repro upstream report log
2025/11/01 12:43 19m retest repro upstream report log
2025/11/01 12:43 19m retest repro upstream report log
2025/05/22 11:16 21m retest repro upstream report log
2025/05/22 11:16 21m retest repro upstream OK log
2025/04/09 01:34 28m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log

Sample crash report:
INFO: task kworker/u8:2:36 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:2    state:D stack:24744 pid:36    tgid:36    ppid:2      task_flags:0x4208160 flags:0x00080000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common+0x2d7/0x4c0 kernel/sched/completion.c:100
 __wait_for_common kernel/sched/completion.c:121 [inline]
 wait_for_common kernel/sched/completion.c:132 [inline]
 wait_for_completion+0x49/0x60 kernel/sched/completion.c:153
 __debugfs_file_removed fs/debugfs/inode.c:770 [inline]
 remove_one+0x312/0x420 fs/debugfs/inode.c:777
 __simple_recursive_removal+0x15b/0x610 fs/libfs.c:631
 debugfs_remove+0x5d/0x80 fs/debugfs/inode.c:800
 nsim_dev_health_exit+0x3b/0xe0 drivers/net/netdevsim/health.c:227
 nsim_dev_reload_destroy+0x144/0x4d0 drivers/net/netdevsim/dev.c:1766
 nsim_dev_reload_down+0x66/0xd0 drivers/net/netdevsim/dev.c:1038
 devlink_reload+0x1a1/0x7c0 net/devlink/dev.c:461
 devlink_pernet_pre_exit+0x1a0/0x2b0 net/devlink/core.c:509
 ops_pre_exit_list net/core/net_namespace.c:161 [inline]
 ops_undo_list+0x187/0xab0 net/core/net_namespace.c:234
 cleanup_net+0x41b/0x830 net/core/net_namespace.c:696
 process_one_work+0x9ba/0x1b20 kernel/workqueue.c:3257
 process_scheduled_works kernel/workqueue.c:3340 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3421
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
INFO: task syz-executor:8980 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24440 pid:8980  tgid:8980  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7017
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xcf8/0x1b10 kernel/locking/mutex.c:776
 device_lock include/linux/device.h:914 [inline]
 device_del+0xa0/0x9f0 drivers/base/core.c:3840
 device_unregister+0x1d/0xc0 drivers/base/core.c:3919
 nsim_bus_dev_del drivers/net/netdevsim/bus.c:483 [inline]
 del_device_store+0x355/0x4a0 drivers/net/netdevsim/bus.c:244
 bus_attr_store+0x74/0xb0 drivers/base/bus.c:172
 sysfs_kf_write+0xf2/0x150 fs/sysfs/file.c:142
 kernfs_fop_write_iter+0x3af/0x570 fs/kernfs/file.c:352
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x7d3/0x11d0 fs/read_write.c:686
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb6cb18e27f
RSP: 002b:00007ffd36bd8cf0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007fb6cb18e27f
RDX: 0000000000000001 RSI: 00007ffd36bd8d40 RDI: 0000000000000005
RBP: 00007fb6cb2152cb R08: 0000000000000000 R09: 00007ffd36bd8b47
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007ffd36bd8d40 R14: 00007fb6cbf14620 R15: 0000000000000003
 </TASK>
INFO: task syz.0.2845:8996 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.2845      state:D stack:28504 pid:8996  tgid:8996  ppid:8402   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7017
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xcf8/0x1b10 kernel/locking/mutex.c:776
 devlink_health_report+0x6b4/0xaa0 net/devlink/health.c:680
 nsim_dev_health_break_write+0x166/0x210 drivers/net/netdevsim/health.c:162
 full_proxy_write+0x131/0x1a0 fs/debugfs/file.c:388
 vfs_write+0x2a0/0x11d0 fs/read_write.c:684
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6661d8f7c9
RSP: 002b:00007ffc213a30d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f6661fe5fa0 RCX: 00007f6661d8f7c9
RDX: 0000000000000006 RSI: 0000200000005900 RDI: 0000000000000003
RBP: 00007f6661e13f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6661fe5fa0 R14: 00007f6661fe5fa0 R15: 0000000000000003
 </TASK>
INFO: task syz.3.2870:9024 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2870      state:D stack:28584 pid:9024  tgid:9024  ppid:5974   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7017
 rwsem_down_read_slowpath+0x64b/0xbf0 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xef/0x460 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1042 [inline]
 open_last_lookups fs/namei.c:4539 [inline]
 path_openat+0x1248/0x3140 fs/namei.c:4784
 do_filp_open+0x20b/0x470 fs/namei.c:4814
 do_sys_openat2+0x11f/0x280 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_openat fs/open.c:1452 [inline]
 __se_sys_openat fs/open.c:1447 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1447
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4edc58f7c9
RSP: 002b:00007fff520f1268 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f4edc7e5fa0 RCX: 00007f4edc58f7c9
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f4edc613f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4edc7e5fa0 R14: 00007f4edc7e5fa0 R15: 0000000000000004
 </TASK>
INFO: task syz.1.2869:9025 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2869      state:D stack:28584 pid:9025  tgid:9025  ppid:5972   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7017
 rwsem_down_read_slowpath+0x64b/0xbf0 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xef/0x460 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:1042 [inline]
 open_last_lookups fs/namei.c:4539 [inline]
 path_openat+0x1248/0x3140 fs/namei.c:4784
 do_filp_open+0x20b/0x470 fs/namei.c:4814
 do_sys_openat2+0x11f/0x280 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_openat fs/open.c:1452 [inline]
 __se_sys_openat fs/open.c:1447 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1447
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fbe5e38f7c9
RSP: 002b:00007ffd9ebbd648 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fbe5e5e5fa0 RCX: 00007fbe5e38f7c9
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007fbe5e413f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fbe5e5e5fa0 R14: 00007fbe5e5e5fa0 R15: 0000000000000004
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u8:1/13:
 #0: ffff8880b843acd8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x29/0x130 kernel/sched/core.c:639
 #1: ffff8880b8424508 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:220 [inline]
 #1: ffff8880b8424508 (psi_seq){-.-.}-{0:0}, at: __schedule+0x19b1/0x6150 kernel/sched/core.c:6857
1 lock held by khungtaskd/31:
 #0: ffffffff8e3c9140 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3c9140 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e3c9140 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
6 locks held by kworker/u8:2/36:
 #0: ffff88801badf148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x128d/0x1b20 kernel/workqueue.c:3232
 #1: ffffc90000ac7c90 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x914/0x1b20 kernel/workqueue.c:3233
 #2: ffffffff901042f0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x830 net/core/net_namespace.c:670
 #3: ffff88805a1970e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:914 [inline]
 #3: ffff88805a1970e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff88805a1970e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x12c/0x2b0 net/devlink/core.c:506
 #4: ffff888029b88250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff888029b88250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff888029b88250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x136/0x2b0 net/devlink/core.c:506
 #5: ffff888059a68b58 (&sb->s_type->i_mutex_key#8/2){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #5: ffff888059a68b58 (&sb->s_type->i_mutex_key#8/2){+.+.}-{4:4}, at: __simple_recursive_removal+0x354/0x610 fs/libfs.c:627
2 locks held by getty/5594:
 #0: ffff88814df750a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
5 locks held by syz-executor/8980:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805e860088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
 #4: ffff88805a1970e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:914 [inline]
 #4: ffff88805a1970e8 (&dev->mutex){....}-{4:4}, at: device_del+0xa0/0x9f0 drivers/base/core.c:3840
2 locks held by syz.0.2845/8996:
 #0: ffff888141ecc420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888029b88250 (&devlink->lock_key#4){+.+.}-{4:4}, at: devlink_health_report+0x6b4/0xaa0 net/devlink/health.c:680
2 locks held by syz.3.2870/9024:
 #0: ffff888141ecc420 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:4529 [inline]
 #0: ffff888141ecc420 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x183a/0x3140 fs/namei.c:4784
 #1: ffff888059a68b58 (&sb->s_type->i_mutex_key#16){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1042 [inline]
 #1: ffff888059a68b58 (&sb->s_type->i_mutex_key#16){++++}-{4:4}, at: open_last_lookups fs/namei.c:4539 [inline]
 #1: ffff888059a68b58 (&sb->s_type->i_mutex_key#16){++++}-{4:4}, at: path_openat+0x1248/0x3140 fs/namei.c:4784
2 locks held by syz.1.2869/9025:
 #0: ffff888141ecc420 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:4529 [inline]
 #0: ffff888141ecc420 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x183a/0x3140 fs/namei.c:4784
 #1: ffff888059a68b58 (&sb->s_type->i_mutex_key#16){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1042 [inline]
 #1: ffff888059a68b58 (&sb->s_type->i_mutex_key#16){++++}-{4:4}, at: open_last_lookups fs/namei.c:4539 [inline]
 #1: ffff888059a68b58 (&sb->s_type->i_mutex_key#16){++++}-{4:4}, at: path_openat+0x1248/0x3140 fs/namei.c:4784
4 locks held by syz-executor/9033:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888059c9ac88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9034:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88803088a888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9037:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88807b7cb088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9068:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805e84a088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9081:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805a963888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9082:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805e0a4088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9087:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888030bbec88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9116:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88802f856088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9129:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888034d0bc88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9130:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888075a81488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/9135:
 #0: ffff888035356420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888075f47888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff8880279102d8 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6952a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf14/0x1140 kernel/hung_task.c:495
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:native_apic_msr_eoi+0xf/0x20 arch/x86/include/asm/apic.h:218
Code: 01 00 00 00 c3 cc cc cc cc 66 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 31 c0 b9 0b 08 00 00 89 c2 0f 30 <e9> cc 2b f7 09 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90
RSP: 0018:ffffc90000a08c68 EFLAGS: 00000046
RAX: 0000000000000000 RBX: ffffc90000a08c98 RCX: 000000000000080b
RDX: 0000000000000000 RSI: ffffffff81686e6a RDI: ffffc90000a08c98
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888124a93000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055dba477a660 CR3: 0000000073ae4000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 apic_eoi arch/x86/include/asm/apic.h:414 [inline]
 __sysvec_call_function_single+0xf/0x3b0 arch/x86/kernel/smp.c:268
 instr_sysvec_call_function_single arch/x86/kernel/smp.c:266 [inline]
 sysvec_call_function_single+0x52/0xc0 arch/x86/kernel/smp.c:266
 asm_sysvec_call_function_single+0x1a/0x20 arch/x86/include/asm/idtentry.h:704
RIP: 0010:__raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
RIP: 0010:_raw_spin_unlock_irq+0x29/0x50 kernel/locking/spinlock.c:202
Code: 90 f3 0f 1e fa 53 48 8b 74 24 08 48 89 fb 48 83 c7 18 e8 ca a2 37 f6 48 89 df e8 42 f6 37 f6 e8 9d 43 64 f6 fb bf 01 00 00 00 <e8> 82 10 28 f6 65 8b 05 2b 9f 47 08 85 c0 74 06 5b e9 51 4c 00 00
RSP: 0018:ffffc90000a08d40 EFLAGS: 00000202
RAX: 0000000000144434 RBX: ffff8880b85261c0 RCX: ffffffff81c5739f
RDX: 0000000000000000 RSI: ffffffff8daa5989 RDI: 0000000000000001
RBP: ffffffff864d8d50 R08: 0000000000000001 R09: 0000000000000001
R10: ffffffff908632d7 R11: 000000005039a2e5 R12: dffffc0000000000
R13: ffff8880b85261c0 R14: fffff520001411c2 R15: ffffc90000a08e10
 expire_timers kernel/time/timer.c:1798 [inline]
 __run_timers+0x73a/0xae0 kernel/time/timer.c:2373
 __run_timer_base kernel/time/timer.c:2385 [inline]
 __run_timer_base kernel/time/timer.c:2377 [inline]
 run_timer_base+0x114/0x190 kernel/time/timer.c:2394
 run_timer_softirq+0x1a/0x40 kernel/time/timer.c:2404
 handle_softirqs+0x219/0x8b0 kernel/softirq.c:622
 __do_softirq kernel/softirq.c:656 [inline]
 invoke_softirq kernel/softirq.c:496 [inline]
 __irq_exit_rcu+0x109/0x170 kernel/softirq.c:723
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:739
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1056 [inline]
 sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1056
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:82
Code: 86 76 02 e9 93 2f 03 00 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d 73 f1 2b 00 fb f4 <c3> cc cc cc cc 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
RSP: 0018:ffffc90000197de8 EFLAGS: 000002c2
RAX: 000000000014442f RBX: 0000000000000001 RCX: ffffffff8b5e66f9
RDX: 0000000000000000 RSI: ffffffff8daa5989 RDI: ffffffff8bf1d600
RBP: ffffed1003ad7b70 R08: 0000000000000001 R09: ffffed10170a671d
R10: ffff8880b85338eb R11: 00000000ffffffff R12: 0000000000000001
R13: ffff88801d6bdb80 R14: ffffffff908632d0 R15: 0000000000000000
 arch_safe_halt arch/x86/include/asm/paravirt.h:107 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:767
 default_idle_call+0x6c/0xb0 kernel/sched/idle.c:122
 cpuidle_idle_call kernel/sched/idle.c:191 [inline]
 do_idle+0x38d/0x510 kernel/sched/idle.c:332
 cpu_startup_entry+0x4f/0x60 kernel/sched/idle.c:430
 start_secondary+0x21d/0x2b0 arch/x86/kernel/smpboot.c:312
 common_startup_64+0x13e/0x148
 </TASK>
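Read together, the traces and lock lists above show a dependency cycle rather than a single stuck task: kworker/u8:2 (cleanup_net) holds the device mutex and the devlink instance lock, then blocks in debugfs_remove() waiting for in-flight file handlers to finish; meanwhile syz.0.2845 is inside exactly such a handler (the netdevsim health "break" debugfs write) and blocks in devlink_health_report() waiting for the devlink lock that cleanup_net holds. Neither side can make progress. The following toy model (Python, purely illustrative; all names are mine, not kernel APIs) sketches that shape, with `devlink_lock` standing in for the devlink instance lock and an event standing in for the removal completion:

```python
import threading

# Toy model of the hang above (assumptions, not kernel code):
# - the main thread plays kworker/u8:2 (cleanup_net): it takes devlink_lock,
#   then, like __debugfs_file_removed(), waits for in-flight handlers to exit
# - debugfs_handler plays syz.0.2845: a debugfs write handler that in turn
#   needs devlink_lock (devlink_health_report), so neither side can proceed

devlink_lock = threading.Lock()
handler_in_flight = threading.Event()
handler_finished = threading.Event()

def debugfs_handler():
    handler_in_flight.set()                 # handler has entered the file op
    if devlink_lock.acquire(timeout=5.0):   # blocks: "cleanup" holds the lock
        devlink_lock.release()
    handler_finished.set()

devlink_lock.acquire()                      # cleanup side takes devlink_lock first
t = threading.Thread(target=debugfs_handler)
t.start()
handler_in_flight.wait()
# debugfs_remove() analogue: wait for the in-flight handler to return.
stuck = not handler_finished.wait(timeout=1.0)
devlink_lock.release()                      # break the cycle so the demo exits
t.join()
print("deadlock reproduced in toy model:", stuck)
```

In the real report there is no timeout to break the cycle, which is why khungtaskd fires after 143 seconds instead.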

Crashes (111):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2025/12/04 17:05 upstream 8f7aa3d3c732 d1b870e1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/27 22:43 upstream 765e56e41a5a e8331348 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/26 00:29 upstream 8a2bcda5e139 64219f15 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/16 02:21 upstream f824272b6e3f f7988ea4 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/11 02:37 upstream 4ea7c1717f3f 4e1406b4 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/08 17:22 upstream e811c33b1f13 4e1406b4 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/11 08:21 upstream 917167ed1211 ff1712fe .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/02 09:12 upstream d3479214c05d 267f56c6 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/23 23:27 upstream cec1e6e5d1ab e667a34f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/22 18:41 upstream 07e27ad16399 770ff59f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/11 13:37 upstream 7aac71907bde e2beed91 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/03 01:35 upstream e6b9dce0aeeb 96a211bc .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 09:56 upstream dfd4b508c8c6 1804e95e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 20:20 upstream c30a13538d9f 32a0e5ed .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/02 13:03 upstream a6923c06a3b2 7368264b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/16 00:32 upstream 155a3c003e55 03fcfc4b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 10:43 upstream 733923397fd9 f4e5e155 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 23:37 upstream ee88bddf7f2f 1ae8177e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 21:11 upstream 24770983ccfe ed3e87f7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 19:14 upstream e04c78d86a96 d1716036 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/14 01:31 upstream 27605c8c0f69 0e8da31f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 04:42 upstream 7f9039c524a3 a30356b7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/06 06:41 upstream 01f95500a162 ae98e6b9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 17:07 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 13:30 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 09:08 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 11:55 upstream 7eb172143d55 c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 06:01 upstream 5cf80612d3f7 d34966d1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 19:23 upstream 6537cfb395f3 cbd8edab .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/17 00:28 upstream ba643b6d8440 40a34ec9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 22:20 upstream 128c8f96eb86 fe17639f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/02 11:04 upstream 56e6a3499e14 d3ccff63 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/04 13:52 upstream 8f7aa3d3c732 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/12/03 01:29 upstream 4a26e7032d7d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/28 23:00 upstream e538109ac71d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/27 19:00 upstream 765e56e41a5a e8331348 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/25 20:27 upstream 8a2bcda5e139 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/24 10:20 upstream ac3fd01e4c1e bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/24 01:18 upstream d0e88704d96c 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/15 23:48 upstream f824272b6e3f f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/15 10:50 upstream 7a0892d2836e f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/15 06:03 upstream 7a0892d2836e f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/14 15:56 upstream 6da43bbeb691 6d98c1c8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/12 22:48 upstream 24172e0d7990 07e030de .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/11 23:38 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/10 23:39 upstream 4ea7c1717f3f 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/09 17:07 upstream 439fc29dfd3b 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/08 14:35 upstream e811c33b1f13 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/11/03 22:03 upstream 6146a0f1dfae e6c64ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/18 12:23 upstream f406055cb18c 1c8c8cd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/11 05:56 upstream 917167ed1211 ff1712fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/02 05:06 upstream d3479214c05d 267f56c6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/25 16:43 upstream bf40f4b87761 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/23 19:25 upstream cec1e6e5d1ab e667a34f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/22 15:13 upstream 07e27ad16399 770ff59f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/18 20:13 upstream 8b789f2b7602 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/16 11:24 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/15 10:18 upstream 79e8447ec662 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/11 09:34 upstream 7aac71907bde e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/09 16:24 upstream f777d1112ee5 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/08 22:35 upstream f777d1112ee5 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/07 07:38 upstream b236920731dd d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/06 16:20 upstream d1d10cea0895 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/02 22:14 upstream e6b9dce0aeeb 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/29 06:59 upstream 07d9df80082b d401b9d7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/19 20:03 upstream b19a97d57c15 523f460e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 05:26 upstream dfd4b508c8c6 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 14:32 upstream 8f5ae30d69d7 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 07:56 upstream 8f5ae30d69d7 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 01:41 upstream 2b38afce25c4 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/10 07:55 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 17:21 upstream c30a13538d9f 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/05 17:51 upstream 5998f2bca43e 37880f40 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 05:17 upstream b5c6891b2c5b 2a20f901 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/19 21:16 upstream eabcdba3ad40 1d58202c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
* Struck-through repros no longer work on HEAD.