syzbot


INFO: task hung in remove_one

Status: upstream: reported syz repro on 2025/01/06 11:11
Subsystems: kernel
Reported-by: syzbot+3147c5de186107ffc7a1@syzkaller.appspotmail.com
First crash: 306d, last: 4d02h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [kernel?] INFO: task hung in remove_one | 0 (1) | 2025/01/06 11:11
Last patch testing requests (10)
Created Duration User Patch Repo Result
2025/05/22 11:16 21m retest repro upstream report log
2025/05/22 11:16 21m retest repro upstream OK log
2025/04/09 01:34 28m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 27m retest repro upstream report log
2025/04/09 01:34 20m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log
2025/04/09 00:08 15m retest repro upstream report log
2025/04/09 00:08 16m retest repro upstream report log

Sample crash report:
INFO: task kworker/u8:8:3452 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:8    state:D stack:24968 pid:3452  tgid:3452  ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2f9/0x4e0 kernel/sched/completion.c:121
 __debugfs_file_removed fs/debugfs/inode.c:770 [inline]
 remove_one+0x312/0x420 fs/debugfs/inode.c:777
 __simple_recursive_removal+0x158/0x610 fs/libfs.c:631
 debugfs_remove+0x5d/0x80 fs/debugfs/inode.c:800
 nsim_dev_health_exit+0x3b/0xe0 drivers/net/netdevsim/health.c:227
 nsim_dev_reload_destroy+0x144/0x4d0 drivers/net/netdevsim/dev.c:1710
 nsim_dev_reload_down+0x6e/0xd0 drivers/net/netdevsim/dev.c:983
 devlink_reload+0x19e/0x7c0 net/devlink/dev.c:461
 devlink_pernet_pre_exit+0x1a0/0x2b0 net/devlink/core.c:509
 ops_pre_exit_list net/core/net_namespace.c:161 [inline]
 ops_undo_list+0x184/0xab0 net/core/net_namespace.c:234
 cleanup_net+0x41b/0x8b0 net/core/net_namespace.c:695
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3263
 process_scheduled_works kernel/workqueue.c:3346 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3427
 kthread+0x3c2/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:8705 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24296 pid:8705  tgid:8705  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x818/0x1060 kernel/locking/mutex.c:760
 device_lock include/linux/device.h:914 [inline]
 device_del+0xa0/0x9f0 drivers/base/core.c:3840
 device_unregister+0x1d/0xc0 drivers/base/core.c:3919
 nsim_bus_dev_del drivers/net/netdevsim/bus.c:483 [inline]
 del_device_store+0x355/0x4a0 drivers/net/netdevsim/bus.c:244
 bus_attr_store+0x71/0xb0 drivers/base/bus.c:172
 sysfs_kf_write+0xf2/0x150 fs/sysfs/file.c:142
 kernfs_fop_write_iter+0x3af/0x570 fs/kernfs/file.c:352
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x7d3/0x11d0 fs/read_write.c:686
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f69bb98d97f
RSP: 002b:00007fff38cce970 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f69bb98d97f
RDX: 0000000000000001 RSI: 00007fff38cce9c0 RDI: 0000000000000005
RBP: 00007f69bba13239 R08: 0000000000000000 R09: 00007fff38cce7c7
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007fff38cce9c0 R14: 00007f69bc714620 R15: 0000000000000003
 </TASK>
INFO: task syz.0.2545:8710 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.2545      state:D stack:27288 pid:8710  tgid:8710  ppid:5986   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x818/0x1060 kernel/locking/mutex.c:760
 devlink_health_report+0x6b4/0xb00 net/devlink/health.c:680
 nsim_dev_health_break_write+0x166/0x210 drivers/net/netdevsim/health.c:162
 full_proxy_write+0x12e/0x1a0 fs/debugfs/file.c:388
 vfs_write+0x2a0/0x11d0 fs/read_write.c:684
 ksys_write+0x12a/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6e21d8eec9
RSP: 002b:00007ffceae04908 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f6e21fe5fa0 RCX: 00007f6e21d8eec9
RDX: 0000000000000002 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007f6e21e11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f6e21fe5fa0 R14: 00007f6e21fe5fa0 R15: 0000000000000003
 </TASK>
INFO: task syz.3.2553:8720 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2553      state:D stack:27288 pid:8720  tgid:8720  ppid:5985   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 rwsem_down_read_slowpath+0x64b/0xbf0 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:995 [inline]
 open_last_lookups fs/namei.c:3894 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4131
 do_filp_open+0x20b/0x470 fs/namei.c:4161
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f389118eec9
RSP: 002b:00007ffc7348e618 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f38913e5fa0 RCX: 00007f389118eec9
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f3891211f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f38913e5fa0 R14: 00007f38913e5fa0 R15: 0000000000000004
 </TASK>
INFO: task syz.1.2554:8722 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.2554      state:D stack:27288 pid:8722  tgid:8722  ppid:8485   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 rwsem_down_read_slowpath+0x64b/0xbf0 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0xef/0x480 kernel/locking/rwsem.c:1539
 inode_lock_shared include/linux/fs.h:995 [inline]
 open_last_lookups fs/namei.c:3894 [inline]
 path_openat+0x818/0x2cb0 fs/namei.c:4131
 do_filp_open+0x20b/0x470 fs/namei.c:4161
 do_sys_openat2+0x11b/0x1d0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x174/0x210 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f619e98eec9
RSP: 002b:00007ffe1134eca8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f619ebe5fa0 RCX: 00007f619e98eec9
RDX: 0000000000048081 RSI: 0000200000000000 RDI: ffffffffffffff9c
RBP: 00007f619ea11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f619ebe5fa0 R14: 00007f619ebe5fa0 R15: 0000000000000004
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e3c4320 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3c4320 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e3c4320 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
6 locks held by kworker/u8:8/3452:
 #0: ffff88801ba9f148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc9000bc17d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff900e8770 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x8b0 net/core/net_namespace.c:669
 #3: ffff88802863b0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:914 [inline]
 #3: ffff88802863b0e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff88802863b0e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x12c/0x2b0 net/devlink/core.c:506
 #4: ffff88802863c250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff88802863c250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff88802863c250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x136/0x2b0 net/devlink/core.c:506
 #5: ffff8880587eda70 (&sb->s_type->i_mutex_key#3/2){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1025 [inline]
 #5: ffff8880587eda70 (&sb->s_type->i_mutex_key#3/2){+.+.}-{4:4}, at: __simple_recursive_removal+0x354/0x610 fs/libfs.c:627
2 locks held by getty/5595:
 #0: ffff88814d81d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
5 locks held by syz-executor/8705:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880603a7888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
 #4: ffff88802863b0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:914 [inline]
 #4: ffff88802863b0e8 (&dev->mutex){....}-{4:4}, at: device_del+0xa0/0x9f0 drivers/base/core.c:3840
2 locks held by syz.0.2545/8710:
 #0: ffff88801e6f2420 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88802863c250 (&devlink->lock_key#5){+.+.}-{4:4}, at: devlink_health_report+0x6b4/0xb00 net/devlink/health.c:680
2 locks held by syz.3.2553/8720:
 #0: ffff88801e6f2420 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3884 [inline]
 #0: ffff88801e6f2420 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4131
 #1: ffff8880587eda70 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:995 [inline]
 #1: ffff8880587eda70 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3894 [inline]
 #1: ffff8880587eda70 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4131
2 locks held by syz.1.2554/8722:
 #0: ffff88801e6f2420 (sb_writers#8){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:3884 [inline]
 #0: ffff88801e6f2420 (sb_writers#8){.+.+}-{0:0}, at: path_openat+0x1ec8/0x2cb0 fs/namei.c:4131
 #1: ffff8880587eda70 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:995 [inline]
 #1: ffff8880587eda70 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: open_last_lookups fs/namei.c:3894 [inline]
 #1: ffff8880587eda70 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: path_openat+0x818/0x2cb0 fs/namei.c:4131
4 locks held by syz-executor/8729:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805b99c488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8732:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880580d2088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8735:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805ad17088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8765:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff88805a184088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8777:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888058bac088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8780:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880237b0088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8782:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880580b2c88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8813:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff8880580d7488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8827:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888059647488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8829:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888060527088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234
4 locks held by syz-executor/8832:
 #0: ffff888034fc8420 (sb_writers#7){.+.+}-{0:0}, at: ksys_write+0x12a/0x250 fs/read_write.c:738
 #1: ffff888025cb4888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x570 fs/kernfs/file.c:343
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff88814476c968 (kn->active#58){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2ff/0x570 fs/kernfs/file.c:344
 #3: ffffffff8f6825a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xd1/0x4a0 drivers/net/netdevsim/bus.c:234

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3f/0x1170 kernel/hung_task.c:495
 kthread+0x3c2/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 3551 Comm: kworker/u8:13 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:trace_hardirqs_off+0x14/0x40 kernel/trace/trace_preemptirq.c:106
Code: 22 91 5f 00 eb 8a 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 8b 3c 24 e8 73 1f 9e 09 65 8b 05 d0 f8 e0 11 <85> c0 74 05 c3 cc cc cc cc 65 c7 05 bc f8 e0 11 01 00 00 00 48 8b
RSP: 0018:ffffc9000bfb7918 EFLAGS: 00000002
RAX: 0000000000000000 RBX: ffff88813ff1e558 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8daffadf RDI: ffffffff8bf1d740
RBP: ffffffff821882bc R08: 0000000000000007 R09: 0000000000000000
R10: 0000000000000200 R11: 0000000000000000 R12: 00000000000002c0
R13: ffffea0000086200 R14: fffffffffffffeff R15: 8000000000000063
FS:  0000000000000000(0000) GS:ffff8881249e4000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005627a1faaa38 CR3: 000000000e182000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 __text_poke+0x6e9/0xb70 arch/x86/kernel/alternative.c:2474
 smp_text_poke_batch_finish+0x4f1/0xdb0 arch/x86/kernel/alternative.c:2885
 arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
 jump_label_update+0x376/0x550 kernel/jump_label.c:919
 static_key_disable_cpuslocked+0x158/0x1c0 kernel/jump_label.c:240
 static_key_disable+0x1a/0x20 kernel/jump_label.c:248
 toggle_allocation_gate mm/kfence/core.c:857 [inline]
 toggle_allocation_gate+0x145/0x280 mm/kfence/core.c:844
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3263
 process_scheduled_works kernel/workqueue.c:3346 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3427
 kthread+0x3c2/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
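The hung-task and lock dumps above close into a wait cycle: cleanup_net (kworker/u8:8) holds devlink->lock_key and sits in __debugfs_file_removed waiting for the last active reference to the nsim health debugfs file to drop, while syz.0.2545 holds exactly such a file reference inside full_proxy_write and blocks on devlink->lock_key in devlink_health_report; device_del in syz-executor then queues behind dev->mutex, which the same kworker holds. As an illustration only (this is not kernel code; the task and lock names are simplified labels taken from the report above), the cycle can be checked with a small wait-for graph:

```python
# Illustrative wait-for graph built from the lock dump above.
# Names are simplified labels from the report, not kernel objects.

holders = {
    "devlink->lock_key": "kworker/u8:8",       # taken via devl_dev_lock in devlink_pernet_pre_exit
    "dev->mutex": "kworker/u8:8",              # taken via device_lock in devlink_pernet_pre_exit
    "debugfs file ref": "syz.0.2545",          # active reference held across full_proxy_write
    "nsim_bus_dev_list_lock": "syz-executor",  # taken in del_device_store
}

waits_for = {
    "kworker/u8:8": "debugfs file ref",   # __debugfs_file_removed waits for refs to drop
    "syz.0.2545": "devlink->lock_key",    # devlink_health_report blocks on the devlink lock
    "syz-executor": "dev->mutex",         # device_del blocks on the device lock
}

def find_cycle(start):
    """Follow task -> wanted-lock -> holding-task edges until a task repeats."""
    seen, task = [], start
    while task not in seen:
        seen.append(task)
        lock = waits_for.get(task)
        if lock is None or lock not in holders:
            return None          # chain ends: no deadlock reachable from start
        task = holders[lock]
    return seen[seen.index(task):]  # the repeating suffix is the cycle

print(find_cycle("syz-executor"))
# → ['kworker/u8:8', 'syz.0.2545']
```

Note that syz-executor itself is not part of the two-task cycle; it (and the many identical 4-lock holders listed above) merely queues behind it, which is why a single stuck debugfs reference fans out into a system-wide pile-up on nsim_bus_dev_list_lock.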

Crashes (87):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/10/11 08:21 upstream 917167ed1211 ff1712fe .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/02 09:12 upstream d3479214c05d 267f56c6 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/23 23:27 upstream cec1e6e5d1ab e667a34f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/22 18:41 upstream 07e27ad16399 770ff59f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/11 13:37 upstream 7aac71907bde e2beed91 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/03 01:35 upstream e6b9dce0aeeb 96a211bc .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 09:56 upstream dfd4b508c8c6 1804e95e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 20:20 upstream c30a13538d9f 32a0e5ed .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/02 13:03 upstream a6923c06a3b2 7368264b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/16 00:32 upstream 155a3c003e55 03fcfc4b .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 10:43 upstream 733923397fd9 f4e5e155 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 23:37 upstream ee88bddf7f2f 1ae8177e .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 21:11 upstream 24770983ccfe ed3e87f7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 19:14 upstream e04c78d86a96 d1716036 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/14 01:31 upstream 27605c8c0f69 0e8da31f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 04:42 upstream 7f9039c524a3 a30356b7 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/06 06:41 upstream 01f95500a162 ae98e6b9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 17:07 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 13:30 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/04 09:08 upstream 99fa936e8e4f c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/03/03 11:55 upstream 7eb172143d55 c3901742 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/23 06:01 upstream 5cf80612d3f7 d34966d1 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/19 19:23 upstream 6537cfb395f3 cbd8edab .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/17 00:28 upstream ba643b6d8440 40a34ec9 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/02/14 22:20 upstream 128c8f96eb86 fe17639f .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/01/02 11:04 upstream 56e6a3499e14 d3ccff63 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/18 12:23 upstream f406055cb18c 1c8c8cd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/11 05:56 upstream 917167ed1211 ff1712fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/10/02 05:06 upstream d3479214c05d 267f56c6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/25 16:43 upstream bf40f4b87761 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/23 19:25 upstream cec1e6e5d1ab e667a34f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/22 15:13 upstream 07e27ad16399 770ff59f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/18 20:13 upstream 8b789f2b7602 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/16 11:24 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/15 10:18 upstream 79e8447ec662 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/11 09:34 upstream 7aac71907bde e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/09 16:24 upstream f777d1112ee5 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/08 22:35 upstream f777d1112ee5 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/07 07:38 upstream b236920731dd d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/06 16:20 upstream d1d10cea0895 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/09/02 22:14 upstream e6b9dce0aeeb 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/29 06:59 upstream 07d9df80082b d401b9d7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/19 20:03 upstream b19a97d57c15 523f460e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/16 05:26 upstream dfd4b508c8c6 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 14:32 upstream 8f5ae30d69d7 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 07:56 upstream 8f5ae30d69d7 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/11 01:41 upstream 2b38afce25c4 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/10 07:55 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/09 17:21 upstream c30a13538d9f 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/05 17:51 upstream 5998f2bca43e 37880f40 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/08/02 08:08 upstream a6923c06a3b2 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/27 17:35 upstream ec2df4364666 fb8f743d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/26 13:20 upstream 5f33ebd2018c fb8f743d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/23 09:36 upstream 89be9a83ccf1 e1dd4f22 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/07/09 06:52 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/30 05:07 upstream afa9a6f4f574 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/26 13:46 upstream ee88bddf7f2f 1ae8177e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/23 14:54 upstream 86731a2a651e d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/19 17:34 upstream 24770983ccfe ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/16 16:08 upstream e04c78d86a96 d1716036 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/13 22:55 upstream 27605c8c0f69 0e8da31f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/07 08:34 upstream c0c9379f235d 4826c28e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/06/03 01:33 upstream 7f9039c524a3 a30356b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/08 11:15 upstream d76bb1ebb558 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/04 18:16 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/05/01 22:55 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/28 08:28 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/19 16:36 upstream 8560697b23dc 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 05:17 upstream b5c6891b2c5b 2a20f901 .config console log report syz / log [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/18 00:04 upstream b5c6891b2c5b 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2025/04/10 15:19 upstream 2eb959eeecc6 1ef3ab4d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
2024/12/19 21:16 upstream eabcdba3ad40 1d58202c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in remove_one
* Struck through repros no longer work on HEAD.