INFO: task syz.8.444:10833 blocked for more than 143 seconds.
Tainted: G W 6.17.0-rc1-syzkaller-00036-gdfc0f6373094 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.444 state:D stack:24520 pid:10833 tgid:10832 ppid:10659 task_flags:0x400140 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:5357 [inline]
__schedule+0x16f3/0x4c20 kernel/sched/core.c:6961
__schedule_loop kernel/sched/core.c:7043 [inline]
schedule+0x165/0x360 kernel/sched/core.c:7058
io_schedule+0x81/0xe0 kernel/sched/core.c:7903
folio_wait_bit_common+0x6b5/0xb90 mm/filemap.c:1317
folio_lock include/linux/pagemap.h:1133 [inline]
release_metapage+0x103/0xab0 fs/jfs/jfs_metapage.c:870
discard_metapage fs/jfs/jfs_metapage.h:88 [inline]
__get_metapage+0x9a0/0xde0 fs/jfs/jfs_metapage.c:753
dtSplitPage+0x7f8/0x3b20 fs/jfs/jfs_dtree.c:1471
dtSplitUp fs/jfs/jfs_dtree.c:1092 [inline]
dtInsert+0x109b/0x5f40 fs/jfs/jfs_dtree.c:871
jfs_create+0x6c8/0xa80 fs/jfs/namei.c:137
lookup_open fs/namei.c:3708 [inline]
open_last_lookups fs/namei.c:3807 [inline]
path_openat+0x14fd/0x3840 fs/namei.c:4043
do_filp_open+0x1fa/0x410 fs/namei.c:4073
do_sys_openat2+0x121/0x1c0 fs/open.c:1435
do_sys_open fs/open.c:1450 [inline]
__do_sys_openat fs/open.c:1466 [inline]
__se_sys_openat fs/open.c:1461 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1461
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2ea557ebe9
RSP: 002b:00007f2ea37e6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f2ea57a5fa0 RCX: 00007f2ea557ebe9
RDX: 000000000000275a RSI: 00002000000001c0 RDI: ffffffffffffff9c
RBP: 00007f2ea5601e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2ea57a6038 R14: 00007f2ea57a5fa0 R15: 00007ffd38264718
INFO: task syz.8.444:10856 blocked for more than 143 seconds.
Tainted: G W 6.17.0-rc1-syzkaller-00036-gdfc0f6373094 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.444 state:D stack:26920 pid:10856 tgid:10832 ppid:10659 task_flags:0x400140 flags:0x00024004
Call Trace:
context_switch kernel/sched/core.c:5357 [inline]
__schedule+0x16f3/0x4c20 kernel/sched/core.c:6961
__schedule_loop kernel/sched/core.c:7043 [inline]
rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7339
rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
__rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
__rt_mutex_slowlock_locked+0x1e04/0x25e0 kernel/locking/rtmutex.c:1760
rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
__rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
inode_lock include/linux/fs.h:869 [inline]
open_last_lookups fs/namei.c:3804 [inline]
path_openat+0x8e6/0x3840 fs/namei.c:4043
do_filp_open+0x1fa/0x410 fs/namei.c:4073
do_sys_openat2+0x121/0x1c0 fs/open.c:1435
do_sys_open fs/open.c:1450 [inline]
__do_sys_openat fs/open.c:1466 [inline]
__se_sys_openat fs/open.c:1461 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1461
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2ea557ebe9
RSP: 002b:00007f2ea37c5038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f2ea57a6090 RCX: 00007f2ea557ebe9
RDX: 000000000000275a RSI: 0000200000000080 RDI: ffffffffffffff9c
RBP: 00007f2ea5601e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2ea57a6128 R14: 00007f2ea57a6090 R15: 00007ffd38264718
INFO: task syz.8.444:10863 blocked for more than 143 seconds.
Tainted: G W 6.17.0-rc1-syzkaller-00036-gdfc0f6373094 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.444 state:D stack:28576 pid:10863 tgid:10832 ppid:10659 task_flags:0x400040 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:5357 [inline]
__schedule+0x16f3/0x4c20 kernel/sched/core.c:6961
__schedule_loop kernel/sched/core.c:7043 [inline]
rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7339
rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
__rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
__rt_mutex_slowlock_locked+0x1e04/0x25e0 kernel/locking/rtmutex.c:1760
rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
__rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
inode_lock include/linux/fs.h:869 [inline]
open_last_lookups fs/namei.c:3804 [inline]
path_openat+0x8e6/0x3840 fs/namei.c:4043
do_filp_open+0x1fa/0x410 fs/namei.c:4073
do_sys_openat2+0x121/0x1c0 fs/open.c:1435
do_sys_open fs/open.c:1450 [inline]
__do_sys_creat fs/open.c:1528 [inline]
__se_sys_creat fs/open.c:1522 [inline]
__x64_sys_creat+0x8f/0xc0 fs/open.c:1522
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2ea557ebe9
RSP: 002b:00007f2ea37a4038 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007f2ea57a6180 RCX: 00007f2ea557ebe9
RDX: 0000000000000000 RSI: d931d3864d39dcca RDI: 0000200000000100
RBP: 00007f2ea5601e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2ea57a6218 R14: 00007f2ea57a6180 R15: 00007ffd38264718
INFO: task syz.8.444:10865 blocked for more than 143 seconds.
Tainted: G W 6.17.0-rc1-syzkaller-00036-gdfc0f6373094 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.8.444 state:D stack:26920 pid:10865 tgid:10832 ppid:10659 task_flags:0x400040 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:5357 [inline]
__schedule+0x16f3/0x4c20 kernel/sched/core.c:6961
__schedule_loop kernel/sched/core.c:7043 [inline]
rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7339
rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
__rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
__rt_mutex_slowlock_locked+0x1e04/0x25e0 kernel/locking/rtmutex.c:1760
rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
__rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
inode_lock include/linux/fs.h:869 [inline]
open_last_lookups fs/namei.c:3804 [inline]
path_openat+0x8e6/0x3840 fs/namei.c:4043
do_filp_open+0x1fa/0x410 fs/namei.c:4073
do_sys_openat2+0x121/0x1c0 fs/open.c:1435
do_sys_open fs/open.c:1450 [inline]
__do_sys_openat fs/open.c:1466 [inline]
__se_sys_openat fs/open.c:1461 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1461
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2ea557ebe9
RSP: 002b:00007f2ea3381038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f2ea57a6270 RCX: 00007f2ea557ebe9
RDX: 000000000000275a RSI: 0000200000000100 RDI: ffffffffffffff9c
RBP: 00007f2ea5601e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f2ea57a6308 R14: 00007f2ea57a6270 R15: 00007ffd38264718
Showing all locks held in the system:
2 locks held by kworker/0:1/10:
6 locks held by rcuc/0/20:
#0: ffffffff8d84a740 (local_bh){.+.+}-{1:3}, at: __local_bh_disable_ip+0xa1/0x400 kernel/softirq.c:163
#1: ffff8880b8823d90 ((softirq_ctrl.lock)){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
#1: ffff8880b8823d90 ((softirq_ctrl.lock)){+.+.}-{3:3}, at: __local_bh_disable_ip+0x264/0x400 kernel/softirq.c:168
#2: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#2: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#2: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
#2: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1bb/0x2c0 kernel/locking/spinlock_rt.c:57
#3: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: __local_bh_disable_ip+0xa1/0x400 kernel/softirq.c:163
#4: ffffffff8d9a8ca0 (rcu_callback){....}-{0:0}, at: local_bh_disable include/linux/bottom_half.h:20 [inline]
#4: ffffffff8d9a8ca0 (rcu_callback){....}-{0:0}, at: rcu_cpu_kthread+0x23e/0x1b50 kernel/rcu/tree.c:2942
#5: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
#5: ffff88813fffc5d8 (&zone->lock){+.+.}-{3:3}, at: free_one_page+0x41/0x270 mm/page_alloc.c:1542
6 locks held by rcuc/1/28:
1 lock held by khungtaskd/38:
#0: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#0: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#0: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:4/67:
#0: ffff8881452e5138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
#0: ffff8881452e5138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3319
#1: ffffc9000152fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
#1: ffffc9000152fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3319
2 locks held by kworker/u8:11/1291:
#0: ffff8881452e5138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
#0: ffff8881452e5138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3319
#1: ffffc90005257bc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
#1: ffffc90005257bc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3319
3 locks held by kworker/u8:12/1305:
#0: ffff888019881138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
#0: ffff888019881138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3319
#1: ffffc900051f7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
#1: ffffc900051f7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3319
#2: ffffffff8ecd13b8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
5 locks held by kworker/u8:13/1403:
2 locks held by getty/5600:
#0: ffff88823bf328a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90003e762e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x444/0x1410 drivers/tty/n_tty.c:2222
4 locks held by kworker/u8:5/8235:
#0: ffff88801f689938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
#0: ffff88801f689938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3319
#1: ffffc90003b5fbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
#1: ffffc90003b5fbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3319
#2: ffff88806873c0d0 (&type->s_umount_key#56){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
#3: ffff88806916bcc8 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:102
3 locks held by kworker/u8:8/9737:
#0: ffff88801f68e938 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
#0: ffff88801f68e938 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3319
#1: ffffc90004b47bc0 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
#1: ffffc90004b47bc0 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3319
#2: ffffffff8ecd13b8 (rtnl_mutex){+.+.}-{4:4}, at: cfg80211_dfs_channels_update_work+0xb6/0x630 net/wireless/mlme.c:1040
4 locks held by syz.8.444/10833:
#0: ffff88806873c488 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: open_last_lookups fs/namei.c:3804 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: path_openat+0x8e6/0x3840 fs/namei.c:4043
#2: ffff88806916bcc8 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_create+0x1f5/0xa80 fs/jfs/namei.c:100
#3: ffff88805f0db2f8 (&jfs_ip->commit_mutex/1){+.+.}-{4:4}, at: jfs_create+0x210/0xa80 fs/jfs/namei.c:101
2 locks held by syz.8.444/10856:
#0: ffff88806873c488 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: open_last_lookups fs/namei.c:3804 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: path_openat+0x8e6/0x3840 fs/namei.c:4043
2 locks held by syz.8.444/10863:
#0: ffff88806873c488 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: open_last_lookups fs/namei.c:3804 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: path_openat+0x8e6/0x3840 fs/namei.c:4043
2 locks held by syz.8.444/10865:
#0: ffff88806873c488 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: open_last_lookups fs/namei.c:3804 [inline]
#1: ffff88806916c098 (&type->i_mutex_dir_key#9){++++}-{4:4}, at: path_openat+0x8e6/0x3840 fs/namei.c:4043
3 locks held by udevd/10854:
#0: ffff888034d56488 (sb_writers#5){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:3107 [inline]
#0: ffff888034d56488 (sb_writers#5){.+.+}-{0:0}, at: vfs_write+0x217/0xb40 fs/read_write.c:682
#1: ffff88802592f6b8 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
#1: ffff88802592f6b8 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: shmem_file_write_iter+0x82/0x120 mm/shmem.c:3518
#2: ffff8880b8833490 ((lock)#2){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
#2: ffff8880b8833490 ((lock)#2){+.+.}-{3:3}, at: __folio_batch_add_and_move+0x170/0x540 mm/swap.c:-1
3 locks held by udevd/10859:
2 locks held by syz.5.479/11239:
#0: ffff88806873c0d0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock fs/super.c:59 [inline]
#0: ffff88806873c0d0 (&type->s_umount_key#56){++++}-{4:4}, at: super_lock+0x2a9/0x3b0 fs/super.c:121
#1: ffff888024a5e870 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:387 [inline]
#1: ffff888024a5e870 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x19b/0xa50 fs/fs-writeback.c:2831
2 locks held by kworker/u8:16/11661:
2 locks held by syz-executor/12950:
#0: ffffffff8e43a3c0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#0: ffffffff8e43a3c0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#0: ffffffff8e43a3c0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
#1: ffffffff8ecd13b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
#1: ffffffff8ecd13b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
#1: ffffffff8ecd13b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4056
5 locks held by syz.4.691/13138:
2 locks held by syz.9.692/13140:
#0: ffff8880496680d0 (&type->s_umount_key#49/1){+.+.}-{4:4}, at: alloc_super+0x204/0x990 fs/super.c:345
#1: ffff8880b8833490 ((lock)#2){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
#1: ffff8880b8833490 ((lock)#2){+.+.}-{3:3}, at: __folio_batch_add_and_move+0x170/0x540 mm/swap.c:-1
6 locks held by syz.0.697/13144:
1 lock held by udevadm/13171:
5 locks held by sed/13192:
#0: ffff888051d62588 (vm_lock){++++}-{0:0}, at: lock_vma_under_rcu+0x19f/0x3d0 mm/mmap_lock.c:147
#1: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#1: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#1: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: ___pte_offset_map+0x29/0x200 mm/pgtable-generic.c:286
#2: ffff888035fd1598 (ptlock_ptr(ptdesc)#2){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
#2: ffff888035fd1598 (ptlock_ptr(ptdesc)#2){+.+.}-{3:3}, at: __pte_offset_map_lock+0x13e/0x210 mm/pgtable-generic.c:401
#3: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#3: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#3: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
#3: ffffffff8d9a8b80 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1bb/0x2c0 kernel/locking/spinlock_rt.c:57
#4: ffff8880b8833490 ((lock)#2){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:44 [inline]
#4: ffff8880b8833490 ((lock)#2){+.+.}-{3:3}, at: __folio_batch_add_and_move+0x170/0x540 mm/swap.c:-1
2 locks held by udevadm/13193:
=============================================
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 38 Comm: khungtaskd Tainted: G W 6.17.0-rc1-syzkaller-00036-gdfc0f6373094 #0 PREEMPT_{RT,(full)}
Tainted: [W]=WARN
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:328 [inline]
watchdog+0xf93/0xfe0 kernel/hung_task.c:491
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 28 Comm: rcuc/1 Tainted: G W 6.17.0-rc1-syzkaller-00036-gdfc0f6373094 #0 PREEMPT_{RT,(full)}
Tainted: [W]=WARN
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
RIP: 0010:debug_spin_lock_before kernel/locking/spinlock_debug.c:86 [inline]
RIP: 0010:do_raw_spin_lock+0x88/0x290 kernel/locking/spinlock_debug.c:115
Code: f1 f1 04 f3 f3 f3 48 89 4c 24 18 4a 89 04 39 4c 8d 77 04 4c 89 f0 48 c1 e8 03 42 0f b6 04 38 84 c0 0f 85 9f 01 00 00 41 8b 06 <3d> ad 4e ad de 0f 85 1b 01 00 00 4c 8d 63 10 4d 89 e6 49 c1 ee 03
RSP: 0018:ffffc90000a2f860 EFLAGS: 00000046
RAX: 00000000dead4ead RBX: ffffffff99289630 RCX: 1ffff92000145f10
RDX: 0000000000000000 RSI: ffffffff8d2176c7 RDI: ffffffff99289630
RBP: ffffc90000a2f910 R08: 0000000000000000 R09: ffffffff84bf8b04
R10: dffffc0000000000 R11: fffffbfff1e3a727 R12: dffffc0000000000
R13: ffff8880277f7640 R14: ffffffff99289634 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff8881269c5000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f3e33f5f008 CR3: 0000000038778000 CR4: 00000000003526f0
Call Trace:
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
_raw_spin_lock_irqsave+0xb3/0xf0 kernel/locking/spinlock.c:162
debug_object_active_state+0xa4/0x260 lib/debugobjects.c:1046
debug_rcu_head_unqueue kernel/rcu/rcu.h:245 [inline]
rcu_do_batch kernel/rcu/tree.c:2597 [inline]
rcu_core kernel/rcu/tree.c:2861 [inline]
rcu_cpu_kthread+0xb6d/0x1b50 kernel/rcu/tree.c:2949
smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245