INFO: task syz-executor.0:5470 blocked for more than 143 seconds.
Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.0 state:D stack:25488 pid:5470 tgid:5467 ppid:5097 flags:0x00004006
Call Trace:
context_switch kernel/sched/core.c:5400 [inline]
__schedule+0x17df/0x4a40 kernel/sched/core.c:6727
__schedule_loop kernel/sched/core.c:6804 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6819
schedule_timeout+0xb0/0x310 kernel/time/timer.c:2159
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
__flush_workqueue+0x730/0x1630 kernel/workqueue.c:3617
ceph_monc_stop+0x7c/0x1e0 net/ceph/mon_client.c:1248
ceph_destroy_client+0x74/0x130 net/ceph/ceph_common.c:768
destroy_fs_client+0x192/0x270 fs/ceph/super.c:899
deactivate_locked_super+0xc4/0x130 fs/super.c:477
ceph_get_tree+0x9a9/0x17b0 fs/ceph/super.c:1361
vfs_get_tree+0x90/0x2a0 fs/super.c:1784
vfs_cmd_create+0xe4/0x230 fs/fsopen.c:230
__do_sys_fsconfig fs/fsopen.c:476 [inline]
__se_sys_fsconfig+0x967/0xec0 fs/fsopen.c:349
do_syscall_64+0xfb/0x240
entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fda9987dda9
RSP: 002b:00007fda9a5c00c8 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 00007fda999abf80 RCX: 00007fda9987dda9
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000003
RBP: 00007fda998ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fda999abf80 R15: 00007fff3def3348
INFO: task syz-executor.4:5480 blocked for more than 144 seconds.
Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:25432 pid:5480 tgid:5479 ppid:5099 flags:0x00004006
Call Trace:
context_switch kernel/sched/core.c:5400 [inline]
__schedule+0x17df/0x4a40 kernel/sched/core.c:6727
__schedule_loop kernel/sched/core.c:6804 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6819
schedule_timeout+0xb0/0x310 kernel/time/timer.c:2159
do_wait_for_common kernel/sched/completion.c:95 [inline]
__wait_for_common kernel/sched/completion.c:116 [inline]
wait_for_common kernel/sched/completion.c:127 [inline]
wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
__flush_workqueue+0x730/0x1630 kernel/workqueue.c:3617
ceph_mdsc_pre_umount+0x5b5/0x8b0 fs/ceph/mds_client.c:5475
ceph_kill_sb+0x9f/0x4b0 fs/ceph/super.c:1535
deactivate_locked_super+0xc4/0x130 fs/super.c:477
ceph_get_tree+0x9a9/0x17b0 fs/ceph/super.c:1361
vfs_get_tree+0x90/0x2a0 fs/super.c:1784
vfs_cmd_create+0xe4/0x230 fs/fsopen.c:230
__do_sys_fsconfig fs/fsopen.c:476 [inline]
__se_sys_fsconfig+0x967/0xec0 fs/fsopen.c:349
do_syscall_64+0xfb/0x240
entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fe28fa7dda9
RSP: 002b:00007fe2908c00c8 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 00007fe28fbabf80 RCX: 00007fe28fa7dda9
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000003
RBP: 00007fe28faca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fe28fbabf80 R15: 00007fff6c77ab58
Showing all locks held in the system:
1 lock held by khungtaskd/29:
#0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
#0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
#0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4833:
#0: ffff88802a73b0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc90002f0e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
3 locks held by kworker/0:7/5157:
1 lock held by syz-executor.0/5470:
#0: ffff88807da0b070 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff88807da0b070 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
2 locks held by syz-executor.4/5480:
#0: ffff88807e2a4470 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff88807e2a4470 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
#1: ffff88807fc080e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.3/5489:
#0: ffff88807da08470 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff88807da08470 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
#1: ffff8880368640e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.3/5852:
#0: ffff88803444b070 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff88803444b070 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
#1: ffff88802c0b60e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.4/5873:
#0: ffff88803570dc70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff88803570dc70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
#1: ffff888035c580e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.3/6293:
#0: ffff888034767c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff888034767c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
#1: ffff888039bd40e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.4/6355:
#0: ffff8880296b0c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
#0: ffff8880296b0c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
#1: ffff8880324120e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by kworker/u4:8/6404:
1 lock held by syz-executor.3/6444:
#0: ffffffff8e1360f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
#0: ffffffff8e1360f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x463/0x820 kernel/rcu/tree_exp.h:939
2 locks held by syz-executor.4/6469:
#0: ffffffff8f3dd970 (cb_lock){++++}-{3:3}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1216
#1: ffffffff8f379fc8 (rtnl_mutex){+.+.}-{3:3}, at: nl80211_pre_doit+0x5f/0x8b0 net/wireless/nl80211.c:16461
2 locks held by syz-executor.1/6594:
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
watchdog+0xfb0/0xff0 kernel/hung_task.c:379
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 6405 Comm: kworker/u4:9 Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: events_unbound cfg80211_wiphy_work
RIP: 0010:find_stack lib/stackdepot.c:594 [inline]
RIP: 0010:stack_depot_save_flags+0x183/0x860 lib/stackdepot.c:659
Code: 26 7b 4c 8b 3c 08 49 39 d7 8b 6c 24 0c 0f 84 95 00 00 00 45 89 cd eb 0c 4d 8b 3f 49 39 d7 0f 84 84 00 00 00 45 39 77 10 75 ee <45> 39 4f 14 75 e8 31 c0 49 8b 0c c0 49 3b 4c c7 20 75 db 48 ff c0
RSP: 0000:ffffc9001aa1f800 EFLAGS: 00000246
RAX: ffff88823ac00000 RBX: 0000000043934d80 RCX: 00000000006ed040
RDX: ffff88823b2ed040 RSI: 0000000000000003 RDI: 00000000709f7bc8
RBP: 0000000000000001 R08: ffffc9001aa1f860 R09: 000000000000000c
R10: 0000000000000002 R11: ffff888027e85a00 R12: ffffffff8ae0db40
R13: 000000000000000c R14: 000000009016ed04 R15: ffff888024725ec0
FS: 0000000000000000(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c014960cc8 CR3: 000000007b3f4000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
kasan_save_stack mm/kasan/common.c:48 [inline]
kasan_save_track+0x51/0x80 mm/kasan/common.c:68
kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:586
poison_slab_object+0xa6/0xe0 mm/kasan/common.c:240
__kasan_slab_free+0x37/0x60 mm/kasan/common.c:256
kasan_slab_free include/linux/kasan.h:184 [inline]
slab_free_hook mm/slub.c:2122 [inline]
slab_free mm/slub.c:4296 [inline]
kmem_cache_free+0x102/0x2a0 mm/slub.c:4360
kfree_skb include/linux/skbuff.h:1244 [inline]
ieee80211_iface_work+0x270/0xd90 net/mac80211/iface.c:1645
cfg80211_wiphy_work+0x221/0x260 net/wireless/core.c:437
process_one_work kernel/workqueue.c:3049 [inline]
process_scheduled_works+0x913/0x14f0 kernel/workqueue.c:3125
worker_thread+0xa60/0x1000 kernel/workqueue.c:3206
kthread+0x2f0/0x390 kernel/kthread.c:388
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242