INFO: task syz-executor.0:10463 blocked for more than 143 seconds.
Not tainted 6.1.0-syzkaller-09648-g32f1002ed485 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.0 state:D stack:26520 pid:10463 ppid:5331 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:5244 [inline]
__schedule+0xb8a/0x5450 kernel/sched/core.c:6556
schedule+0xde/0x1b0 kernel/sched/core.c:6632
schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6691
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0xa48/0x1360 kernel/locking/mutex.c:747
tls_sw_sendpage+0x83/0xe0 net/tls/tls_sw.c:1278
inet_sendpage+0xd4/0x140 net/ipv4/af_inet.c:844
kernel_sendpage.part.0+0x1d5/0x700 net/socket.c:3555
kernel_sendpage net/socket.c:3552 [inline]
sock_sendpage+0xe3/0x140 net/socket.c:1054
pipe_to_sendpage+0x2b1/0x380 fs/splice.c:361
splice_from_pipe_feed fs/splice.c:415 [inline]
__splice_from_pipe+0x449/0x8a0 fs/splice.c:559
splice_from_pipe fs/splice.c:594 [inline]
generic_splice_sendpage+0xd8/0x140 fs/splice.c:743
do_splice_from fs/splice.c:764 [inline]
direct_splice_actor+0x114/0x180 fs/splice.c:931
splice_direct_to_actor+0x335/0x8a0 fs/splice.c:886
do_splice_direct+0x1ab/0x280 fs/splice.c:974
do_sendfile+0xb19/0x1270 fs/read_write.c:1255
__do_sys_sendfile64 fs/read_write.c:1323 [inline]
__se_sys_sendfile64 fs/read_write.c:1309 [inline]
__x64_sys_sendfile64+0x1d0/0x210 fs/read_write.c:1309
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f0ab608c0d9
RSP: 002b:00007f0ab4bdd168 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f0ab61ac050 RCX: 00007f0ab608c0d9
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000003
RBP: 00007f0ab60e7ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000001000218 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fffd8ce4f4f R14: 00007f0ab4bdd300 R15: 0000000000022000
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
#0: ffffffff8c7905b0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/13:
#0: ffffffff8c7902b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/28:
#0: ffffffff8c791100 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x57/0x264 kernel/locking/lockdep.c:6494
1 lock held by klogd/4654:
#0: ffff8880b993b558 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2f/0x120 kernel/sched/core.c:537
2 locks held by getty/4983:
#0: ffff888027def098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x26/0x80 drivers/tty/tty_ldisc.c:244
#1: ffffc900015c02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xef4/0x13e0 drivers/tty/n_tty.c:2177
3 locks held by kworker/0:0/28648:
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x86d/0x1710 kernel/workqueue.c:2260
#1: ffffc90004947da8 ((work_completion)(&(&sw_ctx_tx->tx_work.work)->work)){+.+.}-{0:0}, at: process_one_work+0x8a1/0x1710 kernel/workqueue.c:2264
#2: ffff88803821fcd8 (&ctx->tx_lock){+.+.}-{3:3}, at: tx_work_handler+0x12b/0x190 net/tls/tls_sw.c:2419
1 lock held by syz-executor.0/10463:
#0: ffff88803821fcd8 (&ctx->tx_lock){+.+.}-{3:3}, at: tls_sw_sendpage+0x83/0xe0 net/tls/tls_sw.c:1278
=============================================
NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.0-syzkaller-09648-g32f1002ed485 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
nmi_cpu_backtrace.cold+0x24/0x18a lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x333/0x3c0 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
watchdog+0xc75/0xfc0 kernel/hung_task.c:377
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 2535 Comm: kworker/u4:16 Not tainted 6.1.0-syzkaller-09648-g32f1002ed485 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:match_held_lock+0x0/0xc0 kernel/locking/lockdep.c:5113
Code: 07 ba ff 48 c7 c7 00 44 4c 8a e8 8b 07 ba ff e8 e9 78 ff ff 31 c0 48 83 c4 08 5d c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 <53> 48 89 fb 48 83 ec 08 48 39 77 10 74 6a 66 f7 47 22 f0 ff 74 5a
RSP: 0018:ffffc9000366f860 EFLAGS: 00000097
RAX: 0000000000000005 RBX: 0000000000000001 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8c791040 RDI: ffff88801fde27b8
RBP: ffffffff8c791040 R08: 0000000000000000 R09: ffffffff8e73cb57
R10: fffffbfff1ce796a R11: 0000000000000000 R12: ffff88801fde1d40
R13: ffff88801fde2790 R14: 00000000ffffffff R15: ffff88801fde27b8
FS: 0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f3e5b983558 CR3: 000000000c48e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
__lock_is_held kernel/locking/lockdep.c:5409 [inline]
lock_is_held_type+0xab/0x140 kernel/locking/lockdep.c:5711
lock_is_held include/linux/lockdep.h:283 [inline]
rcu_read_lock_sched_held+0x3e/0x70 kernel/rcu/update.c:125
trace_lock_acquire include/trace/events/lock.h:24 [inline]
lock_acquire+0x500/0x630 kernel/locking/lockdep.c:5639
__raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
_raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
spin_lock include/linux/spinlock.h:350 [inline]
__get_locked_pte+0x158/0x270 mm/memory.c:1844
get_locked_pte include/linux/mm.h:2185 [inline]
__text_poke+0x1b3/0x8e0 arch/x86/kernel/alternative.c:1132
text_poke arch/x86/kernel/alternative.c:1217 [inline]
text_poke_bp_batch+0x445/0x6b0 arch/x86/kernel/alternative.c:1566
text_poke_flush arch/x86/kernel/alternative.c:1670 [inline]
text_poke_flush arch/x86/kernel/alternative.c:1667 [inline]
text_poke_finish+0x1a/0x30 arch/x86/kernel/alternative.c:1677
arch_jump_label_transform_apply+0x17/0x30 arch/x86/kernel/jump_label.c:146
jump_label_update+0x32f/0x410 kernel/jump_label.c:829
static_key_disable_cpuslocked+0x156/0x1b0 kernel/jump_label.c:235
static_key_disable+0x1a/0x20 kernel/jump_label.c:243
toggle_allocation_gate mm/kfence/core.c:814 [inline]
toggle_allocation_gate+0x187/0x390 mm/kfence/core.c:792
process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
worker_thread+0x669/0x1090 kernel/workqueue.c:2436
kthread+0x2e8/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
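
For context, the blocked task is inside sendfile(2) on a kTLS socket: ORIG_RAX 0x28 is __NR_sendfile, and the register dump shows out_fd 3, in_fd 4, a NULL offset pointer and a 0x1000218-byte count. The splice path then reaches tls_sw_sendpage(), which waits on &ctx->tx_lock while kworker/0:0 holds the same mutex in tx_work_handler() (see the lock list above). The snippet below is a minimal, hypothetical userspace sketch of that call path only; it is not the syzkaller reproducer, and the peer address, input file and key material are placeholders.

```c
/* Hypothetical sketch of the syscall path in the trace: a TCP socket with
 * the "tls" ULP and TLS_TX installed, then sendfile(2), which goes through
 * do_splice_direct() -> sock_sendpage() -> tls_sw_sendpage().
 * Not the actual reproducer; setup values are dummies.
 */
#include <fcntl.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <linux/tls.h>
#include <string.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef TCP_ULP
#define TCP_ULP 31
#endif
#ifndef SOL_TLS
#define SOL_TLS 282
#endif

int main(void)
{
	int sock = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(443),                  /* placeholder peer */
		.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
	};

	if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)))
		return 1;

	/* Attach the kernel TLS ULP and install a (dummy, all-zero) TX key so
	 * that later sendpage/sendfile traffic runs through tls_sw_sendpage(). */
	if (setsockopt(sock, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")))
		return 1;

	struct tls12_crypto_info_aes_gcm_128 ci = { 0 };
	ci.info.version = TLS_1_2_VERSION;
	ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
	/* iv/key/salt/rec_seq left zeroed: placeholders, not real key material */
	if (setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci)))
		return 1;

	/* sendfile(out_fd=sock, in_fd=file, offset=NULL, count) mirrors the
	 * register dump above: RDI/RSI/RDX/R10 = 3/4/0/0x1000218. */
	int fd = open("/tmp/payload", O_RDONLY);     /* placeholder input file */
	sendfile(sock, fd, NULL, 0x1000218);

	close(fd);
	close(sock);
	return 0;
}
```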