INFO: task syz.1.738:7605 blocked for more than 144 seconds.
Not tainted 6.15.0-rc2-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.738 state:D stack:27752 pid:7605 tgid:7601 ppid:5840 task_flags:0x400140 flags:0x00000004
Call Trace:
<TASK>
context_switch kernel/sched/core.c:5382 [inline]
__schedule+0x116f/0x5de0 kernel/sched/core.c:6767
__schedule_loop kernel/sched/core.c:6845 [inline]
schedule+0xe7/0x3a0 kernel/sched/core.c:6860
schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
___down_common+0x2d8/0x460 kernel/locking/semaphore.c:229
__down_common kernel/locking/semaphore.c:250 [inline]
__down+0x20/0x30 kernel/locking/semaphore.c:258
down+0x74/0xa0 kernel/locking/semaphore.c:64
console_lock+0x5b/0xa0 kernel/printk/printk.c:2849
show_bind+0x35/0x90 drivers/tty/vt/vt.c:4074
dev_attr_show+0x53/0xe0 drivers/base/core.c:2424
sysfs_kf_seq_show+0x213/0x3e0 fs/sysfs/file.c:65
seq_read_iter+0x506/0x12c0 fs/seq_file.c:230
kernfs_fop_read_iter+0x40f/0x5a0 fs/kernfs/file.c:279
copy_splice_read+0x615/0xba0 fs/splice.c:363
do_splice_read fs/splice.c:979 [inline]
do_splice_read+0x282/0x370 fs/splice.c:953
splice_direct_to_actor+0x2a1/0xa30 fs/splice.c:1083
do_splice_direct_actor fs/splice.c:1201 [inline]
do_splice_direct+0x174/0x240 fs/splice.c:1227
do_sendfile+0xafd/0xe50 fs/read_write.c:1368
__do_sys_sendfile64 fs/read_write.c:1429 [inline]
__se_sys_sendfile64 fs/read_write.c:1415 [inline]
__x64_sys_sendfile64+0x1d8/0x220 fs/read_write.c:1415
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f013218d169
RSP: 002b:00007f0132fb0038 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f01323a6080 RCX: 00007f013218d169
RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000003
RBP: 00007f013220e990 R08: 0000000000000000 R09: 0000000000000000
R10: 0000400000000006 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f01323a6080 R15: 00007fff8a1120b8
</TASK>
Showing all locks held in the system:
1 lock held by kthreadd/2:
1 lock held by kworker/R-kvfre/6:
4 locks held by kworker/0:0/9:
3 locks held by kworker/0:1/11:
3 locks held by kworker/u8:0/12:
1 lock held by kworker/R-mm_pe/13:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
3 locks held by kworker/u8:1/14:
2 locks held by kworker/1:0/24:
#0: ffff88801b480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
#1: ffffc900001e7d18 (console_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
1 lock held by khungtaskd/31:
#0: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#0: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
#0: ffffffff8e3c15c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6764
3 locks held by kworker/u8:2/36:
3 locks held by kworker/1:1/52:
3 locks held by kworker/u8:3/53:
#0: ffff88801b489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
#1: ffffc90000be7d18 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
#2: ffffffff9012e5a8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0x51/0xc0 net/core/link_watch.c:303
3 locks held by kworker/u8:4/67:
4 locks held by kworker/0:2/975:
3 locks held by kworker/u8:5/1092:
3 locks held by kworker/u8:6/1100:
3 locks held by kworker/u8:7/1164:
3 locks held by kworker/1:2/1209:
3 locks held by kworker/u8:8/3023:
3 locks held by kworker/R-ipv6_/3173:
#0: ffff888030b05148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
#1: ffffc9000b727cb0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
#2: ffffffff9012e5a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
#2: ffffffff9012e5a8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4735
2 locks held by kworker/R-bat_e/3403:
3 locks held by kworker/u8:9/3478:
3 locks held by syslogd/5194:
2 locks held by udevd/5212:
1 lock held by dhcpcd/5505:
2 locks held by getty/5592:
#0: ffff888031bd40a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
#1: ffffc9000332e2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
1 lock held by syz-executor/5828:
1 lock held by syz-executor/5838:
#0: ffffffff8e3ccaf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:336
2 locks held by syz-executor/5839:
5 locks held by syz-executor/5842:
3 locks held by kworker/0:3/5844:
4 locks held by kworker/u9:4/5848:
#0: ffff888031c9b948 ((wq_completion)hci1){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
#1: ffffc900040cfd18 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
#2: ffff88807d510d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x175/0x430 net/bluetooth/hci_sync.c:331
#3: ffff88807d510078 (&hdev->lock){+.+.}-{4:4}, at: hci_abort_conn_sync+0x146/0xb40 net/bluetooth/hci_sync.c:5597
3 locks held by kworker/u9:7/5854:
1 lock held by kworker/R-wg-cr/5871:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/5872:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5874:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5875:
1 lock held by kworker/R-wg-cr/5877:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/5879:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5881:
#0: ffffffff8e2790c8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
4 locks held by kworker/0:4/5882:
#0: ffff88807b442548 ((wq_completion)wg-kex-wg1#6){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
#1: ffffc900046efd18 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
#2: ffff888060635308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x1c2/0x880 drivers/net/wireguard/noise.c:598
#3: ffff88805f113ea8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x5ac/0x880 drivers/net/wireguard/noise.c:632
2 locks held by kworker/0:5/5883:
2 locks held by kworker/1:4/5900:
3 locks held by syz.1.738/7605:
#0: ffff88802890b668 (&p->lock){+.+.}-{4:4}, at: seq_read_iter+0xe1/0x12c0 fs/seq_file.c:182
#1: ffff888027007888 (&of->mutex){+.+.}-{4:4}, at: kernfs_seq_start+0x4d/0x240 fs/kernfs/file.c:154
#2: ffff8880255785a8 (kn->active#66){.+.+}-{0:0}, at: kernfs_seq_start+0x71/0x240 fs/kernfs/file.c:155
1 lock held by kworker/0:6/7630:
3 locks held by syz-executor/7827:
4 locks held by kworker/1:5/7906:
3 locks held by kworker/u8:10/7908:
=============================================
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.15.0-rc2-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
watchdog+0xf70/0x12c0 kernel/hung_task.c:437
kthread+0x3c2/0x780 kernel/kthread.c:464
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 9 Comm: kworker/0:0 Not tainted 6.15.0-rc2-syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: events_power_efficient neigh_periodic_work
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4856 [inline]
RIP: 0010:__lock_acquire+0x1f4/0x1ba0 kernel/locking/lockdep.c:5185
Code: c5 8b 43 20 25 ff 1f 00 00 41 09 c5 8b 84 24 c0 00 00 00 44 89 6b 20 89 43 24 e8 f7 b0 ff ff 48 89 df 44 0f b6 a8 c4 00 00 00 <e8> e7 b0 ff ff 0f b6 b0 c5 00 00 00 45 84 ed 0f 84 fb 01 00 00 0f
RSP: 0018:ffffc90000007000 EFLAGS: 00000007
RAX: ffffffff95ae2fa0 RBX: ffff88801dad0b68 RCX: 0000000000000000
RDX: 0000000000040000 RSI: ffff88801dad0b40 RDI: ffff88801dad0b68
RBP: ffff88801dad0af0 R08: 0000000000080000 R09: 0000000000000001
R10: 0000000000000000 R11: ffffffff901ce318 R12: 0000000000000001
R13: 0000000000000003 R14: ffff88801dad0000 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8881249b9000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055559502a5a8 CR3: 000000007ec6e000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<IRQ>
lock_acquire kernel/locking/lockdep.c:5866 [inline]
lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
_raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
__wake_up_common_lock kernel/sched/wait.c:105 [inline]
__wake_up+0x1c/0x60 kernel/sched/wait.c:127
netlink_unlock_table net/netlink/af_netlink.c:462 [inline]
netlink_unlock_table net/netlink/af_netlink.c:459 [inline]
netlink_broadcast_filtered+0xa5c/0xf10 net/netlink/af_netlink.c:1526
nlmsg_multicast_filtered include/net/netlink.h:1129 [inline]
nlmsg_multicast include/net/netlink.h:1148 [inline]
nlmsg_notify+0x9e/0x220 net/netlink/af_netlink.c:2577
fdb_notify+0xfd/0x1a0 net/bridge/br_fdb.c:199
br_fdb_update+0x32b/0x7d0 net/bridge/br_fdb.c:934
br_handle_frame_finish+0xd21/0x1c20 net/bridge/br_input.c:144
br_nf_hook_thresh+0x301/0x410 net/bridge/br_netfilter_hooks.c:1170
br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
NF_HOOK include/linux/netfilter.h:314 [inline]
br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
nf_hook_bridge_pre net/bridge/br_input.c:282 [inline]
br_handle_frame+0xad5/0x14a0 net/bridge/br_input.c:433
__netif_receive_skb_core.constprop.0+0xa23/0x4a00 net/core/dev.c:5771
__netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:5883
__netif_receive_skb+0x1d/0x160 net/core/dev.c:5998
process_backlog+0x442/0x15e0 net/core/dev.c:6350
__napi_poll.constprop.0+0xb7/0x550 net/core/dev.c:7322
napi_poll net/core/dev.c:7386 [inline]
net_rx_action+0xa97/0x1010 net/core/dev.c:7508
handle_softirqs+0x216/0x8e0 kernel/softirq.c:579
do_softirq kernel/softirq.c:480 [inline]
do_softirq+0xb2/0xf0 kernel/softirq.c:467
</IRQ>
<TASK>
__local_bh_enable_ip+0x100/0x120 kernel/softirq.c:407
neigh_periodic_work+0x768/0xcd0 net/core/neighbour.c:966
process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
process_scheduled_works kernel/workqueue.c:3319 [inline]
worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
kthread+0x3c2/0x780 kernel/kthread.c:464
ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
</TASK>
net_ratelimit: 23540 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)