INFO: task syslogd:5174 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc3-syzkaller-00079-g87a132e73910 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syslogd state:D stack:24240 pid:5174 tgid:5174 ppid:1 task_flags:0x400000 flags:0x00000002
Call Trace:
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0xf43/0x5890 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6857
 schedule_timeout+0x244/0x280 kernel/time/sleep_timeout.c:75
 ___down_common+0x2d7/0x460 kernel/locking/semaphore.c:225
 __down_common kernel/locking/semaphore.c:246 [inline]
 __down+0x20/0x30 kernel/locking/semaphore.c:254
 down+0x74/0xa0 kernel/locking/semaphore.c:63
 console_lock+0x5b/0xa0 kernel/printk/printk.c:2833
 console_device+0x19/0x180 kernel/printk/printk.c:3482
 tty_lookup_driver+0x2fb/0x510 drivers/tty/tty_io.c:1930
 tty_open_by_driver drivers/tty/tty_io.c:2047 [inline]
 tty_open+0x54e/0xf80 drivers/tty/tty_io.c:2129
 chrdev_open+0x237/0x6a0 fs/char_dev.c:414
 do_dentry_open+0x735/0x1c40 fs/open.c:956
 vfs_open+0x82/0x3f0 fs/open.c:1086
 do_open fs/namei.c:3830 [inline]
 path_openat+0x1e88/0x2d80 fs/namei.c:3989
 do_filp_open+0x20c/0x470 fs/namei.c:4016
 do_sys_openat2+0x17a/0x1e0 fs/open.c:1428
 do_sys_open fs/open.c:1443 [inline]
 __do_sys_openat fs/open.c:1459 [inline]
 __se_sys_openat fs/open.c:1454 [inline]
 __x64_sys_openat+0x175/0x210 fs/open.c:1454
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f427151d9a4
RSP: 002b:00007fff68b36620 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f427151d9a4
RDX: 0000000000000901 RSI: 00007f42716bc3b3 RDI: 00000000ffffff9c
RBP: 00007f42716bc3b3 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000901
R13: 0000000000000901 R14: 0000000000000901 R15: 0000556d0782f410

Showing all locks held in the system:
1 lock held by kthreadd/2:
2 locks held by kworker/0:1/9:
1 lock held by kworker/u8:0/11:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2676
1 lock held by kworker/R-mm_pe/13:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
2 locks held by kworker/1:0/25:
1 lock held by khungtaskd/30:
 #0: ffffffff8e1bcc80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e1bcc80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e1bcc80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x7f/0x390 kernel/locking/lockdep.c:6746
1 lock held by kworker/R-write/32:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2676
1 lock held by kworker/1:1/57:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2676
3 locks held by kworker/u8:5/354:
3 locks held by kworker/1:2/971:
2 locks held by kworker/0:2/974:
3 locks held by kworker/u8:6/1155:
5 locks held by kworker/u8:7/1325:
1 lock held by syslogd/5174:
 #0: ffffffff8ee87768 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2046 [inline]
 #0: ffffffff8ee87768 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x53d/0xf80 drivers/tty/tty_io.c:2129
3 locks held by dhcpcd/5485:
2 locks held by getty/5578:
 #0: ffff88814d8520a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0xfba/0x1480 drivers/tty/n_tty.c:2211
4 locks held by syz-executor/5804:
3 locks held by syz-executor/5814:
1 lock held by syz-executor/5821:
1 lock held by kworker/u9:3/5822:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3323 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_thread+0x7c4/0xf00 kernel/workqueue.c:3356
1 lock held by syz-executor/5824:
2 locks held by syz-executor/5825:
1 lock held by kworker/R-wg-cr/5847:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5848:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5850:
1 lock held by kworker/R-wg-cr/5853:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
2 locks held by kworker/0:3/5854:
1 lock held by kworker/R-wg-cr/5855:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5856:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5857:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5860:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5861:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5862:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5863:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3527
1 lock held by kworker/R-wg-cr/5865:
 #0: ffffffff8e076188 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2676
3 locks held by kworker/1:3/5866:
2 locks held by kworker/0:5/5868:
3 locks held by kworker/1:5/5870:
5 locks held by kworker/1:6/5897:
2 locks held by kworker/0:7/5902:
3 locks held by kworker/0:8/5920:
2 locks held by kworker/1:7/5959:
2 locks held by syz.4.4500/15314:
6 locks held by syz.2.4529/15377:
2 locks held by kworker/u8:9/15405:
 #0: ffff88801b081148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1293/0x1ba0 kernel/workqueue.c:3211
 #1: ffffc9000218fd18 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3212
3 locks held by syz-executor/15406:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.14.0-rc3-syzkaller-00079-g87a132e73910 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0xf62/0x12b0 kernel/hung_task.c:399
 kthread+0x3af/0x750 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 1325 Comm: kworker/u8:7 Not tainted 6.14.0-rc3-syzkaller-00079-g87a132e73910 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue: bat_events batadv_tt_purge
RIP: 0010:lockdep_recursion_finish kernel/locking/lockdep.c:469 [inline]
RIP: 0010:lock_is_held_type+0xec/0x150 kernel/locking/lockdep.c:5924
Code: f6 43 22 03 0f 95 c0 45 31 ed 44 39 f0 41 0f 94 c5 48 c7 c7 a0 ef 6c 8b e8 91 16 00 00 b8 ff ff ff ff 65 0f c1 05 3c 75 ab 74 <83> f8 01 75 2d 9c 58 f6 c4 02 75 43 48 f7 04 24 00 02 00 00 74 01
RSP: 0000:ffffc90000a184a0 EFLAGS: 00000057
RAX: 0000000000000001 RBX: ffff888028242f80 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8b6cefa0 RDI: ffffffff8bd35480
RBP: ffffffff8e1bcc80 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000003 R12: ffff888028242440
R13: 0000000000000001 R14: 00000000ffffffff R15: 0000000000000002
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f8c65a760da CR3: 00000000277aa000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 __find_rr_leaf+0x36d/0xe00 net/ipv6/route.c:800
 find_rr_leaf net/ipv6/route.c:856 [inline]
 rt6_select net/ipv6/route.c:900 [inline]
 fib6_table_lookup+0x57e/0xa30 net/ipv6/route.c:2195
 ip6_pol_route+0x1cd/0x1120 net/ipv6/route.c:2231
 pol_lookup_func include/net/ip6_fib.h:616 [inline]
 fib6_rule_lookup+0x536/0x720 net/ipv6/fib6_rules.c:119
 ip6_route_input_lookup net/ipv6/route.c:2300 [inline]
 ip6_route_input+0x663/0xc10 net/ipv6/route.c:2596
 ip6_rcv_finish_core.constprop.0+0x1a0/0x5d0 net/ipv6/ip6_input.c:66
 ip6_rcv_finish net/ipv6/ip6_input.c:77 [inline]
 NF_HOOK include/linux/netfilter.h:314 [inline]
 NF_HOOK include/linux/netfilter.h:308 [inline]
 ipv6_rcv+0x1e4/0x680 net/ipv6/ip6_input.c:309
 __netif_receive_skb_one_core+0x12e/0x1e0 net/core/dev.c:5828
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:5941
 process_backlog+0x443/0x15f0 net/core/dev.c:6289
 __napi_poll.constprop.0+0xb7/0x550 net/core/dev.c:7106
 napi_poll net/core/dev.c:7175 [inline]
 net_rx_action+0xa94/0x1010 net/core/dev.c:7297
 handle_softirqs+0x213/0x8f0 kernel/softirq.c:561
 do_softirq kernel/softirq.c:462 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:449
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:389
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_tt_global_purge net/batman-adv/translation-table.c:2250 [inline]
 batadv_tt_purge+0x251/0xb90 net/batman-adv/translation-table.c:3510
 process_one_work+0x9c5/0x1ba0 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3317 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3398
 kthread+0x3af/0x750 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
net_ratelimit: 22683 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:b6:3c:33:e4:9d:77, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:b6:3c:33:e4:9d:77, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)