INFO: task kworker/1:4:15551 blocked for more than 143 seconds.
Not tainted 6.0.0-rc6-syzkaller-00281-g1707c39ae309 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:4 state:D stack:26624 pid:15551 ppid: 2 flags:0x00004000
Workqueue: events linkwatch_event
Call Trace:
context_switch kernel/sched/core.c:5182 [inline]
__schedule+0xadf/0x52b0 kernel/sched/core.c:6494
schedule+0xda/0x1b0 kernel/sched/core.c:6570
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6629
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0xa44/0x1350 kernel/locking/mutex.c:747
linkwatch_event+0xb/0x60 net/core/link_watch.c:263
process_one_work+0x991/0x1610 kernel/workqueue.c:2289
worker_thread+0x665/0x1080 kernel/workqueue.c:2436
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
INFO: task kworker/1:8:15553 blocked for more than 143 seconds.
Not tainted 6.0.0-rc6-syzkaller-00281-g1707c39ae309 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:8 state:D stack:27160 pid:15553 ppid: 2 flags:0x00004000
Workqueue: events switchdev_deferred_process_work
Call Trace:
context_switch kernel/sched/core.c:5182 [inline]
__schedule+0xadf/0x52b0 kernel/sched/core.c:6494
schedule+0xda/0x1b0 kernel/sched/core.c:6570
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6629
__mutex_lock_common kernel/locking/mutex.c:679 [inline]
__mutex_lock+0xa44/0x1350 kernel/locking/mutex.c:747
switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
process_one_work+0x991/0x1610 kernel/workqueue.c:2289
worker_thread+0x665/0x1080 kernel/workqueue.c:2436
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/13:
#0: ffffffff8bf888b0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/14:
#0: ffffffff8bf885b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/29:
#0: ffffffff8bf89400 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6492
2 locks held by kworker/u4:3/47:
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc90000b87da8 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
5 locks held by kworker/u4:5/2430:
#0: ffff8880119c6138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff8880119c6138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff8880119c6138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff8880119c6138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff8880119c6138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff8880119c6138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc9000a26fda8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
#2: ffffffff8d79d950 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:557
#3: ffffffff8d7b1328 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0x8e/0x590 net/core/dev.c:11342
#4: ffffffff8bf940b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
#4: ffffffff8bf940b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x562/0x670 kernel/rcu/tree_exp.h:940
3 locks held by kworker/1:2/2475:
2 locks held by getty/3288:
#0: ffff888025ee4098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
#1: ffffc90002d232f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xef0/0x13e0 drivers/tty/n_tty.c:2177
3 locks held by kworker/0:5/3668:
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc90003d1fda8 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
#2: ffffffff8d7b1328 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xe/0x20 net/ipv6/addrconf.c:4624
6 locks held by kworker/0:2/15496:
3 locks held by kworker/1:4/15551:
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc90002eafda8 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
#2: ffffffff8d7b1328 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xb/0x60 net/core/link_watch.c:263
3 locks held by kworker/1:5/15552:
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff8880257e8538 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc90002fa7da8 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
#2: ffffffff8d7b1328 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xe/0x20 net/ipv6/addrconf.c:4624
3 locks held by kworker/1:8/15553:
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff888011864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc90002fb7da8 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
#2: ffffffff8d7b1328 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
2 locks held by kworker/u4:2/15761:
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
#0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
#1: ffffc9000302fda8 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
=============================================
NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.0.0-rc6-syzkaller-00281-g1707c39ae309 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
nmi_cpu_backtrace.cold+0x46/0x14f lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x206/0x250 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
watchdog+0xc18/0xf50 kernel/hung_task.c:369
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 9 Comm: kworker/u4:0 Not tainted 6.0.0-rc6-syzkaller-00281-g1707c39ae309 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Workqueue: bat_events batadv_nc_worker
RIP: 0010:separate_irq_context kernel/locking/lockdep.c:4582 [inline]
RIP: 0010:__lock_acquire+0x9e7/0x56d0 kernel/locking/lockdep.c:5037
Code: b6 6d 21 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 0f b6 04 02 48 89 fa 83 e2 07 38 d0 7f 08 84 c0 0f 85 56 4b 00 00 <41> 32 6c 24 21 83 e5 60 0f 85 24 0c 00 00 48 89 d8 89 5c 24 58 48
RSP: 0018:ffffc900000e7a30 EFLAGS: 00000046
RAX: 0000000000000000 RBX: bc60d19d4e3f7947 RCX: ffffffff815f2256
RDX: 0000000000000001 RSI: ffff888011a98a78 RDI: ffff888011a98ae9
RBP: 0000000000000007 R08: 0000000000000000 R09: ffffffff908e5947
R10: fffffbfff211cb28 R11: 0000000000000000 R12: ffff888011a98ac8
R13: ffff888011a98000 R14: ffff888011a98a78 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005650dc429fa8 CR3: 000000007f0f6000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
lock_acquire kernel/locking/lockdep.c:5666 [inline]
lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
rcu_lock_acquire include/linux/rcupdate.h:280 [inline]
rcu_read_lock include/linux/rcupdate.h:706 [inline]
batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
batadv_nc_worker+0x12d/0xfa0 net/batman-adv/network-coding.c:719
process_one_work+0x991/0x1610 kernel/workqueue.c:2289
worker_thread+0x665/0x1080 kernel/workqueue.c:2436
kthread+0x2e4/0x3a0 kernel/kthread.c:376
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306