syzbot


INFO: task hung in cangw_pernet_exit_batch (2)

Status: auto-obsoleted due to no activity on 2023/10/19 02:51
Subsystems: can
First crash: 769d, last: 512d
Similar bugs (4)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in cangw_pernet_exit_batch (3) [can] | | | | 33 | 158d | 173d | 26/28 | fixed on 2024/07/09 19:14
upstream | INFO: task hung in cangw_pernet_exit_batch [can] | | | | 11 | 887d | 899d | 0/28 | auto-obsoleted due to no activity on 2022/10/09 07:17
linux-6.1 | INFO: task hung in cangw_pernet_exit_batch (2) | | | | 18 | 178d | 206d | 0/3 | auto-obsoleted due to no activity on 2024/08/27 12:37
linux-6.1 | INFO: task hung in cangw_pernet_exit_batch | | | | 2 | 557d | 577d | 0/3 | auto-obsoleted due to no activity on 2023/09/13 14:11

Sample crash report:
INFO: task kworker/u4:1:11 blocked for more than 143 seconds.
      Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:1    state:D stack:23208 pid:11    ppid:2      flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6710
 schedule+0xde/0x1a0 kernel/sched/core.c:6786
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6845
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa3b/0x1350 kernel/locking/mutex.c:747
 cangw_pernet_exit_batch+0x15/0xa0 net/can/gw.c:1257
 ops_exit_list+0x125/0x170 net/core/net_namespace.c:175
 cleanup_net+0x4ee/0xb10 net/core/net_namespace.c:614
 process_one_work+0xa34/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2748
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
INFO: task kworker/1:0:11345 blocked for more than 143 seconds.
      Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0     state:D stack:23992 pid:11345 ppid:2      flags:0x00004000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6710
 schedule+0xde/0x1a0 kernel/sched/core.c:6786
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6845
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa3b/0x1350 kernel/locking/mutex.c:747
 addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4630
 process_one_work+0xa34/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2748
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
INFO: task kworker/0:16:16440 blocked for more than 144 seconds.
      Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:16    state:D stack:23376 pid:16440 ppid:2      flags:0x00004000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6710
 schedule+0xde/0x1a0 kernel/sched/core.c:6786
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6845
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa3b/0x1350 kernel/locking/mutex.c:747
 addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4630
 process_one_work+0xa34/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2748
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
INFO: task syz-executor.0:22008 blocked for more than 144 seconds.
      Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.0  state:D stack:27680 pid:22008 ppid:20471  flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6710
 schedule+0xde/0x1a0 kernel/sched/core.c:6786
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6845
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa3b/0x1350 kernel/locking/mutex.c:747
 tun_detach drivers/net/tun.c:697 [inline]
 tun_chr_close+0x3e/0x240 drivers/net/tun.c:3491
 __fput+0x40c/0xad0 fs/file_table.c:384
 task_work_run+0x16f/0x270 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
 syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:297
 do_syscall_64+0x46/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f61ab23e12b
RSP: 002b:00007ffc82431060 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000004 RCX: 00007f61ab23e12b
RDX: 0000001b32720000 RSI: 00007f61aa84ecb8 RDI: 0000000000000003
RBP: 00007f61ab3ad980 R08: 0000000000000000 R09: 000000008a1ffa4d
R10: 00007ffc825e6090 R11: 0000000000000293 R12: 00000000001e3c66
R13: 00007ffc82431160 R14: 00007ffc82431180 R15: 0000000000000032
 </TASK>
INFO: task syz-executor.4:22009 blocked for more than 145 seconds.
      Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4  state:D stack:27680 pid:22009 ppid:11283  flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6710
 schedule+0xde/0x1a0 kernel/sched/core.c:6786
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6845
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa3b/0x1350 kernel/locking/mutex.c:747
 tun_detach drivers/net/tun.c:697 [inline]
 tun_chr_close+0x3e/0x240 drivers/net/tun.c:3491
 __fput+0x40c/0xad0 fs/file_table.c:384
 task_work_run+0x16f/0x270 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
 syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:297
 do_syscall_64+0x46/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f9d7d83e12b
RSP: 002b:00007fffe10fe7e0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000004 RCX: 00007f9d7d83e12b
RDX: 0000001b2cb20000 RSI: 00007f9d7ce33c20 RDI: 0000000000000003
RBP: 00007f9d7d9ad980 R08: 0000000000000000 R09: 000000008a1ffa4d
R10: 00007fffe1112090 R11: 0000000000000293 R12: 00000000001e3cf4
R13: 00007fffe10fe8e0 R14: 00007f9d7d9ac1f0 R15: 0000000000000032
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/u4:1/11:
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:20 [inline]
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: raw_atomic64_set include/linux/atomic/atomic-arch-fallback.h:2608 [inline]
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: raw_atomic_long_set include/linux/atomic/atomic-long.h:79 [inline]
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:3196 [inline]
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:675 [inline]
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:702 [inline]
 #0: ffff888017a51138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x8fd/0x16f0 kernel/workqueue.c:2567
 #1: ffffc9000031fdb0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x930/0x16f0 kernel/workqueue.c:2571
 #2: ffffffff8e3ab410 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9f/0xb10 net/core/net_namespace.c:576
 #3: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: cangw_pernet_exit_batch+0x15/0xa0 net/can/gw.c:1257
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8c9a0ab0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xd80 kernel/rcu/tasks.h:522
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8c9a07b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xd80 kernel/rcu/tasks.h:522
1 lock held by khungtaskd/27:
 #0: ffffffff8c9a16c0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x340 kernel/locking/lockdep.c:6615
4 locks held by kworker/1:2/3663:
4 locks held by kworker/1:3/4748:
2 locks held by getty/4758:
 #0: ffff88802b0ec098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x26/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900020382f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xf08/0x13f0 drivers/tty/n_tty.c:2187
4 locks held by kworker/1:6/5113:
4 locks held by kworker/1:8/5173:
4 locks held by kworker/1:9/5176:
4 locks held by kworker/1:12/10388:
4 locks held by kworker/u5:1/11180:
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:20 [inline]
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: raw_atomic64_set include/linux/atomic/atomic-arch-fallback.h:2608 [inline]
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: raw_atomic_long_set include/linux/atomic/atomic-long.h:79 [inline]
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:3196 [inline]
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:675 [inline]
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:702 [inline]
 #0: ffff888093dc5138 ((wq_completion)hci22#2){+.+.}-{0:0}, at: process_one_work+0x8fd/0x16f0 kernel/workqueue.c:2567
 #1: ffffc9000371fdb0 ((work_completion)(&hdev->rx_work)){+.+.}-{0:0}, at: process_one_work+0x930/0x16f0 kernel/workqueue.c:2571
 #2: ffff888090b68078 (&hdev->lock){+.+.}-{3:3}, at: hci_remote_features_evt+0x95/0xa50 net/bluetooth/hci_event.c:3720
 #3: ffffffff8e617708 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:1818 [inline]
 #3: ffffffff8e617708 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_remote_features_evt+0x4d8/0xa50 net/bluetooth/hci_event.c:3753
3 locks held by kworker/1:0/11345:
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:20 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: raw_atomic64_set include/linux/atomic/atomic-arch-fallback.h:2608 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: raw_atomic_long_set include/linux/atomic/atomic-long.h:79 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:3196 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:675 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:702 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x8fd/0x16f0 kernel/workqueue.c:2567
 #1: ffffc9000526fdb0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x930/0x16f0 kernel/workqueue.c:2571
 #2: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4630
8 locks held by kworker/1:1/11725:
4 locks held by kworker/1:4/11758:
2 locks held by kworker/u4:16/12430:
4 locks held by kworker/1:10/12589:
4 locks held by kworker/1:13/12600:
4 locks held by kworker/1:15/13278:
2 locks held by kworker/0:14/13471:
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:20 [inline]
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: raw_atomic64_set include/linux/atomic/atomic-arch-fallback.h:2608 [inline]
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: raw_atomic_long_set include/linux/atomic/atomic-long.h:79 [inline]
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:3196 [inline]
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:675 [inline]
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:702 [inline]
 #0: ffff888012869d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x8fd/0x16f0 kernel/workqueue.c:2567
 #1: ffffc9000515fdb0 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x930/0x16f0 kernel/workqueue.c:2571
4 locks held by kworker/1:7/16281:
3 locks held by kworker/0:16/16440:
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:20 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: raw_atomic64_set include/linux/atomic/atomic-arch-fallback.h:2608 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: raw_atomic_long_set include/linux/atomic/atomic-long.h:79 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:3196 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:675 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:702 [inline]
 #0: ffff888028bee138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x8fd/0x16f0 kernel/workqueue.c:2567
 #1: ffffc9000545fdb0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x930/0x16f0 kernel/workqueue.c:2571
 #2: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4630
4 locks held by kworker/1:16/17851:
4 locks held by kworker/1:18/19525:
4 locks held by kworker/u5:0/20469:
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:20 [inline]
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: raw_atomic64_set include/linux/atomic/atomic-arch-fallback.h:2608 [inline]
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: raw_atomic_long_set include/linux/atomic/atomic-long.h:79 [inline]
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:3196 [inline]
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:675 [inline]
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:702 [inline]
 #0: ffff888090733938 ((wq_completion)hci21#2){+.+.}-{0:0}, at: process_one_work+0x8fd/0x16f0 kernel/workqueue.c:2567
 #1: ffffc900039bfdb0 ((work_completion)(&hdev->rx_work)){+.+.}-{0:0}, at: process_one_work+0x930/0x16f0 kernel/workqueue.c:2571
 #2: ffff888090550078 (&hdev->lock){+.+.}-{3:3}, at: hci_remote_features_evt+0x95/0xa50 net/bluetooth/hci_event.c:3720
 #3: ffffffff8e617708 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:1818 [inline]
 #3: ffffffff8e617708 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_remote_features_evt+0x4d8/0xa50 net/bluetooth/hci_event.c:3753
3 locks held by syz-executor.3/22005:
 #0: ffffffff8e3ab410 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x311/0x6c0 net/core/net_namespace.c:487
 #1: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0x92/0x5b0 net/core/dev.c:11345
 #2: ffffffff8c9acb78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:325 [inline]
 #2: ffffffff8c9acb78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3e8/0x770 kernel/rcu/tree_exp.h:992
1 lock held by syz-executor.0/22008:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:697 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3e/0x240 drivers/net/tun.c:3491
1 lock held by syz-executor.4/22009:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:697 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3e/0x240 drivers/net/tun.c:3491
3 locks held by syz-executor.1/22029:
 #0: ffff8880961bd0b8 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close+0x29/0x70 net/bluetooth/hci_core.c:552
 #1: ffff8880961bc078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x306/0x1200 net/bluetooth/hci_sync.c:4939
 #2: ffffffff8e617708 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1833 [inline]
 #2: ffffffff8e617708 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xc4/0x230 net/bluetooth/hci_conn.c:2488
1 lock held by dhcpcd/22035:
 #0: ffff88803a676850 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:771 [inline]
 #0: ffff88803a676850 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release+0x86/0x290 net/socket.c:653
1 lock held by dhcpcd/22036:
 #0: ffff888031432b10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:771 [inline]
 #0: ffff888031432b10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release+0x86/0x290 net/socket.c:653
1 lock held by dhcpcd/22037:
 #0: ffff88803dd99810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:771 [inline]
 #0: ffff88803dd99810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release+0x86/0x290 net/socket.c:653
2 locks held by dhcpcd/22038:
 #0: ffff888031707290 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:771 [inline]
 #0: ffff888031707290 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release+0x86/0x290 net/socket.c:653
 #1: ffffffff8c9acb78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:325 [inline]
 #1: ffffffff8c9acb78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3e8/0x770 kernel/rcu/tree_exp.h:992
1 lock held by dhcpcd/22041:
 #0: ffff88803a599910 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:771 [inline]
 #0: ffff88803a599910 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release+0x86/0x290 net/socket.c:653
1 lock held by syz-executor.0/22051:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421
1 lock held by syz-executor.4/22053:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421
1 lock held by syz-executor.1/22058:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421
4 locks held by kworker/1:21/22066:
1 lock held by dhcpcd/22067:
 #0: ffff888035bb64d0 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:771 [inline]
 #0: ffff888035bb64d0 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release+0x86/0x290 net/socket.c:653
1 lock held by syz-executor.0/22070:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421
1 lock held by syz-executor.4/22075:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421
1 lock held by syz-executor.1/22078:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421
1 lock held by syz-executor.3/22081:
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:78 [inline]
 #0: ffffffff8e3bf6e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e8/0xd50 net/core/rtnetlink.c:6421

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x29c/0x350 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x2a4/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xe16/0x1090 kernel/hung_task.c:379
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 4748 Comm: kworker/1:3 Not tainted 6.4.0-syzkaller-11479-g6cd06ab12d1a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Workqueue: events cfg80211_wiphy_work
RIP: 0010:on_stack arch/x86/include/asm/stacktrace.h:58 [inline]
RIP: 0010:stack_access_ok+0x0/0x1d0 arch/x86/kernel/unwind_orc.c:393
Code: c3 e8 a4 4b 9d 00 eb a8 48 89 ef e8 ba 4b 9d 00 eb c4 48 89 ef e8 b0 4b 9d 00 eb de 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 <48> b8 00 00 00 00 00 fc ff df 41 56 41 55 41 54 49 89 d4 48 89 fa
RSP: 0018:ffffc900003e73e8 EFLAGS: 00000083
RAX: ffffffffffffffd0 RBX: 0000000000000002 RCX: ffffffff8fb6aeea
RDX: 0000000000000008 RSI: ffffc900003e7c10 RDI: ffffc900003e7460
RBP: ffffc900003e74a8 R08: 0000000000000001 R09: ffffc900003e7c38
R10: ffffc900003e7460 R11: 0000000000096001 R12: ffffc900003e74b0
R13: ffffc900003e7460 R14: ffffc900003e7c10 R15: ffffc900003e7494
FS:  0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1ee9093866 CR3: 0000000083ecb000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 deref_stack_reg arch/x86/kernel/unwind_orc.c:403 [inline]
 unwind_next_frame+0x153e/0x1f70 arch/x86/kernel/unwind_orc.c:648
 arch_stack_walk+0x81/0xf0 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x90/0xc0 kernel/stacktrace.c:122
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 kasan_save_free_info+0x28/0x40 mm/kasan/generic.c:521
 ____kasan_slab_free mm/kasan/common.c:236 [inline]
 ____kasan_slab_free+0x13b/0x1a0 mm/kasan/common.c:200
 kasan_slab_free include/linux/kasan.h:162 [inline]
 __cache_free mm/slab.c:3370 [inline]
 __do_kmem_cache_free mm/slab.c:3557 [inline]
 kmem_cache_free mm/slab.c:3582 [inline]
 kmem_cache_free+0x105/0x370 mm/slab.c:3575
 skb_kfree_head net/core/skbuff.c:892 [inline]
 skb_kfree_head net/core/skbuff.c:889 [inline]
 skb_free_head+0x17f/0x1b0 net/core/skbuff.c:906
 skb_release_data+0x5a4/0x840 net/core/skbuff.c:936
 skb_release_all net/core/skbuff.c:1002 [inline]
 __kfree_skb net/core/skbuff.c:1016 [inline]
 kfree_skb_reason+0x179/0x3c0 net/core/skbuff.c:1052
 kfree_skb include/linux/skbuff.h:1237 [inline]
 ip_tunnel_xmit+0x6f3/0x3170 net/ipv4/ip_tunnel.c:841
 gre_tap_xmit+0x4f7/0x620 net/ipv4/ip_gre.c:743
 __netdev_start_xmit include/linux/netdevice.h:4910 [inline]
 netdev_start_xmit include/linux/netdevice.h:4924 [inline]
 xmit_one net/core/dev.c:3537 [inline]
 dev_hard_start_xmit+0x187/0x700 net/core/dev.c:3553
 sch_direct_xmit+0x1a3/0xc30 net/sched/sch_generic.c:342
 __dev_xmit_skb net/core/dev.c:3764 [inline]
 __dev_queue_xmit+0x14d6/0x3b10 net/core/dev.c:4169
 dev_queue_xmit include/linux/netdevice.h:3088 [inline]
 br_dev_queue_push_xmit+0x26e/0x7b0 net/bridge/br_forward.c:53
 br_nf_dev_queue_xmit+0x5f9/0x1e80 net/bridge/br_netfilter_hooks.c:810
 NF_HOOK include/linux/netfilter.h:303 [inline]
 NF_HOOK include/linux/netfilter.h:297 [inline]
 br_nf_post_routing+0x9f8/0x1200 net/bridge/br_netfilter_hooks.c:856
 nf_hook_entry_hookfn include/linux/netfilter.h:143 [inline]
 nf_hook_slow+0xc9/0x1f0 net/netfilter/core.c:626
 nf_hook+0x431/0x730 include/linux/netfilter.h:258
 NF_HOOK include/linux/netfilter.h:301 [inline]
 br_forward_finish+0xd8/0x130 net/bridge/br_forward.c:66
 br_nf_hook_thresh+0x2fb/0x3f0 net/bridge/br_netfilter_hooks.c:1048
 br_nf_forward_finish+0x6df/0xa30 net/bridge/br_netfilter_hooks.c:567
 NF_HOOK include/linux/netfilter.h:303 [inline]
 NF_HOOK include/linux/netfilter.h:297 [inline]
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:637 [inline]
 br_nf_forward_ip+0xb83/0x13c0 net/bridge/br_netfilter_hooks.c:578
 nf_hook_entry_hookfn include/linux/netfilter.h:143 [inline]
 nf_hook_slow+0xc9/0x1f0 net/netfilter/core.c:626
 nf_hook+0x431/0x730 include/linux/netfilter.h:258
 NF_HOOK include/linux/netfilter.h:301 [inline]
 __br_forward+0x19a/0x570 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 maybe_deliver+0x350/0x450 net/bridge/br_forward.c:189
 br_flood+0x173/0x630 net/bridge/br_forward.c:235
 br_handle_frame_finish+0xf89/0x1de0 net/bridge/br_input.c:210
 br_nf_hook_thresh+0x2fb/0x3f0 net/bridge/br_netfilter_hooks.c:1048
 br_nf_pre_routing_finish_ipv6+0x695/0xf30 net/bridge/br_netfilter_ipv6.c:148
 NF_HOOK include/linux/netfilter.h:303 [inline]
 br_nf_pre_routing_ipv6+0x41b/0x830 net/bridge/br_netfilter_ipv6.c:178
 br_nf_pre_routing+0xda4/0x1520 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:143 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:272 [inline]
 br_handle_frame+0xac1/0x1440 net/bridge/br_input.c:417
 __netif_receive_skb_core+0xa10/0x3900 net/core/dev.c:5346
 __netif_receive_skb_one_core+0xae/0x180 net/core/dev.c:5450
 __netif_receive_skb+0x1f/0x1c0 net/core/dev.c:5566
 process_backlog+0x101/0x670 net/core/dev.c:5894
 __napi_poll+0xb7/0x6f0 net/core/dev.c:6460
 napi_poll net/core/dev.c:6527 [inline]
 net_rx_action+0x8a9/0xcb0 net/core/dev.c:6660
 __do_softirq+0x1d4/0x905 kernel/softirq.c:553
 do_softirq.part.0+0x87/0xc0 kernel/softirq.c:454
 </IRQ>
 <TASK>
 do_softirq kernel/softirq.c:446 [inline]
 __local_bh_enable_ip+0x106/0x130 kernel/softirq.c:381
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 cfg80211_inform_single_bss_frame_data+0x7a5/0xfe0 net/wireless/scan.c:2887
 cfg80211_inform_bss_frame_data+0xc2/0x290 net/wireless/scan.c:2912
 ieee80211_bss_info_update+0x371/0x9b0 net/mac80211/scan.c:211
 ieee80211_rx_bss_info net/mac80211/ibss.c:1124 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1613 [inline]
 ieee80211_ibss_rx_queued_mgmt+0x1a1d/0x3080 net/mac80211/ibss.c:1642
 ieee80211_iface_process_skb net/mac80211/iface.c:1604 [inline]
 ieee80211_iface_work+0xa4a/0xd70 net/mac80211/iface.c:1658
 cfg80211_wiphy_work+0x253/0x330 net/wireless/core.c:435
 process_one_work+0xa34/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2748
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
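Editor's note: read together, the traces above describe a lock convoy on rtnl_mutex rather than anything CAN-specific. The cleanup_net worker blocks on the first line of cangw_pernet_exit_batch() (rtnl_lock()), alongside the addrconf workers and the tun close paths, while the apparent holder, syz-executor.3/22005, took rtnl_mutex in default_device_exit_batch() and is parked in synchronize_rcu_expedited(). A minimal userspace model of that pattern follows; it is an illustrative Python threading sketch whose names merely mirror the kernel symbols, not kernel code:

```python
import threading
import time

rtnl_mutex = threading.Lock()
rcu_stall = threading.Event()
result = {}

def default_device_exit_batch():
    # Models syz-executor.3/22005: owns rtnl_mutex, then stalls in what
    # stands in for synchronize_rcu_expedited().
    with rtnl_mutex:
        rcu_stall.wait()

def cangw_pernet_exit_batch():
    # Models the cleanup_net worker: its first action is rtnl_lock(),
    # which is exactly where the hung-task watchdog catches it.
    hung = not rtnl_mutex.acquire(timeout=0.2)  # stand-in for the 143 s watchdog
    result["hung"] = hung
    if not hung:
        rtnl_mutex.release()

holder = threading.Thread(target=default_device_exit_batch)
waiter = threading.Thread(target=cangw_pernet_exit_batch)
holder.start()
time.sleep(0.05)   # let the holder take rtnl_mutex first
waiter.start()
waiter.join()
rcu_stall.set()    # end the simulated RCU stall so the holder can exit
holder.join()
print(result)      # {'hung': True}
```

In the model, as in the report, the "hung" side is not the bug: it merely inherits the stall of whatever is keeping rtnl_mutex held.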

Crashes (16):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/07/05 23:18 | upstream | 6cd06ab12d1a | ba5dba36 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | INFO: task hung in cangw_pernet_exit_batch
2023/05/19 22:53 | upstream | cbd6ac3837cd | 96689200 | .config | console log | report | | | info | | ci-upstream-kasan-gce-root | INFO: task hung in cangw_pernet_exit_batch
2023/07/21 02:40 | upstream | 57f1f9dd3abe | 28847498 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-386 | INFO: task hung in cangw_pernet_exit_batch
2023/07/06 15:19 | upstream | c17414a273b8 | 1a2f6297 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-386 | INFO: task hung in cangw_pernet_exit_batch
2023/06/29 03:09 | net | 3674fbf0451d | 8064cb02 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2023/06/25 06:45 | net | eb441289f940 | 09ffe269 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2023/01/15 07:51 | net-old | a22b7388d658 | a63719e7 | .config | console log | report | | | info | | ci-upstream-net-this-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2023/04/13 20:51 | net-next | f2b3b6a22df7 | 3cfcaa1b | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2022/11/22 19:38 | net-next-old | 339e79dfb087 | 9da37ae8 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2022/11/12 14:21 | net-next-old | b548b17a93fd | 3ead01ad | .config | console log | report | | | info | | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2022/11/11 12:09 | net-next-old | c1b05105573b | 3ead01ad | .config | console log | report | | | info | | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2022/11/11 05:44 | net-next-old | c1b05105573b | 3ead01ad | .config | console log | report | | | info | | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2022/11/10 16:24 | net-next-old | 0c9ef08a4d0f | 3ead01ad | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2023/07/02 21:37 | linux-next | 6352a698ca5b | bfc47836 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in cangw_pernet_exit_batch
2022/11/11 04:26 | linux-next | 0cdb3579f1ee | 3ead01ad | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in cangw_pernet_exit_batch
2022/11/05 08:47 | linux-next | 0cdb3579f1ee | 6d752409 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in cangw_pernet_exit_batch
* Struck through repros no longer work on HEAD.
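Editor's note: the "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" line in the report refers to the hung-task watchdog that produced it. When triaging such reports locally, the watchdog can be tuned persistently through sysctl; the values below are illustrative examples, not syzbot's configuration:

```
# /etc/sysctl.d/90-hung-task.conf (example values, not syzbot's settings)
# Seconds a task may stay in uninterruptible (D) sleep before a report; 0 disables.
kernel.hung_task_timeout_secs = 140
# Set to 1 to panic when a hung task is detected (useful for capturing a crash dump).
kernel.hung_task_panic = 0
```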