syzbot


INFO: task hung in rxrpc_release

Status: auto-obsoleted due to no activity on 2023/08/19 19:10
Reported-by: syzbot+e75907a2c5b1e5584b98@syzkaller.appspotmail.com
First crash: 579d, last: 579d
Similar bugs (7)
Kernel     | Title                                          | Repro            | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-4.19 | INFO: task hung in rxrpc_release (3)           | -                | -            | -          | 3     | 639d  | 765d     | 0/1     | upstream: reported on 2022/10/17 23:49
linux-4.19 | INFO: task hung in rxrpc_release (2)           | -                | -            | -          | 3     | 1071d | 1096d    | 0/1     | auto-closed as invalid on 2022/04/15 12:04
linux-5.15 | INFO: task hung in rxrpc_release               | -                | -            | -          | 1     | 598d  | 598d     | 0/3     | auto-obsoleted due to no activity on 2023/07/31 18:39
upstream   | INFO: task hung in rxrpc_release (2) [net afs] | -                | -            | -          | 1     | 1623d | 1623d    | 0/28    | auto-closed as invalid on 2020/09/09 16:08
upstream   | INFO: task hung in rxrpc_release (3) [afs net] | syz (unreliable) | -            | -          | 7     | 290d  | 291d     | 0/28    | auto-obsoleted due to no activity on 2024/05/01 12:51
upstream   | INFO: task hung in rxrpc_release [afs net]     | -                | -            | -          | 1     | 1755d | 1755d    | 0/28    | auto-closed as invalid on 2020/05/30 15:47
linux-4.19 | INFO: task hung in rxrpc_release               | -                | -            | -          | 1     | 1780d | 1780d    | 0/1     | auto-closed as invalid on 2020/05/05 21:00

Sample crash report:
INFO: task kworker/u4:0:9 blocked for more than 143 seconds.
      Not tainted 6.1.25-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:0    state:D stack:25248 pid:9     ppid:2      flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x132c/0x4330 kernel/sched/core.c:6554
 schedule+0xbf/0x180 kernel/sched/core.c:6630
 schedule_timeout+0xac/0x300 kernel/time/timer.c:1911
 do_wait_for_common+0x441/0x5e0 kernel/sched/completion.c:85
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x46/0x60 kernel/sched/completion.c:138
 __flush_workqueue+0x737/0x1610 kernel/workqueue.c:2861
 rxrpc_release_sock net/rxrpc/af_rxrpc.c:887 [inline]
 rxrpc_release+0x274/0x430 net/rxrpc/af_rxrpc.c:917
 __sock_release net/socket.c:652 [inline]
 sock_release+0x7a/0x140 net/socket.c:680
 afs_close_socket+0x284/0x310 fs/afs/rxrpc.c:125
 afs_net_exit+0x58/0xa0 fs/afs/main.c:158
 ops_exit_list net/core/net_namespace.c:169 [inline]
 cleanup_net+0x6ce/0xb60 net/core/net_namespace.c:601
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x268/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
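
The trace above shows where the hang sits: netns cleanup calls afs_net_exit() -> rxrpc_release(), and rxrpc_release_sock() then flushes the rxrpc workqueue, leaving the kworker parked in wait_for_completion() until every queued work item ahead of the flush barrier has run. If one of those items never finishes, the flusher sleeps in D-state indefinitely and khungtaskd eventually reports it, as seen here. The userspace C sketch below reproduces only that blocking shape; all names are illustrative analogues, not the real net/rxrpc implementation.

/*
 * Minimal analogue of the hang: a "flush" that blocks on a
 * completion which a stuck work item never signals. Running this
 * program blocks forever in flush_workqueue_like(), mirroring the
 * kworker stuck in __flush_workqueue() in the report.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cv = PTHREAD_COND_INITIALIZER;
static bool work_done = false;          /* the "completion" */

/* A work item that never completes, e.g. one waiting on an event
 * that the socket being released will never deliver. */
static void *stuck_work(void *arg)
{
    (void)arg;
    for (;;)
        pause();                        /* never signals work_done */
    return NULL;
}

/* Analogue of rxrpc_release_sock() -> flush_workqueue(). */
static void flush_workqueue_like(void)
{
    pthread_mutex_lock(&lock);
    while (!work_done)                  /* wait_for_completion() */
        pthread_cond_wait(&done_cv, &lock);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    pthread_t worker;

    pthread_create(&worker, NULL, stuck_work, NULL);
    fprintf(stderr, "flushing...\n");
    flush_workqueue_like();             /* blocks forever, like the kworker */
    return 0;
}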

Showing all locks held in the system:
3 locks held by kworker/u4:0/9:
 #0: ffff888012606938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc900000e7d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8e07da50 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:563
2 locks held by kworker/u4:1/11:
 #0: ffff888012469138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc90000107d20 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cf26870 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cf27070 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
 #0: ffffffff8cf266a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/3308:
 #0: ffff888028a02098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
3 locks held by kworker/0:5/3709:
2 locks held by kworker/u4:7/3798:
 #0: ffff888012469138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc900049ffd20 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
3 locks held by kworker/u4:8/6847:
3 locks held by kworker/1:9/10832:
1 lock held by dhcpcd/13051:
 #0: ffff88805133b810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #0: ffff88805133b810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:651 [inline]
 #0: ffff88805133b810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1370
1 lock held by dhcpcd/13107:
 #0: ffff888048003810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #0: ffff888048003810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:651 [inline]
 #0: ffff888048003810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1370
1 lock held by dhcpcd/13139:
 #0: ffff8880513e0e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #0: ffff8880513e0e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:651 [inline]
 #0: ffff8880513e0e10 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1370
2 locks held by dhcpcd/13320:
 #0: ffff88805105e810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #0: ffff88805105e810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: __sock_release net/socket.c:651 [inline]
 #0: ffff88805105e810 (&sb->s_type->i_mutex_key#10){+.+.}-{3:3}, at: sock_close+0x98/0x230 net/socket.c:1370
 #1: ffffffff8cf2bc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8cf2bc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x479/0x8a0 kernel/rcu/tree_exp.h:948
1 lock held by syz-executor.4/19894:
3 locks held by syz-executor.0/19899:
 #0: ffff8880b9939dd8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
 #1: ffff8880b9927788 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x43d/0x770 kernel/sched/psi.c:952
 #2: ffff8880b9927788 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_change+0xf9/0x270 kernel/sched/psi.c:876
1 lock held by syz-executor.0/19910:
2 locks held by syz-executor.1/19903:
 #0: ffff8880b9839dd8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:537
 #1: ffff8880b9827788 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x43d/0x770 kernel/sched/psi.c:952
2 locks held by syz-executor.3/19909:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.25-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf18/0xf60 kernel/hung_task.c:377
 kthread+0x268/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 3658 Comm: syz-executor.2 Not tainted 6.1.25-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/30/2023
RIP: 0010:find_stack lib/stackdepot.c:305 [inline]
RIP: 0010:__stack_depot_save+0x172/0x470 lib/stackdepot.c:452
Code: 7f 0d 8b 1d d4 2f 7f 0d 44 21 eb 48 89 44 24 10 4c 8b 34 d8 4c 89 c5 41 89 ec eb 03 4d 8b 36 4d 85 f6 74 2a 45 39 6e 08 75 f2 <41> 39 6e 0c 75 ec 31 c0 49 8b 0c c7 49 3b 4c c6 18 75 df 48 ff c0
RSP: 0000:ffffc900040feef8 EFLAGS: 00000246
RAX: ffff88823b400000 RBX: 00000000000938d4 RCX: 00000000e3ce254e
RDX: ffffc900040fef78 RSI: 0000000000000002 RDI: 0000000000082120
RBP: 0000000000000004 R08: 0000000000000004 R09: 0000000000000001
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000004
R13: 00000000316938d4 R14: ffff8881514bc7e0 R15: ffffc900040fef60
FS:  0000555556877400(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007faeafab0e91 CR3: 0000000057f19000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 kasan_save_stack mm/kasan/common.c:46 [inline]
 kasan_set_track+0x60/0x70 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:374 [inline]
 __kasan_kmalloc+0x97/0xb0 mm/kasan/common.c:383
 kmalloc include/linux/slab.h:553 [inline]
 kzalloc include/linux/slab.h:689 [inline]
 set_mm_walk mm/vmscan.c:4218 [inline]
 try_to_inc_max_seq+0x274/0x2af0 mm/vmscan.c:4401
 get_nr_to_scan mm/vmscan.c:5111 [inline]
 lru_gen_shrink_lruvec mm/vmscan.c:5197 [inline]
 shrink_lruvec+0xd02/0x4650 mm/vmscan.c:5896
 </TASK>

Crashes (1):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                            | Manager                  | Title
2023/04/21 19:10 | linux-6.1.y | f17b0ab65d17 | 2b32bd34  | .config | console log | report | -         | -       | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-perf | INFO: task hung in rxrpc_release