syzbot

INFO: task hung in io_wq_put_and_exit

Status: upstream: reported on 2024/03/11 08:55
Reported-by: syzbot+ade2ca85e7b68028c3e1@syzkaller.appspotmail.com
First crash: 76d, last: 76d
Similar bugs (3)
  * upstream "INFO: task hung in io_wq_put_and_exit (3)" [io-uring]: C repro, cause bisect error, fix bisect unreliable; 75 crashes, last 400d, reported 794d, patched 0/26; auto-obsoleted due to no activity on 2023/08/20 08:26
  * upstream "INFO: task hung in io_wq_put_and_exit" [io-uring fs]: C repro, bisect unreliable; 628 crashes, last 929d, reported 990d, patched 20/26; fixed on 2021/11/10 00:50
  * upstream "INFO: task hung in io_wq_put_and_exit (2)" [fs]: 22 crashes, last 898d, reported 927d, patched 0/26; closed as invalid on 2022/02/08 09:40

Sample crash report:
INFO: task syz-executor.0:17095 blocked for more than 143 seconds.
      Not tainted 5.15.151-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.0  state:D stack:25944 pid:17095 ppid: 10771 flags:0x00104006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5030 [inline]
 __schedule+0x12c4/0x45b0 kernel/sched/core.c:6376
 schedule+0x11b/0x1f0 kernel/sched/core.c:6459
 schedule_timeout+0xac/0x300 kernel/time/timer.c:1860
 do_wait_for_common+0x2d9/0x480 kernel/sched/completion.c:85
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x48/0x60 kernel/sched/completion.c:138
 io_wq_exit_workers io_uring/io-wq.c:1256 [inline]
 io_wq_put_and_exit+0x468/0xb30 io_uring/io-wq.c:1291
 io_uring_clean_tctx io_uring/io_uring.c:10023 [inline]
 io_uring_cancel_generic+0x703/0x900 io_uring/io_uring.c:10092
 io_uring_files_cancel include/linux/io_uring.h:16 [inline]
 do_exit+0x278/0x2480 kernel/exit.c:827
 do_group_exit+0x144/0x310 kernel/exit.c:994
 get_signal+0xc66/0x14e0 kernel/signal.c:2889
 arch_do_signal_or_restart+0xc3/0x1890 arch/x86/kernel/signal.c:867
 handle_signal_work kernel/entry/common.c:148 [inline]
 exit_to_user_mode_loop+0x97/0x130 kernel/entry/common.c:172
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:208
 __syscall_exit_to_user_mode_work kernel/entry/common.c:290 [inline]
 syscall_exit_to_user_mode+0x5d/0x250 kernel/entry/common.c:301
 do_syscall_64+0x49/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7fb1e43bdda9
RSP: 002b:00007fb1e293e178 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00007fb1e44ebf88 RCX: 00007fb1e43bdda9
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fb1e44ebf88
RBP: 00007fb1e44ebf80 R08: 00007fb1e293e6c0 R09: 00007fb1e293e6c0
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb1e44ebf8c
R13: 000000000000000b R14: 00007ffc66d68860 R15: 00007ffc66d68948
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/27:
 #0: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/3257:
 #0: ffff88802461e098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
 #1: ffffc90002bab2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6af/0x1db0 drivers/tty/n_tty.c:2158
3 locks held by kworker/1:4/3579:
 #0: ffff8880b9b3a318 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
 #1: ffff8880b9b27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x53d/0x810 kernel/sched/psi.c:891
 #2: ffffffff8c91f780 (rcu_read_lock_bh){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
3 locks held by kworker/1:5/3583:
 #0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000448fd20 (fqdir_free_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
 #2: ffffffff8c923bf0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x9c/0x4e0 kernel/rcu/tree.c:4039
2 locks held by kworker/u4:6/3724:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90004e27d20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:11/3910:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90004637d20 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
3 locks held by kworker/1:13/4339:
 #0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000518fd20 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
 #2: ffff88801de94240 (&data->fib_lock){+.+.}-{3:3}, at: nsim_fib_event_work+0x2cd/0x4120 drivers/net/netdevsim/fib.c:1478
2 locks held by kworker/u4:10/16722:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90004f47d20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:14/16726:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc900053bfd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
1 lock held by iou-wrk-17095/17099:
2 locks held by kworker/u4:20/17789:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000640fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
1 lock held by iou-wrk-18129/18135:
2 locks held by kworker/u4:23/19310:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc900090e7d20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:25/19312:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90009147d20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
4 locks held by kworker/u4:26/19315:
 #0: ffff888011dcd138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90009167d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
 #2: ffffffff8d9cfd50 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:558
 #3: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #3: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:845
2 locks held by kworker/u4:27/19317:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000959fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
1 lock held by iou-wrk-19348/19359:
2 locks held by kworker/u4:30/19429:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000b7cfd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:32/20141:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc900044cfd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:33/20142:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc900043cfd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
1 lock held by syz-executor.0/20444:
 #0: ffffffff8d9db908 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffffffff8d9db908 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5626
2 locks held by kworker/u4:34/20546:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90005a2fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:35/20547:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90005a3fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:36/20548:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90005a7fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:37/20562:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90005b6fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:38/20563:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc900059ffd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by kworker/u4:40/20598:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90005d2fd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
2 locks held by syz-executor.3/20736:
 #0: ffffffff8d9db908 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffffffff8d9db908 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x94c/0xee0 net/core/rtnetlink.c:5626
 #1: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #1: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x350/0x740 kernel/rcu/tree_exp.h:845
1 lock held by syz-executor.4/20772:
 #0: ffff88802e6340a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ctx_quiesce io_uring/io_uring.c:11087 [inline]
 #0: ffff88802e6340a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __io_uring_register io_uring/io_uring.c:11116 [inline]
 #0: ffff88802e6340a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_register io_uring/io_uring.c:11257 [inline]
 #0: ffff88802e6340a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __se_sys_io_uring_register+0x119f/0x3450 io_uring/io_uring.c:11234
1 lock held by syz-executor.4/20776:
 #0: ffff88802e6340a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_uring_del_tctx_node+0xe3/0x2b0 io_uring/io_uring.c:9999
2 locks held by kworker/u4:41/20804:
 #0: ffff8880b9a3a318 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
 #1: ffff8880b9a27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x4e1/0x810 kernel/sched/psi.c:882
2 locks held by kworker/u4:42/20810:
 #0: ffff888011c79138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000b5bfd20 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
3 locks held by kworker/u4:43/20811:
 #0: ffff8880b9b3a318 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
 #1: ffff8880b9b27848 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x53d/0x810 kernel/sched/psi.c:891
 #2: ffff8880b9b3a318 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
3 locks held by kworker/u4:45/20815:
1 lock held by syz-executor.1/20820:
1 lock held by syz-executor.2/20823:
 #0: ffffffff8d9db908 (rtnl_mutex){+.+.}-{3:3}, at: __netlink_dump_start+0x12e/0x6f0 net/netlink/af_netlink.c:2348

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.15.151-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x46a/0x4a0 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x181/0x2a0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
 watchdog+0xe72/0xeb0 kernel/hung_task.c:295
 kthread+0x3f6/0x4f0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 17099 Comm: iou-wrk-17095 Not tainted 5.15.151-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
RIP: 0010:hlock_class kernel/locking/lockdep.c:197 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4687 [inline]
RIP: 0010:__lock_acquire+0x551/0x1ff0 kernel/locking/lockdep.c:4962
Code: 00 00 41 8b 1f 81 e3 ff 1f 00 00 89 d8 c1 e8 06 48 8d 3c c5 c0 f0 bc 8f be 08 00 00 00 e8 c7 60 67 00 48 0f a3 1d ef 5d 5a 0e <73> 1f 48 8d 04 5b 48 c1 e0 06 48 8d 98 c0 4f 8c 8f 48 ba 00 00 00
RSP: 0018:ffffc90002df6460 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 0000000000000015 RCX: ffffffff816292c9
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8fbcf0c0
RBP: 000000000000000a R08: dffffc0000000000 R09: fffffbfff1f79e19
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000002
R13: ffff88805830a8a8 R14: 0000000000000002 R15: ffff88805830a920
FS:  00007fb1e293e6c0(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f5e5ed0f290 CR3: 000000008e86f000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
 rcu_lock_acquire+0x2a/0x30 include/linux/rcupdate.h:269
 rcu_read_lock include/linux/rcupdate.h:696 [inline]
 list_lru_count_one+0x46/0x2f0 mm/list_lru.c:181
 list_lru_shrink_count include/linux/list_lru.h:123 [inline]
 nfs4_xattr_entry_count+0x79/0x120 fs/nfs/nfs42xattr.c:978
 do_shrink_slab+0x7d/0xda0 mm/vmscan.c:705
 shrink_slab_memcg mm/vmscan.c:827 [inline]
 shrink_slab+0x5a1/0x960 mm/vmscan.c:906
 shrink_node_memcgs mm/vmscan.c:2951 [inline]
 shrink_node+0x1113/0x25d0 mm/vmscan.c:3072
 shrink_zones mm/vmscan.c:3275 [inline]
 do_try_to_free_pages+0x650/0x1670 mm/vmscan.c:3330
 try_to_free_mem_cgroup_pages+0x44c/0xa60 mm/vmscan.c:3644
 try_charge_memcg+0x4f4/0x1530 mm/memcontrol.c:2651
 obj_cgroup_charge_pages+0xab/0x1d0 mm/memcontrol.c:3015
 obj_cgroup_charge+0x19e/0x390 mm/memcontrol.c:3296
 memcg_slab_pre_alloc_hook mm/slab.h:287 [inline]
 slab_pre_alloc_hook+0xa6/0xc0 mm/slab.h:497
 slab_alloc_node mm/slub.c:3134 [inline]
 slab_alloc mm/slub.c:3228 [inline]
 kmem_cache_alloc_trace+0x49/0x290 mm/slub.c:3245
 kmalloc include/linux/slab.h:591 [inline]
 io_add_buffers io_uring/io_uring.c:4559 [inline]
 io_provide_buffers io_uring/io_uring.c:4594 [inline]
 io_issue_sqe+0x37dc/0xab60 io_uring/io_uring.c:7002
 io_wq_submit_work+0x196/0x6d0 io_uring/io_uring.c:7074
 io_worker_handle_work+0x7fc/0xd80 io_uring/io-wq.c:586
 io_wqe_worker+0x35e/0xdc0 io_uring/io-wq.c:640
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 </TASK>

Crashes (1):
  * 2024/03/11 08:54: kernel linux-5.15.y, commit 574362648507, syzkaller 6ee49f2e, manager ci2-linux-5-15-kasan; artifacts: .config, console log, report, info, [disk image], [vmlinux], [kernel image]; title: INFO: task hung in io_wq_put_and_exit