syzbot


INFO: task hung in io_uring_del_tctx_node (3)

Status: auto-obsoleted due to no activity on 2024/10/02 07:14
Subsystems: io-uring
First crash: 226d, last: 226d
Similar bugs (3)
Kernel     | Title                                                     | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-5.15 | INFO: task hung in io_uring_del_tctx_node                 |       |              |            | 2     | 661d  | 665d     | 0/3     | auto-obsoleted due to no activity on 2023/08/23 09:02
upstream   | INFO: task hung in io_uring_del_tctx_node (io-uring fs)   | C     | unreliable   |            | 37    | 1077d | 1240d    | 20/28   | fixed on 2022/03/08 16:11
upstream   | INFO: task hung in io_uring_del_tctx_node (2) (io-uring)  | C     | error        | error      | 20    | 706d  | 1068d    | 0/28    | auto-obsoleted due to no activity on 2023/07/30 22:24

Sample crash report:
INFO: task syz.0.537:7473 blocked for more than 143 seconds.
      Not tainted 6.10.0-rc6-syzkaller-00067-g8a9c6c40432e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.537       state:D stack:23800 pid:7473  tgid:7472  ppid:5086   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6894
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
 io_uring_del_tctx_node+0xeb/0x2b0 io_uring/tctx.c:169
 io_uring_clean_tctx+0x10a/0x1e0 io_uring/tctx.c:185
 io_uring_cancel_generic+0x7dd/0x850 io_uring/io_uring.c:3120
 io_uring_files_cancel include/linux/io_uring.h:20 [inline]
 do_exit+0x6a8/0x27e0 kernel/exit.c:832
 do_group_exit+0x207/0x2c0 kernel/exit.c:1023
 get_signal+0x16a1/0x1740 kernel/signal.c:2909
 arch_do_signal_or_restart+0x96/0x860 arch/x86/kernel/signal.c:310
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0xc9/0x360 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7efe5a975bd9
RSP: 002b:00007efe5b70b048 EFLAGS: 00000246 ORIG_RAX: 000000000000012b
RAX: fffffffffffffe00 RBX: 00007efe5ab03f60 RCX: 00007efe5a975bd9
RDX: 0000000000000001 RSI: 00000000200008c0 RDI: 0000000000000006
RBP: 00007efe5a9e4a98 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007efe5ab03f60 R15: 00007ffe04e879b8
 </TASK>
INFO: task syz.0.537:7480 blocked for more than 144 seconds.
      Not tainted 6.10.0-rc6-syzkaller-00067-g8a9c6c40432e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.537       state:D stack:26104 pid:7480  tgid:7472  ppid:5086   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6894
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1086 [inline]
 __down_read_common kernel/locking/rwsem.c:1250 [inline]
 __down_read kernel/locking/rwsem.c:1263 [inline]
 down_read+0x705/0xa40 kernel/locking/rwsem.c:1528
 filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
 page_cache_sync_readahead include/linux/pagemap.h:1306 [inline]
 filemap_get_pages+0x49d/0x2090 mm/filemap.c:2529
 filemap_read+0x457/0xfa0 mm/filemap.c:2625
 blkdev_read_iter+0x2df/0x440 block/fops.c:749
 io_iter_do_read io_uring/rw.c:762 [inline]
 __io_read+0x3a6/0xf40 io_uring/rw.c:857
 io_read+0x1e/0x60 io_uring/rw.c:927
 io_issue_sqe+0x36a/0x14f0 io_uring/io_uring.c:1751
 io_queue_sqe io_uring/io_uring.c:1965 [inline]
 io_submit_sqe io_uring/io_uring.c:2221 [inline]
 io_submit_sqes+0xaff/0x1bf0 io_uring/io_uring.c:2336
 __do_sys_io_uring_enter io_uring/io_uring.c:3245 [inline]
 __se_sys_io_uring_enter+0x2d4/0x2670 io_uring/io_uring.c:3182
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7efe5a975bd9
RSP: 002b:00007efe5a2dd048 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: ffffffffffffffda RBX: 00007efe5ab04110 RCX: 00007efe5a975bd9
RDX: 0000000000000000 RSI: 0000000000000b15 RDI: 0000000000000004
RBP: 00007efe5a9e4a98 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007efe5ab04110 R15: 00007ffe04e879b8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
3 locks held by kworker/1:1/45:
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc90000b57d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc90000b57d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
 #2: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/1:2/784:
2 locks held by kworker/u8:8/2469:
 #0: ffff888017fd6148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff888017fd6148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc9000919fd00 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc9000919fd00 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
2 locks held by getty/4837:
 #0: ffff88802a9830a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f1e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2211
5 locks held by kworker/u8:17/7059:
 #0: ffff8880b943e758 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0xb0/0x140 kernel/sched/core.c:567
 #1: ffff8880b9528948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x441/0x770 kernel/sched/psi.c:988
 #2: ffff8880b952a718 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x112/0x240 kernel/time/timer.c:1051
 #3: ffffffff9498d2b8 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x16d/0x510 lib/debugobjects.c:708
 #4: ffffffff9497d310 (&obj_hash[i].lock){-.-.}-{2:2}, at: __debug_check_no_obj_freed lib/debugobjects.c:978 [inline]
 #4: ffffffff9497d310 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_check_no_obj_freed+0x234/0x580 lib/debugobjects.c:1019
1 lock held by syz.4.478/7109:
3 locks held by kworker/u8:33/7189:
1 lock held by syz.0.537/7473:
 #0: ffff88804c3180a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_uring_del_tctx_node+0xeb/0x2b0 io_uring/tctx.c:169
2 locks held by syz.0.537/7480:
 #0: ffff88804c3180a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_enter io_uring/io_uring.c:3244 [inline]
 #0: ffff88804c3180a8 (&ctx->uring_lock){+.+.}-{3:3}, at: __se_sys_io_uring_enter+0x2c9/0x2670 io_uring/io_uring.c:3182
 #1: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #1: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
1 lock held by syz.1.700/8109:
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
1 lock held by syz.1.774/8543:
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
1 lock held by syz.0.793/8659:
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
1 lock held by syz.3.795/8665:
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
1 lock held by syz.2.823/9019:
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:836 [inline]
 #0: ffff88801d550948 (mapping.invalidate_lock#2){++++}-{3:3}, at: blkdev_fallocate+0x233/0x550 block/fops.c:792
2 locks held by syz.1.840/9102:
 #0: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x1180 net/core/rtnetlink.c:6632
 #1: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:939
1 lock held by syz.1.840/9105:
 #0: ffff888068c40810 (&sb->s_type->i_mutex_key#9){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:791 [inline]
 #0: ffff888068c40810 (&sb->s_type->i_mutex_key#9){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
 #0: ffff888068c40810 (&sb->s_type->i_mutex_key#9){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1421
1 lock held by syz.3.843/9108:
 #0: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x1180 net/core/rtnetlink.c:6632
1 lock held by syz.0.844/9110:
 #0: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: dev_ioctl+0x86e/0x1340 net/core/dev_ioctl.c:811
1 lock held by syz.0.844/9111:
 #0: ffffffff8f5d49c8 (rtnl_mutex){+.+.}-{3:3}, at: dev_ioctl+0x86e/0x1340 net/core/dev_ioctl.c:811

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 30 Comm: khungtaskd Not tainted 6.10.0-rc6-syzkaller-00067-g8a9c6c40432e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:112

Crashes (1):
Time             | Kernel   | Commit       | Syzkaller | Config  | Links                     | Assets                            | Manager                          | Title
2024/07/04 07:11 | upstream | 8a9c6c40432e | 409d975c  | .config | console log, report, info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-smack-root | INFO: task hung in io_uring_del_tctx_node