syzbot

INFO: task hung in do_exit (2)

Status: upstream: reported on 2025/12/25 10:17
Reported-by: syzbot+2348fdd295bb228c7c18@syzkaller.appspotmail.com
First crash: 48d, last: 48d
Similar bugs (8)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in do_exit 1 syz done error 125 1473d 2496d 0/29 closed as invalid on 2022/02/08 10:54
linux-4.14 INFO: task hung in do_exit 1 syz inconclusive 8 1469d 2484d 0/1 upstream: reported syz repro on 2019/04/25 02:28
android-414 INFO: task hung in do_exit 1 syz 19 2264d 2496d 0/1 public: reported syz repro on 2019/04/13 00:01
linux-4.19 INFO: task hung in do_exit 1 C error 58 1101d 2489d 0/1 upstream: reported C repro on 2019/04/19 20:32
upstream INFO: task can't die in show_free_areas serial 1 C error 240 4d15h 1478d 0/29 upstream: reported C repro on 2022/01/24 13:23
android-6-12 INFO: task hung in do_exit 1 1 204d 204d 0/1 auto-obsoleted due to no activity on 2025/10/20 11:57
linux-6.1 INFO: task hung in do_exit 1 1 579d 579d 0/3 auto-obsoleted due to no activity on 2024/10/20 05:24
android-49 INFO: task hung in do_exit 1 syz 18 2311d 2495d 0/3 public: reported syz repro on 2019/04/14 09:28

Sample crash report:
INFO: task syz.5.192:5517 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.5.192       state:D stack:26240 pid:5517  ppid:5092   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
 schedule+0xb9/0x180 kernel/sched/core.c:6637
 coredump_task_exit kernel/exit.c:432 [inline]
 do_exit+0x45d/0x2400 kernel/exit.c:821
 do_group_exit+0x217/0x2d0 kernel/exit.c:1022
 get_signal+0x1272/0x1350 kernel/signal.c:2871
 arch_do_signal_or_restart+0xb7/0x1240 arch/x86/kernel/signal.c:871
 exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
 exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
 syscall_exit_to_user_mode+0x16/0x40 kernel/entry/common.c:303
 do_syscall_64+0x58/0xa0 arch/x86/entry/common.c:87
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f772378f749
RSP: 002b:00007ffedb50e608 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffdfc RBX: 00000000000258cc RCX: 00007f772378f749
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007f77239e5fac
RBP: 0000000000000032 R08: 002f9d06ed85e666 R09: 00000013db50e8ff
R10: 00007ffedb50e700 R11: 0000000000000246 R12: 00007f77239e5fac
R13: 00007ffedb50e700 R14: 00000000000258fe R15: 00007ffedb50e720
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8c92bab0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8c92c2d0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/27:
 #0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
4 locks held by kworker/1:1/41:
 #0: ffff888055c4bd38 ((wq_completion)xfs-sync/loop5){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90000b27d00 ((work_completion)(&mp->m_flush_inodes_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffff8880567dc0e0 (&type->s_umount_key#76){++++}-{3:3}, at: xfs_flush_inodes_worker+0x41/0x80 fs/xfs/xfs_super.c:599
 #3: ffff888024a8e7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:362 [inline]
 #3: ffff888024a8e7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x19c/0x9e0 fs/fs-writeback.c:2748
1 lock held by acpid/3622:
 #0: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #0: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
2 locks held by getty/4027:
 #0: ffff88814cdcd098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
1 lock held by syz-executor/4262:
2 locks held by udevd/4407:
2 locks held by kworker/0:15/4619:
2 locks held by kworker/u4:20/4673:
 #0: ffff888017479138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90006087d00 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
5 locks held by kworker/u4:24/4681:
 #0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900060f7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8db2e6d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x132/0xb80 net/core/net_namespace.c:594
 #3: ffff88807d2992f8 (&devlink->lock_key#2){+.+.}-{3:3}, at: devlink_pernet_pre_exit+0xf8/0x270 net/devlink/leftover.c:12500
 #4: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: devlink_nl_port_fill+0x298/0x910 net/devlink/leftover.c:1276
4 locks held by kworker/u4:26/4685:
 #0: ffff888144e59138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90006127d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffff8880567dc650 (sb_internal#4){.+.+}-{0:0}, at: xfs_bmapi_convert_one_delalloc fs/xfs/libxfs/xfs_bmap.c:4571 [inline]
 #2: ffff8880567dc650 (sb_internal#4){.+.+}-{0:0}, at: xfs_bmapi_convert_delalloc+0x2fd/0x1480 fs/xfs/libxfs/xfs_bmap.c:4698
 #3: ffff8880553f2018 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_one_delalloc fs/xfs/libxfs/xfs_bmap.c:4576 [inline]
 #3: ffff8880553f2018 (&xfs_nondir_ilock_class){++++}-{3:3}, at: xfs_bmapi_convert_delalloc+0x329/0x1480 fs/xfs/libxfs/xfs_bmap.c:4698
2 locks held by syz.5.192/5519:
 #0: ffff8880567dc460 (sb_writers#24){.+.+}-{0:0}, at: do_coredump+0x15de/0x22b0 fs/coredump.c:823
 #1: ffff8880553f2238 (&sb->s_type->i_mutex_key#30){++++}-{3:3}, at: xfs_ilock+0x104/0x3d0 fs/xfs/xfs_inode.c:195
2 locks held by syz.7.499/7489:
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: ppp_release+0x86/0x1f0 drivers/net/ppp/ppp_generic.c:418
 #1: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
1 lock held by syz.6.505/7507:
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
1 lock held by syz.6.505/7519:
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_set_dstaddr+0xd7/0x2d0 net/ipv6/addrconf.c:2926

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xeee/0xf30 kernel/hung_task.c:377
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4619 Comm: kworker/0:15 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: xfs-buf/loop5 xfs_buf_ioend_work
RIP: 0010:io_serial_in+0x73/0xb0 drivers/tty/serial/8250/8250_port.c:461
Code: e8 a2 7e 09 fd 44 89 f9 d3 e3 49 83 c6 40 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 d3 af 59 fd 41 03 1e 89 da ec <0f> b6 c0 5b 41 5c 41 5e 41 5f c3 44 89 f9 80 e1 07 38 c1 7c aa 4c
RSP: 0018:ffffc90005247278 EFLAGS: 00000002
RAX: 1ffffffff2daac00 RBX: 00000000000003fd RCX: 0000000000000000
RDX: 00000000000003fd RSI: 0000000000000000 RDI: 0000000000000020
RBP: ffffc90005247470 R08: dffffc0000000000 R09: ffffed10048ae047
R10: ffffed10048ae047 R11: 1ffff110048ae046 R12: dffffc0000000000
R13: 00000000000026ee R14: ffffffff96d56480 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f43e21c3000 CR3: 000000007cc21000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 serial_in drivers/tty/serial/8250/8250.h:117 [inline]
 serial_lsr_in drivers/tty/serial/8250/8250.h:139 [inline]
 wait_for_lsr drivers/tty/serial/8250/8250_port.c:2101 [inline]
 fifo_wait_for_lsr drivers/tty/serial/8250/8250_port.c:3366 [inline]
 serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:3388 [inline]
 serial8250_console_write+0xf26/0x17a0 drivers/tty/serial/8250/8250_port.c:3473
 call_console_driver kernel/printk/printk.c:1977 [inline]
 console_emit_next_record+0x947/0xc90 kernel/printk/printk.c:2777
 console_flush_all kernel/printk/printk.c:-1 [inline]
 console_unlock+0x223/0x630 kernel/printk/printk.c:2906
 vprintk_emit+0x489/0x680 kernel/printk/printk.c:2303
 _printk+0xcc/0x110 kernel/printk/printk.c:2328
 print_hex_dump+0x1a5/0x260 lib/hexdump.c:285
 xfs_hex_dump+0x39/0x50 fs/xfs/xfs_message.c:110
 xfs_buf_verifier_error+0x1c8/0x290 fs/xfs/xfs_error.c:441
 xfs_agfl_read_verify+0x1bf/0x240 fs/xfs/libxfs/xfs_alloc.c:-1
 xfs_buf_ioend+0x27a/0x780 fs/xfs/xfs_buf.c:1303
 process_one_work+0x898/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
vkms_vblank_simulate: vblank timer overrun

Crashes (1):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/12/25 10:16 linux-6.1.y 50cbba13faa2 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in do_exit