INFO: task hung in bch2_direct_write

Status: upstream: reported on 2024/07/03 06:51
Subsystems: bcachefs
Reported-by: syzbot+1bad52f1790df954e281@syzkaller.appspotmail.com
First crash: 71d, last: 9h45m
Discussions (1)
Title: [syzbot] [bcachefs?] INFO: task hung in bch2_direct_write
Replies (including bot): 0 (1)
Last reply: 2024/07/03 06:51

Sample crash report:
INFO: task syz.0.238:7651 blocked for more than 143 seconds.
      Not tainted 6.10.0-syzkaller-05505-gb1bc554e009e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.238       state:D stack:20032 pid:7651  tgid:7650  ppid:6997   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5188 [inline]
 __schedule+0x17ae/0x4a10 kernel/sched/core.c:6529
 __schedule_loop kernel/sched/core.c:6606 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6621
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
 rwsem_down_write_slowpath+0xeeb/0x13b0 kernel/locking/rwsem.c:1178
 __down_write_common kernel/locking/rwsem.c:1306 [inline]
 __down_write kernel/locking/rwsem.c:1315 [inline]
 down_write+0x1d7/0x220 kernel/locking/rwsem.c:1580
 inode_lock include/linux/fs.h:799 [inline]
 bch2_direct_write+0x243/0x3050 fs/bcachefs/fs-io-direct.c:598
 bch2_write_iter+0x206/0x2840 fs/bcachefs/fs-io-buffered.c:1135
 iter_file_splice_write+0xbd7/0x14e0 fs/splice.c:743
 do_splice_from fs/splice.c:941 [inline]
 direct_splice_actor+0x11e/0x220 fs/splice.c:1164
 splice_direct_to_actor+0x58e/0xc90 fs/splice.c:1108
 do_splice_direct_actor fs/splice.c:1207 [inline]
 do_splice_direct+0x28c/0x3e0 fs/splice.c:1233
 do_sendfile+0x56d/0xe20 fs/read_write.c:1295
 __do_sys_sendfile64 fs/read_write.c:1362 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f60b1375b59
RSP: 002b:00007f60b2149048 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f60b1503f60 RCX: 00007f60b1375b59
RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000004
RBP: 00007f60b13e4e5d R08: 0000000000000000 R09: 0000000000000000
R10: 0001000000201005 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f60b1503f60 R15: 00007fff7e4630d8
 </TASK>
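
The trace above shows the blocked task reaching bch2_direct_write() through sendfile(2): do_sendfile() takes the splice path, iter_file_splice_write() calls bch2_write_iter(), and the direct-write path blocks taking the inode rwsem in inode_lock(). syzbot lists no reproducer for this report, but the register dump (out_fd in RDI, in_fd in RSI) and the sibling syz.0.238 thread sitting in ftruncate() (see the lock listing below) suggest the shape of the workload. The sketch below is a hypothetical illustration of that syscall pattern only, not a known trigger: the paths, sizes, and the O_DIRECT destination on a bcachefs mount are all assumptions.

/* Hypothetical sketch of the syscall pattern in the trace above; NOT
 * the syzkaller reproducer (none is listed for this report).
 * Build: cc -pthread repro.c. Assumes /mnt/bcachefs is a bcachefs mount. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/sendfile.h>
#include <sys/types.h>
#include <unistd.h>

static int dst_fd;	/* O_DIRECT file on the bcachefs mount */

/* Models the sibling syz.0.238 thread (pid 7678): hammer ftruncate(),
 * which takes inode_lock() via do_truncate(). */
static void *truncator(void *arg)
{
	for (;;)
		ftruncate(dst_fd, 0);
	return arg;
}

int main(void)
{
	pthread_t t;
	int src_fd = open("/tmp/src", O_RDWR | O_CREAT, 0600);

	pwrite(src_fd, "x", 1, 1 << 20);	/* give sendfile() data to move */
	dst_fd = open("/mnt/bcachefs/dst",	/* assumed mount point */
		      O_RDWR | O_CREAT | O_DIRECT, 0600);
	pthread_create(&t, NULL, truncator, NULL);

	for (;;) {
		off_t off = 0;
		/* do_sendfile() -> do_splice_direct() -> bch2_write_iter()
		 * -> bch2_direct_write() -> inode_lock(), as in the trace. */
		sendfile(dst_fd, src_fd, &off, 1 << 20);
	}
}

Whether this actually reproduces the hang is unverified; it only exercises the same call path the stack trace records.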

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e335fe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:327 [inline]
 #0: ffffffff8e335fe0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:839 [inline]
 #0: ffffffff8e335fe0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6613
4 locks held by kworker/u8:5/1059:
2 locks held by kworker/u8:7/1102:
 #0: ffff8880b953e818 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:560
 #1: ffff8880b9528948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x441/0x770 kernel/sched/psi.c:989
2 locks held by kworker/u8:9/2826:
 #0: ffff88801b398948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
 #0: ffff88801b398948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
 #1: ffffc900096c7d00 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
 #1: ffffc900096c7d00 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
1 lock held by udevd/4540:
2 locks held by getty/4841:
 #0: ffff88802b1c50a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031332f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2211
2 locks held by syz.0.238/7651:
 #0: ffff88805a4a8420 (sb_writers#13){.+.+}-{0:0}, at: direct_splice_actor+0x49/0x220 fs/splice.c:1163
 #1: ffff88805e66cb08 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:799 [inline]
 #1: ffff88805e66cb08 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: bch2_direct_write+0x243/0x3050 fs/bcachefs/fs-io-direct.c:598
5 locks held by syz.0.238/7678:
 #0: ffff88805a4a8420 (sb_writers#13){.+.+}-{0:0}, at: do_ftruncate+0x294/0x590 fs/open.c:178
 #1: ffff88805e66cb08 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:799 [inline]
 #1: ffff88805e66cb08 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: do_truncate fs/open.c:63 [inline]
 #1: ffff88805e66cb08 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: do_ftruncate+0x457/0x590 fs/open.c:181
 #2: ffff888050680ab8 (&c->snapshot_create_lock){.+.+}-{3:3}, at: bch2_truncate+0x16c/0x2c0 fs/bcachefs/io_misc.c:290
 #3: ffff8880506842d8 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:151 [inline]
 #3: ffff8880506842d8 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:250 [inline]
 #3: ffff8880506842d8 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7f8/0xe40 fs/bcachefs/btree_iter.c:3193
 #4: ffff8880506a6750 (&c->gc_lock){.+.+}-{3:3}, at: bch2_btree_update_start+0x68d/0x1500 fs/bcachefs/btree_update_interior.c:1195
2 locks held by bch-copygc/loop/7676:
1 lock held by syz-executor/8336:
 #0: ffffffff8e33b3b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:296 [inline]
 #0: ffffffff8e33b3b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:958
2 locks held by syz.2.394/9139:
1 lock held by udevadm/9188:
1 lock held by udevadm/9189:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 30 Comm: khungtaskd Not tainted 6.10.0-syzkaller-05505-gb1bc554e009e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 7676 Comm: bch-copygc/loop Not tainted 6.10.0-syzkaller-05505-gb1bc554e009e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/27/2024
RIP: 0010:__might_resched+0xc/0x780 kernel/sched/core.c:8392
Code: e8 79 e6 91 00 e9 63 ff ff ff 0f 1f 40 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 55 48 89 e5 41 57 41 56 <41> 55 41 54 53 48 83 e4 e0 48 81 ec e0 00 00 00 41 89 d4 41 89 f6
RSP: 0018:ffffc90004506fc0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000249 RDI: ffffffff8bcab360
RBP: ffffc90004506fd0 R08: ffffc900045070a7 R09: 0000000000000000
R10: ffffc90004507080 R11: fffff520008a0e15 R12: dffffc0000000000
R13: ffff8880506845f0 R14: 0000000000000000 R15: ffff88807e908000
FS:  0000000000000000(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9d9fc88000 CR3: 0000000078db4000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0xc1/0xd70 kernel/locking/mutex.c:752
 bch2_btree_write_buffer_flush_locked+0x114/0x40c0 fs/bcachefs/btree_write_buffer.c:269
 bch2_btree_write_buffer_flush_nocheck_rw fs/bcachefs/btree_write_buffer.c:478 [inline]
 bch2_btree_write_buffer_tryflush+0x16a/0x1c0 fs/bcachefs/btree_write_buffer.c:492
 bch2_copygc_get_buckets fs/bcachefs/movinggc.c:155 [inline]
 bch2_copygc+0x255/0x4640 fs/bcachefs/movinggc.c:215
 bch2_copygc_thread+0x757/0xc80 fs/bcachefs/movinggc.c:370
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
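
Read together, the lock listing and the two backtraces show three stalled parties: syz.0.238/7651 waits in bch2_direct_write() for the inode rwsem (i_mutex_key#21); that rwsem is held by syz.0.238/7678, which took it in do_truncate() and is now inside btree machinery under gc_lock; and bch-copygc/7676 is blocked in __mutex_lock() inside bch2_btree_write_buffer_flush_locked(). The holder of that last mutex is not visible in this report, so the full cycle cannot be named from this snippet alone. The model below is a schematic of only the visible edges, with invented lock and thread names; it is not kernel code and does not claim to be the actual deadlock.

/* Schematic userspace model of the visible blocking edges; all names
 * are illustrative stand-ins, not kernel identifiers.
 * Build: cc -pthread model.c — the program hangs by design. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t inode_lock = PTHREAD_MUTEX_INITIALIZER;	/* i_mutex_key#21 */
static pthread_mutex_t btree_mutex = PTHREAD_MUTEX_INITIALIZER;	/* write-buffer mutex */

/* Models syz.0.238/7678: holds the inode lock (taken for truncate),
 * then stalls on btree-side state. */
static void *truncate_task(void *arg)
{
	pthread_mutex_lock(&inode_lock);
	pthread_mutex_lock(&btree_mutex);	/* blocks: held by unseen_holder */
	return arg;
}

/* Stands in for whatever task actually holds the mutex that
 * bch-copygc/7676 is waiting on; that task is not visible here. */
static void *unseen_holder(void *arg)
{
	pthread_mutex_lock(&btree_mutex);
	for (;;)
		pause();	/* no progress, so nothing downstream moves */
	return arg;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&b, NULL, unseen_holder, NULL);
	sleep(1);	/* let the holder win btree_mutex first */
	pthread_create(&a, NULL, truncate_task, NULL);
	sleep(1);

	/* Models syz.0.238/7651: bch2_direct_write() -> inode_lock().
	 * In a real kernel, ~143s of this produces the report above. */
	fprintf(stderr, "direct write: waiting on inode lock...\n");
	pthread_mutex_lock(&inode_lock);	/* hangs by design */
	return 0;
}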

Crashes (33):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/07/18 20:13 upstream b1bc554e009e 71884c12 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/07/13 15:09 upstream 528dd46d0fc3 eaeb5c15 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/07/10 05:15 upstream 34afb82a3c67 bc144f9a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/29 06:42 upstream de0a9f448633 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/29 06:37 upstream de0a9f448633 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/29 06:37 upstream de0a9f448633 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/29 06:37 upstream de0a9f448633 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/29 06:26 upstream de0a9f448633 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/29 06:23 upstream de0a9f448633 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/10 01:37 upstream 771ed66105de 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/09 03:17 upstream 061d1af7b030 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/09 02:32 upstream 061d1af7b030 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/05 16:02 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in bch2_direct_write
2024/06/05 08:56 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/04 19:22 upstream 2ab795141095 11f2afa5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/06/01 08:35 upstream d8ec19857b09 3113787f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/31 17:20 upstream 4a4be1ad3a6e 0c378259 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/31 08:57 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/31 08:57 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/23 19:40 upstream b6394d6f7159 4c2072ee .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/23 19:39 upstream b6394d6f7159 4c2072ee .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 06:33 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 05:40 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 02:39 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 02:39 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 02:37 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 02:32 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 00:31 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/13 16:38 upstream a38297e3fb01 9026e142 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/12 10:59 upstream cf87f46fd34d 9026e142 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/09 05:33 upstream 6d7ddd805123 20bf80e1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_direct_write
2024/05/22 08:36 linux-next 124cfbcd6d18 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in bch2_direct_write
2024/05/22 08:36 linux-next 124cfbcd6d18 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in bch2_direct_write