syzbot


INFO: task hung in bch2_page_fault

Status: upstream: reported on 2024/12/13 22:41
Subsystems: bcachefs
Reported-by: syzbot+32415e0466b02533303c@syzkaller.appspotmail.com
First crash: 245d, last: 5d16h
Discussions (1)
Title: [syzbot] [bcachefs?] INFO: task hung in bch2_page_fault
Replies (including bot): 0 (1)
Last reply: 2024/12/13 22:41

Sample crash report:
INFO: task syz.0.42:6190 blocked for more than 143 seconds.
      Not tainted 6.13.0-rc6-syzkaller-00262-gb62cef9a5c67 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.42        state:D stack:24128 pid:6190  tgid:6190  ppid:5827   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x17fb/0x4be0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 __bch2_two_state_lock+0x229/0x2c0 fs/bcachefs/two_state_shared_lock.c:7
 bch2_two_state_lock fs/bcachefs/two_state_shared_lock.h:55 [inline]
 bch2_page_fault+0x31f/0x960 fs/bcachefs/fs-io-pagecache.c:592
 __do_fault+0x135/0x390 mm/memory.c:4907
 do_shared_fault mm/memory.c:5386 [inline]
 do_fault mm/memory.c:5460 [inline]
 do_pte_missing mm/memory.c:3979 [inline]
 handle_pte_fault+0xfcf/0x5ed0 mm/memory.c:5801
 __handle_mm_fault mm/memory.c:5944 [inline]
 handle_mm_fault+0x1053/0x1ad0 mm/memory.c:6112
 do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x459/0x8b0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7fb6ce14dd0a
RSP: 002b:00007ffc1b6ff588 EFLAGS: 00010202
RAX: 0000000020000500 RBX: 0000000000000004 RCX: 0000000000004e1c
RDX: 000000000000591c RSI: 00007fb6cdc06bfa RDI: 0000000020001000
RBP: 00007fb6ce377ba0 R08: 0000000020000500 R09: 0000000000000002
R10: 0000000000000000 R11: 0000000000000002 R12: 000000000001cfe9
R13: 00007ffc1b6ff690 R14: 0000000000000032 R15: fffffffffffffffe
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u8:1/12:
1 lock held by khungtaskd/30:
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
3 locks held by kworker/u8:5/1152:
2 locks held by getty/5583:
 #0: ffff88814ce590a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fd62f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
1 lock held by syz-executor/5813:
1 lock held by udevd/5824:
1 lock held by syz.0.42/6190:
 #0: ffff8880250d44a8 (&vma->vm_lock->lock){++++}-{4:4}, at: vma_start_read include/linux/mm.h:717 [inline]
 #0: ffff8880250d44a8 (&vma->vm_lock->lock){++++}-{4:4}, at: lock_vma_under_rcu+0x34b/0x790 mm/memory.c:6278
5 locks held by syz.0.42/6191:
 #0: ffff888028a2e420 (sb_writers#16){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:516
 #1: ffff888078ba97c8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #1: ffff888078ba97c8 (&sb->s_type->i_mutex_key#23){+.+.}-{4:4}, at: do_truncate+0x20c/0x310 fs/open.c:63
 #2: ffff888053500a38 (&c->snapshot_create_lock){.+.+}-{4:4}, at: bch2_truncate+0x166/0x2d0 fs/bcachefs/io_misc.c:292
 #3: ffff888053504398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:158 [inline]
 #3: ffff888053504398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:249 [inline]
 #3: ffff888053504398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7e1/0xd30 fs/bcachefs/btree_iter.c:3228
 #4: ffff8880535266d0 (&c->gc_lock){.+.+}-{4:4}, at: bch2_btree_update_start+0x682/0x14e0 fs/bcachefs/btree_update_interior.c:1197
1 lock held by syz.0.348/8470:
3 locks held by syz.8.349/8487:
2 locks held by dhcpcd/8494:
 #0: ffff888052ace808 (&sb->s_type->i_mutex_key#9){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #0: ffff888052ace808 (&sb->s_type->i_mutex_key#9){+.+.}-{4:4}, at: __sock_release net/socket.c:639 [inline]
 #0: ffff888052ace808 (&sb->s_type->i_mutex_key#9){+.+.}-{4:4}, at: sock_close+0x90/0x240 net/socket.c:1408
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:976
2 locks held by dhcpcd/8495:
 #0: ffff888068c80258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1623 [inline]
 #0: ffff888068c80258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcb0 net/packet/af_packet.c:3253
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:329 [inline]
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:976
1 lock held by dhcpcd/8497:
 #0: ffff88804d0a8258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1623 [inline]
 #0: ffff88804d0a8258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcb0 net/packet/af_packet.c:3253
1 lock held by dhcpcd/8498:
 #0: ffff88807bbb0258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1623 [inline]
 #0: ffff88807bbb0258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcb0 net/packet/af_packet.c:3253
1 lock held by dhcpcd/8499:
 #0: ffff88807cfde258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1623 [inline]
 #0: ffff88807cfde258 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcb0 net/packet/af_packet.c:3253

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-rc6-syzkaller-00262-gb62cef9a5c67 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 8474 Comm: syz.2.350 Not tainted 6.13.0-rc6-syzkaller-00262-gb62cef9a5c67 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:__orc_find arch/x86/kernel/unwind_orc.c:100 [inline]
RIP: 0010:orc_find arch/x86/kernel/unwind_orc.c:227 [inline]
RIP: 0010:unwind_next_frame+0x6b5/0x22d0 arch/x86/kernel/unwind_orc.c:494
Code: 00 74 08 4c 89 f7 e8 6a 9b b7 00 49 8b 2e e9 32 02 00 00 4d 89 ec 4d 89 ee 48 89 e8 4c 29 f0 48 89 c1 48 c1 f9 02 48 c1 e8 3f <48> 01 c8 48 83 e0 fe 49 8d 1c 46 48 89 d8 48 c1 e8 03 48 b9 00 00
RSP: 0018:ffffc90004c3ee30 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffffffff91485b68 RCX: 0000000000000008
RDX: 00000000000b0001 RSI: ffffffff90a14c46 RDI: ffffffff814ba930
RBP: ffffffff902b0154 R08: 0000000000000009 R09: 0000000000000000
R10: ffffc90004c3ef60 R11: fffff52000987df8 R12: ffffffff902b0134
R13: ffffffff902b0134 R14: ffffffff902b0134 R15: ffffffff814bd167
FS:  00007ff1347226c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff12a5bd000 CR3: 000000004dff8000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __unwind_start+0x59a/0x740 arch/x86/kernel/unwind_orc.c:760
 unwind_start arch/x86/include/asm/unwind.h:64 [inline]
 arch_stack_walk+0xe5/0x150 arch/x86/kernel/stacktrace.c:24
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 save_stack+0xfb/0x1f0 mm/page_owner.c:156
 __set_page_owner+0x92/0x800 mm/page_owner.c:320
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1558
 prep_new_page mm/page_alloc.c:1566 [inline]
 get_page_from_freelist+0x3651/0x37a0 mm/page_alloc.c:3476
 __alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4753
 alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2269
 folio_alloc_mpol_noprof+0x36/0x50 mm/mempolicy.c:2287
 shmem_alloc_folio mm/shmem.c:1799 [inline]
 shmem_alloc_and_add_folio+0x4a0/0x1080 mm/shmem.c:1838
 shmem_get_folio_gfp+0x621/0x1840 mm/shmem.c:2358
 shmem_get_folio mm/shmem.c:2464 [inline]
 shmem_write_begin+0x165/0x350 mm/shmem.c:3120
 generic_perform_write+0x346/0x990 mm/filemap.c:4046
 shmem_file_write_iter+0xf9/0x120 mm/shmem.c:3296
 new_sync_write fs/read_write.c:586 [inline]
 vfs_write+0xaeb/0xd30 fs/read_write.c:679
 ksys_write+0x18f/0x2b0 fs/read_write.c:731
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff1339847df
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 f9 92 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 4c 93 02 00 48
RSP: 002b:00007ff134721df0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00000000013bd7ef RCX: 00007ff1339847df
RDX: 00000000013bd7ef RSI: 00007ff129200000 RDI: 0000000000000004
RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000054ff
R10: 00000000000003c8 R11: 0000000000000293 R12: 0000000000000004
R13: 00007ff134721ef0 R14: 00007ff134721eb0 R15: 00007ff129200000
 </TASK>

Crashes (34):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2025/01/12 03:09 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2025/01/11 13:28 upstream 77a903cd8e5a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/27 07:37 upstream d6ef8b40d075 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/13 11:25 upstream f932fb9b4074 3547e30f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/09 07:07 upstream 62b5a46999c7 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/01 16:24 upstream bcc8eda6d349 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/11/29 16:04 upstream 7af08b57bcb9 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/11/26 14:58 upstream 7eef7e306d3c e9a9a9f2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/10/25 22:51 upstream ae90f6a6170d 045e728d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/10/20 08:38 upstream f9e4825524aa cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/28 02:53 upstream 3630400697a3 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/27 15:19 upstream 075dbe9f6e3c 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/25 15:07 upstream 684a64bf32b6 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/25 14:19 upstream 684a64bf32b6 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/24 14:57 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/24 14:56 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/23 22:35 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/19 05:30 upstream 4a39ac5b7d62 c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/07/07 13:16 upstream 22f902dfc51e bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/07/07 13:06 upstream 22f902dfc51e bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/07/07 06:56 upstream 22f902dfc51e 2a40360c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/07/02 22:16 upstream 1dfe225e9af5 8373af66 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/07/02 08:49 upstream 73e931504f8e b294e901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/06/13 12:08 upstream cea2a26553ac 2aa5052f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/06/08 07:49 upstream 96e09b8f8166 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/06/05 12:06 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in bch2_page_fault
2024/06/05 09:34 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/31 03:18 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/31 02:22 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/20 01:12 upstream 61307b7be41a c0f1611a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/17 02:14 upstream 3c999d1ae3c7 c2e07261 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/09 22:24 linux-next af2ea8ab7a54 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in bch2_page_fault
2024/09/28 16:13 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 5f5673607153 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in bch2_page_fault
2024/07/25 20:01 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci c912bf709078 32fcf98f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in bch2_page_fault
* Struck through repros no longer work on HEAD.