syzbot


INFO: task hung in bch2_page_mkwrite (2)

Status: auto-obsoleted due to no activity on 2025/07/27 23:58
Subsystems: bcachefs
First crash: 339d, last: 125d
Similar bugs (1)
Kernel    Title                                          Rank  Count  Last  Reported  Patched  Status
upstream  INFO: task hung in bch2_page_mkwrite bcachefs  1     1     480d  480d      0/29     auto-obsoleted due to no activity on 2024/08/06 23:06

Sample crash report:
INFO: task syz.0.1178:16013 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc4-syzkaller-00011-gf15d97df5afa #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.1178      state:D stack:24648 pid:16013 tgid:16013 ppid:14501  task_flags:0x400040 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 __bch2_two_state_lock+0x1ea/0x370 fs/bcachefs/two_state_shared_lock.c:7
 bch2_two_state_lock fs/bcachefs/two_state_shared_lock.h:55 [inline]
 bch2_page_mkwrite+0x402/0xee0 fs/bcachefs/fs-io-pagecache.c:623
 do_page_mkwrite+0x14a/0x310 mm/memory.c:3287
 wp_page_shared mm/memory.c:3688 [inline]
 do_wp_page+0x2626/0x5760 mm/memory.c:3907
 handle_pte_fault mm/memory.c:6013 [inline]
 __handle_mm_fault+0x1028/0x5380 mm/memory.c:6140
 handle_mm_fault+0x2d5/0x7f0 mm/memory.c:6309
 do_user_addr_fault+0xa81/0x1390 arch/x86/mm/fault.c:1337
 handle_page_fault arch/x86/mm/fault.c:1480 [inline]
 exc_page_fault+0x68/0x110 arch/x86/mm/fault.c:1538
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f00ae9566d8
RSP: 002b:00007ffd12e90a88 EFLAGS: 00010246
RAX: 0000200000000040 RBX: 0000000000000004 RCX: 0031656c69662f2e
RDX: 0000000000000008 RSI: 0031656c69662f2e RDI: 0000200000000040
RBP: 00007f00aebb7ba0 R08: 00007f00ae800000 R09: 0000000000000001
R10: 0000000000000001 R11: 0000000000000009 R12: 00007f00aebb5fac
R13: 00007ffd12e90b80 R14: fffffffffffffffe R15: 00007ffd12e90ba0
 </TASK>
INFO: task bch-reclaim/loo:16027 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc4-syzkaller-00011-gf15d97df5afa #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:bch-reclaim/loo state:D stack:26856 pid:16027 tgid:16027 ppid:2      task_flags:0x200840 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x168f/0x4c70 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x724/0xe80 kernel/locking/mutex.c:746
 btree_write_buffer_flush_seq+0x1829/0x19a0 fs/bcachefs/btree_write_buffer.c:569
 bch2_btree_write_buffer_journal_flush+0x69/0xb0 fs/bcachefs/btree_write_buffer.c:586
 journal_flush_pins+0x8e0/0xe90 fs/bcachefs/journal_reclaim.c:592
 __bch2_journal_reclaim+0x781/0xd10 fs/bcachefs/journal_reclaim.c:723
 bch2_journal_reclaim_thread+0x177/0x4f0 fs/bcachefs/journal_reclaim.c:765
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8df3b860 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6764
4 locks held by kworker/u8:2/36:
3 locks held by kworker/u8:8/2136:
 #0: ffff88801a089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90005587c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90005587c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
1 lock held by dhcpcd/5486:
 #0: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x323/0x1b50 net/ipv4/devinet.c:1121
2 locks held by getty/5574:
 #0: ffff888034dc20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002ffe2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
3 locks held by kworker/u8:9/8720:
 #0: ffff88814cb75148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88814cb75148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc90004fffc60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90004fffc60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x112/0x14b0 net/ipv6/addrconf.c:4195
5 locks held by kworker/u8:6/14895:
 #0: ffff88801aef3948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801aef3948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b1/0x17a0 kernel/workqueue.c:3319
 #1: ffffc9000518fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000518fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ec/0x17a0 kernel/workqueue.c:3319
 #2: ffffffff8f2d49d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x145/0xbd0 net/core/net_namespace.c:608
 #3: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xdc/0x890 net/core/dev.c:12524
 #4: ffffffff8df41338 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:304 [inline]
 #4: ffffffff8df41338 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f4/0x730 kernel/rcu/tree_exp.h:998
2 locks held by syz.0.1178/16013:
 #0: ffff88807bbe3d08 (vm_lock){++++}-{0:0}, at: do_user_addr_fault+0x2d9/0x1390 arch/x86/mm/fault.c:1328
 #1: ffff8880683f6518 (sb_pagefaults#7){.+.+}-{0:0}, at: do_page_mkwrite+0x14a/0x310 mm/memory.c:3287
5 locks held by syz.0.1178/16014:
 #0: ffff8880683f6420 (sb_writers#15){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:556
 #1: ffff888078639078 (&sb->s_type->i_mutex_key#23){++++}-{4:4}, at: inode_lock include/linux/fs.h:867 [inline]
 #1: ffff888078639078 (&sb->s_type->i_mutex_key#23){++++}-{4:4}, at: do_truncate+0x186/0x220 fs/open.c:63
 #2: ffff888043280a70 (&c->snapshot_create_lock){.+.+}-{4:4}, at: bch2_truncate+0xeb/0x200 fs/bcachefs/io_misc.c:295
 #3: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #3: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #3: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x806/0xda0 fs/bcachefs/btree_iter.c:3385
 #4: ffff8880432a6590 (&c->gc_lock){++++}-{4:4}, at: bch2_btree_update_start+0x68f/0x14c0 fs/bcachefs/btree_update_interior.c:1179
3 locks held by bch-reclaim/loo/16027:
 #0: ffff8880432cad28 (&j->reclaim_lock){+.+.}-{4:4}, at: bch2_journal_reclaim_thread+0x16b/0x4f0 fs/bcachefs/journal_reclaim.c:764
 #1: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #1: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #1: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x806/0xda0 fs/bcachefs/btree_iter.c:3385
 #2: ffff8880432845d0 (&wb->flushing.lock){+.+.}-{4:4}, at: btree_write_buffer_flush_seq+0x1829/0x19a0 fs/bcachefs/btree_write_buffer.c:569
3 locks held by bch-copygc/loop/16028:
 #0: ffff8880432845d0 (&wb->flushing.lock){+.+.}-{4:4}, at: bch2_btree_write_buffer_flush_nocheck_rw fs/bcachefs/btree_write_buffer.c:620 [inline]
 #0: ffff8880432845d0 (&wb->flushing.lock){+.+.}-{4:4}, at: bch2_btree_write_buffer_tryflush+0x130/0x1a0 fs/bcachefs/btree_write_buffer.c:635
 #1: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #1: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #1: ffff888043284228 (&c->btree_trans_barrier){.+.+}-{0:0}, at: bch2_trans_srcu_lock+0xaf/0x220 fs/bcachefs/btree_iter.c:3202
 #2: ffff8880432a6590 (&c->gc_lock){++++}-{4:4}, at: bch2_btree_update_start+0x68f/0x14c0 fs/bcachefs/btree_update_interior.c:1179
1 lock held by syz-executor/18021:
 #0: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_newaddr+0x5b7/0xd20 net/ipv6/addrconf.c:5028
2 locks held by syz-executor/18286:
 #0: ffffffff8f7f7000 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8f7f7000 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8f7f7000 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f2e1508 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4064

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.15.0-rc4-syzkaller-00011-gf15d97df5afa #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/19/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:437
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 132 Comm: kworker/u8:5 Not tainted 6.15.0-rc4-syzkaller-00011-gf15d97df5afa #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/19/2025
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:validate_chain+0x9/0x2140 kernel/locking/lockdep.c:3865
Code: 82 91 fe ff ff eb b3 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 41 57 41 56 41 55 41 54 <53> 48 81 ec e0 00 00 00 49 89 cf 65 48 8b 05 54 8e d3 10 48 89 84
RSP: 0018:ffffc90002e573c8 EFLAGS: 00000082
RAX: ffffffff931a7c58 RBX: 0000000000000004 RCX: 69b2fa4b2ec96ba9
RDX: 0000000000000000 RSI: ffff88801f74c790 RDI: ffff88801f74bc00
RBP: ffff88801f74c6f0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: ffffffff817199f5 R12: 00000000d2bed388
R13: 69b2fa4b2ec96ba9 R14: 000000007acf51b4 R15: ffff88801f74c790
FS:  0000000000000000(0000) GS:ffff888126102000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f70fdb80178 CR3: 0000000033de4000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 __lock_acquire+0xaac/0xd20 kernel/locking/lockdep.c:5235
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5866
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:841 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1155 [inline]
 unwind_next_frame+0xc2/0x2390 arch/x86/kernel/unwind_orc.c:479
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:319 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:345
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4161 [inline]
 slab_alloc_node mm/slub.c:4210 [inline]
 kmem_cache_alloc_node_noprof+0x1bb/0x3c0 mm/slub.c:4262
 __alloc_skb+0x112/0x2d0 net/core/skbuff.c:658
 alloc_skb include/linux/skbuff.h:1340 [inline]
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
 nsim_dev_trap_report_work+0x29a/0xb80 drivers/net/netdevsim/dev.c:851
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xadb/0x17a0 kernel/workqueue.c:3319
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3400
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (5):
Time              Kernel    Commit        Syzkaller  Manager          Title
2025/04/28 23:50  upstream  f15d97df5afa  aeb6ec69   ci2-upstream-fs  INFO: task hung in bch2_page_mkwrite
2025/03/18 00:54  upstream  4701f33a1070  ce3352cd   ci2-upstream-fs  INFO: task hung in bch2_page_mkwrite
2025/01/11 17:02  upstream  77a903cd8e5a  6dbc6a9b   ci2-upstream-fs  INFO: task hung in bch2_page_mkwrite
2024/12/19 05:13  upstream  eabcdba3ad40  1432fc84   ci2-upstream-fs  INFO: task hung in bch2_page_mkwrite
2024/09/27 01:02  upstream  11a299a7933e  9314348a   ci2-upstream-fs  INFO: task hung in bch2_page_mkwrite