INFO: task hung in bch2_fs_file_ioctl (2)

Status: auto-obsoleted due to no activity on 2025/02/10 10:06
Subsystems: bcachefs
First crash: 141d, last: 121d
Similar bugs (1)
Kernel:   upstream
Title:    INFO: task hung in bch2_fs_file_ioctl (bcachefs)
Count:    4
Last:     303d
Reported: 304d
Patched:  0/28
Status:   auto-obsoleted due to no activity on 2024/08/12 02:46

Sample crash report:
INFO: task syz.2.15:6041 blocked for more than 143 seconds.
      Not tainted 6.12.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.15        state:D stack:27392 pid:6041  tgid:5980  ppid:5832   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x17fa/0x4bd0 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 rwsem_down_write_slowpath+0xeee/0x13b0 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1d7/0x220 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:815 [inline]
 bch2_ioc_setflags fs/bcachefs/fs-ioctl.c:96 [inline]
 bch2_fs_file_ioctl+0x1bd0/0x28b0 fs/bcachefs/fs-ioctl.c:547
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl+0xf9/0x170 fs/ioctl.c:893
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6616d7e719
RSP: 002b:00007f6617ab4038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f6616f36058 RCX: 00007f6616d7e719
RDX: 0000000020000040 RSI: 0000000040086602 RDI: 0000000000000004
RBP: 00007f6616df1616 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f6616f36058 R15: 00007fff179ae478
 </TASK>
INFO: task syz.2.15:6042 blocked for more than 145 seconds.
      Not tainted 6.12.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.15        state:D stack:20368 pid:6042  tgid:5980  ppid:5832   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x17fa/0x4bd0 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 rwsem_down_write_slowpath+0xeee/0x13b0 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1d7/0x220 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:815 [inline]
 vfs_unlink+0xe4/0x650 fs/namei.c:4458
 do_unlinkat+0x4ae/0x830 fs/namei.c:4533
 __do_sys_unlink fs/namei.c:4581 [inline]
 __se_sys_unlink fs/namei.c:4579 [inline]
 __x64_sys_unlink+0x47/0x50 fs/namei.c:4579
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6616d7e719
RSP: 002b:00007f6617a93038 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007f6616f36130 RCX: 00007f6616d7e719
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000080
RBP: 00007f6616df1616 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f6616f36130 R15: 00007fff179ae478
 </TASK>
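
Both hung tasks are in uninterruptible sleep (state:D) in down_write() on the same inode rwsem: pid 6041 takes it via inode_lock() in bch2_ioc_setflags() (the bch2_fs_file_ioctl hang in the title), pid 6042 via inode_lock() in vfs_unlink(). A minimal sketch of the ioctl side's lock ordering, paraphrased from the frames above and the lock list below (simplified; the _sketch name is hypothetical, not the upstream function):

   /*
    * FS_IOC_SETFLAGS path as seen in this report:
    * mnt_want_write_file() takes sb_writers (task 6041's lock #0),
    * then inode_lock() tries to take i_rwsem for write (lock #1)
    * and blocks, because another task already holds it.
    */
   static int bch2_ioc_setflags_sketch(struct file *file, void __user *arg)
   {
           struct inode *inode = file_inode(file);
           int ret = mnt_want_write_file(file);    /* sb_writers#21 */

           if (ret)
                   return ret;
           inode_lock(inode);      /* i_mutex_key#27: pid 6041 hangs here */
           /* ... flag update elided ... */
           inode_unlock(inode);
           mnt_drop_write_file(file);
           return ret;
   }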

Showing all locks held in the system:
3 locks held by kworker/u8:1/12:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90000117d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90000117d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcc1408 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/1:0/25:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900001f7d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900001f7d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcc1408 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
1 lock held by khungtaskd/30:
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6720
2 locks held by kworker/u8:3/52:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90000bd7d00 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90000bd7d00 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
3 locks held by kworker/u8:5/596:
 #0: ffff88814d7cb948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88814d7cb948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90002f9fd00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90002f9fd00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcc1408 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4736
2 locks held by getty/5589:
 #0: ffff8880311f50a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
5 locks held by kworker/u8:7/5906:
 #0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc9000441fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000441fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcb48d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
 #3: ffffffff8fcc1408 (rtnl_mutex){+.+.}-{3:3}, at: cleanup_net+0x6af/0xcc0 net/core/net_namespace.c:616
 #4: ffffffff8e93d338 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
 #4: ffffffff8e93d338 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:976
2 locks held by syz.2.15/5981:
 #0: ffff888063c9e420 (sb_writers#21){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2931 [inline]
 #0: ffff888063c9e420 (sb_writers#21){.+.+}-{0:0}, at: vfs_fallocate+0x4fe/0x6e0 fs/open.c:332
 #1: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:815 [inline]
 #1: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: bch2_fallocate_dispatch+0x1e2/0x540 fs/bcachefs/fs-io.c:771
2 locks held by syz.2.15/6041:
 #0: ffff888063c9e420 (sb_writers#21){.+.+}-{0:0}, at: mnt_want_write_file+0x5e/0x200 fs/namespace.c:559
 #1: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:815 [inline]
 #1: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: bch2_ioc_setflags fs/bcachefs/fs-ioctl.c:96 [inline]
 #1: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: bch2_fs_file_ioctl+0x1bd0/0x28b0 fs/bcachefs/fs-ioctl.c:547
3 locks held by syz.2.15/6042:
 #0: ffff888063c9e420 (sb_writers#21){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff888057f78148 (&sb->s_type->i_mutex_key#27/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:850 [inline]
 #1: ffff888057f78148 (&sb->s_type->i_mutex_key#27/1){+.+.}-{3:3}, at: do_unlinkat+0x26a/0x830 fs/namei.c:4520
 #2: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:815 [inline]
 #2: ffff888057f788c8 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: vfs_unlink+0xe4/0x650 fs/namei.c:4458
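
All three syz.2.15 entries above name the same rwsem instance, ffff888057f788c8 (&sb->s_type->i_mutex_key#27): pid 5981 holds it for write, taken in bch2_fallocate_dispatch(), while pid 6041 (setflags ioctl) and pid 6042 (unlink) are queued behind it. A sketch of the holder's side, again paraphrased from the lock list (simplified; hypothetical _sketch name):

   /*
    * The fallocate path holding the lock the other two wait on:
    * vfs_fallocate() takes sb_writers via file_start_write() (lock #0),
    * then the dispatch function takes inode_lock() (lock #1) and, in
    * this report, does not release it.
    */
   static long bch2_fallocate_sketch(struct file *file, int mode,
                                     loff_t offset, loff_t len)
   {
           struct inode *inode = file_inode(file);

           inode_lock(inode);      /* held by pid 5981 while it is stuck */
           /* ... allocation work that never completes elided ... */
           inode_unlock(inode);
           return 0;
   }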
4 locks held by bch-reclaim/loo/6002:
 #0: ffff888055dcb0a8 (&j->reclaim_lock){+.+.}-{3:3}, at: bch2_journal_reclaim_thread+0x167/0x560 fs/bcachefs/journal_reclaim.c:739
 #1: ffff888055d84398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:151 [inline]
 #1: ffff888055d84398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:250 [inline]
 #1: ffff888055d84398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7de/0xd20 fs/bcachefs/btree_iter.c:3228
 #2: ffff888055d84740 (&wb->flushing.lock){+.+.}-{3:3}, at: btree_write_buffer_flush_seq+0x1a39/0x1bc0 fs/bcachefs/btree_write_buffer.c:509
 #3: ffff888055da66d0 (&c->gc_lock){.+.+}-{3:3}, at: bch2_btree_update_start+0x682/0x14e0 fs/bcachefs/btree_update_interior.c:1197
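
Note that bch-reclaim for this filesystem is itself mid-operation, holding the journal reclaim_lock, btree_trans_barrier, the write-buffer flushing lock, and gc_lock. It is plausible that the stuck fallocate in pid 5981 is waiting somewhere in this btree/journal machinery, but whether reclaim is the root blocker or another victim cannot be determined from this snapshot alone.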
1 lock held by syz-executor/6571:
3 locks held by syz.6.91/6597:
2 locks held by syz.6.91/6617:
 #0: ffff88801278e420 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:850 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: filename_create+0x260/0x540 fs/namei.c:4026
2 locks held by syz.6.91/6618:
 #0: ffff88801278e420 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: inode_lock include/linux/fs.h:815 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: open_last_lookups fs/namei.c:3691 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: path_openat+0x89a/0x3590 fs/namei.c:3930
1 lock held by syz.6.91/6625:
 #0: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:825 [inline]
 #0: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: lookup_slow+0x45/0x70 fs/namei.c:1748
2 locks held by syz.6.91/6628:
 #0: ffff88801278e420 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:850 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: lock_rename fs/namei.c:3161 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: do_renameat2+0x62c/0x13f0 fs/namei.c:5105
2 locks held by syz.6.91/6629:
 #0: ffff88801278e420 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:850 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22/1){+.+.}-{3:3}, at: filename_create+0x260/0x540 fs/namei.c:4026
2 locks held by syz.6.91/6631:
 #0: ffff88801278e420 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:825 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: open_last_lookups fs/namei.c:3693 [inline]
 #1: ffff88823bedccc0 (&type->i_mutex_dir_key#22){++++}-{3:3}, at: path_openat+0x88b/0x3590 fs/namei.c:3930
1 lock held by syz-executor/7711:
 #0: ffffffff8fcc1408 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcc1408 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6672
2 locks held by syz.3.241/7756:
 #0: ffff888055ec60e0 (&type->s_umount_key#50/1){+.+.}-{3:3}, at: alloc_super+0x221/0x9d0 fs/super.c:344
 #1: ffffffff8e93d200 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x530 kernel/rcu/tree.c:4562
2 locks held by syz.0.245/7806:
1 lock held by dhcpcd-run-hook/7816:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.12.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xff4/0x1040 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 6597 Comm: syz.6.91 Not tainted 6.12.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
RIP: 0010:kasan_check_range+0x1b8/0x290 mm/kasan/generic.c:189
Code: 4d 01 fb 48 8d 5d 07 48 85 ed 48 0f 49 dd 48 83 e3 f8 48 29 dd 74 12 41 80 3b 00 0f 85 a6 00 00 00 49 ff c3 48 ff cd 75 ee 5b <41> 5c 41 5e 41 5f 5d c3 cc cc cc cc 40 84 ed 75 5f f7 c5 00 ff 00
RSP: 0018:ffffc900045ddbc8 EFLAGS: 00000256
RAX: ffffc900045def01 RBX: 0000000000000010 RCX: ffffffff8141632b
RDX: 0000000000000001 RSI: 0000000000000010 RDI: ffffc900045ddd30
RBP: 0000000000000000 R08: ffffc900045ddd3f R09: 1ffff920008bbba7
R10: dffffc0000000000 R11: fffff520008bbba8 R12: ffffc900045e0000
R13: ffffc900045ddce0 R14: dffffc0000000001 R15: fffff520008bbba8
FS:  00007f1ca1e366c0(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055555fb0d5c8 CR3: 00000000261ce000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __asan_memset+0x23/0x50 mm/kasan/shadow.c:84
 unwind_next_frame+0xcfb/0x22d0 arch/x86/kernel/unwind_orc.c:592
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:579
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:230 [inline]
 slab_free_hook mm/slub.c:2342 [inline]
 slab_free mm/slub.c:4579 [inline]
 kmem_cache_free+0x1a2/0x420 mm/slub.c:4681
 free_buffer_head+0x54/0x240 fs/buffer.c:3036
 try_to_free_buffers+0x311/0x5f0 fs/buffer.c:2977
 shrink_folio_list+0x23d1/0x8fe0 mm/vmscan.c:1433
 shrink_inactive_list mm/vmscan.c:1960 [inline]
 shrink_list mm/vmscan.c:2197 [inline]
 shrink_lruvec+0x16d2/0x2f20 mm/vmscan.c:5715
 shrink_node_memcgs mm/vmscan.c:5917 [inline]
 shrink_node+0x12a4/0x2df0 mm/vmscan.c:5957
 shrink_zones mm/vmscan.c:6201 [inline]
 do_try_to_free_pages+0x69d/0x1b20 mm/vmscan.c:6263
 try_to_free_mem_cgroup_pages+0x4b8/0xb10 mm/vmscan.c:6595
 try_charge_memcg+0x8c2/0x1170 mm/memcontrol.c:2207
 obj_cgroup_charge_pages+0x91/0x230 mm/memcontrol.c:2623
 obj_cgroup_charge+0x380/0x5d0 mm/memcontrol.c:2914
 __memcg_slab_post_alloc_hook+0x1b1/0x7e0 mm/memcontrol.c:2975
 memcg_slab_post_alloc_hook mm/slub.c:2156 [inline]
 slab_post_alloc_hook mm/slub.c:4095 [inline]
 slab_alloc_node mm/slub.c:4134 [inline]
 kmem_cache_alloc_noprof+0x1de/0x2a0 mm/slub.c:4141
 alloc_buffer_head+0x2a/0x290 fs/buffer.c:3020
 folio_alloc_buffers+0x2bc/0x660 fs/buffer.c:928
 create_empty_buffers+0x3a/0x740 fs/buffer.c:1667
 block_read_full_folio+0x25c/0xcd0 fs/buffer.c:2382
 filemap_read_folio+0x14b/0x630 mm/filemap.c:2367
 do_read_cache_folio+0x3f5/0x850 mm/filemap.c:3825
 read_mapping_folio include/linux/pagemap.h:1011 [inline]
 dir_get_folio fs/sysv/dir.c:64 [inline]
 sysv_find_entry+0x16a/0x4b0 fs/sysv/dir.c:154
 sysv_inode_by_name+0x98/0x2a0 fs/sysv/dir.c:370
 sysv_lookup+0x6b/0xe0 fs/sysv/namei.c:38
 lookup_one_qstr_excl+0x11f/0x260 fs/namei.c:1633
 do_renameat2+0x670/0x13f0 fs/namei.c:5111
 __do_sys_rename fs/namei.c:5217 [inline]
 __se_sys_rename fs/namei.c:5215 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:5215
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1ca0f7e719
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1ca1e36038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f1ca1135f80 RCX: 00007f1ca0f7e719
RDX: 0000000000000000 RSI: 0000000020000f40 RDI: 00000000200003c0
RBP: 00007f1ca0ff1616 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1ca1135f80 R15: 00007ffd8c8fd098
 </TASK>
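
The CPU 0 backtrace is incidental: the hung-task watchdog calls trigger_all_cpu_backtrace(), and CPU 0 was caught in direct memcg reclaim under an unrelated sysv rename (syz.6.91). It is not obviously part of the bcachefs lock chain above.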

Crashes (2):
Time              Kernel    Commit        Syzkaller  Manager          Title
2024/11/12 09:56  upstream  2d5404caa8c7  75bb1b32   ci2-upstream-fs  INFO: task hung in bch2_fs_file_ioctl
2024/10/23 12:52  upstream  c2ee9f594da8  15fa2979   ci2-upstream-fs  INFO: task hung in bch2_fs_file_ioctl