syzbot


possible deadlock in iterate_dir (3)

Status: upstream: reported on 2025/09/14 06:44
Subsystems: btrfs
Reported-by: syzbot+e290013facb5d5159eca@syzkaller.appspotmail.com
First crash: 51d, last: 2d17h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [btrfs?] possible deadlock in iterate_dir (3) | 0 (1) | 2025/09/14 06:44
Similar bugs (10)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | possible deadlock in iterate_dir (3) origin:lts-only | 4 | C | error | | 7 | 28d | 288d | 0/3 | upstream: reported C repro on 2025/01/16 11:46
linux-5.15 | possible deadlock in iterate_dir (3) missing-backport origin:upstream | 4 | C | error | | 20 | 95d | 576d | 0/3 | upstream: reported C repro on 2024/04/03 07:56
upstream | possible deadlock in iterate_dir (2) fs | 4 | | | | 1 | 302d | 294d | 0/29 | auto-obsoleted due to no activity on 2025/04/12 17:38
linux-6.1 | possible deadlock in iterate_dir (2) | 4 | | | | 2 | 555d | 580d | 0/3 | auto-obsoleted due to no activity on 2024/08/02 17:42
linux-6.1 | possible deadlock in iterate_dir | 4 | | | | 2 | 835d | 840d | 0/3 | auto-obsoleted due to no activity on 2023/10/27 20:39
linux-4.14 | possible deadlock in iterate_dir ubifs | 4 | C | | | 11375 | 969d | 1964d | 0/1 | upstream: reported C repro on 2020/06/15 06:16
linux-5.15 | possible deadlock in iterate_dir (2) | 4 | | | | 2 | 702d | 795d | 0/3 | auto-obsoleted due to no activity on 2024/03/08 13:21
linux-4.19 | possible deadlock in iterate_dir | 4 | | | | 1 | 2136d | 2136d | 0/1 | auto-closed as invalid on 2020/04/24 15:06
upstream | possible deadlock in iterate_dir fs | 4 | | | | 23 | 800d | 1042d | 0/29 | auto-obsoleted due to no activity on 2023/12/01 22:38
linux-5.15 | possible deadlock in iterate_dir | 4 | | | | 1 | 949d | 949d | 0/3 | auto-obsoleted due to no activity on 2023/07/25 13:55

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.9.749/13907 is trying to acquire lock:
ffff888034ea1950 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock_killable+0x1d/0x70 include/linux/mmap_lock.h:377

but task is already holding lock:
ffff8880352bbcf8 (&type->i_mutex_dir_key#5){++++}-{4:4}, at: iterate_dir+0x29e/0x580 fs/readdir.c:101

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #6 (&type->i_mutex_dir_key#5){++++}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_read+0x97/0x1f0 kernel/locking/rwsem.c:1537
       inode_lock_shared include/linux/fs.h:995 [inline]
       lookup_slow+0x46/0x70 fs/namei.c:1832
       walk_component+0x2d2/0x400 fs/namei.c:2151
       lookup_last fs/namei.c:2652 [inline]
       path_lookupat+0x163/0x430 fs/namei.c:2676
       filename_lookup+0x212/0x570 fs/namei.c:2705
       kern_path+0x35/0x50 fs/namei.c:2863
       is_same_device fs/btrfs/volumes.c:759 [inline]
       device_list_add+0xe2a/0x22a0 fs/btrfs/volumes.c:894
       btrfs_scan_one_device+0x3ee/0x650 fs/btrfs/volumes.c:1493
       btrfs_get_tree_super fs/btrfs/super.c:1865 [inline]
       btrfs_get_tree_subvol fs/btrfs/super.c:2094 [inline]
       btrfs_get_tree+0x4ab/0x1920 fs/btrfs/super.c:2128
       vfs_get_tree+0x92/0x2b0 fs/super.c:1751
       fc_mount fs/namespace.c:1208 [inline]
       do_new_mount_fc fs/namespace.c:3651 [inline]
       do_new_mount+0x302/0xa10 fs/namespace.c:3727
       do_mount fs/namespace.c:4050 [inline]
       __do_sys_mount fs/namespace.c:4238 [inline]
       __se_sys_mount+0x313/0x410 fs/namespace.c:4215
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #5 (&fs_devs->device_list_mutex){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:535 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:547
       btrfs_run_dev_stats+0x102/0x10e0 fs/btrfs/volumes.c:7777
       commit_cowonly_roots+0x1b2/0x860 fs/btrfs/transaction.c:1348
       btrfs_commit_transaction+0xfc7/0x3950 fs/btrfs/transaction.c:2459
       btrfs_rebuild_free_space_tree+0x28b/0x6d0 fs/btrfs/free-space-tree.c:1385
       btrfs_start_pre_rw_mount+0x128f/0x1bf0 fs/btrfs/disk-io.c:3062
       open_ctree+0x2b11/0x3d20 fs/btrfs/disk-io.c:3619
       btrfs_fill_super fs/btrfs/super.c:987 [inline]
       btrfs_get_tree_super fs/btrfs/super.c:1951 [inline]
       btrfs_get_tree_subvol fs/btrfs/super.c:2094 [inline]
       btrfs_get_tree+0x1061/0x1920 fs/btrfs/super.c:2128
       vfs_get_tree+0x92/0x2b0 fs/super.c:1751
       fc_mount fs/namespace.c:1208 [inline]
       do_new_mount_fc fs/namespace.c:3651 [inline]
       do_new_mount+0x302/0xa10 fs/namespace.c:3727
       do_mount fs/namespace.c:4050 [inline]
       __do_sys_mount fs/namespace.c:4238 [inline]
       __se_sys_mount+0x313/0x410 fs/namespace.c:4215
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #4 (&fs_info->reloc_mutex){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:535 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:547
       btrfs_commit_transaction+0xedd/0x3950 fs/btrfs/transaction.c:2405
       btrfs_rebuild_free_space_tree+0x28b/0x6d0 fs/btrfs/free-space-tree.c:1385
       btrfs_start_pre_rw_mount+0x128f/0x1bf0 fs/btrfs/disk-io.c:3062
       open_ctree+0x2b11/0x3d20 fs/btrfs/disk-io.c:3619
       btrfs_fill_super fs/btrfs/super.c:987 [inline]
       btrfs_get_tree_super fs/btrfs/super.c:1951 [inline]
       btrfs_get_tree_subvol fs/btrfs/super.c:2094 [inline]
       btrfs_get_tree+0x1061/0x1920 fs/btrfs/super.c:2128
       vfs_get_tree+0x92/0x2b0 fs/super.c:1751
       fc_mount fs/namespace.c:1208 [inline]
       do_new_mount_fc fs/namespace.c:3651 [inline]
       do_new_mount+0x302/0xa10 fs/namespace.c:3727
       do_mount fs/namespace.c:4050 [inline]
       __do_sys_mount fs/namespace.c:4238 [inline]
       __se_sys_mount+0x313/0x410 fs/namespace.c:4215
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (btrfs_trans_unblocked){++++}-{0:0}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       wait_current_trans+0x22b/0x520 fs/btrfs/transaction.c:531
       start_transaction+0x6d1/0x1620 fs/btrfs/transaction.c:707
       btrfs_finish_one_ordered+0x7b0/0x21a0 fs/btrfs/inode.c:3143
       btrfs_work_helper+0x39b/0xc00 fs/btrfs/async-thread.c:312
       process_one_work kernel/workqueue.c:3263 [inline]
       process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
       worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
       kthread+0x711/0x8a0 kernel/kthread.c:463
       ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

-> #2 (sb_internal#3){.+.+}-{0:0}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
       percpu_down_read_freezable include/linux/percpu-rwsem.h:83 [inline]
       __sb_start_write include/linux/fs.h:1916 [inline]
       sb_start_intwrite include/linux/fs.h:2099 [inline]
       start_transaction+0x56e/0x1620 fs/btrfs/transaction.c:699
       btrfs_dirty_inode+0x9f/0x190 fs/btrfs/inode.c:6270
       inode_update_time fs/inode.c:2117 [inline]
       __file_update_time fs/inode.c:2345 [inline]
       file_update_time+0x34d/0x490 fs/inode.c:2375
       btrfs_page_mkwrite+0x5dd/0x1a80 fs/btrfs/file.c:1915
       do_page_mkwrite+0x150/0x310 mm/memory.c:3488
       do_shared_fault mm/memory.c:5774 [inline]
       do_fault mm/memory.c:5836 [inline]
       do_pte_missing mm/memory.c:4361 [inline]
       handle_pte_fault mm/memory.c:6177 [inline]
       __handle_mm_fault mm/memory.c:6318 [inline]
       handle_mm_fault+0x124b/0x3400 mm/memory.c:6487
       do_user_addr_fault+0xa7c/0x1380 arch/x86/mm/fault.c:1336
       handle_page_fault arch/x86/mm/fault.c:1476 [inline]
       exc_page_fault+0x82/0x100 arch/x86/mm/fault.c:1532
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618

-> #1 (sb_pagefaults#4){.+.+}-{0:0}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
       percpu_down_read_freezable include/linux/percpu-rwsem.h:83 [inline]
       __sb_start_write include/linux/fs.h:1916 [inline]
       sb_start_pagefault include/linux/fs.h:2081 [inline]
       btrfs_page_mkwrite+0x32c/0x1a80 fs/btrfs/file.c:1874
       do_page_mkwrite+0x150/0x310 mm/memory.c:3488
       do_shared_fault mm/memory.c:5774 [inline]
       do_fault mm/memory.c:5836 [inline]
       do_pte_missing mm/memory.c:4361 [inline]
       handle_pte_fault mm/memory.c:6177 [inline]
       __handle_mm_fault mm/memory.c:6318 [inline]
       handle_mm_fault+0x124b/0x3400 mm/memory.c:6487
       do_user_addr_fault+0x764/0x1380 arch/x86/mm/fault.c:1387
       handle_page_fault arch/x86/mm/fault.c:1476 [inline]
       exc_page_fault+0x82/0x100 arch/x86/mm/fault.c:1532
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618

-> #0 (&mm->mmap_lock){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_read_killable+0x9d/0x220 kernel/locking/rwsem.c:1560
       mmap_read_lock_killable+0x1d/0x70 include/linux/mmap_lock.h:377
       get_mmap_lock_carefully mm/mmap_lock.c:377 [inline]
       lock_mm_and_find_vma+0x2a8/0x300 mm/mmap_lock.c:428
       do_user_addr_fault+0x331/0x1380 arch/x86/mm/fault.c:1359
       handle_page_fault arch/x86/mm/fault.c:1476 [inline]
       exc_page_fault+0x82/0x100 arch/x86/mm/fault.c:1532
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
       filldir64+0x2c1/0x690 fs/readdir.c:-1
       dir_emit include/linux/fs.h:3986 [inline]
       offset_dir_emit fs/libfs.c:507 [inline]
       offset_iterate_dir fs/libfs.c:523 [inline]
       offset_readdir+0x44c/0x560 fs/libfs.c:572
       iterate_dir+0x3a5/0x580 fs/readdir.c:108
       __do_sys_getdents64 fs/readdir.c:410 [inline]
       __se_sys_getdents64+0xe4/0x260 fs/readdir.c:396
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &mm->mmap_lock --> &fs_devs->device_list_mutex --> &type->i_mutex_dir_key#5

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(&type->i_mutex_dir_key#5);
                               lock(&fs_devs->device_list_mutex);
                               lock(&type->i_mutex_dir_key#5);
  rlock(&mm->mmap_lock);

 *** DEADLOCK ***

2 locks held by syz.9.749/13907:
 #0: ffff88803683d128 (&f->f_pos_lock){+.+.}-{4:4}, at: fdget_pos+0x253/0x320 fs/file.c:1232
 #1: ffff8880352bbcf8 (&type->i_mutex_dir_key#5){++++}-{4:4}, at: iterate_dir+0x29e/0x580 fs/readdir.c:101

stack backtrace:
CPU: 1 UID: 0 PID: 13907 Comm: syz.9.749 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
 down_read_killable+0x9d/0x220 kernel/locking/rwsem.c:1560
 mmap_read_lock_killable+0x1d/0x70 include/linux/mmap_lock.h:377
 get_mmap_lock_carefully mm/mmap_lock.c:377 [inline]
 lock_mm_and_find_vma+0x2a8/0x300 mm/mmap_lock.c:428
 do_user_addr_fault+0x331/0x1380 arch/x86/mm/fault.c:1359
 handle_page_fault arch/x86/mm/fault.c:1476 [inline]
 exc_page_fault+0x82/0x100 arch/x86/mm/fault.c:1532
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
RIP: 0010:filldir64+0x2c1/0x690 fs/readdir.c:379
Code: e8 48 8b 44 24 50 49 89 44 24 08 48 8b 4c 24 08 48 8b 44 24 58 48 89 01 48 8b 04 24 8b 54 24 14 49 bc 00 00 00 00 00 fc ff df <66> 89 41 10 80 e2 0f 88 51 12 49 63 ee c6 44 29 13 00 4c 8d 79 13
RSP: 0018:ffffc900198f7c90 EFLAGS: 00050283
RAX: 0000000000000020 RBX: ffffc900198f7e38 RCX: 0000200000001ff0
RDX: 000000000000000a RSI: 0000200000001fd0 RDI: 0000200000002010
RBP: 00007ffffffff000 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffed1005daeb41 R12: dffffc0000000000
R13: ffff88803d721358 R14: 000000000000000a R15: 0000200000002010
 dir_emit include/linux/fs.h:3986 [inline]
 offset_dir_emit fs/libfs.c:507 [inline]
 offset_iterate_dir fs/libfs.c:523 [inline]
 offset_readdir+0x44c/0x560 fs/libfs.c:572
 iterate_dir+0x3a5/0x580 fs/readdir.c:108
 __do_sys_getdents64 fs/readdir.c:410 [inline]
 __se_sys_getdents64+0xe4/0x260 fs/readdir.c:396
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f28f1beefc9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f28efe35038 EFLAGS: 00000246 ORIG_RAX: 00000000000000d9
RAX: ffffffffffffffda RBX: 00007f28f1e46090 RCX: 00007f28f1beefc9
RDX: 0000000000001000 RSI: 0000200000001f80 RDI: 0000000000000004
RBP: 00007f28f1c71f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f28f1e46128 R14: 00007f28f1e46090 R15: 00007fffd12a2a08
 </TASK>
----------------
Code disassembly (best guess):
   0:	e8 48 8b 44 24       	call   0x24448b4d
   5:	50                   	push   %rax
   6:	49 89 44 24 08       	mov    %rax,0x8(%r12)
   b:	48 8b 4c 24 08       	mov    0x8(%rsp),%rcx
  10:	48 8b 44 24 58       	mov    0x58(%rsp),%rax
  15:	48 89 01             	mov    %rax,(%rcx)
  18:	48 8b 04 24          	mov    (%rsp),%rax
  1c:	8b 54 24 14          	mov    0x14(%rsp),%edx
  20:	49 bc 00 00 00 00 00 	movabs $0xdffffc0000000000,%r12
  27:	fc ff df
* 2a:	66 89 41 10          	mov    %ax,0x10(%rcx) <-- trapping instruction
  2e:	80 e2 0f             	and    $0xf,%dl
  31:	88 51 12             	mov    %dl,0x12(%rcx)
  34:	49 63 ee             	movslq %r14d,%rbp
  37:	c6 44 29 13 00       	movb   $0x0,0x13(%rcx,%rbp,1)
  3c:	4c 8d 79 13          	lea    0x13(%rcx),%r15
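
The cycle lockdep reports above is a lock-order inversion: getdents64() -> iterate_dir() holds the directory inode rwsem (&type->i_mutex_dir_key) and then faults on the user dirent buffer in filldir64(), which needs mm->mmap_lock (edge #0). The existing dependency chain runs the other way: from mm->mmap_lock through the page-fault and transaction edges (#1..#5) to &fs_devs->device_list_mutex, whose holder takes the same inode rwsem via kern_path()/lookup_slow() during a btrfs device scan (edge #6). The userspace C sketch below is only an analogy, not kernel code: it collapses the intermediate edges into two pthread mutexes with illustrative names to show the ABBA pattern that can leave both tasks blocked forever.

/*
 * Minimal userspace analogy of the reported inversion (build: cc -pthread file.c).
 * Lock names are illustrative only; the real chain has several intermediate locks.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t inode_dir_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ &type->i_mutex_dir_key */
static pthread_mutex_t mmap_lock      = PTHREAD_MUTEX_INITIALIZER; /* ~ &mm->mmap_lock         */

static void *readdir_task(void *arg)          /* models getdents64()/iterate_dir()      */
{
    (void)arg;
    pthread_mutex_lock(&inode_dir_lock);      /* iterate_dir() locks the directory      */
    usleep(1000);                             /* widen the race window                  */
    pthread_mutex_lock(&mmap_lock);           /* filldir64() faults on the user buffer  */
    pthread_mutex_unlock(&mmap_lock);
    pthread_mutex_unlock(&inode_dir_lock);
    return NULL;
}

static void *mount_task(void *arg)            /* models the mount-side dependency chain */
{
    (void)arg;
    pthread_mutex_lock(&mmap_lock);           /* stands in for the head of edges #1..#5 */
    usleep(1000);
    pthread_mutex_lock(&inode_dir_lock);      /* kern_path() -> lookup_slow() (edge #6) */
    pthread_mutex_unlock(&inode_dir_lock);
    pthread_mutex_unlock(&mmap_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, readdir_task, NULL);
    pthread_create(&b, NULL, mount_task, NULL);
    /* With this timing both threads usually block on each other: an ABBA deadlock. */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("no deadlock this run");
    return 0;
}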

Crashes (8):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/10/29 05:20 | upstream | 8eefed8f65cc | fd2207e7 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/10/16 07:24 | upstream | 7ea30958b305 | 19568248 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/10/13 21:42 | upstream | 3a8660878839 | b6605ba8 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/09/22 18:52 | upstream | 07e27ad16399 | 0ac7291c | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/09/19 18:36 | upstream | 097a6c336d00 | 67c37560 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/09/16 18:34 | upstream | 46a51f4f5eda | e2beed91 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/09/15 23:32 | upstream | f83ec76bf285 | e2beed91 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
2025/09/10 06:35 | upstream | 9dd1835ecda5 | fdeaa69b | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in iterate_dir
* Struck through repros no longer work on HEAD.