syzbot

possible deadlock in reiserfs_get_block

Status: upstream: reported on 2022/12/23 16:13
Labels: reiserfs
Reported-by: syzbot+8a4c84020c63609f15ec@syzkaller.appspotmail.com
First crash: 164d, last: 13d
Discussions (1)
  [syzbot] [reiserfs?] possible deadlock in reiserfs_get_block: 0 replies (1 including the bot), last reply 2022/12/23 16:13

Sample crash report:
REISERFS (device loop4): using 3.5.x disk format
REISERFS (device loop4): Created .reiserfs_priv - reserved for xattr storage.
REISERFS warning (device loop4): super-6502 reiserfs_getopt: unknown mount option "01777777777777777777777µÕFî§<< G4š¶mRŸ±â½ÆuÆÌëê0º‰w/™^£àíù†¶Žæ"
======================================================
WARNING: possible circular locking dependency detected
6.4.0-rc2-syzkaller-00238-gcbd6ac3837cd #0 Not tainted
------------------------------------------------------
syz-executor.4/29135 is trying to acquire lock:
ffff88801779b768 (&mm->mmap_lock){++++}-{3:3}, at: __might_fault+0x93/0x120 mm/memory.c:5731

but task is already holding lock:
ffff888035a88090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x7a/0xd0 fs/reiserfs/lock.c:27

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&sbi->lock){+.+.}-{3:3}:
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5691
       __mutex_lock_common+0x1d8/0x2530 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x1b/0x20 kernel/locking/mutex.c:799
       reiserfs_write_lock+0x7a/0xd0 fs/reiserfs/lock.c:27
       reiserfs_get_block+0x280/0x5130 fs/reiserfs/inode.c:680
       do_mpage_readpage+0x911/0x1fa0 fs/mpage.c:234
       mpage_readahead+0x454/0x930 fs/mpage.c:382
       read_pages+0x183/0x830 mm/readahead.c:161
       page_cache_ra_unbounded+0x697/0x7c0 mm/readahead.c:270
       page_cache_sync_readahead include/linux/pagemap.h:1211 [inline]
       filemap_get_pages+0x49c/0x20c0 mm/filemap.c:2595
       filemap_read+0x45a/0x1170 mm/filemap.c:2690
       call_read_iter include/linux/fs.h:1862 [inline]
       generic_file_splice_read+0x240/0x640 fs/splice.c:419
       do_splice_to fs/splice.c:902 [inline]
       splice_direct_to_actor+0x40c/0xbd0 fs/splice.c:973
       do_splice_direct+0x283/0x3d0 fs/splice.c:1082
       do_sendfile+0x620/0xff0 fs/read_write.c:1254
       __do_sys_sendfile64 fs/read_write.c:1322 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1308
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (mapping.invalidate_lock#16){.+.+}-{3:3}:
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5691
       down_read+0x47/0x2f0 kernel/locking/rwsem.c:1520
       filemap_invalidate_lock_shared include/linux/fs.h:830 [inline]
       filemap_fault+0x647/0x1810 mm/filemap.c:3271
       __do_fault+0x136/0x500 mm/memory.c:4176
       do_read_fault mm/memory.c:4530 [inline]
       do_fault mm/memory.c:4659 [inline]
       do_pte_missing mm/memory.c:3647 [inline]
       handle_pte_fault mm/memory.c:4947 [inline]
       __handle_mm_fault mm/memory.c:5089 [inline]
       handle_mm_fault+0x41a8/0x5860 mm/memory.c:5243
       faultin_page mm/gup.c:925 [inline]
       __get_user_pages+0x5d9/0x12b0 mm/gup.c:1147
       populate_vma_page_range+0x2c7/0x3b0 mm/gup.c:1543
       __mm_populate+0x279/0x450 mm/gup.c:1652
       mm_populate include/linux/mm.h:3153 [inline]
       vm_mmap_pgoff+0x300/0x410 mm/util.c:548
       ksys_mmap_pgoff+0x4f9/0x6d0 mm/mmap.c:1440
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&mm->mmap_lock){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3108 [inline]
       check_prevs_add kernel/locking/lockdep.c:3227 [inline]
       validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3842
       __lock_acquire+0x1295/0x2000 kernel/locking/lockdep.c:5074
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5691
       __might_fault+0xba/0x120 mm/memory.c:5732
       reiserfs_ioctl+0x121/0x340 fs/reiserfs/ioctl.c:96
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:870 [inline]
       __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Chain exists of:
  &mm->mmap_lock --> mapping.invalidate_lock#16 --> &sbi->lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sbi->lock);
                               lock(mapping.invalidate_lock#16);
                               lock(&sbi->lock);
  rlock(&mm->mmap_lock);

 *** DEADLOCK ***
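
The three legs above correspond to three syscalls operating on one reiserfs file. No reproducer is attached to this report, so the sketch below is only a hypothetical user-space shape of those legs, reconstructed from the traces (the mount path is an assumption):

#include <fcntl.h>
#include <linux/fs.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <unistd.h>

int main(void)
{
	long version;
	int fd = open("/mnt/reiserfs/file", O_RDWR);	/* hypothetical mount point */
	int out = open("/dev/null", O_WRONLY);

	/* leg #2: the splice read path ends in reiserfs_get_block(),
	 * which takes &sbi->lock while mapping.invalidate_lock is held */
	sendfile(out, fd, NULL, 4096);

	/* leg #1: MAP_POPULATE faults the pages in immediately, taking
	 * mapping.invalidate_lock (shared) under &mm->mmap_lock */
	mmap(NULL, 4096, PROT_READ, MAP_PRIVATE | MAP_POPULATE, fd, 0);

	/* leg #0: FS_IOC_GETVERSION (cmd 0x80087601, matching RSI in the
	 * register dump below) writes to user memory while reiserfs_ioctl()
	 * holds &sbi->lock; a fault here takes &mm->mmap_lock and closes
	 * the cycle */
	ioctl(fd, FS_IOC_GETVERSION, &version);
	return 0;
}

Lockdep only needs each acquisition order to be observed once; the calls do not have to run concurrently or actually hang for the cycle to be reported.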

1 lock held by syz-executor.4/29135:
 #0: ffff888035a88090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x7a/0xd0 fs/reiserfs/lock.c:27

stack backtrace:
CPU: 0 PID: 29135 Comm: syz-executor.4 Not tainted 6.4.0-rc2-syzkaller-00238-gcbd6ac3837cd #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/28/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x2fe/0x3b0 kernel/locking/lockdep.c:2188
 check_prev_add kernel/locking/lockdep.c:3108 [inline]
 check_prevs_add kernel/locking/lockdep.c:3227 [inline]
 validate_chain+0x166b/0x58e0 kernel/locking/lockdep.c:3842
 __lock_acquire+0x1295/0x2000 kernel/locking/lockdep.c:5074
 lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5691
 __might_fault+0xba/0x120 mm/memory.c:5732
 reiserfs_ioctl+0x121/0x340 fs/reiserfs/ioctl.c:96
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:870 [inline]
 __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:856
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fb34d48c169
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fb34e2bc168 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fb34d5abf80 RCX: 00007fb34d48c169
RDX: 0000000020000040 RSI: 0000000080087601 RDI: 0000000000000004
RBP: 00007fb34d4e7ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc1135e01f R14: 00007fb34e2bc300 R15: 0000000000022000
 </TASK>
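
Leg #0 is visible directly in the backtrace: reiserfs_ioctl() reaches __might_fault(), i.e. it writes to user memory while reiserfs_write_lock() is held. A minimal sketch of that pattern, reconstructed from the traces above rather than quoted verbatim from fs/reiserfs/ioctl.c:

/* Sketch only: simplified shape of leg #0, not the exact kernel code. */
long reiserfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct inode *inode = file_inode(filp);
	int err = 0;

	reiserfs_write_lock(inode->i_sb);	/* acquires &sbi->lock */
	switch (cmd) {
	case FS_IOC_GETVERSION:
		/* put_user() may fault; the fault path takes
		 * &mm->mmap_lock while &sbi->lock is still held */
		err = put_user(inode->i_generation, (int __user *)arg);
		break;
	/* ... other commands elided ... */
	}
	reiserfs_write_unlock(inode->i_sb);
	return err;
}

One conventional way to break this kind of inversion (a sketch under the same assumptions, not the actual upstream resolution of this report) is to keep fault-prone user copies outside the filesystem lock:

	case FS_IOC_GETVERSION: {
		u32 generation;

		reiserfs_write_lock(inode->i_sb);
		generation = inode->i_generation;
		reiserfs_write_unlock(inode->i_sb);

		/* the copy may still fault and take &mm->mmap_lock, but
		 * &sbi->lock is no longer held, so no cycle is created */
		err = put_user(generation, (int __user *)arg);
		break;
	}

Snapshotting i_generation under the lock and copying it out afterwards removes the &sbi->lock -> &mm->mmap_lock edge, which is enough to break the reported cycle.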

Crashes (4):
Time              Kernel    Commit        Syzkaller  Manager          Title
2023/05/20 02:18  upstream  cbd6ac3837cd  96689200   ci2-upstream-fs  possible deadlock in reiserfs_get_block
2023/04/27 22:58  upstream  6e98b09da931  6f3d6fa7   ci2-upstream-fs  possible deadlock in reiserfs_get_block
2023/01/22 14:58  upstream  2241ab53cbb5  cc0f9968   ci2-upstream-fs  possible deadlock in reiserfs_get_block
2022/12/19 16:07  upstream  f9ff5644bcc0  05494336   ci2-upstream-fs  possible deadlock in reiserfs_get_block