syzbot


possible deadlock in mark_as_free_ex

Status: auto-obsoleted due to no activity on 2023/09/13 11:34
Subsystems: ntfs3
Reported-by: syzbot+e94d98936a0ed08bde43@syzkaller.appspotmail.com
First crash: 330d, last: 326d
Discussions (5)
Title Replies (including bot) Last reply
[PATCH AUTOSEL 5.15 05/28] fs/ntfs3: fix deadlock in mark_as_free_ex 1 (1) 2023/10/29 22:58
[PATCH AUTOSEL 6.1 05/39] fs/ntfs3: fix deadlock in mark_as_free_ex 1 (1) 2023/10/29 22:56
[PATCH AUTOSEL 6.5 06/52] fs/ntfs3: fix deadlock in mark_as_free_ex 1 (1) 2023/10/29 22:52
[PATCH 7/8] fs/ntfs3: fix deadlock in mark_as_free_ex 1 (1) 2023/07/03 07:27
[syzbot] [ntfs3?] possible deadlock in mark_as_free_ex 0 (1) 2023/05/31 19:32

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.4.0-rc5-syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:10/7546 is trying to acquire lock:
ffff88803df50268 (&wnd->rw_lock){++++}-{3:3}, at: mark_as_free_ex+0x3d/0x330 fs/ntfs3/fsntfs.c:2464

but task is already holding lock:
ffff8880383f9e80 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1141 [inline]
ffff8880383f9e80 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x167/0x10c0 fs/ntfs3/frecord.c:3252

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&ni->ni_lock){+.+.}-{3:3}:
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
       __mutex_lock_common+0x1d8/0x2530 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x1b/0x20 kernel/locking/mutex.c:799
       ntfs_set_state+0x212/0x730 fs/ntfs3/fsntfs.c:945
       mark_as_free_ex+0x6e/0x330 fs/ntfs3/fsntfs.c:2466
       run_deallocate_ex+0x244/0x5f0 fs/ntfs3/attrib.c:122
       attr_set_size+0x1684/0x4290 fs/ntfs3/attrib.c:750
       ntfs_truncate fs/ntfs3/file.c:393 [inline]
       ntfs3_setattr+0x556/0xb00 fs/ntfs3/file.c:682
       notify_change+0xc8b/0xf40 fs/attr.c:483
       do_truncate+0x220/0x300 fs/open.c:66
       handle_truncate fs/namei.c:3295 [inline]
       do_open fs/namei.c:3640 [inline]
       path_openat+0x294e/0x3170 fs/namei.c:3791
       do_filp_open+0x234/0x490 fs/namei.c:3818
       do_sys_openat2+0x13f/0x500 fs/open.c:1356
       do_sys_open fs/open.c:1372 [inline]
       __do_sys_creat fs/open.c:1448 [inline]
       __se_sys_creat fs/open.c:1442 [inline]
       __x64_sys_creat+0x123/0x160 fs/open.c:1442
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&wnd->rw_lock){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3113 [inline]
       check_prevs_add kernel/locking/lockdep.c:3232 [inline]
       validate_chain+0x166b/0x58f0 kernel/locking/lockdep.c:3847
       __lock_acquire+0x1316/0x2070 kernel/locking/lockdep.c:5088
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
       down_write_nested+0x3d/0x50 kernel/locking/rwsem.c:1689
       mark_as_free_ex+0x3d/0x330 fs/ntfs3/fsntfs.c:2464
       run_deallocate+0x13b/0x230 fs/ntfs3/fsntfs.c:2534
       ni_try_remove_attr_list+0x1558/0x1930 fs/ntfs3/frecord.c:773
       ni_write_inode+0xd14/0x10c0 fs/ntfs3/frecord.c:3318
       write_inode fs/fs-writeback.c:1456 [inline]
       __writeback_single_inode+0x69b/0xfa0 fs/fs-writeback.c:1668
       writeback_sb_inodes+0x8e3/0x11d0 fs/fs-writeback.c:1894
       wb_writeback+0x458/0xc70 fs/fs-writeback.c:2068
       wb_do_writeback fs/fs-writeback.c:2211 [inline]
       wb_workfn+0x400/0xff0 fs/fs-writeback.c:2251
       process_one_work+0x8a0/0x10e0 kernel/workqueue.c:2405
       worker_thread+0xa63/0x1210 kernel/workqueue.c:2552
       kthread+0x2b8/0x350 kernel/kthread.c:379
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock);
                               lock(&wnd->rw_lock);
                               lock(&ni->ni_lock);
  lock(&wnd->rw_lock);

 *** DEADLOCK ***
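
The scenario lockdep prints is an AB-BA inversion between the per-inode
mutex &ni->ni_lock and the cluster-bitmap rwsem &wnd->rw_lock (lockdep
tracks lock classes, so the ni_lock taken under the bitmap lock need not
belong to the same inode as the one held by writeback). Below is a
minimal standalone sketch of the inversion, modeling the two kernel
primitives with pthreads; the function names merely label the call
chains above and are not kernel code:

    #include <pthread.h>
    #include <stddef.h>

    static pthread_mutex_t ni_lock = PTHREAD_MUTEX_INITIALIZER;   /* models &ni->ni_lock */
    static pthread_rwlock_t rw_lock = PTHREAD_RWLOCK_INITIALIZER; /* models &wnd->rw_lock */

    /* Chain #1 (truncate): mark_as_free_ex() write-locks wnd->rw_lock,
     * then ntfs_set_state() takes an ni_lock to mark the volume dirty. */
    static void *truncate_path(void *arg)
    {
            pthread_rwlock_wrlock(&rw_lock);  /* mark_as_free_ex() */
            pthread_mutex_lock(&ni_lock);     /* ntfs_set_state() */
            pthread_mutex_unlock(&ni_lock);
            pthread_rwlock_unlock(&rw_lock);
            return NULL;
    }

    /* Chain #0 (writeback): ni_write_inode() holds ni_lock when
     * run_deallocate() -> mark_as_free_ex() write-locks wnd->rw_lock:
     * the opposite order. */
    static void *writeback_path(void *arg)
    {
            pthread_mutex_lock(&ni_lock);     /* ni_write_inode() */
            pthread_rwlock_wrlock(&rw_lock);  /* mark_as_free_ex() */
            pthread_rwlock_unlock(&rw_lock);
            pthread_mutex_unlock(&ni_lock);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            for (;;) {  /* repeat until the AB-BA interleaving hits and both threads block */
                    pthread_create(&a, NULL, truncate_path, NULL);
                    pthread_create(&b, NULL, writeback_path, NULL);
                    pthread_join(a, NULL);
                    pthread_join(b, NULL);
            }
    }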

3 locks held by kworker/u4:10/7546:
 #0: ffff888145645938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90015097d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
 #2: ffff8880383f9e80 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1141 [inline]
 #2: ffff8880383f9e80 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x167/0x10c0 fs/ntfs3/frecord.c:3252

stack backtrace:
CPU: 1 PID: 7546 Comm: kworker/u4:10 Not tainted 6.4.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Workqueue: writeback wb_workfn (flush-7:1)
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x2fe/0x3b0 kernel/locking/lockdep.c:2188
 check_prev_add kernel/locking/lockdep.c:3113 [inline]
 check_prevs_add kernel/locking/lockdep.c:3232 [inline]
 validate_chain+0x166b/0x58f0 kernel/locking/lockdep.c:3847
 __lock_acquire+0x1316/0x2070 kernel/locking/lockdep.c:5088
 lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
 down_write_nested+0x3d/0x50 kernel/locking/rwsem.c:1689
 mark_as_free_ex+0x3d/0x330 fs/ntfs3/fsntfs.c:2464
 run_deallocate+0x13b/0x230 fs/ntfs3/fsntfs.c:2534
 ni_try_remove_attr_list+0x1558/0x1930 fs/ntfs3/frecord.c:773
 ni_write_inode+0xd14/0x10c0 fs/ntfs3/frecord.c:3318
 write_inode fs/fs-writeback.c:1456 [inline]
 __writeback_single_inode+0x69b/0xfa0 fs/fs-writeback.c:1668
 writeback_sb_inodes+0x8e3/0x11d0 fs/fs-writeback.c:1894
 wb_writeback+0x458/0xc70 fs/fs-writeback.c:2068
 wb_do_writeback fs/fs-writeback.c:2211 [inline]
 wb_workfn+0x400/0xff0 fs/fs-writeback.c:2251
 process_one_work+0x8a0/0x10e0 kernel/workqueue.c:2405
 worker_thread+0xa63/0x1210 kernel/workqueue.c:2552
 kthread+0x2b8/0x350 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
EXT4-fs error (device loop5): ext4_mb_generate_buddy:1100: group 0, block bitmap and bg descriptor inconsistent: 25 vs 150994969 free clusters
EXT4-fs (loop5): Delayed block allocation failed for inode 16 at logical offset 16 with max blocks 5 with error 28
EXT4-fs (loop5): This should not happen!! Data will be lost

EXT4-fs (loop5): Total free blocks count 0
EXT4-fs (loop5): Free/Dirty block details
EXT4-fs (loop5): free_blocks=2415919104
EXT4-fs (loop5): dirty_blocks=16
EXT4-fs (loop5): Block reservation details
EXT4-fs (loop5): i_reserved_data_blocks=1
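
Per the discussion titles above, this report corresponds to the patch
"fs/ntfs3: fix deadlock in mark_as_free_ex", presumably breaking
dependency chain #1 on the mark_as_free_ex() side. A plausible shape
for such a fix, sketched here as an assumption rather than the verbatim
patch, is to record the bitmap inconsistency while &wnd->rw_lock is held
and defer the ntfs_set_state() call (and with it the ni_lock
acquisition) until the rwsem has been dropped:

    /* Schematic only: assumes mark_as_free_ex()'s real signature and that
     * ntfs_set_state() is safe to call without wnd->rw_lock held. */
    void mark_as_free_ex(struct ntfs_sb_info *sbi, CLST lcn, CLST len, bool trim)
    {
            struct wnd_bitmap *wnd = &sbi->used.bitmap;
            bool dirty = false;

            down_write_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS);
            if (!wnd_is_used(wnd, lcn, len)) {
                    /* Previously ntfs_set_state() was called here, nesting
                     * ni_lock inside rw_lock; only record the error now. */
                    dirty = true;
                    /* ... trim [lcn, lcn + len) to its used part ... */
            }
            /* ... free the clusters and update the reserved zone ... */
            up_write(&wnd->rw_lock);

            if (dirty)
                    ntfs_set_state(sbi, NTFS_DIRTY_ERROR); /* takes ni_lock with rw_lock released */
    }

With ntfs_set_state() no longer nested inside &wnd->rw_lock, the only
remaining ordering is ni_lock before rw_lock, and the cycle lockdep
reports disappears.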

Crashes (3):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2023/06/05 11:34 upstream 9561de3a55be a4ae4f42 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in mark_as_free_ex
2023/06/01 21:14 upstream 929ed21dfdb6 a4ae4f42 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in mark_as_free_ex
2023/05/31 19:20 upstream 48b1320a674e 09898419 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in mark_as_free_ex