======================================================
WARNING: possible circular locking dependency detected
6.6.0-rc3-syzkaller-00214-ge402b08634b3 #0 Not tainted
------------------------------------------------------
kworker/u4:6/8680 is trying to acquire lock:
ffff88801ef3a120 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_mark_rec_free+0x2f4/0x400 fs/ntfs3/fsntfs.c:742

but task is already holding lock:
ffff88807bd00860 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1144 [inline]
ffff88807bd00860 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_write_inode+0x1c3/0x2810 fs/ntfs3/frecord.c:3256

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&ni->ni_lock#2){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       ntfs_set_state+0x1d2/0x6e0 fs/ntfs3/fsntfs.c:946
       attr_set_size+0x139c/0x2ca0 fs/ntfs3/attrib.c:866
       ntfs_extend_mft+0x29f/0x430 fs/ntfs3/fsntfs.c:527
       ntfs_look_free_mft+0x777/0xdd0 fs/ntfs3/fsntfs.c:590
       ni_create_attr_list+0x937/0x1520 fs/ntfs3/frecord.c:876
       ni_ins_attr_ext+0x23f/0xaf0 fs/ntfs3/frecord.c:974
       ni_insert_attr+0x310/0x870 fs/ntfs3/frecord.c:1141
       ni_insert_resident+0xd2/0x3a0 fs/ntfs3/frecord.c:1525
       ntfs_set_ea+0xf46/0x13d0 fs/ntfs3/xattr.c:437
       ntfs_save_wsl_perm+0x134/0x3d0 fs/ntfs3/xattr.c:946
       ntfs3_setattr+0x92e/0xb20 fs/ntfs3/file.c:708
       notify_change+0x742/0x11c0 fs/attr.c:499
       chown_common+0x596/0x660 fs/open.c:783
       do_fchownat+0x140/0x1f0 fs/open.c:814
       __do_sys_lchown fs/open.c:839 [inline]
       __se_sys_lchown fs/open.c:837 [inline]
       __x64_sys_lchown+0x7e/0xc0 fs/open.c:837
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&ni->file.run_lock#2){++++}-{3:3}:
       down_write+0x93/0x200 kernel/locking/rwsem.c:1573
       ntfs_extend_mft+0x138/0x430 fs/ntfs3/fsntfs.c:511
       ntfs_look_free_mft+0x777/0xdd0 fs/ntfs3/fsntfs.c:590
       ni_create_attr_list+0x937/0x1520 fs/ntfs3/frecord.c:876
       ni_ins_attr_ext+0x23f/0xaf0 fs/ntfs3/frecord.c:974
       ni_insert_attr+0x310/0x870 fs/ntfs3/frecord.c:1141
       ni_insert_resident+0xd2/0x3a0 fs/ntfs3/frecord.c:1525
       ntfs_set_ea+0xf46/0x13d0 fs/ntfs3/xattr.c:437
       ntfs_save_wsl_perm+0x134/0x3d0 fs/ntfs3/xattr.c:946
       ntfs3_setattr+0x92e/0xb20 fs/ntfs3/file.c:708
       notify_change+0x742/0x11c0 fs/attr.c:499
       chown_common+0x596/0x660 fs/open.c:783
       do_fchownat+0x140/0x1f0 fs/open.c:814
       __do_sys_lchown fs/open.c:839 [inline]
       __se_sys_lchown fs/open.c:837 [inline]
       __x64_sys_lchown+0x7e/0xc0 fs/open.c:837
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&wnd->rw_lock/1){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
       lock_acquire kernel/locking/lockdep.c:5753 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
       down_write_nested+0x97/0x200 kernel/locking/rwsem.c:1689
       ntfs_mark_rec_free+0x2f4/0x400 fs/ntfs3/fsntfs.c:742
       ni_write_inode+0x475/0x2810 fs/ntfs3/frecord.c:3353
       write_inode fs/fs-writeback.c:1456 [inline]
       __writeback_single_inode+0xa81/0xe70 fs/fs-writeback.c:1673
       writeback_sb_inodes+0x599/0x1070 fs/fs-writeback.c:1899
       wb_writeback+0x2a5/0xa90 fs/fs-writeback.c:2075
       wb_do_writeback fs/fs-writeback.c:2222 [inline]
       wb_workfn+0x29c/0xfd0 fs/fs-writeback.c:2262
       process_one_work+0x884/0x15c0 kernel/workqueue.c:2630
       process_scheduled_works kernel/workqueue.c:2703 [inline]
       worker_thread+0x8b9/0x1290 kernel/workqueue.c:2784
       kthread+0x33c/0x440 kernel/kthread.c:388
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304

other info that might help us debug this:

Chain exists of:
  &wnd->rw_lock/1 --> &ni->file.run_lock#2 --> &ni->ni_lock#2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock#2);
                               lock(&ni->file.run_lock#2);
                               lock(&ni->ni_lock#2);
  lock(&wnd->rw_lock/1);

 *** DEADLOCK ***

3 locks held by kworker/u4:6/8680:
 #0: ffff888018242d38 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x787/0x15c0 kernel/workqueue.c:2605
 #1: ffffc9000c6afd80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e9/0x15c0 kernel/workqueue.c:2606
 #2: ffff88807bd00860 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1144 [inline]
 #2: ffff88807bd00860 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_write_inode+0x1c3/0x2810 fs/ntfs3/frecord.c:3256

stack backtrace:
CPU: 1 PID: 8680 Comm: kworker/u4:6 Not tainted 6.6.0-rc3-syzkaller-00214-ge402b08634b3 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/06/2023
Workqueue: writeback wb_workfn (flush-7:4)
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x311/0x3f0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
 lock_acquire kernel/locking/lockdep.c:5753 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
 down_write_nested+0x97/0x200 kernel/locking/rwsem.c:1689
 ntfs_mark_rec_free+0x2f4/0x400 fs/ntfs3/fsntfs.c:742
 ni_write_inode+0x475/0x2810 fs/ntfs3/frecord.c:3353
 write_inode fs/fs-writeback.c:1456 [inline]
 __writeback_single_inode+0xa81/0xe70 fs/fs-writeback.c:1673
 writeback_sb_inodes+0x599/0x1070 fs/fs-writeback.c:1899
 wb_writeback+0x2a5/0xa90 fs/fs-writeback.c:2075
 wb_do_writeback fs/fs-writeback.c:2222 [inline]
 wb_workfn+0x29c/0xfd0 fs/fs-writeback.c:2262
 process_one_work+0x884/0x15c0 kernel/workqueue.c:2630
 process_scheduled_works kernel/workqueue.c:2703 [inline]
 worker_thread+0x8b9/0x1290 kernel/workqueue.c:2784
 kthread+0x33c/0x440 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304