======================================================
WARNING: possible circular locking dependency detected
6.1.28-syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:39/5457 is trying to acquire lock:
ffff888039d40120 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_mark_rec_free+0x3b/0x2b0 fs/ntfs3/fsntfs.c:713

but task is already holding lock:
ffff888072fa6840 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1124 [inline]
ffff888072fa6840 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x151/0x1240 fs/ntfs3/frecord.c:3240

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&ni->ni_lock){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
       __mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
       ntfs_set_state+0x217/0x6f0 fs/ntfs3/fsntfs.c:920
       attr_set_size+0x32e5/0x42f0 fs/ntfs3/attrib.c:880
       ntfs_extend_mft+0x2f6/0x4b0 fs/ntfs3/fsntfs.c:498
       ntfs_look_free_mft+0x439/0x10c0 fs/ntfs3/fsntfs.c:561
       ni_create_attr_list+0x9d0/0x1510 fs/ntfs3/frecord.c:873
       ni_ins_attr_ext+0x330/0xbf0 fs/ntfs3/frecord.c:968
       ni_insert_attr+0x354/0x900 fs/ntfs3/frecord.c:1135
       ni_insert_resident+0xf4/0x3c0 fs/ntfs3/frecord.c:1519
       ntfs_set_ea+0xa70/0x16b0 fs/ntfs3/xattr.c:398
       ntfs_save_wsl_perm+0x139/0x490 fs/ntfs3/xattr.c:929
       ntfs3_setattr+0x961/0xb70 fs/ntfs3/file.c:817
       notify_change+0xdcd/0x1080 fs/attr.c:482
       chown_common+0x5aa/0x900 fs/open.c:736
       do_fchownat+0x169/0x240 fs/open.c:767
       __do_sys_lchown fs/open.c:792 [inline]
       __se_sys_lchown fs/open.c:790 [inline]
       __x64_sys_lchown+0x81/0x90 fs/open.c:790
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&ni->file.run_lock#2){++++}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
       down_write+0x36/0x60 kernel/locking/rwsem.c:1573
       ntfs_extend_mft+0x15c/0x4b0 fs/ntfs3/fsntfs.c:482
       ntfs_look_free_mft+0x439/0x10c0 fs/ntfs3/fsntfs.c:561
       ni_create_attr_list+0x9d0/0x1510 fs/ntfs3/frecord.c:873
       ni_ins_attr_ext+0x330/0xbf0 fs/ntfs3/frecord.c:968
       ni_insert_attr+0x354/0x900 fs/ntfs3/frecord.c:1135
       ni_insert_resident+0xf4/0x3c0 fs/ntfs3/frecord.c:1519
       ntfs_set_ea+0xa70/0x16b0 fs/ntfs3/xattr.c:398
       ntfs_save_wsl_perm+0x139/0x490 fs/ntfs3/xattr.c:929
       ntfs3_setattr+0x961/0xb70 fs/ntfs3/file.c:817
       notify_change+0xdcd/0x1080 fs/attr.c:482
       chown_common+0x5aa/0x900 fs/open.c:736
       do_fchownat+0x169/0x240 fs/open.c:767
       __do_sys_lchown fs/open.c:792 [inline]
       __se_sys_lchown fs/open.c:790 [inline]
       __x64_sys_lchown+0x81/0x90 fs/open.c:790
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&wnd->rw_lock/1){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3098 [inline]
       check_prevs_add kernel/locking/lockdep.c:3217 [inline]
       validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3832
       __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
       down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
       ntfs_mark_rec_free+0x3b/0x2b0 fs/ntfs3/fsntfs.c:713
       ni_write_inode+0x433/0x1240 fs/ntfs3/frecord.c:3332
       write_inode fs/fs-writeback.c:1443 [inline]
       __writeback_single_inode+0x67d/0x11e0 fs/fs-writeback.c:1655
       writeback_sb_inodes+0xc21/0x1ac0 fs/fs-writeback.c:1881
       wb_writeback+0x49d/0xe10 fs/fs-writeback.c:2055
       wb_do_writeback fs/fs-writeback.c:2198 [inline]
       wb_workfn+0x427/0x1020 fs/fs-writeback.c:2238
       process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
       worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
       kthread+0x26e/0x300 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

other info that might help us debug this:

Chain exists of:
  &wnd->rw_lock/1 --> &ni->file.run_lock#2 --> &ni->ni_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock);
                               lock(&ni->file.run_lock#2);
                               lock(&ni->ni_lock);
  lock(&wnd->rw_lock/1);

 *** DEADLOCK ***

3 locks held by kworker/u4:39/5457:
 #0: ffff888015aa6938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc900061b7d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffff888072fa6840 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1124 [inline]
 #2: ffff888072fa6840 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x151/0x1240 fs/ntfs3/frecord.c:3240

stack backtrace:
CPU: 0 PID: 5457 Comm: kworker/u4:39 Not tainted 6.1.28-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/28/2023
Workqueue: writeback wb_workfn (flush-7:5)
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2178
 check_prev_add kernel/locking/lockdep.c:3098 [inline]
 check_prevs_add kernel/locking/lockdep.c:3217 [inline]
 validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3832
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5056
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5669
 down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
 ntfs_mark_rec_free+0x3b/0x2b0 fs/ntfs3/fsntfs.c:713
 ni_write_inode+0x433/0x1240 fs/ntfs3/frecord.c:3332
 write_inode fs/fs-writeback.c:1443 [inline]
 __writeback_single_inode+0x67d/0x11e0 fs/fs-writeback.c:1655
 writeback_sb_inodes+0xc21/0x1ac0 fs/fs-writeback.c:1881
 wb_writeback+0x49d/0xe10 fs/fs-writeback.c:2055
 wb_do_writeback fs/fs-writeback.c:2198 [inline]
 wb_workfn+0x427/0x1020 fs/fs-writeback.c:2238
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
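
The "Possible unsafe locking scenario" above is the usual lock-order inversion: the writeback path takes &ni->ni_lock and then wants the bitmap lock, while the setattr/MFT-extension path establishes the opposite ordering. Below is a minimal, self-contained userspace sketch (pthreads, plain C) of that AB-BA pattern, reduced to two locks. The names inode_lock/bitmap_lock and the two *_path functions are stand-ins invented for illustration; this is not the ntfs3 kernel code, only the shape of the ordering conflict lockdep is flagging.

/* lockdep-style AB-BA inversion, reduced to two mutexes (illustrative only). */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t inode_lock  = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &ni->ni_lock   */
static pthread_mutex_t bitmap_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &wnd->rw_lock  */

/* Analogue of writeback -> ni_write_inode -> ntfs_mark_rec_free:
 * inode lock first, bitmap lock second. */
static void *writeback_path(void *arg)
{
	pthread_mutex_lock(&inode_lock);
	pthread_mutex_lock(&bitmap_lock);
	/* ... mark the MFT record free ... */
	pthread_mutex_unlock(&bitmap_lock);
	pthread_mutex_unlock(&inode_lock);
	return NULL;
}

/* Analogue of lchown -> ntfs3_setattr -> ... -> ntfs_extend_mft -> ntfs_set_state:
 * bitmap/MFT side first, inode lock second, i.e. the reverse order. */
static void *setattr_path(void *arg)
{
	pthread_mutex_lock(&bitmap_lock);
	pthread_mutex_lock(&inode_lock);
	/* ... update the volume dirty state ... */
	pthread_mutex_unlock(&inode_lock);
	pthread_mutex_unlock(&bitmap_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* With unlucky timing each thread ends up waiting for the lock the
	 * other already holds; lockdep reports the inconsistent ordering even
	 * on runs where the deadlock does not actually trigger. */
	pthread_create(&t1, NULL, writeback_path, NULL);
	pthread_create(&t2, NULL, setattr_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	puts("finished without deadlocking (this time)");
	return 0;
}

The fix for this class of report is to make every path acquire the locks in one agreed order (or drop one lock before taking the other), which is why the report prints the full dependency chain rather than just the two frames that collided.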