======================================================
WARNING: possible circular locking dependency detected
6.1.115-syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:4/75 is trying to acquire lock:
ffff8880622b6120 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_mark_rec_free+0x3b/0x2b0 fs/ntfs3/fsntfs.c:713

but task is already holding lock:
ffff888052952d40 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1116 [inline]
ffff888052952d40 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x16b/0x1070 fs/ntfs3/frecord.c:3318

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&ni->ni_lock){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x132/0xd80 kernel/locking/mutex.c:747
       ntfs_set_state+0x1fa/0x660 fs/ntfs3/fsntfs.c:920
       attr_set_size+0x330d/0x4310 fs/ntfs3/attrib.c:884
       ntfs_extend_mft+0x2f2/0x4a0 fs/ntfs3/fsntfs.c:498
       ntfs_look_free_mft+0x778/0x10c0 fs/ntfs3/fsntfs.c:680
       ni_create_attr_list+0x9b6/0x1470 fs/ntfs3/frecord.c:873
       ni_ins_attr_ext+0x330/0xbf0 fs/ntfs3/frecord.c:974
       ni_insert_attr fs/ntfs3/frecord.c:1141 [inline]
       ni_insert_resident fs/ntfs3/frecord.c:1525 [inline]
       ni_add_name+0x619/0xc30 fs/ntfs3/frecord.c:3103
       ni_rename+0xbe/0x1e0 fs/ntfs3/frecord.c:3143
       ntfs_rename+0x74a/0xd10 fs/ntfs3/namei.c:318
       vfs_rename+0xd32/0x10f0 fs/namei.c:4874
       do_renameat2+0xde0/0x1440 fs/namei.c:5027
       __do_sys_renameat2 fs/namei.c:5060 [inline]
       __se_sys_renameat2 fs/namei.c:5057 [inline]
       __x64_sys_renameat2+0xce/0xe0 fs/namei.c:5057
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&ni->file.run_lock#2){++++}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_read+0xad/0xa30 kernel/locking/rwsem.c:1520
       mi_read+0x17d/0x5a0 fs/ntfs3/record.c:129
       mi_format_new+0x1a7/0x5c0 fs/ntfs3/record.c:415
       ni_add_subrecord+0xde/0x430 fs/ntfs3/frecord.c:371
       ntfs_look_free_mft+0x874/0x10c0 fs/ntfs3/fsntfs.c:686
       ni_create_attr_list+0x9b6/0x1470 fs/ntfs3/frecord.c:873
       ni_ins_attr_ext+0x330/0xbf0 fs/ntfs3/frecord.c:974
       ni_insert_attr+0x354/0x900 fs/ntfs3/frecord.c:1141
       ni_insert_resident+0xf4/0x3c0 fs/ntfs3/frecord.c:1525
       ntfs_set_ea+0xab8/0x1660 fs/ntfs3/xattr.c:445
       ntfs_save_wsl_perm+0x139/0x490 fs/ntfs3/xattr.c:976
       ntfs3_setattr+0x961/0xb70 fs/ntfs3/file.c:816
       notify_change+0xce3/0xfc0 fs/attr.c:499
       chown_common+0x5aa/0x900 fs/open.c:736
       do_fchownat+0x169/0x240 fs/open.c:767
       __do_sys_lchown fs/open.c:792 [inline]
       __se_sys_lchown fs/open.c:790 [inline]
       __x64_sys_lchown+0x81/0x90 fs/open.c:790
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&wnd->rw_lock/1){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
       __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
       ntfs_mark_rec_free+0x3b/0x2b0 fs/ntfs3/fsntfs.c:713
       ni_write_inode+0x506/0x1070 fs/ntfs3/frecord.c:3413
       write_inode fs/fs-writeback.c:1460 [inline]
       __writeback_single_inode+0x67d/0x11e0 fs/fs-writeback.c:1677
       writeback_sb_inodes+0xc2b/0x1b20 fs/fs-writeback.c:1903
       wb_writeback+0x49d/0xe10 fs/fs-writeback.c:2077
       wb_do_writeback fs/fs-writeback.c:2220 [inline]
       wb_workfn+0x427/0x1020 fs/fs-writeback.c:2260
       process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
       worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
       kthread+0x28d/0x320 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

other info that might help us debug this:

Chain exists of:
  &wnd->rw_lock/1 --> &ni->file.run_lock#2 --> &ni->ni_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock);
                               lock(&ni->file.run_lock#2);
                               lock(&ni->ni_lock);
  lock(&wnd->rw_lock/1);

 *** DEADLOCK ***

3 locks held by kworker/u4:4/75:
 #0: ffff88801c668938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc900015f7d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #2: ffff888052952d40 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1116 [inline]
 #2: ffff888052952d40 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x16b/0x1070 fs/ntfs3/frecord.c:3318

stack backtrace:
CPU: 1 PID: 75 Comm: kworker/u4:4 Not tainted 6.1.115-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: writeback wb_workfn (flush-7:3)
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
 ntfs_mark_rec_free+0x3b/0x2b0 fs/ntfs3/fsntfs.c:713
 ni_write_inode+0x506/0x1070 fs/ntfs3/frecord.c:3413
 write_inode fs/fs-writeback.c:1460 [inline]
 __writeback_single_inode+0x67d/0x11e0 fs/fs-writeback.c:1677
 writeback_sb_inodes+0xc2b/0x1b20 fs/fs-writeback.c:1903
 wb_writeback+0x49d/0xe10 fs/fs-writeback.c:2077
 wb_do_writeback fs/fs-writeback.c:2220 [inline]
 wb_workfn+0x427/0x1020 fs/fs-writeback.c:2260
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295