======================================================
WARNING: possible circular locking dependency detected
6.6.0-rc5-syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:0/11 is trying to acquire lock:
ffff88805553a120 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_mark_rec_free+0x3f/0x2b0 fs/ntfs3/fsntfs.c:742

but task is already holding lock:
ffff88807c3cb4a0 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1144 [inline]
ffff88807c3cb4a0 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x163/0x1070 fs/ntfs3/frecord.c:3256

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&ni->ni_lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       ntfs_set_state+0x212/0x730 fs/ntfs3/fsntfs.c:946
       attr_set_size+0x3311/0x4290 fs/ntfs3/attrib.c:866
       ntfs_extend_mft+0x2fa/0x4b0 fs/ntfs3/fsntfs.c:527
       ntfs_look_free_mft+0x43d/0x10c0 fs/ntfs3/fsntfs.c:590
       ni_create_attr_list+0x9bd/0x1480 fs/ntfs3/frecord.c:876
       ni_ins_attr_ext+0x365/0xbe0 fs/ntfs3/frecord.c:974
       ni_insert_attr+0x358/0x900 fs/ntfs3/frecord.c:1141
       ni_insert_resident+0xf8/0x3c0 fs/ntfs3/frecord.c:1525
       ntfs_set_ea+0xabc/0x16c0 fs/ntfs3/xattr.c:437
       ntfs_save_wsl_perm+0x14f/0x500 fs/ntfs3/xattr.c:946
       ntfs3_setattr+0x916/0xae0 fs/ntfs3/file.c:708
       notify_change+0xb99/0xe60 fs/attr.c:499
       chown_common+0x500/0x850 fs/open.c:783
       do_fchownat+0x16d/0x240 fs/open.c:814
       __do_sys_chown fs/open.c:834 [inline]
       __se_sys_chown fs/open.c:832 [inline]
       __x64_sys_chown+0x82/0x90 fs/open.c:832
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&ni->file.run_lock#2){++++}-{3:3}:
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1573
       ntfs_extend_mft+0x160/0x4b0 fs/ntfs3/fsntfs.c:511
       ntfs_look_free_mft+0x43d/0x10c0 fs/ntfs3/fsntfs.c:590
       ntfs_create_inode+0x519/0x3b00 fs/ntfs3/inode.c:1310
       ntfs_atomic_open+0x423/0x570 fs/ntfs3/namei.c:422
       atomic_open fs/namei.c:3358 [inline]
       lookup_open fs/namei.c:3466 [inline]
       open_last_lookups fs/namei.c:3563 [inline]
       path_openat+0x1044/0x3180 fs/namei.c:3793
       do_filp_open+0x234/0x490 fs/namei.c:3823
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1422
       do_sys_open fs/open.c:1437 [inline]
       __do_sys_open fs/open.c:1445 [inline]
       __se_sys_open fs/open.c:1441 [inline]
       __x64_sys_open+0x225/0x270 fs/open.c:1441
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&wnd->rw_lock/1){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5136
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5753
       down_write_nested+0x3d/0x50 kernel/locking/rwsem.c:1689
       ntfs_mark_rec_free+0x3f/0x2b0 fs/ntfs3/fsntfs.c:742
       ni_write_inode+0x54f/0x1070 fs/ntfs3/frecord.c:3353
       write_inode fs/fs-writeback.c:1456 [inline]
       __writeback_single_inode+0x69b/0xfa0 fs/fs-writeback.c:1673
       writeback_sb_inodes+0x8e3/0x1210 fs/fs-writeback.c:1899
       wb_writeback+0x44d/0xc60 fs/fs-writeback.c:2075
       wb_do_writeback fs/fs-writeback.c:2222 [inline]
       wb_workfn+0x400/0xff0 fs/fs-writeback.c:2262
       process_one_work kernel/workqueue.c:2630 [inline]
       process_scheduled_works+0x90f/0x1400 kernel/workqueue.c:2703
       worker_thread+0xa5f/0xff0 kernel/workqueue.c:2784
       kthread+0x2d3/0x370 kernel/kthread.c:388
       ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304

other info that might help us debug this:

Chain exists of:
  &wnd->rw_lock/1 --> &ni->file.run_lock#2 --> &ni->ni_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock);
                               lock(&ni->file.run_lock#2);
                               lock(&ni->ni_lock);
  lock(&wnd->rw_lock/1);

 *** DEADLOCK ***

3 locks held by kworker/u4:0/11:
 #0: ffff888141e55938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2605 [inline]
 #0: ffff888141e55938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1400 kernel/workqueue.c:2703
 #1: ffffc90000107d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2605 [inline]
 #1: ffffc90000107d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1400 kernel/workqueue.c:2703
 #2: ffff88807c3cb4a0 (&ni->ni_lock){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1144 [inline]
 #2: ffff88807c3cb4a0 (&ni->ni_lock){+.+.}-{3:3}, at: ni_write_inode+0x163/0x1070 fs/ntfs3/frecord.c:3256

stack backtrace:
CPU: 1 PID: 11 Comm: kworker/u4:0 Not tainted 6.6.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/06/2023
Workqueue: writeback wb_workfn (flush-7:3)
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x375/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5136
 lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5753
 down_write_nested+0x3d/0x50 kernel/locking/rwsem.c:1689
 ntfs_mark_rec_free+0x3f/0x2b0 fs/ntfs3/fsntfs.c:742
 ni_write_inode+0x54f/0x1070 fs/ntfs3/frecord.c:3353
 write_inode fs/fs-writeback.c:1456 [inline]
 __writeback_single_inode+0x69b/0xfa0 fs/fs-writeback.c:1673
 writeback_sb_inodes+0x8e3/0x1210 fs/fs-writeback.c:1899
 wb_writeback+0x44d/0xc60 fs/fs-writeback.c:2075
 wb_do_writeback fs/fs-writeback.c:2222 [inline]
 wb_workfn+0x400/0xff0 fs/fs-writeback.c:2262
 process_one_work kernel/workqueue.c:2630 [inline]
 process_scheduled_works+0x90f/0x1400 kernel/workqueue.c:2703
 worker_thread+0xa5f/0xff0 kernel/workqueue.c:2784
 kthread+0x2d3/0x370 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
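The report above says the kernel already knows the ordering &wnd->rw_lock/1 --> &ni->file.run_lock#2 --> &ni->ni_lock, and the writeback path now tries to add the reverse edge by taking &wnd->rw_lock/1 while holding &ni->ni_lock, closing a cycle. Below is a minimal userspace sketch of that inversion pattern, collapsed to two locks; the thread and mutex names are illustrative stand-ins chosen here, not the ntfs3 code, and the sleeps only widen the race window so the hang is reproducible.

/*
 * Illustrative only: mutex_a stands in for &ni->ni_lock, mutex_b for
 * &wnd->rw_lock/1. Two threads taking the same locks in opposite orders
 * is the circular dependency lockdep flags before it ever hangs for real.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mutex_a = PTHREAD_MUTEX_INITIALIZER; /* role of &ni->ni_lock */
static pthread_mutex_t mutex_b = PTHREAD_MUTEX_INITIALIZER; /* role of &wnd->rw_lock/1 */

static void *writeback_like_path(void *arg)
{
	/* CPU0 in the scenario: holds A, then wants B */
	pthread_mutex_lock(&mutex_a);
	sleep(1);
	pthread_mutex_lock(&mutex_b);   /* blocks: B is held by the other thread */
	pthread_mutex_unlock(&mutex_b);
	pthread_mutex_unlock(&mutex_a);
	return NULL;
}

static void *syscall_like_path(void *arg)
{
	/* CPU1 in the scenario: holds B, then wants A */
	pthread_mutex_lock(&mutex_b);
	sleep(1);
	pthread_mutex_lock(&mutex_a);   /* blocks: A is held by the other thread */
	pthread_mutex_unlock(&mutex_a);
	pthread_mutex_unlock(&mutex_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, writeback_like_path, NULL);
	pthread_create(&t2, NULL, syscall_like_path, NULL);
	pthread_join(t1, NULL);         /* never returns once both threads block */
	pthread_join(t2, NULL);
	puts("no deadlock this run");
	return 0;
}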