======================================================
WARNING: possible circular locking dependency detected
6.12.0-rc1-syzkaller-00042-gf23aa4c0761a #0 Not tainted
------------------------------------------------------
kworker/u8:4/66 is trying to acquire lock:
ffff888060048128 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_mark_rec_free+0x3f/0x2b0 fs/ntfs3/fsntfs.c:742

but task is already holding lock:
ffff8880584351c8 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1129 [inline]
ffff8880584351c8 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_write_inode+0x1bc/0x1010 fs/ntfs3/frecord.c:3333

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&ni->ni_lock#2){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5825
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       ntfs_set_state+0x1ff/0x6c0 fs/ntfs3/fsntfs.c:947
       run_deallocate_ex+0x244/0x5f0 fs/ntfs3/attrib.c:122
       attr_set_size+0x168d/0x4300 fs/ntfs3/attrib.c:753
       ntfs_truncate fs/ntfs3/file.c:458 [inline]
       ntfs3_setattr+0x7a4/0xb80 fs/ntfs3/file.c:774
       notify_change+0xbca/0xe90 fs/attr.c:503
       do_truncate+0x220/0x310 fs/open.c:65
       handle_truncate fs/namei.c:3395 [inline]
       do_open fs/namei.c:3778 [inline]
       path_openat+0x2e1e/0x3590 fs/namei.c:3933
       do_filp_open+0x235/0x490 fs/namei.c:3960
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1415
       do_sys_open fs/open.c:1430 [inline]
       __do_sys_creat fs/open.c:1506 [inline]
       __se_sys_creat fs/open.c:1500 [inline]
       __x64_sys_creat+0x123/0x170 fs/open.c:1500
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ni->file.run_lock#2){++++}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5825
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       mi_read+0x181/0x5a0 fs/ntfs3/record.c:129
       mi_format_new+0x1ab/0x5d0 fs/ntfs3/record.c:420
       ni_add_subrecord+0xe2/0x430 fs/ntfs3/frecord.c:372
       ntfs_look_free_mft+0x878/0x10c0 fs/ntfs3/fsntfs.c:715
       ni_create_attr_list+0x9bd/0x1480 fs/ntfs3/frecord.c:876
       ni_ins_attr_ext+0x369/0xbe0 fs/ntfs3/frecord.c:974
       ni_insert_attr fs/ntfs3/frecord.c:1141 [inline]
       ni_insert_resident fs/ntfs3/frecord.c:1525 [inline]
       ni_add_name+0x809/0xe90 fs/ntfs3/frecord.c:3115
       ni_rename+0xc2/0x1e0 fs/ntfs3/frecord.c:3155
       ntfs_rename+0x7c1/0xd10 fs/ntfs3/namei.c:317
       vfs_rename+0xbdb/0xf00 fs/namei.c:5013
       do_renameat2+0xd94/0x13f0 fs/namei.c:5170
       __do_sys_rename fs/namei.c:5217 [inline]
       __se_sys_rename fs/namei.c:5215 [inline]
       __x64_sys_rename+0x82/0x90 fs/namei.c:5215
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&wnd->rw_lock/1){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1384/0x2050 kernel/locking/lockdep.c:5202
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5825
       down_write_nested+0xa2/0x220 kernel/locking/rwsem.c:1693
       ntfs_mark_rec_free+0x3f/0x2b0 fs/ntfs3/fsntfs.c:742
       ni_write_inode+0xb8a/0x1010 fs/ntfs3/frecord.c:3433
       write_inode fs/fs-writeback.c:1503 [inline]
       __writeback_single_inode+0x711/0x10d0 fs/fs-writeback.c:1723
       writeback_sb_inodes+0x80c/0x1370 fs/fs-writeback.c:1954
       wb_writeback+0x41b/0xbd0 fs/fs-writeback.c:2134
       wb_do_writeback fs/fs-writeback.c:2281 [inline]
       wb_workfn+0x410/0x1090 fs/fs-writeback.c:2321
       process_one_work kernel/workqueue.c:3229 [inline]
       process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
       worker_thread+0x870/0xd30 kernel/workqueue.c:3391
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  &wnd->rw_lock/1 --> &ni->file.run_lock#2 --> &ni->ni_lock#2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ni->ni_lock#2);
                               lock(&ni->file.run_lock#2);
                               lock(&ni->ni_lock#2);
  lock(&wnd->rw_lock/1);

 *** DEADLOCK ***

3 locks held by kworker/u8:4/66:
 #0: ffff8881416f6948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff8881416f6948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900015d7d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900015d7d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffff8880584351c8 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1129 [inline]
 #2: ffff8880584351c8 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_write_inode+0x1bc/0x1010 fs/ntfs3/frecord.c:3333

stack backtrace:
CPU: 0 UID: 0 PID: 66 Comm: kworker/u8:4 Not tainted 6.12.0-rc1-syzkaller-00042-gf23aa4c0761a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: writeback wb_workfn (flush-7:2)
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1384/0x2050 kernel/locking/lockdep.c:5202
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5825
 down_write_nested+0xa2/0x220 kernel/locking/rwsem.c:1693
 ntfs_mark_rec_free+0x3f/0x2b0 fs/ntfs3/fsntfs.c:742
 ni_write_inode+0xb8a/0x1010 fs/ntfs3/frecord.c:3433
 write_inode fs/fs-writeback.c:1503 [inline]
 __writeback_single_inode+0x711/0x10d0 fs/fs-writeback.c:1723
 writeback_sb_inodes+0x80c/0x1370 fs/fs-writeback.c:1954
 wb_writeback+0x41b/0xbd0 fs/fs-writeback.c:2134
 wb_do_writeback fs/fs-writeback.c:2281 [inline]
 wb_workfn+0x410/0x1090 fs/fs-writeback.c:2321
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
bridge0: port 1(bridge_slave_0) entered blocking state
bridge0: port 1(bridge_slave_0) entered forwarding state
bridge0: port 2(bridge_slave_1) entered blocking state
bridge0: port 2(bridge_slave_1) entered forwarding state
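For context, the report's "Chain exists of" line can be read as edges in a lock-order graph, and the deadlock is the cycle those edges form once ni_write_inode (holding ni_lock) asks for wnd->rw_lock. The following is a minimal illustrative sketch of that edge-recording-and-cycle-check model, not the actual lockdep implementation; the lock names are taken from the report, the detector itself is hypothetical:

```python
def creates_cycle(recorded, new_edge):
    """Return True if adding new_edge (held_lock, wanted_lock) to the
    recorded lock-order graph would close a cycle, i.e. wanted_lock
    already reaches held_lock through previously observed orderings."""
    graph = {}
    for held, wanted in recorded:
        graph.setdefault(held, set()).add(wanted)
    held, wanted = new_edge
    # Depth-first search from the lock being acquired back to the held one.
    stack, seen = [wanted], set()
    while stack:
        node = stack.pop()
        if node == held:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return False

# Orderings already recorded by the -> #2 (truncate) and -> #1 (rename)
# chains in the report: run_lock taken before ni_lock, and wnd->rw_lock
# taken before run_lock.
recorded = [
    ("&ni->file.run_lock#2", "&ni->ni_lock#2"),
    ("&wnd->rw_lock/1", "&ni->file.run_lock#2"),
]

# The -> #0 step: ni_write_inode holds ni_lock and wants wnd->rw_lock,
# which closes the cycle and triggers the warning.
print(creates_cycle(recorded, ("&ni->ni_lock#2", "&wnd->rw_lock/1")))  # True
```

In this model the scenario table follows directly: CPU1 can legally take run_lock then ni_lock (edge #2), so once CPU0 holds ni_lock and requests wnd->rw_lock, each CPU waits on a lock the other path orders after one it already holds.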