======================================================
WARNING: possible circular locking dependency detected
6.1.0-rc5-next-20221116-syzkaller #0 Not tainted
------------------------------------------------------
kworker/1:9/5891 is trying to acquire lock:
ffffffff8d483d08 (nci_mutex){+.+.}-{3:3}, at: virtual_nci_close+0x17/0x50 drivers/nfc/virtual_ncidev.c:44

but task is already holding lock:
ffff888078770350 (&ndev->req_lock){+.+.}-{3:3}, at: nci_close_device+0x6d/0x370 net/nfc/nci/core.c:561

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&ndev->req_lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
       nci_close_device+0x6d/0x370 net/nfc/nci/core.c:561
       nfc_dev_down+0x19a/0x2d0 net/nfc/core.c:161
       nfc_rfkill_set_block+0x33/0xd0 net/nfc/core.c:179
       rfkill_set_block+0x1f9/0x540 net/rfkill/core.c:345
       rfkill_sync_work+0x8e/0xc0 net/rfkill/core.c:1042
       process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
       worker_thread+0x669/0x1090 kernel/workqueue.c:2436
       kthread+0x2e8/0x3a0 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

-> #1 (rfkill_global_mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
       rfkill_register+0x3a/0xb00 net/rfkill/core.c:1057
       nfc_register_device+0x124/0x3b0 net/nfc/core.c:1132
       nci_register_device+0x7cb/0xb50 net/nfc/nci/core.c:1257
       virtual_ncidev_open+0x71/0x110 drivers/nfc/virtual_ncidev.c:146
       misc_open+0x37a/0x4a0 drivers/char/misc.c:143
       chrdev_open+0x26a/0x770 fs/char_dev.c:414
       do_dentry_open+0x6cc/0x13f0 fs/open.c:882
       do_open fs/namei.c:3557 [inline]
       path_openat+0x1bbc/0x2a50 fs/namei.c:3713
       do_filp_open+0x1ba/0x410 fs/namei.c:3740
       do_sys_openat2+0x16d/0x4c0 fs/open.c:1310
       do_sys_open fs/open.c:1326 [inline]
       __do_sys_openat fs/open.c:1342 [inline]
       __se_sys_openat fs/open.c:1337 [inline]
       __x64_sys_openat+0x143/0x1f0 fs/open.c:1337
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (nci_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3097 [inline]
       check_prevs_add kernel/locking/lockdep.c:3216 [inline]
       validate_chain kernel/locking/lockdep.c:3831 [inline]
       __lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5055
       lock_acquire kernel/locking/lockdep.c:5668 [inline]
       lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
       virtual_nci_close+0x17/0x50 drivers/nfc/virtual_ncidev.c:44
       nci_close_device+0x231/0x370 net/nfc/nci/core.c:593
       nfc_dev_down+0x19a/0x2d0 net/nfc/core.c:161
       nfc_rfkill_set_block+0x33/0xd0 net/nfc/core.c:179
       rfkill_set_block+0x1f9/0x540 net/rfkill/core.c:345
       rfkill_sync_work+0x8e/0xc0 net/rfkill/core.c:1042
       process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
       worker_thread+0x669/0x1090 kernel/workqueue.c:2436
       kthread+0x2e8/0x3a0 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

other info that might help us debug this:

Chain exists of:
  nci_mutex --> rfkill_global_mutex --> &ndev->req_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ndev->req_lock);
                               lock(rfkill_global_mutex);
                               lock(&ndev->req_lock);
  lock(nci_mutex);

 *** DEADLOCK ***

5 locks held by kworker/1:9/5891:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x86d/0x1710 kernel/workqueue.c:2260
 #1: ffffc9000798fda8 ((work_completion)(&rfkill->sync_work)){+.+.}-{0:0}, at: process_one_work+0x8a1/0x1710 kernel/workqueue.c:2264
 #2: ffffffff8e502148 (rfkill_global_mutex){+.+.}-{3:3}, at: rfkill_sync_work+0x1c/0xc0 net/rfkill/core.c:1040
 #3: ffff888079979100 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:851 [inline]
 #3: ffff888079979100 (&dev->mutex){....}-{3:3}, at: nfc_dev_down+0x2d/0x2d0 net/nfc/core.c:143
 #4: ffff888078770350 (&ndev->req_lock){+.+.}-{3:3}, at: nci_close_device+0x6d/0x370 net/nfc/nci/core.c:561

stack backtrace:
CPU: 1 PID: 5891 Comm: kworker/1:9 Not tainted 6.1.0-rc5-next-20221116-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: events rfkill_sync_work
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2177
 check_prev_add kernel/locking/lockdep.c:3097 [inline]
 check_prevs_add kernel/locking/lockdep.c:3216 [inline]
 validate_chain kernel/locking/lockdep.c:3831 [inline]
 __lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5055
 lock_acquire kernel/locking/lockdep.c:5668 [inline]
 lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
 virtual_nci_close+0x17/0x50 drivers/nfc/virtual_ncidev.c:44
 nci_close_device+0x231/0x370 net/nfc/nci/core.c:593
 nfc_dev_down+0x19a/0x2d0 net/nfc/core.c:161
 nfc_rfkill_set_block+0x33/0xd0 net/nfc/core.c:179
 rfkill_set_block+0x1f9/0x540 net/rfkill/core.c:345
 rfkill_sync_work+0x8e/0xc0 net/rfkill/core.c:1042
 process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
 worker_thread+0x669/0x1090 kernel/workqueue.c:2436
 kthread+0x2e8/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308