======================================================
WARNING: possible circular locking dependency detected
5.14.0-rc5-syzkaller #0 Not tainted
------------------------------------------------------
kworker/0:3/4860 is trying to acquire lock:
ffff88801edd9518 (&disk->open_mutex){+.+.}-{3:3}, at: del_gendisk+0x86/0x610 block/genhd.c:587

but task is already holding lock:
ffffc9000162fd20 ((work_completion)(&mddev->del_work)){+.+.}-{0:0}, at: process_one_work+0x7e8/0x10c0 kernel/workqueue.c:2251

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #4 ((work_completion)(&mddev->del_work)){+.+.}-{0:0}:
       lock_acquire+0x182/0x4a0 kernel/locking/lockdep.c:5625
       process_one_work+0x807/0x10c0 kernel/workqueue.c:2252
       worker_thread+0xac1/0x1320 kernel/workqueue.c:2422
       kthread+0x453/0x480 kernel/kthread.c:319
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #3 ((wq_completion)md_misc){+.+.}-{0:0}:
       lock_acquire+0x182/0x4a0 kernel/locking/lockdep.c:5625
       flush_workqueue+0x178/0x1750 kernel/workqueue.c:2787
       md_alloc+0x24/0xc90 drivers/md/md.c:5642
       blk_request_module+0x19d/0x1c0 block/genhd.c:660
       blkdev_get_no_open+0x44/0x1f0 fs/block_dev.c:1334
       blkdev_get_by_dev+0x89/0xdc0 fs/block_dev.c:1397
       blkdev_open+0x132/0x2c0 fs/block_dev.c:1512
       do_dentry_open+0x7cb/0x1020 fs/open.c:826
       do_open fs/namei.c:3374 [inline]
       path_openat+0x27e7/0x36b0 fs/namei.c:3507
       do_filp_open+0x253/0x4d0 fs/namei.c:3534
       do_sys_openat2+0x124/0x460 fs/open.c:1204
       do_sys_open fs/open.c:1220 [inline]
       __do_sys_openat fs/open.c:1236 [inline]
       __se_sys_openat fs/open.c:1231 [inline]
       __x64_sys_openat+0x243/0x290 fs/open.c:1231
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #2 (major_names_lock){+.+.}-{3:3}:
       lock_acquire+0x182/0x4a0 kernel/locking/lockdep.c:5625
       __mutex_lock_common+0x1ad/0x3770 kernel/locking/mutex.c:959
       __mutex_lock kernel/locking/mutex.c:1104 [inline]
       mutex_lock_nested+0x1a/0x20 kernel/locking/mutex.c:1119
       __register_blkdev+0x2c/0x360 block/genhd.c:216
       register_mtd_blktrans+0x94/0x3d0 drivers/mtd/mtd_blkdevs.c:531
       do_one_initcall+0x197/0x3f0 init/main.c:1282
       do_initcall_level+0x14a/0x1f5 init/main.c:1355
       do_initcalls+0x4b/0x8c init/main.c:1371
       kernel_init_freeable+0x3f1/0x57e init/main.c:1593
       kernel_init+0x19/0x2a0 init/main.c:1485
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #1 (mtd_table_mutex){+.+.}-{3:3}:
       lock_acquire+0x182/0x4a0 kernel/locking/lockdep.c:5625
       __mutex_lock_common+0x1ad/0x3770 kernel/locking/mutex.c:959
       __mutex_lock kernel/locking/mutex.c:1104 [inline]
       mutex_lock_nested+0x1a/0x20 kernel/locking/mutex.c:1119
       blktrans_open+0x61/0x430 drivers/mtd/mtd_blkdevs.c:210
       blkdev_get_whole+0x94/0x500 fs/block_dev.c:1253
       blkdev_get_by_dev+0x339/0xdc0 fs/block_dev.c:1417
       blkdev_open+0x132/0x2c0 fs/block_dev.c:1512
       do_dentry_open+0x7cb/0x1020 fs/open.c:826
       do_open fs/namei.c:3374 [inline]
       path_openat+0x27e7/0x36b0 fs/namei.c:3507
       do_filp_open+0x253/0x4d0 fs/namei.c:3534
       do_sys_openat2+0x124/0x460 fs/open.c:1204
       do_sys_open fs/open.c:1220 [inline]
       __do_sys_open fs/open.c:1228 [inline]
       __se_sys_open fs/open.c:1224 [inline]
       __x64_sys_open+0x221/0x270 fs/open.c:1224
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #0 (&disk->open_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3051 [inline]
       check_prevs_add+0x4f9/0x5b30 kernel/locking/lockdep.c:3174
       validate_chain kernel/locking/lockdep.c:3789 [inline]
       __lock_acquire+0x4476/0x6100 kernel/locking/lockdep.c:5015
       lock_acquire+0x182/0x4a0 kernel/locking/lockdep.c:5625
       __mutex_lock_common+0x1ad/0x3770 kernel/locking/mutex.c:959
       __mutex_lock kernel/locking/mutex.c:1104 [inline]
       mutex_lock_nested+0x1a/0x20 kernel/locking/mutex.c:1119
       del_gendisk+0x86/0x610 block/genhd.c:587
       md_free+0xc1/0x180 drivers/md/md.c:5571
       kobject_cleanup+0x1c0/0x280 lib/kobject.c:705
       process_one_work+0x833/0x10c0 kernel/workqueue.c:2276
       worker_thread+0xac1/0x1320 kernel/workqueue.c:2422
       kthread+0x453/0x480 kernel/kthread.c:319
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

other info that might help us debug this:

Chain exists of:
  &disk->open_mutex --> (wq_completion)md_misc --> (work_completion)(&mddev->del_work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&mddev->del_work));
                               lock((wq_completion)md_misc);
                               lock((work_completion)(&mddev->del_work));
  lock(&disk->open_mutex);

 *** DEADLOCK ***

2 locks held by kworker/0:3/4860:
 #0: ffff888142bf0938 ((wq_completion)md_misc){+.+.}-{0:0}, at: process_one_work+0x7aa/0x10c0 kernel/workqueue.c:2249
 #1: ffffc9000162fd20 ((work_completion)(&mddev->del_work)){+.+.}-{0:0}, at: process_one_work+0x7e8/0x10c0 kernel/workqueue.c:2251

stack backtrace:
CPU: 0 PID: 4860 Comm: kworker/0:3 Not tainted 5.14.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: md_misc mddev_delayed_delete
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1ae/0x29f lib/dump_stack.c:105
 print_circular_bug+0xb17/0xdc0 kernel/locking/lockdep.c:2009
 check_noncircular+0x2cc/0x390 kernel/locking/lockdep.c:2131
 check_prev_add kernel/locking/lockdep.c:3051 [inline]
 check_prevs_add+0x4f9/0x5b30 kernel/locking/lockdep.c:3174
 validate_chain kernel/locking/lockdep.c:3789 [inline]
 __lock_acquire+0x4476/0x6100 kernel/locking/lockdep.c:5015
 lock_acquire+0x182/0x4a0 kernel/locking/lockdep.c:5625
 __mutex_lock_common+0x1ad/0x3770 kernel/locking/mutex.c:959
 __mutex_lock kernel/locking/mutex.c:1104 [inline]
 mutex_lock_nested+0x1a/0x20 kernel/locking/mutex.c:1119
 del_gendisk+0x86/0x610 block/genhd.c:587
 md_free+0xc1/0x180 drivers/md/md.c:5571
 kobject_cleanup+0x1c0/0x280 lib/kobject.c:705
 process_one_work+0x833/0x10c0 kernel/workqueue.c:2276
 worker_thread+0xac1/0x1320 kernel/workqueue.c:2422
 kthread+0x453/0x480 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
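
For readers less used to lockdep output: the reported cycle boils down to one path waiting for the md_misc workqueue to drain while the queued del_work item needs a mutex that the waiting side's chain already pins. The C program below is a minimal userspace analog of that reduction, not kernel code: the names (opener, worker, open_mutex, work_pending) are illustrative only, and the intermediate mtd_table_mutex/major_names_lock links of the reported chain are collapsed into a single mutex so the two blocking steps are easy to see.

/*
 * Minimal userspace sketch of the deadlock shape lockdep reports above.
 * NOT kernel code; the middle links of the real chain are collapsed.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t open_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for disk->open_mutex */
static pthread_mutex_t wq_lock    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wq_cond    = PTHREAD_COND_INITIALIZER;
static int work_pending = 1;       /* one queued "del_work" item */

/* Worker: plays the md_misc kworker running mddev_delayed_delete. */
static void *worker(void *arg)
{
	(void)arg;
	/* del_gendisk() in md_free() needs the open mutex ... */
	pthread_mutex_lock(&open_mutex);
	pthread_mutex_unlock(&open_mutex);

	/* ... and only after that is the work item complete. */
	pthread_mutex_lock(&wq_lock);
	work_pending = 0;
	pthread_cond_broadcast(&wq_cond);
	pthread_mutex_unlock(&wq_lock);
	return NULL;
}

/* Opener: plays the open() path that ends up flushing the workqueue. */
static void *opener(void *arg)
{
	(void)arg;
	/* Simplification: the open mutex is held directly here; in the
	 * report it is pinned transitively via mtd_table_mutex and
	 * major_names_lock. */
	pthread_mutex_lock(&open_mutex);

	/* Analog of md_alloc() -> flush_workqueue(): wait for the item. */
	pthread_mutex_lock(&wq_lock);
	while (work_pending)
		pthread_cond_wait(&wq_cond, &wq_lock);   /* never satisfied */
	pthread_mutex_unlock(&wq_lock);

	pthread_mutex_unlock(&open_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t2, NULL, opener, NULL);
	sleep(1);                 /* let the opener take open_mutex first */
	pthread_create(&t1, NULL, worker, NULL);

	pthread_join(t1, NULL);   /* never returns: both threads are stuck */
	pthread_join(t2, NULL);
	puts("unreachable");
	return 0;
}

Run it and both threads hang: the opener holds open_mutex and waits for the work item, while the worker cannot finish the item without open_mutex, which is the same circular dependency the splat describes.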