syzbot


possible deadlock in __kernfs_remove

Status: upstream: reported C repro on 2024/06/24 20:40
Subsystems: kernfs
Reported-by: syzbot+4762dd74e32532cda5ff@syzkaller.appspotmail.com
First crash: 8d03h, last: 2d10h
Discussions (1)
Title: [syzbot] [kernfs?] possible deadlock in __kernfs_remove
Replies (including bot): 2 (4)
Last reply: 2024/06/25 02:26

Sample crash report:
Buffer I/O error on dev loop0p1, logical block 8, async page read
======================================================
WARNING: possible circular locking dependency detected
6.10.0-rc5-syzkaller-00012-g626737a5791b #0 Not tainted
------------------------------------------------------
udevd/5224 is trying to acquire lock:
ffff888029878a58 (kn->active#5){++++}-{0:0}, at: __kernfs_remove+0x281/0x670 fs/kernfs/dir.c:1486

but task is already holding lock:
ffff8880219b84c8 (&disk->open_mutex){+.+.}-{3:3}, at: bdev_open+0x41a/0xe50 block/bdev.c:897

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&disk->open_mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x175/0x9c0 kernel/locking/mutex.c:752
       bdev_open+0x41a/0xe50 block/bdev.c:897
       bdev_file_open_by_dev block/bdev.c:1011 [inline]
       bdev_file_open_by_dev+0x17d/0x210 block/bdev.c:986
       disk_scan_partitions+0x1ed/0x320 block/genhd.c:367
       device_add_disk+0xe97/0x1250 block/genhd.c:510
       pmem_attach_disk+0x9fe/0x1400 drivers/nvdimm/pmem.c:578
       nd_pmem_probe+0x1a9/0x1f0 drivers/nvdimm/pmem.c:651
       nvdimm_bus_probe+0x169/0x5d0 drivers/nvdimm/bus.c:91
       call_driver_probe drivers/base/dd.c:578 [inline]
       really_probe+0x23e/0xa90 drivers/base/dd.c:656
       __driver_probe_device+0x1de/0x440 drivers/base/dd.c:798
       driver_probe_device+0x4c/0x1b0 drivers/base/dd.c:828
       __driver_attach+0x283/0x580 drivers/base/dd.c:1214
       bus_for_each_dev+0x13c/0x1d0 drivers/base/bus.c:368
       bus_add_driver+0x2e9/0x690 drivers/base/bus.c:673
       driver_register+0x15c/0x4b0 drivers/base/driver.c:246
       __nd_driver_register+0x103/0x1a0 drivers/nvdimm/bus.c:619
       do_one_initcall+0x128/0x700 init/main.c:1267
       do_initcall_level init/main.c:1329 [inline]
       do_initcalls init/main.c:1345 [inline]
       do_basic_setup init/main.c:1364 [inline]
       kernel_init_freeable+0x69d/0xca0 init/main.c:1578
       kernel_init+0x1c/0x2b0 init/main.c:1467
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 (&nvdimm_namespace_key){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x175/0x9c0 kernel/locking/mutex.c:752
       device_lock include/linux/device.h:1009 [inline]
       uevent_show+0x188/0x3b0 drivers/base/core.c:2743
       dev_attr_show+0x53/0xe0 drivers/base/core.c:2437
       sysfs_kf_seq_show+0x23e/0x410 fs/sysfs/file.c:59
       seq_read_iter+0x4fa/0x12c0 fs/seq_file.c:230
       kernfs_fop_read_iter+0x41a/0x590 fs/kernfs/file.c:279
       new_sync_read fs/read_write.c:395 [inline]
       vfs_read+0x869/0xbd0 fs/read_write.c:476
       ksys_read+0x12f/0x260 fs/read_write.c:619
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (kn->active#5){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
       lock_acquire kernel/locking/lockdep.c:5754 [inline]
       lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
       kernfs_drain+0x48f/0x590 fs/kernfs/dir.c:500
       __kernfs_remove+0x281/0x670 fs/kernfs/dir.c:1486
       kernfs_remove_by_name_ns+0xb2/0x130 fs/kernfs/dir.c:1694
       sysfs_remove_file include/linux/sysfs.h:773 [inline]
       device_remove_file drivers/base/core.c:3061 [inline]
       device_remove_file drivers/base/core.c:3057 [inline]
       device_del+0x381/0x9f0 drivers/base/core.c:3866
       drop_partition+0x109/0x1c0 block/partitions/core.c:273
       bdev_disk_changed+0x24d/0x14f0 block/partitions/core.c:664
       blkdev_get_whole+0x187/0x290 block/bdev.c:700
       bdev_open+0x2c7/0xe50 block/bdev.c:909
       blkdev_open+0x17b/0x1f0 block/fops.c:615
       do_dentry_open+0x910/0x1930 fs/open.c:955
       do_open fs/namei.c:3650 [inline]
       path_openat+0x1e3a/0x29f0 fs/namei.c:3807
       do_filp_open+0x1dc/0x430 fs/namei.c:3834
       do_sys_openat2+0x17a/0x1e0 fs/open.c:1405
       do_sys_open fs/open.c:1420 [inline]
       __do_sys_openat fs/open.c:1436 [inline]
       __se_sys_openat fs/open.c:1431 [inline]
       __x64_sys_openat+0x175/0x210 fs/open.c:1431
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  kn->active#5 --> &nvdimm_namespace_key --> &disk->open_mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&disk->open_mutex);
                               lock(&nvdimm_namespace_key);
                               lock(&disk->open_mutex);
  lock(kn->active#5);

 *** DEADLOCK ***

1 lock held by udevd/5224:
 #0: ffff8880219b84c8 (&disk->open_mutex){+.+.}-{3:3}, at: bdev_open+0x41a/0xe50 block/bdev.c:897

stack backtrace:
CPU: 2 PID: 5224 Comm: udevd Not tainted 6.10.0-rc5-syzkaller-00012-g626737a5791b #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
 kernfs_drain+0x48f/0x590 fs/kernfs/dir.c:500
 __kernfs_remove+0x281/0x670 fs/kernfs/dir.c:1486
 kernfs_remove_by_name_ns+0xb2/0x130 fs/kernfs/dir.c:1694
 sysfs_remove_file include/linux/sysfs.h:773 [inline]
 device_remove_file drivers/base/core.c:3061 [inline]
 device_remove_file drivers/base/core.c:3057 [inline]
 device_del+0x381/0x9f0 drivers/base/core.c:3866
 drop_partition+0x109/0x1c0 block/partitions/core.c:273
 bdev_disk_changed+0x24d/0x14f0 block/partitions/core.c:664
 blkdev_get_whole+0x187/0x290 block/bdev.c:700
 bdev_open+0x2c7/0xe50 block/bdev.c:909
 blkdev_open+0x17b/0x1f0 block/fops.c:615
 do_dentry_open+0x910/0x1930 fs/open.c:955
 do_open fs/namei.c:3650 [inline]
 path_openat+0x1e3a/0x29f0 fs/namei.c:3807
 do_filp_open+0x1dc/0x430 fs/namei.c:3834
 do_sys_openat2+0x17a/0x1e0 fs/open.c:1405
 do_sys_open fs/open.c:1420 [inline]
 __do_sys_openat fs/open.c:1436 [inline]
 __se_sys_openat fs/open.c:1431 [inline]
 __x64_sys_openat+0x175/0x210 fs/open.c:1431
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc33c7979a4
Code: 24 20 48 8d 44 24 30 48 89 44 24 28 64 8b 04 25 18 00 00 00 85 c0 75 2c 44 89 e2 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 76 60 48 8b 15 55 a4 0d 00 f7 d8 64 89 02 48 83
RSP: 002b:00007ffc38f9eb40 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00005569ecaa5ab0 RCX: 00007fc33c7979a4
RDX: 00000000000a0800 RSI: 00005569eca8bfb0 RDI: 00000000ffffff9c
RBP: 00005569eca8bfb0 R08: 00000000ffffffff R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000000a0800
R13: 00005569ecaa7820 R14: 0000000000000001 R15: 00005569eca722c0
 </TASK>
blk_print_req_error: 53 callbacks suppressed
I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
I/O error, dev loop0, sector 0 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
I/O error, dev loop0, sector 108 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
Buffer I/O error on dev loop0p1, logical block 8, async page read
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
blk_print_req_error: 138 callbacks suppressed
I/O error, dev loop0, sector 108 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
udevd[5224]: inotify_add_watch(7, /dev/loop0p1, 10) failed: No such file or directory
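
The report above reduces to a cycle over three lock classes. The nvdimm probe path (#2) takes &disk->open_mutex while holding the namespace device lock (&nvdimm_namespace_key); a sysfs read of the uevent attribute (#1) takes that device lock while the kernfs active reference (kn->active#5) is held for the read; and the new edge (#0) is udevd's bdev_open() holding &disk->open_mutex while __kernfs_remove()/kernfs_drain() waits for that same active reference during a partition rescan. The sketch below is only an illustrative userspace model of that cycle, not the syzbot C reproducer and not the kernel code: it stands the three lock classes in with plain pthread mutexes and compresses the chain into two threads, much like lockdep's two-CPU scenario table.

/*
 * Illustrative model only: NOT the syzbot reproducer and NOT kernel code.
 * The three mutexes stand in for disk->open_mutex, the nvdimm namespace
 * device lock, and the kernfs active reference that kernfs_drain() waits on.
 * Build with: cc deadlock_model.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t open_mutex    = PTHREAD_MUTEX_INITIALIZER; /* &disk->open_mutex */
static pthread_mutex_t device_lock   = PTHREAD_MUTEX_INITIALIZER; /* &nvdimm_namespace_key */
static pthread_mutex_t kernfs_active = PTHREAD_MUTEX_INITIALIZER; /* kn->active#5 */

/* Models udevd's open: open_mutex, then waiting to drain the kernfs node. */
static void *opener(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&open_mutex);     /* bdev_open() */
	sleep(1);                            /* widen the race window */
	pthread_mutex_lock(&kernfs_active);  /* __kernfs_remove() -> kernfs_drain() */
	pthread_mutex_unlock(&kernfs_active);
	pthread_mutex_unlock(&open_mutex);
	return NULL;
}

/* Models a sysfs uevent read plus the probe-side edge in one thread:
 * active reference, then the device lock, then open_mutex. */
static void *sysfs_reader(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&kernfs_active);  /* kernfs_fop_read_iter() holds the active ref */
	pthread_mutex_lock(&device_lock);    /* uevent_show() -> device_lock() */
	sleep(1);
	pthread_mutex_lock(&open_mutex);     /* disk_scan_partitions() under the device lock */
	pthread_mutex_unlock(&open_mutex);
	pthread_mutex_unlock(&device_lock);
	pthread_mutex_unlock(&kernfs_active);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, opener, NULL);
	pthread_create(&b, NULL, sysfs_reader, NULL);
	pthread_join(a, NULL);               /* hangs once both threads block on each other */
	pthread_join(b, NULL);
	puts("no deadlock this run");
	return 0;
}

In the abstract, breaking any single edge of the cycle (for example, not acquiring the device lock while the kernfs node must stay active) removes the deadlock; which edge the upstream fix actually breaks is not established by this report.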

Crashes (428):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/06/24 23:40 upstream 626737a5791b c2e07261 .config console log report syz / log C [disk image (non-bootable)] [vmlinux] [kernel image] [mounted in repro] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/26 12:25 upstream 55027e689933 dec8bc94 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/26 09:03 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/26 07:52 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/26 02:19 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/25 13:06 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/25 10:32 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/25 08:57 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/25 00:27 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 22:12 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 19:29 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 18:19 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 17:12 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 14:48 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 13:07 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 10:31 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 06:26 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 05:22 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 03:29 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/24 02:27 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in __kernfs_remove
2024/06/26 11:43 upstream 55027e689933 dec8bc94 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/26 10:23 upstream 55027e689933 dec8bc94 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/26 05:22 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/26 03:39 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/26 00:47 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 22:47 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 21:23 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 20:01 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 18:33 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 17:33 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 17:29 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 14:11 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 11:54 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 07:58 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 05:43 upstream 55027e689933 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 04:23 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/25 01:39 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 23:31 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 20:59 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 19:43 upstream 626737a5791b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 09:20 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 08:55 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 04:34 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 01:32 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/24 00:15 upstream f2661062f16b c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove
2024/06/20 20:35 upstream 50736169ecc8 c2e07261 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __kernfs_remove