syzbot


possible deadlock in start_creating (2)

Status: moderation: reported on 2025/01/10 15:43
Subsystems: fs
Reported-by: syzbot+d7d6656eb0ea2df0cb0c@syzkaller.appspotmail.com
First crash: 8d17h, last: 1d17h
Similar bugs (1)
Kernel   | Title                                    | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream | possible deadlock in start_creating (fs) |       |              |            | 2     | 2226d | 2239d    | 0/28    | auto-closed as invalid on 2019/06/10 06:25

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc6-syzkaller-00290-gbe548645527a #0 Not tainted
------------------------------------------------------
syz.7.2798/18466 is trying to acquire lock:
ffff8880621f6988 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
ffff8880621f6988 (&sb->s_type->i_mutex_key#3){++++}-{4:4}, at: start_creating.part.0+0xb0/0x3a0 fs/debugfs/inode.c:374

but task is already holding lock:
ffffffff8de395c8 (relay_channels_mutex){+.+.}-{4:4}, at: relay_open+0x324/0xa20 kernel/relay.c:515

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (relay_channels_mutex){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
       relay_prepare_cpu+0x2c/0x300 kernel/relay.c:438
       cpuhp_invoke_callback+0x3d0/0xa10 kernel/cpu.c:194
       __cpuhp_invoke_callback_range+0x101/0x200 kernel/cpu.c:965
       cpuhp_invoke_callback_range kernel/cpu.c:989 [inline]
       cpuhp_up_callbacks kernel/cpu.c:1020 [inline]
       _cpu_up+0x3fd/0x910 kernel/cpu.c:1690
       cpu_up+0x1dc/0x240 kernel/cpu.c:1722
       cpuhp_bringup_mask+0xdc/0x210 kernel/cpu.c:1788
       cpuhp_bringup_cpus_parallel kernel/cpu.c:1878 [inline]
       bringup_nonboot_cpus+0x176/0x1c0 kernel/cpu.c:1892
       smp_init+0x34/0x160 kernel/smp.c:1009
       kernel_init_freeable+0x3ad/0x8b0 init/main.c:1569
       kernel_init+0x1c/0x2b0 init/main.c:1466
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 (cpu_hotplug_lock){++++}-{0:0}:
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       cpus_read_lock+0x42/0x160 kernel/cpu.c:490
       acomp_ctx_get_cpu mm/zswap.c:886 [inline]
       zswap_compress mm/zswap.c:908 [inline]
       zswap_store_page mm/zswap.c:1439 [inline]
       zswap_store+0x8f8/0x25f0 mm/zswap.c:1546
       swap_writepage+0x3b6/0x1120 mm/page_io.c:279
       shmem_writepage+0xf7b/0x1490 mm/shmem.c:1579
       pageout+0x3b2/0xaa0 mm/vmscan.c:696
       shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1374
       evict_folios+0x6e3/0x19c0 mm/vmscan.c:4600
       try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4796
       lru_gen_shrink_lruvec mm/vmscan.c:4945 [inline]
       shrink_lruvec+0x313/0x2ba0 mm/vmscan.c:5700
       shrink_node_memcgs mm/vmscan.c:5936 [inline]
       shrink_node mm/vmscan.c:5977 [inline]
       shrink_node+0x105e/0x3f20 mm/vmscan.c:5955
       shrink_zones mm/vmscan.c:6222 [inline]
       do_try_to_free_pages+0x35f/0x1a30 mm/vmscan.c:6284
       try_to_free_mem_cgroup_pages+0x31a/0x7a0 mm/vmscan.c:6616
       try_charge_memcg+0x356/0xaf0 mm/memcontrol.c:2238
       obj_cgroup_charge_pages mm/memcontrol.c:2646 [inline]
       obj_cgroup_charge+0x179/0x4d0 mm/memcontrol.c:2937
       __memcg_slab_post_alloc_hook+0x1b6/0x9b0 mm/memcontrol.c:2998
       memcg_slab_post_alloc_hook mm/slub.c:2152 [inline]
       slab_post_alloc_hook mm/slub.c:4129 [inline]
       slab_alloc_node mm/slub.c:4168 [inline]
       kmem_cache_alloc_lru_noprof+0x30d/0x3b0 mm/slub.c:4187
       alloc_inode+0xbf/0x230 fs/inode.c:338
       new_inode_pseudo fs/inode.c:1174 [inline]
       new_inode+0x22/0x210 fs/inode.c:1193
       debugfs_get_inode fs/debugfs/inode.c:72 [inline]
       __debugfs_create_file+0x11a/0x660 fs/debugfs/inode.c:433
       debugfs_create_file_full+0x6d/0xa0 fs/debugfs/inode.c:462
       kvm_create_vm_debugfs virt/kvm/kvm_main.c:1056 [inline]
       kvm_create_vm virt/kvm/kvm_main.c:1193 [inline]
       kvm_dev_ioctl_create_vm virt/kvm/kvm_main.c:5353 [inline]
       kvm_dev_ioctl+0x16b7/0x1aa0 virt/kvm/kvm_main.c:5395
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&sb->s_type->i_mutex_key#3){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
       down_write+0x93/0x200 kernel/locking/rwsem.c:1577
       inode_lock include/linux/fs.h:818 [inline]
       start_creating.part.0+0xb0/0x3a0 fs/debugfs/inode.c:374
       start_creating fs/debugfs/inode.c:351 [inline]
       __debugfs_create_file+0xa5/0x660 fs/debugfs/inode.c:423
       debugfs_create_file_full+0x6d/0xa0 fs/debugfs/inode.c:462
       relay_create_buf_file+0xf0/0x170 kernel/relay.c:360
       relay_open_buf.part.0+0x760/0xb90 kernel/relay.c:389
       relay_open_buf kernel/relay.c:536 [inline]
       relay_open+0x5e2/0xa20 kernel/relay.c:517
       do_blk_trace_setup+0x4b4/0xac0 kernel/trace/blktrace.c:590
       __blk_trace_setup+0xd8/0x180 kernel/trace/blktrace.c:630
       blk_trace_setup+0x47/0x70 kernel/trace/blktrace.c:648
       sg_ioctl_common drivers/scsi/sg.c:1114 [inline]
       sg_ioctl+0x7a3/0x26b0 drivers/scsi/sg.c:1156
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &sb->s_type->i_mutex_key#3 --> cpu_hotplug_lock --> relay_channels_mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(relay_channels_mutex);
                               lock(cpu_hotplug_lock);
                               lock(relay_channels_mutex);
  lock(&sb->s_type->i_mutex_key#3);

 *** DEADLOCK ***

2 locks held by syz.7.2798/18466:
 #0: ffff8881433b1ca8 (&q->debugfs_mutex){+.+.}-{4:4}, at: blk_trace_setup+0x33/0x70 kernel/trace/blktrace.c:647
 #1: ffffffff8de395c8 (relay_channels_mutex){+.+.}-{4:4}, at: relay_open+0x324/0xa20 kernel/relay.c:515

stack backtrace:
CPU: 1 UID: 0 PID: 18466 Comm: syz.7.2798 Not tainted 6.13.0-rc6-syzkaller-00290-gbe548645527a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x41c/0x610 kernel/locking/lockdep.c:2074
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
 down_write+0x93/0x200 kernel/locking/rwsem.c:1577
 inode_lock include/linux/fs.h:818 [inline]
 start_creating.part.0+0xb0/0x3a0 fs/debugfs/inode.c:374
 start_creating fs/debugfs/inode.c:351 [inline]
 __debugfs_create_file+0xa5/0x660 fs/debugfs/inode.c:423
 debugfs_create_file_full+0x6d/0xa0 fs/debugfs/inode.c:462
 relay_create_buf_file+0xf0/0x170 kernel/relay.c:360
 relay_open_buf.part.0+0x760/0xb90 kernel/relay.c:389
 relay_open_buf kernel/relay.c:536 [inline]
 relay_open+0x5e2/0xa20 kernel/relay.c:517
 do_blk_trace_setup+0x4b4/0xac0 kernel/trace/blktrace.c:590
 __blk_trace_setup+0xd8/0x180 kernel/trace/blktrace.c:630
 blk_trace_setup+0x47/0x70 kernel/trace/blktrace.c:648
 sg_ioctl_common drivers/scsi/sg.c:1114 [inline]
 sg_ioctl+0x7a3/0x26b0 drivers/scsi/sg.c:1156
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl fs/ioctl.c:892 [inline]
 __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f596cf85d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f596de83038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f596d176080 RCX: 00007f596cf85d29
RDX: 0000000000000038 RSI: 00000000c0481273 RDI: 0000000000000003
RBP: 00007f596d001b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f596d176080 R15: 00007ffc52cefc58
 </TASK>
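
The report above boils down to a three-lock ordering cycle. The CPU-hotplug path (#2) takes relay_channels_mutex in relay_prepare_cpu() while cpu_hotplug_lock is held; a debugfs file creation (#1) holds the parent directory's inode lock when an inode allocation falls into reclaim and zswap takes cpu_hotplug_lock via cpus_read_lock(); and relay_open() (#0) holds relay_channels_mutex while creating its buffer files in debugfs, which takes that same directory inode lock. The userspace sketch below is only an illustration of those three orderings, using pthread mutexes named after the kernel locks; the helper functions are hypothetical stand-ins for the kernel paths, not the actual implementations.

/* Minimal userspace model of the three lock-ordering paths lockdep reports.
 * The mutex names mirror the kernel locks; the functions are simplified
 * stand-ins for the kernel code paths, not the real implementations. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cpu_hotplug_lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t relay_channels_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t debugfs_inode_lock   = PTHREAD_MUTEX_INITIALIZER;

/* Path #2: the hotplug callback relay_prepare_cpu() runs with
 * cpu_hotplug_lock held and takes relay_channels_mutex, establishing
 * cpu_hotplug_lock -> relay_channels_mutex. */
static void cpu_up_path(void)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	pthread_mutex_lock(&relay_channels_mutex);   /* relay_prepare_cpu() */
	pthread_mutex_unlock(&relay_channels_mutex);
	pthread_mutex_unlock(&cpu_hotplug_lock);
}

/* Path #1: debugfs file creation holds the parent directory's inode lock;
 * the inode allocation can enter memory reclaim, where zswap calls
 * cpus_read_lock(), establishing inode_lock -> cpu_hotplug_lock. */
static void debugfs_create_under_reclaim_path(void)
{
	pthread_mutex_lock(&debugfs_inode_lock);
	pthread_mutex_lock(&cpu_hotplug_lock);       /* zswap_compress() -> cpus_read_lock() */
	pthread_mutex_unlock(&cpu_hotplug_lock);
	pthread_mutex_unlock(&debugfs_inode_lock);
}

/* Path #0: relay_open() holds relay_channels_mutex and then creates its
 * buffer files in debugfs, taking the directory inode lock and
 * establishing relay_channels_mutex -> inode_lock, which closes the cycle. */
static void relay_open_path(void)
{
	pthread_mutex_lock(&relay_channels_mutex);
	pthread_mutex_lock(&debugfs_inode_lock);     /* debugfs_create_file() */
	pthread_mutex_unlock(&debugfs_inode_lock);
	pthread_mutex_unlock(&relay_channels_mutex);
}

int main(void)
{
	/* Run sequentially, these orderings are harmless; threads exercising
	 * all three paths concurrently can deadlock, which is what lockdep warns about. */
	cpu_up_path();
	debugfs_create_under_reclaim_path();
	relay_open_path();
	puts("orderings exercised: cpu_hotplug_lock->relay_channels_mutex, "
	     "inode_lock->cpu_hotplug_lock, relay_channels_mutex->inode_lock");
	return 0;
}

Note that lockdep flags the inconsistent ordering as soon as all three dependencies have been observed, even if no task has actually blocked, which is why the title says "possible" deadlock.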

Crashes (20):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/01/13 08:56 upstream be548645527a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/13 04:15 upstream be548645527a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/12 16:38 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/11 12:10 upstream 77a903cd8e5a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/10 19:59 upstream 2144da25584e d9381135 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/09 11:17 upstream eea6e4b4dfb8 9220929f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/09 04:24 upstream 0b7958fa05d5 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/08 20:56 upstream 0b7958fa05d5 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/08 20:55 upstream 0b7958fa05d5 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/08 15:41 upstream 09a0fa92e5b4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce possible deadlock in start_creating
2025/01/08 10:23 upstream 09a0fa92e5b4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/08 09:48 upstream 09a0fa92e5b4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/08 06:03 upstream 09a0fa92e5b4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/07 12:48 upstream fbfd64d25c7a f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/07 11:16 upstream fbfd64d25c7a f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/07 07:21 upstream fbfd64d25c7a f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/07 03:37 upstream 5428dc1906dd f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/06 15:30 upstream 5428dc1906dd f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/06 15:30 upstream 5428dc1906dd f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in start_creating
2025/01/13 15:02 linux-next 7b4b9bf203da 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in start_creating