======================================================
WARNING: possible circular locking dependency detected
4.14.294-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/10055 is trying to acquire lock:
 (event_mutex){+.+.}, at: [] perf_trace_destroy+0x23/0xf0 kernel/trace/trace_event_perf.c:234

but task is already holding lock:
 (&event->child_mutex){+.+.}, at: [] perf_event_release_kernel+0x208/0x8a0 kernel/events/core.c:4405

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #5 (&event->child_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       perf_event_for_each_child+0x82/0x140 kernel/events/core.c:4690
       _perf_ioctl+0x471/0x1a60 kernel/events/core.c:4877
       perf_ioctl+0x55/0x80 kernel/events/core.c:4889
       vfs_ioctl fs/ioctl.c:46 [inline]
       file_ioctl fs/ioctl.c:500 [inline]
       do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
       SYSC_ioctl fs/ioctl.c:701 [inline]
       SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #4 (&cpuctx_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       perf_event_init_cpu+0xb7/0x170 kernel/events/core.c:11286
       perf_event_init+0x2cc/0x308 kernel/events/core.c:11333
       start_kernel+0x45d/0x763 init/main.c:624
       secondary_startup_64+0xa5/0xb0 arch/x86/kernel/head_64.S:240

-> #3 (pmus_lock){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       perf_event_init_cpu+0x2c/0x170 kernel/events/core.c:11280
       cpuhp_invoke_callback+0x1e6/0x1a80 kernel/cpu.c:186
       cpuhp_up_callbacks kernel/cpu.c:574 [inline]
       _cpu_up+0x21e/0x520 kernel/cpu.c:1193
       do_cpu_up+0x9a/0x160 kernel/cpu.c:1229
       smp_init+0x197/0x1ac kernel/smp.c:578
       kernel_init_freeable+0x406/0x626 init/main.c:1074
       kernel_init+0xd/0x16a init/main.c:1006
       ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404

-> #2 (cpu_hotplug_lock.rw_sem){++++}:
       percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
       cpus_read_lock+0x39/0xc0 kernel/cpu.c:297
       static_key_slow_inc+0xe/0x20 kernel/jump_label.c:123
       tracepoint_add_func+0x747/0xa40 kernel/tracepoint.c:269
       tracepoint_probe_register_prio kernel/tracepoint.c:331 [inline]
       tracepoint_probe_register+0x8c/0xc0 kernel/tracepoint.c:352
       trace_event_reg+0x272/0x330 kernel/trace/trace_events.c:305
       perf_trace_event_reg kernel/trace/trace_event_perf.c:122 [inline]
       perf_trace_event_init kernel/trace/trace_event_perf.c:197 [inline]
       perf_trace_init+0x424/0xa30 kernel/trace/trace_event_perf.c:221
       perf_tp_event_init+0x79/0xf0 kernel/events/core.c:8140
       perf_try_init_event+0x15b/0x1f0 kernel/events/core.c:9374
       perf_init_event kernel/events/core.c:9412 [inline]
       perf_event_alloc.part.0+0xe2d/0x2640 kernel/events/core.c:9672
       perf_event_alloc kernel/events/core.c:10042 [inline]
       SYSC_perf_event_open kernel/events/core.c:10146 [inline]
       SyS_perf_event_open+0x683/0x2530 kernel/events/core.c:10032
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #1 (tracepoints_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       tracepoint_probe_register_prio kernel/tracepoint.c:327 [inline]
       tracepoint_probe_register+0x68/0xc0 kernel/tracepoint.c:352
       trace_event_reg+0x272/0x330 kernel/trace/trace_events.c:305
       perf_trace_event_reg kernel/trace/trace_event_perf.c:122 [inline]
       perf_trace_event_init kernel/trace/trace_event_perf.c:197 [inline]
       perf_trace_init+0x424/0xa30 kernel/trace/trace_event_perf.c:221
       perf_tp_event_init+0x79/0xf0 kernel/events/core.c:8140
       perf_try_init_event+0x15b/0x1f0 kernel/events/core.c:9374
       perf_init_event kernel/events/core.c:9412 [inline]
       perf_event_alloc.part.0+0xe2d/0x2640 kernel/events/core.c:9672
       perf_event_alloc kernel/events/core.c:10042 [inline]
       SYSC_perf_event_open kernel/events/core.c:10146 [inline]
       SyS_perf_event_open+0x683/0x2530 kernel/events/core.c:10032
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #0 (event_mutex){+.+.}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       perf_trace_destroy+0x23/0xf0 kernel/trace/trace_event_perf.c:234
       _free_event+0x321/0xe20 kernel/events/core.c:4246
       free_event+0x32/0x40 kernel/events/core.c:4273
       perf_event_release_kernel+0x368/0x8a0 kernel/events/core.c:4417
       perf_release+0x33/0x40 kernel/events/core.c:4443
       __fput+0x25f/0x7a0 fs/file_table.c:210
       task_work_run+0x11f/0x190 kernel/task_work.c:113
       exit_task_work include/linux/task_work.h:22 [inline]
       do_exit+0xa44/0x2850 kernel/exit.c:868
       SYSC_exit kernel/exit.c:934 [inline]
       SyS_exit+0x1e/0x20 kernel/exit.c:932
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

other info that might help us debug this:

Chain exists of:
  event_mutex --> &cpuctx_mutex --> &event->child_mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&event->child_mutex);
                               lock(&cpuctx_mutex);
                               lock(&event->child_mutex);
  lock(event_mutex);

 *** DEADLOCK ***

2 locks held by syz-executor.0/10055:
 #0:  (&ctx->mutex){+.+.}, at: [] perf_event_release_kernel+0x1fe/0x8a0 kernel/events/core.c:4404
 #1:  (&event->child_mutex){+.+.}, at: [] perf_event_release_kernel+0x208/0x8a0 kernel/events/core.c:4405

stack backtrace:
CPU: 0 PID: 10055 Comm: syz-executor.0 Not tainted 4.14.294-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1905 [inline]
 check_prevs_add kernel/locking/lockdep.c:2022 [inline]
 validate_chain kernel/locking/lockdep.c:2464 [inline]
 __lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
 perf_trace_destroy+0x23/0xf0 kernel/trace/trace_event_perf.c:234
 _free_event+0x321/0xe20 kernel/events/core.c:4246
 free_event+0x32/0x40 kernel/events/core.c:4273
 perf_event_release_kernel+0x368/0x8a0 kernel/events/core.c:4417
 perf_release+0x33/0x40 kernel/events/core.c:4443
 __fput+0x25f/0x7a0 fs/file_table.c:210
 task_work_run+0x11f/0x190 kernel/task_work.c:113
 exit_task_work include/linux/task_work.h:22 [inline]
 do_exit+0xa44/0x2850 kernel/exit.c:868
 SYSC_exit kernel/exit.c:934 [inline]
 SyS_exit+0x1e/0x20 kernel/exit.c:932
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x7f6970a6f669
RSP: 002b:00007f696f3e3118 EFLAGS: 00000246 ORIG_RAX: 000000000000003c
RAX: ffffffffffffffda RBX: 00007f6970b90f80 RCX: 00007f6970a6f669
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007f6970aca560 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe5c372d9f R14: 00007f696f3e3300 R15: 0000000000022000
bridge0: port 3(gretap0) entered blocking state
bridge0: port 3(gretap0) entered disabled state
device gretap0 entered promiscuous mode
bridge0: port 3(gretap0) entered blocking state
bridge0: port 3(gretap0) entered forwarding state
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
hrtimer: interrupt took 26383 ns
netlink: 76 bytes leftover after parsing attributes in process `syz-executor.1'.
audit: type=1800 audit(1663912379.914:4): pid=10404 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="file0" dev="sda1" ino=13985 res=0
audit: type=1800 audit(1663912379.914:5): pid=10409 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.4" name="file0" dev="sda1" ino=13986 res=0
audit: type=1804 audit(1663912379.954:6): pid=10404 uid=0 auid=4294967295 ses=4294967295 op="invalid_pcr" cause="open_writers" comm="syz-executor.5" name="/root/syzkaller-testdir3727682521/syzkaller.4u1Umb/47/file0" dev="sda1" ino=13985 res=1
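[editor's note] For readers triaging the lockdep splat above: the release path takes event_mutex while already holding &event->child_mutex, which inverts the pre-existing event_mutex -> ... -> &cpuctx_mutex -> &event->child_mutex chain. The userspace pthread sketch below only illustrates that two-lock inversion pattern; the intermediate pmus_lock/cpu_hotplug_lock/tracepoints_mutex links are collapsed, and all function bodies and sleeps are invented for the example. It is not the kernel code itself.

/*
 * Userspace reduction of the AB/BA ordering lockdep reports above.
 * Names mirror the report; the logic is illustrative only.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t event_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t child_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Analogue of the release path: child_mutex held, event_mutex wanted. */
static void *release_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&child_mutex);
        usleep(1000);                       /* widen the race window */
        pthread_mutex_lock(&event_mutex);
        puts("release path: child_mutex -> event_mutex");
        pthread_mutex_unlock(&event_mutex);
        pthread_mutex_unlock(&child_mutex);
        return NULL;
}

/* Analogue of the existing chain that ends up taking child_mutex last. */
static void *ioctl_path(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&event_mutex);
        usleep(1000);
        pthread_mutex_lock(&child_mutex);
        puts("ioctl path: event_mutex -> child_mutex");
        pthread_mutex_unlock(&child_mutex);
        pthread_mutex_unlock(&event_mutex);
        return NULL;
}

int main(void)
{
        pthread_t a, b;
        pthread_create(&a, NULL, release_path, NULL);
        pthread_create(&b, NULL, ioctl_path, NULL);
        pthread_join(a, NULL);  /* may never return: each thread holds the lock the other wants */
        pthread_join(b, NULL);
        return 0;
}

If the two threads interleave unluckily, the joins never return; that wedge is exactly the CPU0/CPU1 scenario printed in the report.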
audit: type=1800 audit(1663912380.074:7): pid=10470 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="file0" dev="sda1" ino=13991 res=0
hid-generic 0000:0000:0000.0001: unknown main item tag 0x0
BFS-fs: bfs_fill_super(): loop1 is unclean, continuing
audit: type=1800 audit(1663912380.224:8): pid=10531 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.4" name="file0" dev="sda1" ino=14000 res=0
==================================================================
BUG: KASAN: slab-out-of-bounds in find_first_zero_bit+0x84/0x90 lib/find_bit.c:105
Read of size 8 at addr ffff88809c605d00 by task syz-executor.1/10518

CPU: 0 PID: 10518 Comm: syz-executor.1 Not tainted 4.14.294-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_address_description.cold+0x54/0x1d3 mm/kasan/report.c:252
 kasan_report_error.cold+0x8a/0x191 mm/kasan/report.c:351
 kasan_report mm/kasan/report.c:409 [inline]
 __asan_report_load8_noabort+0x68/0x70 mm/kasan/report.c:430
 find_first_zero_bit+0x84/0x90 lib/find_bit.c:105
 bfs_create+0xfb/0x620 fs/bfs/dir.c:92
 lookup_open+0x77a/0x1750 fs/namei.c:3241
 do_last fs/namei.c:3334 [inline]
 path_openat+0xe08/0x2970 fs/namei.c:3571
 do_filp_open+0x179/0x3c0 fs/namei.c:3605
 do_sys_open+0x296/0x410 fs/open.c:1081
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x7f5afc6cf669
RSP: 002b:00007f5afb043168 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f5afc7f0f80 RCX: 00007f5afc6cf669
RDX: 0000000000020842 RSI: 000000002000c380 RDI: ffffffffffffff9c
RBP: 00007f5afc72a560 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe6bf4e5df R14: 00007f5afb043300 R15: 0000000000022000
audit: type=1800 audit(1663912380.264:9): pid=10530 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.3" name="file0" dev="sda1" ino=14000 res=0
======================================================
WARNING: the mand mount option is being deprecated and will be removed in v5.15!
======================================================

Allocated by task 10518:
 save_stack mm/kasan/kasan.c:447 [inline]
 set_track mm/kasan/kasan.c:459 [inline]
 kasan_kmalloc+0xeb/0x160 mm/kasan/kasan.c:551
 __do_kmalloc mm/slab.c:3720 [inline]
 __kmalloc+0x15a/0x400 mm/slab.c:3729
 kmalloc include/linux/slab.h:493 [inline]
 kzalloc include/linux/slab.h:661 [inline]
 bfs_fill_super+0x3d5/0xd80 fs/bfs/inode.c:363
 mount_bdev+0x2b3/0x360 fs/super.c:1134
 mount_fs+0x92/0x2a0 fs/super.c:1237
 vfs_kern_mount.part.0+0x5b/0x470 fs/namespace.c:1046
 vfs_kern_mount fs/namespace.c:1036 [inline]
 do_new_mount fs/namespace.c:2572 [inline]
 do_mount+0xe65/0x2a30 fs/namespace.c:2905
 SYSC_mount fs/namespace.c:3121 [inline]
 SyS_mount+0xa8/0x120 fs/namespace.c:3098
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb

Freed by task 8009:
 save_stack mm/kasan/kasan.c:447 [inline]
 set_track mm/kasan/kasan.c:459 [inline]
 kasan_slab_free+0xc3/0x1a0 mm/kasan/kasan.c:524
 __cache_free mm/slab.c:3496 [inline]
 kfree+0xc9/0x250 mm/slab.c:3815
 kvfree+0x45/0x50 mm/util.c:449
 __vunmap+0x20f/0x300 mm/vmalloc.c:1548
 vfree+0x4b/0xd0 mm/vmalloc.c:1612
 copy_entries_to_user net/ipv6/netfilter/ip6_tables.c:886 [inline]
 get_entries net/ipv6/netfilter/ip6_tables.c:1045 [inline]
 do_ip6t_get_ctl+0x5fc/0x7d0 net/ipv6/netfilter/ip6_tables.c:1716
 nf_sockopt net/netfilter/nf_sockopt.c:104 [inline]
 nf_getsockopt+0x62/0xc0 net/netfilter/nf_sockopt.c:122
 ipv6_getsockopt+0x146/0x1e0 net/ipv6/ipv6_sockglue.c:1376
 tcp_getsockopt+0x7b/0xc0 net/ipv4/tcp.c:3259
 SYSC_getsockopt net/socket.c:1896 [inline]
 SyS_getsockopt+0x102/0x1c0 net/socket.c:1878
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb

The buggy address belongs to the object at ffff88809c605d00
 which belongs to the cache kmalloc-32 of size 32
The buggy address is located 0 bytes inside of
 32-byte region [ffff88809c605d00, ffff88809c605d20)
The buggy address belongs to the page:
page:ffffea0002718140 count:1 mapcount:0 mapping:ffff88809c605000 index:0xffff88809c605fc1
flags: 0xfff00000000100(slab)
raw: 00fff00000000100 ffff88809c605000 ffff88809c605fc1 0000000100000038
raw: ffffea0002d28d20 ffffea0002c5e820 ffff88813fe741c0 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff88809c605c00: fb fb fb fb fc fc fc fc fb fb fb fb fc fc fc fc
 ffff88809c605c80: fb fb fb fb fc fc fc fc fb fb fb fb fc fc fc fc
>ffff88809c605d00: 07 fc fc fc fc fc fc fc 00 00 00 00 fc fc fc fc
                   ^
 ffff88809c605d80: fb fb fb fb fc fc fc fc 00 00 00 00 fc fc fc fc
 ffff88809c605e00: fb fb fb fb fc fc fc fc fb fb fb fb fc fc fc fc
==================================================================
hid-generic 0000:0000:0000.0001: unknown main item tag 0x0
audit: type=1800 audit(1663912380.504:10): pid=10572 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="file0" dev="sda1" ino=13974 res=0
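[editor's note] On the KASAN report above: the faulting access is an 8-byte word read in find_first_zero_bit() against a 32-byte kmalloc object set up during bfs mount. A plausible reading, not proven by this log alone, is that the number of bits bfs_create() asks to scan exceeds what the in-memory bitmap allocation can back, e.g. because the count is derived from on-disk metadata. The userspace C sketch below shows that generic mismatch and the clamp that keeps the scan in bounds; the helper first_zero_bit(), all names, and all sizes are hypothetical, not the BFS code.

/*
 * Sketch of an out-of-bounds-prone bitmap scan: the scan length comes from
 * untrusted (on-disk) metadata while the allocation was sized smaller.
 */
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for find_first_zero_bit(): scans 'nbits' bits of 'map'. */
static size_t first_zero_bit(const unsigned long *map, size_t nbits)
{
        for (size_t i = 0; i < nbits; i++)
                if (!(map[i / (8 * sizeof(long))] & (1UL << (i % (8 * sizeof(long))))))
                        return i;
        return nbits;
}

int main(void)
{
        size_t alloc_bytes = 32;               /* kmalloc-32 object, as in the report */
        size_t backed_bits = alloc_bytes * 8;  /* 256 bits actually backed by memory */
        size_t disk_bits = 100000;             /* bogus count read from disk metadata */

        unsigned long *imap = calloc(1, alloc_bytes);
        if (!imap)
                return 1;

        /*
         * Scanning 'disk_bits' bits here would walk past the 32-byte object,
         * which is the slab-out-of-bounds read KASAN flags. Clamping (or
         * rejecting the filesystem at mount time) keeps the scan in bounds.
         */
        size_t nbits = disk_bits > backed_bits ? backed_bits : disk_bits;
        printf("first zero bit: %zu (scanned %zu of %zu requested bits)\n",
               first_zero_bit(imap, nbits), nbits, disk_bits);

        free(imap);
        return 0;
}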