==================================================================
BUG: KASAN: use-after-free in debugfs_remove+0x10d/0x130 fs/debugfs/inode.c:705
Read of size 8 at addr ffff8880aa0c4300 by task kworker/0:2/2622

CPU: 0 PID: 2622 Comm: kworker/0:2 Not tainted 5.2.0+ #71
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events __blk_release_queue
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x16f/0x1f0 lib/dump_stack.c:113
 print_address_description.cold+0xd4/0x306 mm/kasan/report.c:351
 __kasan_report.cold+0x1b/0x36 mm/kasan/report.c:482
 kasan_report+0x12/0x17 mm/kasan/common.c:612
 __asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:132
 debugfs_remove+0x10d/0x130 fs/debugfs/inode.c:705
 blk_trace_free+0x38/0x140 kernel/trace/blktrace.c:312
 blk_trace_cleanup kernel/trace/blktrace.c:339 [inline]
 __blk_trace_remove+0x78/0xa0 kernel/trace/blktrace.c:352
 blk_trace_shutdown+0x67/0x90 kernel/trace/blktrace.c:747
 __blk_release_queue+0x1de/0x340 block/blk-sysfs.c:902
 process_one_work+0x9af/0x16d0 kernel/workqueue.c:2269
 worker_thread+0x98/0xe40 kernel/workqueue.c:2415
 kthread+0x361/0x430 kernel/kthread.c:255
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352

Allocated by task 9284:
 save_stack+0x23/0x90 mm/kasan/common.c:69
 set_track mm/kasan/common.c:77 [inline]
 __kasan_kmalloc mm/kasan/common.c:487 [inline]
 __kasan_kmalloc.constprop.0+0xcf/0xe0 mm/kasan/common.c:460
 kasan_slab_alloc+0xf/0x20 mm/kasan/common.c:495
 slab_post_alloc_hook mm/slab.h:520 [inline]
 slab_alloc mm/slab.c:3319 [inline]
 kmem_cache_alloc+0x121/0x700 mm/slab.c:3483
 __d_alloc+0x2e/0x8c0 fs/dcache.c:1688
 d_alloc+0x4d/0x280 fs/dcache.c:1767
 d_alloc_parallel+0xf4/0x1b90 fs/dcache.c:2519
 __lookup_slow+0x1ab/0x500 fs/namei.c:1652
 lookup_one_len+0x16d/0x1a0 fs/namei.c:2541
 start_creating+0xc5/0x1d0 fs/debugfs/inode.c:312
 __debugfs_create_file+0x65/0x3c0 fs/debugfs/inode.c:357
 debugfs_create_file+0x5a/0x70 fs/debugfs/inode.c:413
 do_blk_trace_setup+0x361/0xb50 kernel/trace/blktrace.c:524
 __blk_trace_setup+0xe3/0x190 kernel/trace/blktrace.c:571
 blk_trace_ioctl+0x170/0x300 kernel/trace/blktrace.c:710
 blkdev_ioctl+0x126/0x1c1a block/ioctl.c:592
 block_ioctl+0xee/0x130 fs/block_dev.c:1918
 vfs_ioctl fs/ioctl.c:46 [inline]
 file_ioctl fs/ioctl.c:509 [inline]
 do_vfs_ioctl+0xdb6/0x13e0 fs/ioctl.c:696
 ksys_ioctl+0xab/0xd0 fs/ioctl.c:713
 __do_sys_ioctl fs/ioctl.c:720 [inline]
 __se_sys_ioctl fs/ioctl.c:718 [inline]
 __x64_sys_ioctl+0x73/0xb0 fs/ioctl.c:718
 do_syscall_64+0xfd/0x6a0 arch/x86/entry/common.c:296
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Freed by task 0:
 save_stack+0x23/0x90 mm/kasan/common.c:69
 set_track mm/kasan/common.c:77 [inline]
 __kasan_slab_free+0x102/0x150 mm/kasan/common.c:449
 kasan_slab_free+0xe/0x10 mm/kasan/common.c:457
 __cache_free mm/slab.c:3425 [inline]
 kmem_cache_free+0x86/0x310 mm/slab.c:3693
 __d_free+0x20/0x30 fs/dcache.c:271
 __rcu_reclaim kernel/rcu/rcu.h:222 [inline]
 rcu_do_batch kernel/rcu/tree.c:2114 [inline]
 rcu_core+0x66a/0x1470 kernel/rcu/tree.c:2314
 rcu_core_si+0x9/0x10 kernel/rcu/tree.c:2323
 __do_softirq+0x30d/0x970 kernel/softirq.c:292

The buggy address belongs to the object at ffff8880aa0c42c0
 which belongs to the cache dentry of size 288
The buggy address is located 64 bytes inside of
 288-byte region [ffff8880aa0c42c0, ffff8880aa0c43e0)
The buggy address belongs to the page:
page:ffffea0002a83100 refcount:1 mapcount:0 mapping:ffff88821bc46540 index:0x0
flags: 0x1fffc0000000200(slab)
raw: 01fffc0000000200 ffffea0002250388 ffffea0002a81308 ffff88821bc46540
raw: 0000000000000000 ffff8880aa0c4000 000000010000000b 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff8880aa0c4200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880aa0c4280: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
>ffff8880aa0c4300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff8880aa0c4380: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
 ffff8880aa0c4400: fc fc fc fc fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
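
The two stacks above suggest the shape of a trigger: BLKTRACESETUP on a block device creates the blktrace debugfs entries (allocation stack), and releasing the request queue with the trace still attached lets blk_trace_shutdown() hand debugfs_remove() a dentry that has already been freed through the RCU path (free stack). Below is a hedged userspace sketch of that syscall sequence, not a confirmed reproducer: the loop-device setup/teardown, the /dev/loop0 and /tmp/backing paths, and the buffer sizes are all assumptions; the report itself only shows the queue being released from __blk_release_queue.

/* Sketch of the implicated sequence; needs CAP_SYS_ADMIN and a free loop device. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/blktrace_api.h>   /* BLKTRACESETUP, struct blk_user_trace_setup */
#include <linux/loop.h>           /* LOOP_SET_FD, LOOP_CLR_FD */

int main(void)
{
	int backing = open("/tmp/backing", O_RDWR | O_CREAT, 0600);  /* assumed path */
	ftruncate(backing, 1 << 20);

	int loop = open("/dev/loop0", O_RDWR);  /* assumes loop0 is free */
	ioctl(loop, LOOP_SET_FD, backing);

	/* BLKTRACESETUP walks the allocation stack above:
	 * blk_trace_ioctl -> do_blk_trace_setup -> debugfs_create_file. */
	struct blk_user_trace_setup buts;
	memset(&buts, 0, sizeof(buts));
	buts.act_mask = 0xffff;   /* trace all actions */
	buts.buf_size = 4096;
	buts.buf_nr   = 4;
	ioctl(loop, BLKTRACESETUP, &buts);

	/* Tear the device down without BLKTRACETEARDOWN, so the queue is
	 * released while the trace is still set up; blk_trace_shutdown()
	 * then runs from the __blk_release_queue work item, as in the
	 * faulting stack. */
	ioctl(loop, LOOP_CLR_FD, 0);
	close(loop);
	close(backing);
	return 0;
}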