==================================================================
BUG: KASAN: use-after-free in __lock_acquire+0x3b78/0x4c30 kernel/locking/lockdep.c:3753
Read of size 8 at addr ffff888095950f50 by task kworker/u4:1/27379

CPU: 1 PID: 27379 Comm: kworker/u4:1 Not tainted 5.3.0-rc1-next-20190724 #50
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: ib_addr process_one_req
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x172/0x1f0 lib/dump_stack.c:113
 print_address_description.cold+0xd4/0x306 mm/kasan/report.c:351
 __kasan_report.cold+0x1b/0x36 mm/kasan/report.c:482
 kasan_report+0x12/0x17 mm/kasan/common.c:612
 __asan_report_load8_noabort+0x14/0x20 mm/kasan/generic_report.c:132
 __lock_acquire+0x3b78/0x4c30 kernel/locking/lockdep.c:3753
 lock_acquire+0x190/0x410 kernel/locking/lockdep.c:4413
 __mutex_lock_common kernel/locking/mutex.c:926 [inline]
 __mutex_lock+0xf7/0x1340 kernel/locking/mutex.c:1073
 mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1088
 addr_handler+0xaf/0x3d0 drivers/infiniband/core/cma.c:3031
 process_one_req+0x106/0x680 drivers/infiniband/core/addr.c:644
 process_one_work+0x9af/0x1740 kernel/workqueue.c:2269
 worker_thread+0x98/0xe40 kernel/workqueue.c:2415
 kthread+0x361/0x430 kernel/kthread.c:255
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352

Allocated by task 16174:
 save_stack+0x23/0x90 mm/kasan/common.c:69
 set_track mm/kasan/common.c:77 [inline]
 __kasan_kmalloc mm/kasan/common.c:487 [inline]
 __kasan_kmalloc.constprop.0+0xcf/0xe0 mm/kasan/common.c:460
 kasan_kmalloc+0x9/0x10 mm/kasan/common.c:501
 kmem_cache_alloc_trace+0x158/0x790 mm/slab.c:3550
 kmalloc include/linux/slab.h:552 [inline]
 kzalloc include/linux/slab.h:748 [inline]
 __rdma_create_id+0x5f/0x4e0 drivers/infiniband/core/cma.c:882
 ucma_create_id+0x1de/0x620 drivers/infiniband/core/ucma.c:501
 ucma_write+0x2d7/0x3c0 drivers/infiniband/core/ucma.c:1684
 __vfs_write+0x8a/0x110 fs/read_write.c:494
 vfs_write+0x268/0x5d0 fs/read_write.c:558
 ksys_write+0x220/0x290 fs/read_write.c:611
 __do_sys_write fs/read_write.c:623 [inline]
 __se_sys_write fs/read_write.c:620 [inline]
 __x64_sys_write+0x73/0xb0 fs/read_write.c:620
 do_syscall_64+0xfa/0x760 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Freed by task 16172:
 save_stack+0x23/0x90 mm/kasan/common.c:69
 set_track mm/kasan/common.c:77 [inline]
 __kasan_slab_free+0x102/0x150 mm/kasan/common.c:449
 kasan_slab_free+0xe/0x10 mm/kasan/common.c:457
 __cache_free mm/slab.c:3425 [inline]
 kfree+0x10a/0x2c0 mm/slab.c:3756
 rdma_destroy_id+0x719/0xaa0 drivers/infiniband/core/cma.c:1877
 ucma_close+0x115/0x310 drivers/infiniband/core/ucma.c:1762
 __fput+0x2ff/0x890 fs/file_table.c:280
 ____fput+0x16/0x20 fs/file_table.c:313
 task_work_run+0x145/0x1c0 kernel/task_work.c:113
 tracehook_notify_resume include/linux/tracehook.h:188 [inline]
 exit_to_usermode_loop+0x316/0x380 arch/x86/entry/common.c:163
 prepare_exit_to_usermode arch/x86/entry/common.c:194 [inline]
 syscall_return_slowpath arch/x86/entry/common.c:274 [inline]
 do_syscall_64+0x65f/0x760 arch/x86/entry/common.c:300
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

The buggy address belongs to the object at ffff888095950bc0
 which belongs to the cache kmalloc-2k of size 2048
The buggy address is located 912 bytes inside of
 2048-byte region [ffff888095950bc0, ffff8880959513c0)
The buggy address belongs to the page:
page:ffffea0002565400 refcount:1 mapcount:0 mapping:ffff8880aa400e00 index:0xffff888095950340 compound_mapcount: 0
flags: 0x1fffc0000010200(slab|head)
raw: 01fffc0000010200 ffffea0002854e88 ffffea00024e3688 ffff8880aa400e00
raw: ffff888095950340 ffff888095950340 0000000100000002 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff888095950e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888095950e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888095950f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                 ^
 ffff888095950f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888095951000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================