tmpfs: Bad value 'alway' for mount option 'huge'
==================================================================
BUG: KASAN: use-after-free in __lock_acquire+0x2c57/0x3f20 kernel/locking/lockdep.c:3369
Read of size 8 at addr ffff888093d06960 by task kworker/1:3/7981

CPU: 1 PID: 7981 Comm: kworker/1:3 Not tainted 4.14.231-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events l2cap_chan_timeout
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_address_description.cold+0x54/0x1d3 mm/kasan/report.c:252
 kasan_report_error.cold+0x8a/0x191 mm/kasan/report.c:351
 kasan_report mm/kasan/report.c:409 [inline]
 __asan_report_load8_noabort+0x68/0x70 mm/kasan/report.c:430
 __lock_acquire+0x2c57/0x3f20 kernel/locking/lockdep.c:3369
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
 _raw_spin_lock_bh+0x2f/0x40 kernel/locking/spinlock.c:176
 spin_lock_bh include/linux/spinlock.h:322 [inline]
 lock_sock_nested+0x39/0x100 net/core/sock.c:2788
 l2cap_sock_teardown_cb+0x93/0x650 net/bluetooth/l2cap_sock.c:1341
 l2cap_chan_del+0xaf/0x950 net/bluetooth/l2cap_core.c:599
 l2cap_chan_close+0x103/0x870 net/bluetooth/l2cap_core.c:757
 l2cap_chan_timeout+0x143/0x2a0 net/bluetooth/l2cap_core.c:430
 process_one_work+0x793/0x14a0 kernel/workqueue.c:2116
 worker_thread+0x5cc/0xff0 kernel/workqueue.c:2250
 kthread+0x30d/0x420 kernel/kthread.c:232
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404

Allocated by task 7978:
 save_stack mm/kasan/kasan.c:447 [inline]
 set_track mm/kasan/kasan.c:459 [inline]
 kasan_kmalloc+0xeb/0x160 mm/kasan/kasan.c:551
 __do_kmalloc_node mm/slab.c:3682 [inline]
 __kmalloc_node+0x4c/0x70 mm/slab.c:3689
 kmalloc_node include/linux/slab.h:530 [inline]
 kvmalloc_node+0x46/0xd0 mm/util.c:397
 kvmalloc include/linux/mm.h:531 [inline]
 kvmalloc_array include/linux/mm.h:547 [inline]
 alloc_fdtable+0xc7/0x270 fs/file.c:120
 dup_fd+0x5f2/0xaf0 fs/file.c:315
 copy_files kernel/fork.c:1304 [inline]
 copy_process.part.0+0x1b4f/0x71c0 kernel/fork.c:1778
 copy_process kernel/fork.c:1605 [inline]
 _do_fork+0x184/0xc80 kernel/fork.c:2091
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb

Freed by task 12197:
 save_stack mm/kasan/kasan.c:447 [inline]
 set_track mm/kasan/kasan.c:459 [inline]
 kasan_slab_free+0xc3/0x1a0 mm/kasan/kasan.c:524
 __cache_free mm/slab.c:3496 [inline]
 kfree+0xc9/0x250 mm/slab.c:3815
 kvfree+0x45/0x50 mm/util.c:416
 __free_fdtable fs/file.c:36 [inline]
 put_files_struct fs/file.c:425 [inline]
 put_files_struct+0x259/0x340 fs/file.c:418
 exit_files+0x7e/0xa0 fs/file.c:450
 do_exit+0xa18/0x2850 kernel/exit.c:863
 do_group_exit+0x100/0x2e0 kernel/exit.c:965
 get_signal+0x38d/0x1ca0 kernel/signal.c:2423
 do_signal+0x7c/0x1550 arch/x86/kernel/signal.c:792
 exit_to_usermode_loop+0x160/0x200 arch/x86/entry/common.c:160
 prepare_exit_to_usermode arch/x86/entry/common.c:199 [inline]
 syscall_return_slowpath arch/x86/entry/common.c:270 [inline]
 do_syscall_64+0x4a3/0x640 arch/x86/entry/common.c:297
 entry_SYSCALL_64_after_hwframe+0x46/0xbb

The buggy address belongs to the object at ffff888093d068c0
 which belongs to the cache kmalloc-2048 of size 2048
The buggy address is located 160 bytes inside of
 2048-byte region [ffff888093d068c0, ffff888093d070c0)
The buggy address belongs to the page:
page:ffffea00024f4180 count:1 mapcount:0 mapping:ffff888093d06040 index:0x0 compound_mapcount: 0
flags: 0xfff00000008100(slab|head)
raw: 00fff00000008100 ffff888093d06040 0000000000000000 0000000100000003
raw: ffffea0002508520 ffffea0002ad6ea0 ffff88813fe80c40 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff888093d06800: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff888093d06880: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
>ffff888093d06900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                       ^
 ffff888093d06980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888093d06a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================