==================================================================
BUG: KASAN: slab-use-after-free in register_lock_class+0xdab/0x1230 kernel/locking/lockdep.c:1328
Read of size 8 at addr ffff88801df6c530 by task kworker/2:1H/1198

CPU: 2 PID: 1198 Comm: kworker/2:1H Not tainted 6.9.0-rc5-syzkaller-00238-ge6ebf0117218 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: glock_workqueue glock_work_func
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0xc3/0x620 mm/kasan/report.c:488
 kasan_report+0xd9/0x110 mm/kasan/report.c:601
 register_lock_class+0xdab/0x1230 kernel/locking/lockdep.c:1328
 __lock_acquire+0x111/0x3b30 kernel/locking/lockdep.c:5014
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x3a/0x60 kernel/locking/spinlock.c:162
 __wake_up_common_lock kernel/sched/wait.c:105 [inline]
 __wake_up+0x1c/0x60 kernel/sched/wait.c:127
 gfs2_glock_free+0xe70/0x12d0 fs/gfs2/glock.c:179
 glock_work_func+0x2bc/0x390 fs/gfs2/glock.c:1109
 process_one_work+0x902/0x1a30 kernel/workqueue.c:3254
 process_scheduled_works kernel/workqueue.c:3335 [inline]
 worker_thread+0x6c8/0xf70 kernel/workqueue.c:3416
 kthread+0x2c1/0x3a0 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Allocated by task 6988:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:370 [inline]
 __kasan_kmalloc+0xaa/0xb0 mm/kasan/common.c:387
 kmalloc include/linux/slab.h:628 [inline]
 kzalloc include/linux/slab.h:749 [inline]
 init_sbd fs/gfs2/ops_fstype.c:77 [inline]
 gfs2_fill_super+0x141/0x2ac0 fs/gfs2/ops_fstype.c:1160
 get_tree_bdev+0x36f/0x610 fs/super.c:1614
 gfs2_get_tree+0x4e/0x280 fs/gfs2/ops_fstype.c:1341
 vfs_get_tree+0x8f/0x380 fs/super.c:1779
 do_new_mount fs/namespace.c:3352 [inline]
 path_mount+0x6e1/0x1f10 fs/namespace.c:3679
 do_mount fs/namespace.c:3692 [inline]
 __do_sys_mount fs/namespace.c:3898 [inline]
 __se_sys_mount fs/namespace.c:3875 [inline]
 __ia32_sys_mount+0x295/0x320 fs/namespace.c:3875
 do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
 __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
 do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
 entry_SYSENTER_compat_after_hwframe+0x84/0x8e

Freed by task 6751:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 kasan_save_free_info+0x3b/0x60 mm/kasan/generic.c:579
 poison_slab_object mm/kasan/common.c:240 [inline]
 __kasan_slab_free+0x11d/0x1a0 mm/kasan/common.c:256
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2106 [inline]
 slab_free mm/slub.c:4280 [inline]
 kfree+0x129/0x390 mm/slub.c:4390
 generic_shutdown_super+0x159/0x3d0 fs/super.c:641
 kill_block_super+0x3b/0x90 fs/super.c:1675
 gfs2_kill_sb+0x360/0x410 fs/gfs2/ops_fstype.c:1804
 deactivate_locked_super+0xbe/0x1a0 fs/super.c:472
 deactivate_super+0xde/0x100 fs/super.c:505
 cleanup_mnt+0x222/0x450 fs/namespace.c:1267
 task_work_run+0x14e/0x250 kernel/task_work.c:180
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x278/0x2a0 kernel/entry/common.c:218
 __do_fast_syscall_32+0x82/0x120 arch/x86/entry/common.c:389
 do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
 entry_SYSENTER_compat_after_hwframe+0x84/0x8e

The buggy address belongs to the object at ffff88801df6c000
 which belongs to the cache kmalloc-8k of size 8192
The buggy address is located 1328 bytes inside of
 freed 8192-byte region [ffff88801df6c000, ffff88801df6e000)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1df68
head: order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff80000000840(slab|head|node=0|zone=1|lastcpupid=0xfff)
page_type: 0xffffffff()
raw: 00fff80000000840 ffff888014c43180 dead000000000100 dead000000000122
raw: 0000000000000000 0000000000020002 00000001ffffffff 0000000000000000
head: 00fff80000000840 ffff888014c43180 dead000000000100 dead000000000122
head: 0000000000000000 0000000000020002 00000001ffffffff 0000000000000000
head: 00fff80000000003 ffffea000077da01 ffffea000077da48 00000000ffffffff
head: 0000000800000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x152820(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_HARDWALL), pid 5451, tgid -1145328208 (syz-executor.3), ts 5451, free_ts 204995315024
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x2d4/0x350 mm/page_alloc.c:1534
 prep_new_page mm/page_alloc.c:1541 [inline]
 get_page_from_freelist+0xa28/0x3780 mm/page_alloc.c:3317
 __alloc_pages+0x22b/0x2460 mm/page_alloc.c:4575
 __alloc_pages_node include/linux/gfp.h:238 [inline]
 alloc_pages_node include/linux/gfp.h:261 [inline]
 alloc_slab_page mm/slub.c:2175 [inline]
 allocate_slab mm/slub.c:2338 [inline]
 new_slab+0xcc/0x3a0 mm/slub.c:2391
 ___slab_alloc+0x670/0x16d0 mm/slub.c:3525
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3610
 __slab_alloc_node mm/slub.c:3663 [inline]
 slab_alloc_node mm/slub.c:3835 [inline]
 __do_kmalloc_node mm/slub.c:3965 [inline]
 __kmalloc+0x3b4/0x440 mm/slub.c:3979
 kmalloc_array include/linux/slab.h:665 [inline]
 batadv_hash_new+0xb2/0x2e0 net/batman-adv/hash.c:56
 batadv_bla_init+0x370/0x750 net/batman-adv/bridge_loop_avoidance.c:1563
 batadv_mesh_init+0x523/0x9a0 net/batman-adv/main.c:212
 batadv_softif_init_late+0xbd6/0xf30 net/batman-adv/soft-interface.c:812
 register_netdevice+0x59f/0x1c40 net/core/dev.c:10210
 batadv_softif_newlink+0x70/0x90 net/batman-adv/soft-interface.c:1088
 rtnl_newlink_create net/core/rtnetlink.c:3494 [inline]
 __rtnl_newlink+0x119c/0x1960 net/core/rtnetlink.c:3714
 rtnl_newlink+0x67/0xa0 net/core/rtnetlink.c:3727
 rtnetlink_rcv_msg+0x3c7/0xe60 net/core/rtnetlink.c:6595
page last free pid 5444 tgid 5444 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1141 [inline]
 free_unref_page_prepare+0x527/0xb10 mm/page_alloc.c:2347
 free_unref_page+0x33/0x3c0 mm/page_alloc.c:2487
 __put_partials+0x14c/0x170 mm/slub.c:2906
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x4e/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x192/0x1e0 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x69/0x90 mm/kasan/common.c:322
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3798 [inline]
 slab_alloc_node mm/slub.c:3845 [inline]
 kmalloc_trace+0x147/0x330 mm/slub.c:3992
 kmalloc include/linux/slab.h:628 [inline]
 kzalloc include/linux/slab.h:749 [inline]
 kobject_uevent_env+0x265/0x15f0 lib/kobject_uevent.c:525
 __kobject_del+0x168/0x1f0 lib/kobject.c:601
 kobject_cleanup lib/kobject.c:680 [inline]
 kobject_release lib/kobject.c:720 [inline]
 kref_put include/linux/kref.h:65 [inline]
 kobject_put+0x31c/0x5b0 lib/kobject.c:737
 net_rx_queue_update_kobjects+0x478/0x5f0 net/core/net-sysfs.c:1174
 netif_set_real_num_rx_queues+0x169/0x210 net/core/dev.c:2941
 veth_init_queues+0x151/0x190 drivers/net/veth.c:1759
 veth_newlink+0x546/0xa10 drivers/net/veth.c:1871
 rtnl_newlink_create net/core/rtnetlink.c:3494 [inline]
 __rtnl_newlink+0x119c/0x1960 net/core/rtnetlink.c:3714
 rtnl_newlink+0x67/0xa0 net/core/rtnetlink.c:3727

Memory state around the buggy address:
 ffff88801df6c400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88801df6c480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88801df6c500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                     ^
 ffff88801df6c580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88801df6c600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================