==================================================================
BUG: KASAN: double-free in dbUnmount+0xf8/0x110 fs/jfs/jfs_dmap.c:271
Free of addr ffff8880397dd000 by task syz-executor.1/5044

CPU: 1 PID: 5044 Comm: syz-executor.1 Not tainted 6.5.0-rc1-syzkaller-00259-g831fe284d827 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:364 [inline]
 print_report+0x163/0x540 mm/kasan/report.c:475
 kasan_report_invalid_free+0xeb/0x110 mm/kasan/report.c:550
 ____kasan_slab_free+0xfb/0x120
 kasan_slab_free include/linux/kasan.h:162 [inline]
 slab_free_hook mm/slub.c:1792 [inline]
 slab_free_freelist_hook mm/slub.c:1818 [inline]
 slab_free mm/slub.c:3801 [inline]
 __kmem_cache_free+0x25f/0x3b0 mm/slub.c:3814
 dbUnmount+0xf8/0x110 fs/jfs/jfs_dmap.c:271
 jfs_umount+0x238/0x3a0 fs/jfs/jfs_umount.c:87
 jfs_put_super+0x8a/0x190 fs/jfs/super.c:194
 generic_shutdown_super+0x134/0x340 fs/super.c:499
 kill_block_super+0x68/0xa0 fs/super.c:1417
 deactivate_locked_super+0xa4/0x110 fs/super.c:330
 cleanup_mnt+0x426/0x4c0 fs/namespace.c:1254
 task_work_run+0x24a/0x300 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xd9/0x100 kernel/entry/common.c:171
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
 syscall_exit_to_user_mode+0x64/0x280 kernel/entry/common.c:297
 do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f52da27de57
Code: b0 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 0f 1f 44 00 00 31 f6 e9 09 00 00 00 66 0f 1f 84 00 00 00 00 00 b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 c7 c2 b0 ff ff ff f7 d8 64 89 02 b8
RSP: 002b:00007ffc4d13c718 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f52da27de57
RDX: 0000000000000000 RSI: 000000000000000a RDI: 00007ffc4d13c7d0
RBP: 00007ffc4d13c7d0 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffc4d13d890
R13: 00007f52da2c73b9 R14: 000000000022ad58 R15: 000000000000000d

Allocated by task 4479:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:374 [inline]
 __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:383
 kmalloc include/linux/slab.h:582 [inline]
 dbMount+0x58/0x980 fs/jfs/jfs_dmap.c:164
 jfs_mount+0x1e5/0x830 fs/jfs/jfs_mount.c:121
 jfs_fill_super+0x59c/0xc50 fs/jfs/super.c:556
 mount_bdev+0x276/0x3b0 fs/super.c:1391
 legacy_get_tree+0xef/0x190 fs/fs_context.c:611
 vfs_get_tree+0x8c/0x270 fs/super.c:1519
 do_new_mount+0x28f/0xae0 fs/namespace.c:3335
 do_mount fs/namespace.c:3675 [inline]
 __do_sys_mount fs/namespace.c:3884 [inline]
 __se_sys_mount+0x2d9/0x3c0 fs/namespace.c:3861
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Freed by task 4480:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
 kasan_save_free_info+0x28/0x40 mm/kasan/generic.c:522
 ____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
 kasan_slab_free include/linux/kasan.h:162 [inline]
 slab_free_hook mm/slub.c:1792 [inline]
 slab_free_freelist_hook mm/slub.c:1818 [inline]
 slab_free mm/slub.c:3801 [inline]
 __kmem_cache_free+0x25f/0x3b0 mm/slub.c:3814
 dbUnmount+0xf8/0x110 fs/jfs/jfs_dmap.c:271
 jfs_mount_rw+0x4ac/0x6a0 fs/jfs/jfs_mount.c:247
 jfs_remount+0x3d1/0x6b0 fs/jfs/super.c:454
 reconfigure_super+0x43e/0x870 fs/super.c:961
 vfs_fsconfig_locked fs/fsopen.c:254 [inline]
 __do_sys_fsconfig fs/fsopen.c:439 [inline]
 __se_sys_fsconfig+0xa2c/0xf70 fs/fsopen.c:314
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Last potentially related work creation:
 kasan_save_stack+0x3f/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xad/0xc0 mm/kasan/generic.c:492
 __call_rcu_common kernel/rcu/tree.c:2649 [inline]
 call_rcu+0x167/0xa70 kernel/rcu/tree.c:2763
 add_new_keypair drivers/net/wireguard/noise.c:248 [inline]
 wg_noise_handshake_begin_session+0x61f/0xb60 drivers/net/wireguard/noise.c:845
 wg_packet_send_handshake_response+0x120/0x2d0 drivers/net/wireguard/send.c:96
 wg_receive_handshake_packet drivers/net/wireguard/receive.c:154 [inline]
 wg_packet_handshake_receive_worker+0x5dd/0xf00 drivers/net/wireguard/receive.c:213
 process_one_work+0x92c/0x12c0 kernel/workqueue.c:2597
 worker_thread+0xa63/0x1210 kernel/workqueue.c:2748
 kthread+0x2b8/0x350 kernel/kthread.c:389
 ret_from_fork+0x2e/0x60 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:296

Second to last potentially related work creation:
 kasan_save_stack+0x3f/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xad/0xc0 mm/kasan/generic.c:492
 __call_rcu_common kernel/rcu/tree.c:2649 [inline]
 call_rcu+0x167/0xa70 kernel/rcu/tree.c:2763
 add_new_keypair drivers/net/wireguard/noise.c:248 [inline]
 wg_noise_handshake_begin_session+0x61f/0xb60 drivers/net/wireguard/noise.c:845
 wg_packet_send_handshake_response+0x120/0x2d0 drivers/net/wireguard/send.c:96
 wg_receive_handshake_packet drivers/net/wireguard/receive.c:154 [inline]
 wg_packet_handshake_receive_worker+0x5dd/0xf00 drivers/net/wireguard/receive.c:213
 process_one_work+0x92c/0x12c0 kernel/workqueue.c:2597
 worker_thread+0xa63/0x1210 kernel/workqueue.c:2748
 kthread+0x2b8/0x350 kernel/kthread.c:389
 ret_from_fork+0x2e/0x60 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:296

The buggy address belongs to the object at ffff8880397dd000
 which belongs to the cache kmalloc-2k of size 2048
The buggy address is located 0 bytes inside of
 2048-byte region [ffff8880397dd000, ffff8880397dd800)

The buggy address belongs to the physical page:
page:ffffea0000e5f600 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x397d8
head:ffffea0000e5f600 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000010200 ffff888012842000 ffffea0001dcf400 dead000000000002
raw: 0000000000000000 0000000000080008 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 4456, tgid 4456 (klogd), ts 177208733105, free_ts 0
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x1e6/0x210 mm/page_alloc.c:1570
 prep_new_page mm/page_alloc.c:1577 [inline]
 get_page_from_freelist+0x31e8/0x3370 mm/page_alloc.c:3221
 __alloc_pages+0x255/0x670 mm/page_alloc.c:4477
 alloc_slab_page+0x6a/0x160 mm/slub.c:1862
 allocate_slab mm/slub.c:2009 [inline]
 new_slab+0x84/0x2f0 mm/slub.c:2062
 ___slab_alloc+0xade/0x1100 mm/slub.c:3215
 __slab_alloc mm/slub.c:3314 [inline]
 __slab_alloc_node mm/slub.c:3367 [inline]
 slab_alloc_node mm/slub.c:3460 [inline]
 __kmem_cache_alloc_node+0x1af/0x270 mm/slub.c:3509
 kmalloc_trace+0x2a/0xe0 mm/slab_common.c:1076
 kmalloc include/linux/slab.h:582 [inline]
 syslog_print+0x121/0x9b0 kernel/printk/printk.c:1553
 do_syslog+0x505/0x890 kernel/printk/printk.c:1732
 __do_sys_syslog kernel/printk/printk.c:1824 [inline]
 __se_sys_syslog kernel/printk/printk.c:1822 [inline]
 __x64_sys_syslog+0x7c/0x90 kernel/printk/printk.c:1822
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
page_owner free stack trace missing

Memory state around the buggy address:
 ffff8880397dcf00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff8880397dcf80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8880397dd000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff8880397dd080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880397dd100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
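
Reading the stacks above, the kmalloc-2k object allocated in dbMount() (fs/jfs/jfs_dmap.c:164) appears to be freed once by dbUnmount() reached from the remount path (jfs_mount_rw -> jfs_remount) and then freed again by dbUnmount() reached from the final unmount path (jfs_umount -> jfs_put_super). Below is a minimal, self-contained userspace sketch of that pattern and of a defensive pointer-clearing guard; all names (demo_sb, demo_mount, demo_unmount) are hypothetical stand-ins for illustration, with malloc/free standing in for kmalloc/kfree, not the actual jfs code or the upstream fix.

/* Minimal sketch: two teardown paths freeing the same allocation, made
 * safe by clearing the owning pointer after the first free. */
#include <stdio.h>
#include <stdlib.h>

struct demo_sb {
	void *bmap;		/* stand-in for the per-superblock bmap pointer */
};

static int demo_mount(struct demo_sb *sb)
{
	sb->bmap = malloc(2048);	/* stands in for dbMount()'s kmalloc */
	return sb->bmap ? 0 : -1;
}

static void demo_unmount(struct demo_sb *sb)
{
	free(sb->bmap);		/* stands in for dbUnmount()'s kfree(bmp) */
	sb->bmap = NULL;	/* guard: without this, a second call frees
				 * the same pointer again (double-free) */
}

int main(void)
{
	struct demo_sb sb;

	if (demo_mount(&sb))
		return 1;
	demo_unmount(&sb);	/* e.g. remount-triggered teardown */
	demo_unmount(&sb);	/* e.g. final unmount; free(NULL) is a no-op */
	puts("no double-free");
	return 0;
}

The same idea in-kernel would be to clear the superblock's bmap pointer when dbUnmount() releases it, so a later teardown path sees NULL instead of a stale pointer; whether that is the actual upstream resolution is not established by this report.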