syzbot


KASAN: slab-use-after-free Read in lru_add_fn

Status: upstream: reported on 2025/07/23 03:44
Reported-by: syzbot+f9e4c812b9ab2e943d86@syzkaller.appspotmail.com
First crash: 12d, last: 12d
Similar bugs (3)
Kernel    | Title                                                           | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream  | KASAN: slab-use-after-free Read in lru_add_fn (nilfs, mm)      | 19   | C     | inconclusive |            | 68    | 404d | 452d     | 27/29   | fixed on 2024/08/14 19:57
linux-6.1 | KASAN: use-after-free Read in lru_add_fn (2) (origin:upstream) | 19   | C     |              |            | 1     | 26d  | 26d      | 0/3     | upstream: reported C repro on 2025/07/09 05:00
linux-6.1 | KASAN: use-after-free Read in lru_add_fn                       | 19   |       |              |            | 42    | 404d | 417d     | 0/3     | auto-obsoleted due to no activity on 2024/09/04 15:15

Sample crash report:
gfs2: fsid=norecovery.s: Error checking journal for spectator mount.
==================================================================
BUG: KASAN: slab-use-after-free in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-use-after-free in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-use-after-free in mapping_unevictable include/linux/pagemap.h:256 [inline]
BUG: KASAN: slab-use-after-free in folio_evictable mm/internal.h:207 [inline]
BUG: KASAN: slab-use-after-free in lru_add_fn+0x279/0x1720 mm/swap.c:173
Read of size 8 at addr ffff88805b86c3c8 by task syz.1.695/7406

CPU: 0 PID: 7406 Comm: syz.1.695 Not tainted 6.6.99-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:364 [inline]
 print_report+0xac/0x200 mm/kasan/report.c:466
 kasan_report+0x117/0x150 mm/kasan/report.c:579
 check_region_inline mm/kasan/generic.c:-1 [inline]
 kasan_check_range+0x288/0x290 mm/kasan/generic.c:187
 instrument_atomic_read include/linux/instrumented.h:68 [inline]
 _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
 mapping_unevictable include/linux/pagemap.h:256 [inline]
 folio_evictable mm/internal.h:207 [inline]
 lru_add_fn+0x279/0x1720 mm/swap.c:173
 folio_batch_move_lru+0x2e7/0x6b0 mm/swap.c:209
 lru_add_drain_cpu+0x10e/0x8c0 mm/swap.c:644
 lru_add_drain+0x121/0x3e0 mm/swap.c:744
 __folio_batch_release+0x48/0xe0 mm/swap.c:1039
 folio_batch_release include/linux/pagevec.h:83 [inline]
 shmem_undo_range+0x5d0/0x1a40 mm/shmem.c:1027
 shmem_truncate_range mm/shmem.c:1136 [inline]
 shmem_evict_inode+0x273/0xa70 mm/shmem.c:1265
 evict+0x486/0x870 fs/inode.c:705
 __dentry_kill+0x431/0x650 fs/dcache.c:611
 dentry_kill+0xb8/0x290 fs/dcache.c:-1
 dput+0xfe/0x1e0 fs/dcache.c:918
 __fput+0x5e5/0x970 fs/file_table.c:392
 task_work_run+0x1ce/0x250 kernel/task_work.c:239
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
 syscall_exit_to_user_mode+0x1a/0x50 kernel/entry/common.c:302
 do_syscall_64+0x61/0xb0 arch/x86/entry/common.c:87
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fba8058e5ab
Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1c 48 8b 44 24 18 64 48 2b 04 25 28 00 00
RSP: 002b:00007fba814abe10 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: 0000000000000000 RBX: ffffffffffffffff RCX: 00007fba8058e5ab
RDX: 0000000000000000 RSI: 0000000000004c01 RDI: 0000000000000003
RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000200001
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000005
R13: 00007fba814abeb0 R14: 00000000000125bb R15: 0000200000000180
 </TASK>

Allocated by task 7406:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:328
 kasan_slab_alloc include/linux/kasan.h:188 [inline]
 slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
 slab_alloc_node mm/slub.c:3485 [inline]
 slab_alloc mm/slub.c:3493 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3500 [inline]
 kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3509
 gfs2_glock_get+0x288/0xed0 fs/gfs2/glock.c:1212
 gfs2_inode_lookup+0x219/0xae0 fs/gfs2/inode.c:135
 gfs2_dir_search+0x169/0x220 fs/gfs2/dir.c:1664
 gfs2_lookupi+0x3d9/0x5a0 fs/gfs2/inode.c:339
 gfs2_jindex_hold fs/gfs2/ops_fstype.c:609 [inline]
 init_journal+0x54b/0x2260 fs/gfs2/ops_fstype.c:751
 init_inodes+0xdb/0x320 fs/gfs2/ops_fstype.c:886
 gfs2_fill_super+0x1815/0x1f80 fs/gfs2/ops_fstype.c:1266
 get_tree_bdev+0x3e4/0x510 fs/super.c:1591
 gfs2_get_tree+0x51/0x1e0 fs/gfs2/ops_fstype.c:1344
 vfs_get_tree+0x8c/0x280 fs/super.c:1764
 do_new_mount+0x24b/0xa40 fs/namespace.c:3366
 do_mount fs/namespace.c:3706 [inline]
 __do_sys_mount fs/namespace.c:3915 [inline]
 __se_sys_mount+0x2da/0x3c0 fs/namespace.c:3892
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

Freed by task 22:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4e/0x70 mm/kasan/common.c:52
 kasan_save_free_info+0x2e/0x50 mm/kasan/generic.c:522
 ____kasan_slab_free+0x126/0x1e0 mm/kasan/common.c:236
 kasan_slab_free include/linux/kasan.h:164 [inline]
 slab_free_hook mm/slub.c:1806 [inline]
 slab_free_freelist_hook+0x130/0x1b0 mm/slub.c:1832
 slab_free mm/slub.c:3816 [inline]
 kmem_cache_free+0xf8/0x280 mm/slub.c:3838
 rcu_do_batch kernel/rcu/tree.c:2194 [inline]
 rcu_core+0xcc4/0x1720 kernel/rcu/tree.c:2467
 handle_softirqs+0x280/0x820 kernel/softirq.c:578
 run_ksoftirqd+0x9c/0xf0 kernel/softirq.c:950
 smpboot_thread_fn+0x635/0xa00 kernel/smpboot.c:164
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

Last potentially related work creation:
 kasan_save_stack+0x3e/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xaf/0xc0 mm/kasan/generic.c:492
 __call_rcu_common kernel/rcu/tree.c:2721 [inline]
 call_rcu+0x158/0x930 kernel/rcu/tree.c:2837
 __gfs2_glock_free+0xb3a/0xc80 fs/gfs2/glock.c:174
 gfs2_glock_free+0x3c/0xa0 fs/gfs2/glock.c:180
 gfs2_glock_put_eventually fs/gfs2/super.c:1275 [inline]
 gfs2_evict_inode+0x65a/0x1220 fs/gfs2/super.c:1557
 evict+0x486/0x870 fs/inode.c:705
 gfs2_jindex_free+0x39d/0x440 fs/gfs2/super.c:79
 init_journal+0x8f2/0x2260 fs/gfs2/ops_fstype.c:868
 init_inodes+0xdb/0x320 fs/gfs2/ops_fstype.c:886
 gfs2_fill_super+0x1815/0x1f80 fs/gfs2/ops_fstype.c:1266
 get_tree_bdev+0x3e4/0x510 fs/super.c:1591
 gfs2_get_tree+0x51/0x1e0 fs/gfs2/ops_fstype.c:1344
 vfs_get_tree+0x8c/0x280 fs/super.c:1764
 do_new_mount+0x24b/0xa40 fs/namespace.c:3366
 do_mount fs/namespace.c:3706 [inline]
 __do_sys_mount fs/namespace.c:3915 [inline]
 __se_sys_mount+0x2da/0x3c0 fs/namespace.c:3892
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

Second to last potentially related work creation:
 kasan_save_stack+0x3e/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xaf/0xc0 mm/kasan/generic.c:492
 insert_work+0x3d/0x310 kernel/workqueue.c:1651
 __queue_work+0xc39/0x1020 kernel/workqueue.c:1800
 queue_delayed_work_on+0x12a/0x1e0 kernel/workqueue.c:1987
 queue_delayed_work include/linux/workqueue.h:577 [inline]
 gfs2_glock_queue_work fs/gfs2/glock.c:278 [inline]
 do_xmote+0xcf5/0x12c0 fs/gfs2/glock.c:842
 glock_work_func+0x29e/0x4c0 fs/gfs2/glock.c:1121
 process_one_work kernel/workqueue.c:2634 [inline]
 process_scheduled_works+0xa45/0x15b0 kernel/workqueue.c:2711
 worker_thread+0xa55/0xfc0 kernel/workqueue.c:2792
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293

The buggy address belongs to the object at ffff88805b86c000
 which belongs to the cache gfs2_glock(aspace) of size 1224
The buggy address is located 968 bytes inside of
 freed 1224-byte region [ffff88805b86c000, ffff88805b86c4c8)

The buggy address belongs to the physical page:
page:ffffea00016e1b00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5b86c
head:ffffea00016e1b00 order:2 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000840(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000840 ffff888143a9a000 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800c000c 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x1d2040(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 6438, tgid 6437 (syz.2.264), ts 94738629084, free_ts 94198413754
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x1cd/0x210 mm/page_alloc.c:1554
 prep_new_page mm/page_alloc.c:1561 [inline]
 get_page_from_freelist+0x195c/0x19f0 mm/page_alloc.c:3191
 __alloc_pages+0x1e3/0x460 mm/page_alloc.c:4457
 alloc_slab_page+0x5d/0x170 mm/slub.c:1876
 allocate_slab mm/slub.c:2023 [inline]
 new_slab+0x87/0x2e0 mm/slub.c:2076
 ___slab_alloc+0xc6d/0x12f0 mm/slub.c:3230
 __slab_alloc mm/slub.c:3329 [inline]
 __slab_alloc_node mm/slub.c:3382 [inline]
 slab_alloc_node mm/slub.c:3475 [inline]
 slab_alloc mm/slub.c:3493 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3500 [inline]
 kmem_cache_alloc+0x1b7/0x2e0 mm/slub.c:3509
 gfs2_glock_get+0x288/0xed0 fs/gfs2/glock.c:1212
 gfs2_inode_lookup+0x219/0xae0 fs/gfs2/inode.c:135
 gfs2_lookup_root fs/gfs2/ops_fstype.c:462 [inline]
 init_sb+0xa2c/0x1310 fs/gfs2/ops_fstype.c:529
 gfs2_fill_super+0x14c2/0x1f80 fs/gfs2/ops_fstype.c:1233
 get_tree_bdev+0x3e4/0x510 fs/super.c:1591
 gfs2_get_tree+0x51/0x1e0 fs/gfs2/ops_fstype.c:1344
 vfs_get_tree+0x8c/0x280 fs/super.c:1764
 do_new_mount+0x24b/0xa40 fs/namespace.c:3366
 do_mount fs/namespace.c:3706 [inline]
 __do_sys_mount fs/namespace.c:3915 [inline]
 __se_sys_mount+0x2da/0x3c0 fs/namespace.c:3892
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1154 [inline]
 free_unref_page_prepare+0x7ce/0x8e0 mm/page_alloc.c:2336
 free_unref_page+0x32/0x2e0 mm/page_alloc.c:2429
 discard_slab mm/slub.c:2122 [inline]
 __unfreeze_partials+0x1cf/0x210 mm/slub.c:2662
 put_cpu_partial+0x17c/0x250 mm/slub.c:2738
 __slab_free+0x31d/0x410 mm/slub.c:3686
 qlink_free mm/kasan/quarantine.c:166 [inline]
 qlist_free_all+0x75/0xe0 mm/kasan/quarantine.c:185
 kasan_quarantine_reduce+0x143/0x160 mm/kasan/quarantine.c:292
 __kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:305
 kasan_slab_alloc include/linux/kasan.h:188 [inline]
 slab_post_alloc_hook+0x6e/0x4d0 mm/slab.h:767
 slab_alloc_node mm/slub.c:3485 [inline]
 slab_alloc mm/slub.c:3493 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3500 [inline]
 kmem_cache_alloc+0x11e/0x2e0 mm/slub.c:3509
 vm_area_dup+0x27/0x270 kernel/fork.c:501
 __split_vma+0x19f/0xc00 mm/mmap.c:2369
 do_vmi_align_munmap+0x2dc/0x1660 mm/mmap.c:2497
 do_vmi_munmap+0x252/0x2d0 mm/mmap.c:2656
 __vm_munmap+0x193/0x3c0 mm/mmap.c:2957
 __do_sys_munmap mm/mmap.c:2974 [inline]
 __se_sys_munmap mm/mmap.c:2971 [inline]
 __x64_sys_munmap+0x60/0x70 mm/mmap.c:2971

Memory state around the buggy address:
 ffff88805b86c280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88805b86c300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88805b86c380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                              ^
 ffff88805b86c400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88805b86c480: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
==================================================================
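The flagged access is the mapping->flags read inside mapping_unevictable() (the 8-byte read at ffff88805b86c3c8): lru_add_fn() calls folio_evictable(), which dereferences folio->mapping, and that pointer here still leads into the gfs2_glock(aspace) object that the "Freed by task 22" stack shows was already released from its RCU callback. Below is a condensed sketch of the two helpers named in the trace, simplified from include/linux/pagemap.h and mm/internal.h around v6.6; it is illustrative, not the verbatim kernel source.

static inline bool mapping_unevictable(struct address_space *mapping)
{
	/* The reported 8-byte read of mapping->flags; in this crash,
	 * mapping points into the freed gfs2_glock(aspace) slab object. */
	return mapping && test_bit(AS_UNEVICTABLE, &mapping->flags);
}

static inline bool folio_evictable(struct folio *folio)
{
	bool ret;

	/* rcu_read_lock() only protects an address_space that is still
	 * waiting out its grace period; per the free stack above, the
	 * glock's RCU callback had already run (rcu_do_batch). */
	rcu_read_lock();
	ret = !mapping_unevictable(folio_mapping(folio)) &&
	      !folio_test_unevictable(folio);
	rcu_read_unlock();
	return ret;
}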

Crashes (1):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                             | Manager             | Title
2025/07/23 03:43 | linux-6.6.y | d96eb99e2f0e | e1dd4f22  | .config | console log | report |           |         | info    | disk image, vmlinux, kernel image  | ci2-linux-6-6-kasan | KASAN: slab-use-after-free Read in lru_add_fn