syzbot


KASAN: slab-use-after-free Read in xfs_qm_dquot_logitem_unpin (2)

Status: moderation: reported on 2024/05/31 18:56
Subsystems: xfs
Reported-by: syzbot+4e2924b0deaa4c908eef@syzkaller.appspotmail.com
First crash: 29d, last: 22d
Similar bugs (1)
Kernel   | Title                                                              | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | KASAN: slab-use-after-free Read in xfs_qm_dquot_logitem_unpin xfs |       |              |            | 1     | 308d | 304d     | 0/27    | auto-obsoleted due to no activity on 2023/11/20 17:42

Sample crash report:
==================================================================
BUG: KASAN: slab-use-after-free in debug_spin_lock_before kernel/locking/spinlock_debug.c:86 [inline]
BUG: KASAN: slab-use-after-free in do_raw_spin_lock+0x271/0x2c0 kernel/locking/spinlock_debug.c:115
Read of size 4 at addr ffff88802840d5f4 by task kworker/0:1H/69

CPU: 0 PID: 69 Comm: kworker/0:1H Not tainted 6.10.0-rc2-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Workqueue: xfs-log/loop2 xlog_ioend_work
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0xc3/0x620 mm/kasan/report.c:488
 kasan_report+0xd9/0x110 mm/kasan/report.c:601
 debug_spin_lock_before kernel/locking/spinlock_debug.c:86 [inline]
 do_raw_spin_lock+0x271/0x2c0 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0x42/0x60 kernel/locking/spinlock.c:162
 __wake_up_common_lock kernel/sched/wait.c:105 [inline]
 __wake_up+0x1c/0x60 kernel/sched/wait.c:127
 xfs_qm_dquot_logitem_unpin+0x81/0x90 fs/xfs/xfs_dquot_item.c:96
 xfs_log_item_batch_insert fs/xfs/xfs_trans.c:746 [inline]
 xfs_trans_committed_bulk+0x744/0x890 fs/xfs/xfs_trans.c:850
 xlog_cil_committed+0x161/0x840 fs/xfs/xfs_log_cil.c:736
 xlog_cil_process_committed+0x123/0x1f0 fs/xfs/xfs_log_cil.c:768
 xlog_state_do_iclog_callbacks fs/xfs/xfs_log.c:2791 [inline]
 xlog_state_do_callback+0x562/0xd90 fs/xfs/xfs_log.c:2816
 xlog_ioend_work+0x92/0x110 fs/xfs/xfs_log.c:1398
 process_one_work+0x958/0x1ad0 kernel/workqueue.c:3231
 process_scheduled_works kernel/workqueue.c:3312 [inline]
 worker_thread+0x6c8/0xf70 kernel/workqueue.c:3393
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
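
Note: the faulting 4-byte read happens in the __wake_up call made from xfs_qm_dquot_logitem_unpin (fs/xfs/xfs_dquot_item.c:96), i.e. the spinlock being validated by do_raw_spin_lock lives in a wait queue embedded in the already-freed xfs_dquot. A minimal sketch of that unpin step, assuming the upstream field names qli_dquot, q_pincount and q_pinwait (they do not appear in this report):

	/* Sketch of the unpin path implicated above; field names are
	 * assumed from upstream XFS sources, not taken from this report. */
	static void
	xfs_qm_dquot_logitem_unpin(
		struct xfs_log_item	*lip,
		int			remove)
	{
		struct xfs_dquot	*dqp = DQUOT_ITEM(lip)->qli_dquot;

		ASSERT(atomic_read(&dqp->q_pincount) > 0);
		if (atomic_dec_and_test(&dqp->q_pincount))
			/* wake_up() takes the wait-queue spinlock inside *dqp;
			 * this is the read KASAN flags once dqp has been freed. */
			wake_up(&dqp->q_pinwait);
	}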

Allocated by task 10734:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:312 [inline]
 __kasan_slab_alloc+0x89/0x90 mm/kasan/common.c:338
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3940 [inline]
 slab_alloc_node mm/slub.c:4000 [inline]
 kmem_cache_alloc_noprof+0x121/0x2f0 mm/slub.c:4007
 xfs_dquot_alloc+0x2a/0x670 fs/xfs/xfs_dquot.c:497
 xfs_qm_dqread+0x8e/0x5f0 fs/xfs/xfs_dquot.c:683
 xfs_qm_dqget+0x151/0x4a0 fs/xfs/xfs_dquot.c:901
 xfs_qm_scall_setqlim+0x172/0x1980 fs/xfs/xfs_qm_syscalls.c:300
 xfs_fs_set_dqblk+0x166/0x1e0 fs/xfs/xfs_quotaops.c:267
 quota_setquota+0x4c5/0x5f0 fs/quota/quota.c:310
 do_quotactl+0xb00/0x1380 fs/quota/quota.c:802
 __do_sys_quotactl fs/quota/quota.c:961 [inline]
 __se_sys_quotactl fs/quota/quota.c:917 [inline]
 __ia32_sys_quotactl+0x1bb/0x450 fs/quota/quota.c:917
 do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
 __do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
 do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
 entry_SYSENTER_compat_after_hwframe+0x84/0x8e

Freed by task 111:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 kasan_save_free_info+0x3b/0x60 mm/kasan/generic.c:579
 poison_slab_object+0xf7/0x160 mm/kasan/common.c:240
 __kasan_slab_free+0x32/0x50 mm/kasan/common.c:256
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2195 [inline]
 slab_free mm/slub.c:4436 [inline]
 kmem_cache_free+0x12f/0x3a0 mm/slub.c:4511
 xfs_qm_shrink_scan+0x25c/0x3f0 fs/xfs/xfs_qm.c:531
 do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
 shrink_slab+0x18a/0x1310 mm/shrinker.c:662
 shrink_one+0x493/0x7c0 mm/vmscan.c:4790
 shrink_many mm/vmscan.c:4851 [inline]
 lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
 shrink_node mm/vmscan.c:5910 [inline]
 kswapd_shrink_node mm/vmscan.c:6720 [inline]
 balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
 kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
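
Read together with the use stack above, the free stack suggests the dquot shrinker, running from kswapd, returned the xfs_dquot to its slab cache while the CIL commit path was still unpinning the dquot log item. A plausible interleaving, reconstructed only from the two stacks in this report (the exact timing, and why the shrinker could reclaim a still-pinned dquot, are assumptions):

	/*
	 *  kswapd (task 111)                  xfs-log worker (kworker/0:1H)
	 *  -----------------                  -----------------------------
	 *  shrink_slab()
	 *    xfs_qm_shrink_scan()
	 *      kmem_cache_free(dqp)           xlog_cil_committed()
	 *                                       xfs_trans_committed_bulk()
	 *                                         xfs_qm_dquot_logitem_unpin()
	 *                                           wake_up(&dqp->q_pinwait)
	 *                                             -> use-after-free read
	 */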

The buggy address belongs to the object at ffff88802840d380
 which belongs to the cache xfs_dquot of size 704
The buggy address is located 628 bytes inside of
 freed 704-byte region [ffff88802840d380, ffff88802840d640)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff88802840da00 pfn:0x2840c
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffefff(slab)
raw: 00fff00000000040 ffff88801c141540 dead000000000122 0000000000000000
raw: ffff88802840da00 000000008013000a 00000001ffffefff 0000000000000000
head: 00fff00000000040 ffff88801c141540 dead000000000122 0000000000000000
head: ffff88802840da00 000000008013000a 00000001ffffefff 0000000000000000
head: 00fff00000000002 ffffea0000a10301 ffffffffffffffff 0000000000000000
head: 0000000000000004 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 63, tgid 63 (kworker/u32:3), ts 49513474226, free_ts 48501183994
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x2d1/0x350 mm/page_alloc.c:1468
 prep_new_page mm/page_alloc.c:1476 [inline]
 get_page_from_freelist+0x136a/0x2df0 mm/page_alloc.c:3402
 __alloc_pages_noprof+0x22b/0x2460 mm/page_alloc.c:4660
 __alloc_pages_node_noprof include/linux/gfp.h:269 [inline]
 alloc_pages_node_noprof include/linux/gfp.h:296 [inline]
 alloc_slab_page+0x56/0x110 mm/slub.c:2264
 allocate_slab mm/slub.c:2427 [inline]
 new_slab+0x84/0x260 mm/slub.c:2480
 ___slab_alloc+0xdac/0x1870 mm/slub.c:3666
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3756
 __slab_alloc_node mm/slub.c:3809 [inline]
 slab_alloc_node mm/slub.c:3988 [inline]
 kmem_cache_alloc_noprof+0x2ae/0x2f0 mm/slub.c:4007
 xfs_dquot_alloc+0x2a/0x670 fs/xfs/xfs_dquot.c:497
 xfs_qm_dqread+0x8e/0x5f0 fs/xfs/xfs_dquot.c:683
 xfs_qm_dqget+0x151/0x4a0 fs/xfs/xfs_dquot.c:901
 xfs_qm_quotacheck_dqadjust+0xb3/0x550 fs/xfs/xfs_qm.c:1098
 xfs_qm_dqusage_adjust+0x57a/0x660 fs/xfs/xfs_qm.c:1224
 xfs_iwalk_ag_recs+0x4cf/0x850 fs/xfs/xfs_iwalk.c:213
 xfs_iwalk_run_callbacks+0x1f3/0x540 fs/xfs/xfs_iwalk.c:371
 xfs_iwalk_ag+0x823/0xa60 fs/xfs/xfs_iwalk.c:477
page last free pid 5231 tgid 5231 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1088 [inline]
 free_unref_page+0x64a/0xe40 mm/page_alloc.c:2565
 __put_partials+0x14c/0x170 mm/slub.c:2994
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x4e/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x192/0x1e0 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x69/0x90 mm/kasan/common.c:322
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3940 [inline]
 slab_alloc_node mm/slub.c:4000 [inline]
 kmalloc_trace_noprof+0x11e/0x310 mm/slub.c:4147
 kmalloc_noprof include/linux/slab.h:660 [inline]
 netdevice_queue_work drivers/infiniband/core/roce_gid_mgmt.c:642 [inline]
 netdevice_event+0x368/0xa10 drivers/infiniband/core/roce_gid_mgmt.c:801
 notifier_call_chain+0xb9/0x410 kernel/notifier.c:93
 call_netdevice_notifiers_info+0xbe/0x140 net/core/dev.c:1992
 call_netdevice_notifiers_extack net/core/dev.c:2030 [inline]
 call_netdevice_notifiers net/core/dev.c:2044 [inline]
 dev_set_mac_address+0x370/0x4a0 net/core/dev.c:9043
 dev_set_mac_address_user+0x30/0x50 net/core/dev.c:9057
 do_setlink+0x6c1/0x3ea0 net/core/rtnetlink.c:2855
 __rtnl_newlink+0xc3a/0x1960 net/core/rtnetlink.c:3696
 rtnl_newlink+0x67/0xa0 net/core/rtnetlink.c:3743
 rtnetlink_rcv_msg+0x3c7/0xe60 net/core/rtnetlink.c:6595
 netlink_rcv_skb+0x165/0x410 net/netlink/af_netlink.c:2564

Memory state around the buggy address:
 ffff88802840d480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88802840d500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88802840d580: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                             ^
 ffff88802840d600: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff88802840d680: fc fc fc fc fc fc fc fc fa fb fb fb fb fb fb fb
==================================================================

Crashes (2):
Time             | Kernel   | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                              | Manager              | Title
2024/06/03 06:46 | upstream | c3f38fa61af7 | c2e07261  | .config | console log | report |           |         | info    | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | KASAN: slab-use-after-free Read in xfs_qm_dquot_logitem_unpin
2024/05/27 18:48 | upstream | 2bfcfd584ff5 | c2e07261  | .config | console log | report |           |         | info    | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | KASAN: slab-use-after-free Read in xfs_qm_dquot_logitem_unpin