syzbot


KCSAN: data-race in mlock_new_folio / need_mlock_drain (3)

Status: moderation: reported on 2024/04/11 03:51
Subsystems: mm
Reported-by: syzbot+68fbd83d6f243168acea@syzkaller.appspotmail.com
First crash: 701d, last: 21d
✨ AI Jobs (3)
ID Workflow Result Correct Bug Created Started Finished Revision Error
d685a76d-ddb1-468a-b29a-1661395adc36 repro KCSAN: data-race in mlock_new_folio / need_mlock_drain (3) 2026/03/07 20:55 2026/03/07 20:55 2026/03/07 21:02 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
75f0ea02-c803-42fd-9aca-3c6b0be81146 assessment-kcsan Benign: ✅  Confident: ✅  KCSAN: data-race in mlock_new_folio / need_mlock_drain (3) 2026/02/24 20:45 2026/02/24 20:45 2026/02/24 20:56 305c0ec5cd886e2d13738e28e1b2df9b0ec20fc9
5a9bb8a9-f2a5-4691-b608-1abff3737591 assessment-kcsan 💥 KCSAN: data-race in mlock_new_folio / need_mlock_drain (3) 2026/01/10 02:54 2026/01/10 02:54 2026/01/10 03:12 7519916073b761ced56a7b15fdeeb4674e8dc125 Error 429 (RESOURCE_EXHAUSTED): quota exceeded for metric generativelanguage.googleapis.com/generate_requests_per_model_per_day (quotaId: GenerateRequestsPerDayPerProjectPerModel, limit: 0); see https://ai.google.dev/gemini-api/docs/rate-limits
Similar bugs (2)
Kernel Title Rank 🛈 Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream KCSAN: data-race in mlock_new_folio / need_mlock_drain mm 6 2 891d 915d 0/29 auto-obsoleted due to no activity on 2023/11/08 14:06
upstream KCSAN: data-race in mlock_new_folio / need_mlock_drain (2) mm 6 8 740d 825d 0/29 auto-obsoleted due to no activity on 2024/04/07 03:38

Sample crash report:
==================================================================
BUG: KCSAN: data-race in mlock_new_folio / need_mlock_drain

read-write to 0xffff888237d27210 of 1 bytes by task 8016 on cpu 1:
 folio_batch_add include/linux/pagevec.h:77 [inline]
 mlock_new_folio+0x143/0x240 mm/mlock.c:280
 folio_add_lru_vma+0x5f/0x70 mm/swap.c:528
 do_anonymous_page mm/memory.c:5312 [inline]
 do_pte_missing mm/memory.c:4475 [inline]
 handle_pte_fault mm/memory.c:6316 [inline]
 __handle_mm_fault mm/memory.c:6454 [inline]
 handle_mm_fault+0x2c3f/0x3020 mm/memory.c:6623
 faultin_page mm/gup.c:1126 [inline]
 __get_user_pages+0x1023/0x1ea0 mm/gup.c:1428
 populate_vma_page_range mm/gup.c:1860 [inline]
 __mm_populate+0x242/0x390 mm/gup.c:1963
 do_mlock+0x47c/0x520 mm/mlock.c:653
 __do_sys_mlock mm/mlock.c:661 [inline]
 __se_sys_mlock mm/mlock.c:659 [inline]
 __x64_sys_mlock+0x36/0x50 mm/mlock.c:659
 x64_sys_call+0x1ab3/0x3020 arch/x86/include/generated/asm/syscalls_64.h:150
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x12c/0x370 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff888237d27210 of 1 bytes by task 8005 on cpu 0:
 folio_batch_count include/linux/pagevec.h:56 [inline]
 need_mlock_drain+0x30/0x50 mm/mlock.c:235
 cpu_needs_drain mm/swap.c:786 [inline]
 __lru_add_drain_all+0x273/0x450 mm/swap.c:877
 lru_add_drain_all+0x10/0x20 mm/swap.c:893
 invalidate_bdev+0x47/0x70 block/bdev.c:101
 bdev_disk_changed+0xeb/0xd40 block/partitions/core.c:658
 loop_reread_partitions drivers/block/loop.c:448 [inline]
 loop_set_status+0x5db/0x6a0 drivers/block/loop.c:1277
 loop_set_status64 drivers/block/loop.c:1373 [inline]
 lo_ioctl+0x671/0x13a0 drivers/block/loop.c:1559
 blkdev_ioctl+0x387/0x460 block/ioctl.c:804
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xce/0x140 fs/ioctl.c:583
 __x64_sys_ioctl+0x43/0x50 fs/ioctl.c:583
 x64_sys_call+0x1563/0x3020 arch/x86/include/generated/asm/syscalls_64.h:17
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x12c/0x370 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x06 -> 0x0b

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 UID: 0 PID: 8005 Comm: syz.2.1151 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
==================================================================
 loop2: p1 < > p2 p4 < p5 >
loop2: partition table partially beyond EOD, truncated
loop2: p1 start 134217728 is beyond EOD, truncated
loop2: p2 size 591360 extends beyond EOD, truncated
loop2: p5 size 591360 extends beyond EOD, truncated

Crashes (88):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/20 22:27 upstream a95f71ad3e2e 6e7b5511 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2026/02/14 02:42 upstream cee73b1e840c 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2026/02/10 00:04 upstream 05f7e89ab973 4ab09a02 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2026/02/09 09:42 upstream 05f7e89ab973 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2026/02/06 22:06 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2026/01/08 01:08 upstream f0b9d8eb98df d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/12/22 19:55 upstream 9448598b22c5 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/12/17 08:58 upstream ea1013c15392 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/11/22 08:23 upstream 2eba5e05d9bc 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/11/20 23:49 upstream 8e621c9a3375 2cc4c24a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/11/10 04:55 upstream e9a6fb0bcdd7 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/10/28 10:09 upstream fd57572253bc fd2207e7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/10/15 22:44 upstream 1f4a222b0e33 19568248 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/10/12 10:59 upstream 67029a49db6c ff1712fe .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/10/04 21:15 upstream cbf33b8e0b36 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/09/22 08:19 upstream 2d5bd41a4505 67c37560 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/09/18 07:39 upstream d4b779985a6c e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/09/15 11:19 upstream f83ec76bf285 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/08/31 04:34 upstream c8bc81a52d5a 807a3b61 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/08/29 20:52 upstream fb679c832b64 3e1beec6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/08/22 08:00 upstream 3957a5720157 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/07/26 10:40 upstream 5f33ebd2018c fb8f743d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/07/08 09:56 upstream d7b8f8e20813 4f67c4ae .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/05/14 03:56 upstream 405e6c37c89e 7344edeb .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/05/10 00:08 upstream 9c69f8884904 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/05/04 03:47 upstream 2a239ffbebb5 b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/04/28 00:37 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/04/20 20:10 upstream 6fea5fabd332 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/04/18 19:52 upstream fc96b232f8e7 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/04/11 23:23 upstream e618ee89561b 0bd6db41 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/04/07 18:45 upstream 0af2f6be1b42 a2ada0e7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/03/24 12:21 upstream 586de92313fc 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/03/19 23:04 upstream a7f2e10ecd8f e20d7b13 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/02/27 08:00 upstream 5394eea10651 6a8fcbc4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/02/18 06:13 upstream 2408a807bfc3 429ea007 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/02/13 00:06 upstream 4dc1d1bec898 b27c2402 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/02/11 21:09 upstream 09fbf3d50205 f2baddf5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/02/11 01:31 upstream a64dcfb451e2 43f51a00 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/01/31 22:13 upstream 69b8923f5003 aa47157c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/01/18 15:04 upstream 595523945be0 f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2025/01/11 04:43 upstream 2144da25584e 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/12/30 06:16 upstream 4099a71718b0 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/12/23 05:41 upstream bcde95ce32b6 b4fbdbd4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/12/19 14:16 upstream eabcdba3ad40 1432fc84 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/12/10 12:35 upstream 7cb1b4663150 cfc402b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/12/08 19:26 upstream 7503345ac5f5 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/11/24 13:52 upstream 9f16d5e6f220 68da6d95 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/11/20 02:55 upstream 158f238aa69d 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/11/18 09:57 upstream adc218676eef cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/20 17:56 upstream 3c3ff7be9729 b88348e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/09 17:26 upstream 34afb82a3c67 79d68ada .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/05 15:08 upstream 661e504db04c 2a40360c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/04 14:15 upstream 795c58e4c7fc 3f2748a3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/04 06:26 upstream 8a9c6c40432e f76a75f3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/03 09:14 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/07/02 11:11 upstream 1dfe225e9af5 07f0a0a0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/24 14:28 upstream f2661062f16b edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/24 02:33 upstream 7c16f0a4ed1c edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/17 22:20 upstream 6226e74900d7 1f11cfd7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/12 13:59 upstream 2ef5971ff345 f815599d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/11 14:01 upstream 83a7eefedc9b b7d9eb04 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/10 06:56 upstream 83a7eefedc9b 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/06/02 03:51 upstream 89be4025b0db 3113787f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/05/24 08:04 upstream 2a8120d7b482 8f98448e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/04/28 12:26 upstream 2c8159388952 07b455f9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/04/27 02:26 upstream 5eb4573ea63d 059e9963 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/04/24 11:05 upstream 9d1ddab261f3 21339d7b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
2024/04/11 03:50 upstream 9875c0beb8ad 33b9e058 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-kcsan-gce KCSAN: data-race in mlock_new_folio / need_mlock_drain
* Struck through repros no longer work on HEAD.