==================================================================
BUG: KCSAN: data-race in folio_mapping / page_cache_delete_batch

write to 0xffffea0004e8e9d8 of 8 bytes by task 6289 on cpu 1:
 page_cache_delete_batch+0x1a7/0x470 mm/filemap.c:308
 delete_from_page_cache_batch+0x18f/0x2a0 mm/filemap.c:334
 truncate_inode_pages_range+0x3ca/0xae0 mm/truncate.c:370
 truncate_inode_pages+0x20/0x30 mm/truncate.c:452
 kill_bdev block/bdev.c:75 [inline]
 set_blocksize+0x24e/0x270 block/bdev.c:151
 sb_set_blocksize+0x2c/0xa0 block/bdev.c:160
 __ext4_fill_super fs/ext4/super.c:4753 [inline]
 ext4_fill_super+0x21d4/0x4f10 fs/ext4/super.c:5517
 get_tree_bdev+0x2b4/0x3b0 fs/super.c:1294
 ext4_get_tree+0x18/0x20 fs/ext4/super.c:5547
 vfs_get_tree+0x49/0x190 fs/super.c:1501
 do_new_mount+0x200/0x650 fs/namespace.c:3040
 path_mount+0x4b1/0xb60 fs/namespace.c:3370
 do_mount fs/namespace.c:3383 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount+0x281/0x2d0 fs/namespace.c:3568
 __x64_sys_mount+0x63/0x70 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

read to 0xffffea0004e8e9d8 of 8 bytes by task 1913 on cpu 0:
 folio_mapping+0x8e/0x110 mm/util.c:806
 folio_evictable mm/internal.h:136 [inline]
 lru_add_fn+0x12e/0x520 mm/swap.c:210
 folio_batch_move_lru+0x1e1/0x2a0 mm/swap.c:246
 lru_add_drain_cpu+0x73/0x250 mm/swap.c:626
 lru_add_and_bh_lrus_drain mm/swap.c:744 [inline]
 lru_add_drain_per_cpu+0x21/0x70 mm/swap.c:765
 process_one_work+0x3d3/0x720 kernel/workqueue.c:2289
 process_scheduled_works kernel/workqueue.c:2352 [inline]
 worker_thread+0x78f/0xa70 kernel/workqueue.c:2441
 kthread+0x1a9/0x1e0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30

value changed: 0xffff88810049ab88 -> 0x0000000000000000

Reported by Kernel Concurrency Sanitizer on:
CPU: 0 PID: 1913 Comm: kworker/0:4 Not tainted 5.19.0-syzkaller-12689-g3bc1bc0b59d0-dirty #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Workqueue: mm_percpu_wq lru_add_drain_per_cpu
==================================================================
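
The report shows an unmarked read of folio->mapping in folio_mapping() (mm/util.c:806, via lru_add_fn() during an LRU drain) racing with page_cache_delete_batch() (mm/filemap.c:308) clearing that field while truncate_inode_pages() tears down the block device's page cache; the "value changed" line is the mapping pointer going to NULL. KCSAN flags any such concurrent plain accesses; when a race like this is judged intentional or tolerable, kernel code normally marks the accesses with READ_ONCE()/WRITE_ONCE() or a data_race() annotation so the load/store is done once and the tool stays quiet. Below is a minimal userspace analogue of the pattern, not the actual kernel fix: the names cache_entry, writer and reader are hypothetical, and C11 relaxed atomics stand in for the kernel's READ_ONCE()/WRITE_ONCE().

/*
 * Sketch of the race shape in the report: one thread detaches an object
 * from its "mapping" (as page_cache_delete_batch() does for a folio being
 * truncated) while another thread reads the pointer and branches on it
 * (as folio_mapping() does for folio_evictable()). Marking both accesses
 * means the reader works on a single consistent snapshot of the pointer.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct cache_entry {
	/* Plays the role of folio->mapping: may be cleared concurrently. */
	_Atomic(void *) mapping;
};

static struct cache_entry entry;
static int backing_object;	/* stands in for the address_space */

/* Analogue of page_cache_delete_batch(): detach the entry from its mapping. */
static void *writer(void *arg)
{
	(void)arg;
	atomic_store_explicit(&entry.mapping, NULL, memory_order_relaxed);
	return NULL;
}

/* Analogue of folio_mapping(): read the pointer once, then act on the copy. */
static void *reader(void *arg)
{
	(void)arg;
	void *m = atomic_load_explicit(&entry.mapping, memory_order_relaxed);

	if (m)
		printf("still attached to %p\n", m);
	else
		printf("already truncated\n");
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	atomic_store_explicit(&entry.mapping, &backing_object,
			      memory_order_relaxed);

	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Build with "cc -pthread race.c" to try it. The key point the sketch illustrates is that the reader takes one marked snapshot of the pointer and never re-reads it, so a concurrent clear can only make the result stale, not torn; whether the kernel should annotate this particular read in folio_mapping() or serialize against truncation is a judgment left to the maintainers.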