syzbot


ID Workflow Result Correct Bug Created Started Finished Revision Error
8bd3885e-3570-437d-9bf2-7d767af4070f assessment-kcsan Benign: ✅  Confident: ✅  KCSAN: data-race in atime_needs_update / inode_update_timestamps 2026/02/24 21:58 2026/02/24 21:58 2026/02/24 22:09 305c0ec5cd886e2d13738e28e1b2df9b0ec20fc9

Explanation:
The data race occurs on the `i_mtime_nsec` field of `struct inode` between a write operation (updating the modification time during a write) and a read operation (checking if the access time needs to be updated during a read).

### Analysis:
1.  **Variable and Access**: The field being accessed is `i_mtime_nsec`, a 32-bit integer representing the nanoseconds part of the file's modification time. The write is performed by `inode_set_mtime_to_ts` (called via `inode_update_timestamps`), and the read is performed by `inode_get_mtime` (called via `relatime_need_update`).
2.  **Atomicity**: On the 64-bit system where this was reported (x86_64), 32-bit aligned memory accesses are atomic. The `i_mtime_nsec` field is 4-byte aligned within `struct inode`. Therefore, the reader will always see a valid value (either the old or the new nanosecond count), and no "torn" writes are possible.
3.  **Consistency**: Although the `i_mtime_sec` (seconds) and `i_mtime_nsec` (nanoseconds) fields are updated and read separately, any resulting inconsistency (e.g., mixing the old seconds value with the new nanoseconds value) would result in a timestamp that is off by at most one second. For file metadata like `mtime`, such transient inconsistencies during concurrent updates are generally considered acceptable in the Linux kernel.
4.  **Impact**: The read access is part of the `relatime` heuristic (`relatime_need_update`), which determines if the `atime` (access time) should be updated. If the `mtime` read is slightly inconsistent, the worst consequence is a redundant `atime` update or a missed one. Neither of these leads to memory corruption, crashes, or filesystem state inconsistency. `atime` updates are "best-effort" and are not strictly synchronized with reads to avoid performance degradation.
5.  **Context**: The writer holds the `inode_lock`, ensuring synchronization against other writers. However, read operations (like the one in the report) do not take this lock to maintain high performance. This is a standard pattern in the VFS layer.
6.  **Comparison with `ctime`**: Recent "multigrain timestamp" improvements added explicit synchronization (using `try_cmpxchg`) for the `ctime` field because it is used for the change attribute, which requires stricter monotonicity. The fact that `mtime` was left with plain accesses confirms that it does not require the same level of strictness.

### Conclusion:
The data race is **benign**. It involves a simple timestamp update where a slight inconsistency during a concurrent read is harmless and expected for performance reasons.

Crash report:
==================================================================
BUG: KCSAN: data-race in atime_needs_update / inode_update_timestamps

write to 0xffff88811bb9954c of 4 bytes by task 4332 on cpu 0:
 inode_set_mtime_to_ts include/linux/fs.h:1658 [inline]
 inode_update_timestamps+0x147/0x270 fs/inode.c:2116
 generic_update_time fs/inode.c:2151 [inline]
 inode_update_time fs/inode.c:2171 [inline]
 file_update_time_flags+0x269/0x300 fs/inode.c:2398
 file_update_time+0x17/0x20 fs/inode.c:2419
 shmem_file_write_iter+0x9c/0xf0 mm/shmem.c:3515
 iter_file_splice_write+0x6bc/0xa80 fs/splice.c:738
 do_splice_from fs/splice.c:938 [inline]
 direct_splice_actor+0x156/0x2a0 fs/splice.c:1161
 splice_direct_to_actor+0x311/0x670 fs/splice.c:1105
 do_splice_direct_actor fs/splice.c:1204 [inline]
 do_splice_direct+0x119/0x1a0 fs/splice.c:1230
 do_sendfile+0x382/0x650 fs/read_write.c:1370
 __do_sys_sendfile64 fs/read_write.c:1431 [inline]
 __se_sys_sendfile64 fs/read_write.c:1417 [inline]
 __x64_sys_sendfile64+0x105/0x150 fs/read_write.c:1417
 x64_sys_call+0x2db1/0x3000 arch/x86/include/generated/asm/syscalls_64.h:41
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff88811bb9954c of 4 bytes by task 4330 on cpu 1:
 inode_get_mtime_nsec include/linux/fs.h:1644 [inline]
 inode_get_mtime include/linux/fs.h:1650 [inline]
 relatime_need_update fs/inode.c:2065 [inline]
 atime_needs_update+0x2a8/0x3e0 fs/inode.c:2211
 touch_atime+0x4a/0x340 fs/inode.c:2226
 file_accessed include/linux/fs.h:2256 [inline]
 shmem_file_read_iter+0x477/0x540 mm/shmem.c:3498
 copy_splice_read+0x471/0x6c0 fs/splice.c:363
 do_splice_read fs/splice.c:981 [inline]
 splice_direct_to_actor+0x28f/0x670 fs/splice.c:1086
 do_splice_direct_actor fs/splice.c:1204 [inline]
 do_splice_direct+0x119/0x1a0 fs/splice.c:1230
 do_sendfile+0x382/0x650 fs/read_write.c:1370
 __do_sys_sendfile64 fs/read_write.c:1431 [inline]
 __se_sys_sendfile64 fs/read_write.c:1417 [inline]
 __x64_sys_sendfile64+0x105/0x150 fs/read_write.c:1417
 x64_sys_call+0x2db1/0x3000 arch/x86/include/generated/asm/syscalls_64.h:41
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x038f5599 -> 0x0427ec19

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 4330 Comm: syz.2.259 Not tainted syzkaller #0 PREEMPT(voluntary) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
==================================================================

Trajectory:
Seq Timestamp Type Name Duration
0/0 2026/02/24 21:58 flow assessment-kcsan
11m Results:
map[Benign:true Confident:true Explanation:The data race occurs on the `i_mtime_nsec` field of `struct inode` between a write operation (updating the modification time during a write) and a read operation (checking if the access time needs to be updated during a read).

### Analysis:
1.  **Variable and Access**: The field being accessed is `i_mtime_nsec`, a 32-bit integer representing the nanoseconds part of the file's modification time. The write is performed by `inode_set_mtime_to_ts` (called via `inode_update_timestamps`), and the read is performed by `inode_get_mtime` (called via `relatime_need_update`).
2.  **Atomicity**: On the 64-bit system where this was reported (x86_64), 32-bit aligned memory accesses are atomic. The `i_mtime_nsec` field is 4-byte aligned within `struct inode`. Therefore, the reader will always see a valid value (either the old or the new nanosecond count), and no "torn" writes are possible.
3.  **Consistency**: Although the `i_mtime_sec` (seconds) and `i_mtime_nsec` (nanoseconds) fields are updated and read separately, any resulting inconsistency (e.g., mixing the old seconds value with the new nanoseconds value) would result in a timestamp that is off by at most one second. For file metadata like `mtime`, such transient inconsistencies during concurrent updates are generally considered acceptable in the Linux kernel.
4.  **Impact**: The read access is part of the `relatime` heuristic (`relatime_need_update`), which determines if the `atime` (access time) should be updated. If the `mtime` read is slightly inconsistent, the worst consequence is a redundant `atime` update or a missed one. Neither of these leads to memory corruption, crashes, or filesystem state inconsistency. `atime` updates are "best-effort" and are not strictly synchronized with reads to avoid performance degradation.
5.  **Context**: The writer holds the `inode_lock`, ensuring synchronization against other writers. However, read operations (like the one in the report) do not take this lock to maintain high performance. This is a standard pattern in the VFS layer.
6.  **Comparison with `ctime`**: Recent "multigrain timestamp" improvements added explicit synchronization (using `try_cmpxchg`) for the `ctime` field because it is used for the change attribute, which requires stricter monotonicity. The fact that `mtime` was left with plain accesses confirms that it does not require the same level of strictness.

### Conclusion:
The data race is **benign**. It involves a simple timestamp update where a slight inconsistency during a concurrent read is harmless and expected for performance reasons.]

1/1 2026/02/24 21:58 action kernel-checkouter
0m Results:
map[KernelSrc:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/src/b8b7c16a7783961c8f919b702c956bd1fa74b95c]

2/1 2026/02/24 21:58 action kernel-builder
3m Results:
map[KernelObj:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/build/3031e2f988d719fbbd0b90fd583d04f62be8e366]

3/1 2026/02/24 22:02 action codesearch-prepare
3m Results:
map[Index:codesearch-index]

4/1 2026/02/24 22:05 agent expert
3m Model:
gemini-3-flash-preview

Results:
map[Benign:true Confident:true]

Instruction:
You are an experienced Linux kernel developer tasked with determining if the given kernel
data race is benign or not. The data race report is from KCSAN tool.
It contains 2 stack traces of the memory accesses that constitute a data race.

A "benign" data race is one on a simple int/bool variable or similar field,
where the accesses are not supposed to be protected by any mutual exclusion primitive.
Common examples of such "benign" data races are accesses to various flag fields,
statistics counters, and the like. A "benign" data race does not lead to memory corruption or a crash
with a conservative compiler that compiles memory accesses to primitive types
effectively as atomic.

A non-benign (or "harmful") data race can lead to corruption or a crash even with
a conservative compiler that compiles memory accesses to primitive types
effectively as atomic. A common example of a "harmful" data race is a race on
a complex container (list/hashmap/etc.), where accesses are supposed to be protected
by a mutual exclusion primitive.

Also consider races that happen at the same time with the given one.
If there is no synchronization in between, other memory accesses in the involved threads
race with each other if they access the same memory. For example, if both threads execute:

	some_struct->have_elements = true;
	list_add(new_node, &some_struct->list_head);

the race on some_struct->have_elements may appear benign; however, it also implies there
is a race on some_struct->list_head, which is not benign, since the list is not thread-safe.

Take into account that on 32-bit systems 64-bit memory accesses may be split into two accesses,
and thus even with a conservative compiler may not be fully atomic. However, such races may
still be benign depending on how writes are done, and how read data is used.

In the final reply explain why you think the given data race is benign or is harmful.

Use the provided tools to confirm any assumptions, variables/fields being accessed, etc.
In particular, don't make assumptions about the kernel source code,
use codesearch tools to read the actual source code.

Prefer calling several tools at the same time to save round-trips.


Use set-results tool to provide results of the analysis.
It must be called exactly once before the final reply.
Ignore results of this tool.

Prompt:
The data race report is:

==================================================================
BUG: KCSAN: data-race in atime_needs_update / inode_update_timestamps

write to 0xffff88811bb9954c of 4 bytes by task 4332 on cpu 0:
 inode_set_mtime_to_ts include/linux/fs.h:1658 [inline]
 inode_update_timestamps+0x147/0x270 fs/inode.c:2116
 generic_update_time fs/inode.c:2151 [inline]
 inode_update_time fs/inode.c:2171 [inline]
 file_update_time_flags+0x269/0x300 fs/inode.c:2398
 file_update_time+0x17/0x20 fs/inode.c:2419
 shmem_file_write_iter+0x9c/0xf0 mm/shmem.c:3515
 iter_file_splice_write+0x6bc/0xa80 fs/splice.c:738
 do_splice_from fs/splice.c:938 [inline]
 direct_splice_actor+0x156/0x2a0 fs/splice.c:1161
 splice_direct_to_actor+0x311/0x670 fs/splice.c:1105
 do_splice_direct_actor fs/splice.c:1204 [inline]
 do_splice_direct+0x119/0x1a0 fs/splice.c:1230
 do_sendfile+0x382/0x650 fs/read_write.c:1370
 __do_sys_sendfile64 fs/read_write.c:1431 [inline]
 __se_sys_sendfile64 fs/read_write.c:1417 [inline]
 __x64_sys_sendfile64+0x105/0x150 fs/read_write.c:1417
 x64_sys_call+0x2db1/0x3000 arch/x86/include/generated/asm/syscalls_64.h:41
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

read to 0xffff88811bb9954c of 4 bytes by task 4330 on cpu 1:
 inode_get_mtime_nsec include/linux/fs.h:1644 [inline]
 inode_get_mtime include/linux/fs.h:1650 [inline]
 relatime_need_update fs/inode.c:2065 [inline]
 atime_needs_update+0x2a8/0x3e0 fs/inode.c:2211
 touch_atime+0x4a/0x340 fs/inode.c:2226
 file_accessed include/linux/fs.h:2256 [inline]
 shmem_file_read_iter+0x477/0x540 mm/shmem.c:3498
 copy_splice_read+0x471/0x6c0 fs/splice.c:363
 do_splice_read fs/splice.c:981 [inline]
 splice_direct_to_actor+0x28f/0x670 fs/splice.c:1086
 do_splice_direct_actor fs/splice.c:1204 [inline]
 do_splice_direct+0x119/0x1a0 fs/splice.c:1230
 do_sendfile+0x382/0x650 fs/read_write.c:1370
 __do_sys_sendfile64 fs/read_write.c:1431 [inline]
 __se_sys_sendfile64 fs/read_write.c:1417 [inline]
 __x64_sys_sendfile64+0x105/0x150 fs/read_write.c:1417
 x64_sys_call+0x2db1/0x3000 arch/x86/include/generated/asm/syscalls_64.h:41
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc0/0x2a0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

value changed: 0x038f5599 -> 0x0427ec19

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 UID: 0 PID: 4330 Comm: syz.2.259 Not tainted syzkaller #0 PREEMPT(voluntary) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
==================================================================


Reply:
The data race occurs on the `i_mtime_nsec` field of `struct inode` between a write operation (updating the modification time during a write) and a read operation (checking if the access time needs to be updated during a read).

### Analysis:
1.  **Variable and Access**: The field being accessed is `i_mtime_nsec`, a 32-bit integer representing the nanoseconds part of the file's modification time. The write is performed by `inode_set_mtime_to_ts` (called via `inode_update_timestamps`), and the read is performed by `inode_get_mtime` (called via `relatime_need_update`).
2.  **Atomicity**: On the 64-bit system where this was reported (x86_64), 32-bit aligned memory accesses are atomic. The `i_mtime_nsec` field is 4-byte aligned within `struct inode`. Therefore, the reader will always see a valid value (either the old or the new nanosecond count), and no "torn" writes are possible.
3.  **Consistency**: Although the `i_mtime_sec` (seconds) and `i_mtime_nsec` (nanoseconds) fields are updated and read separately, any resulting inconsistency (e.g., mixing the old seconds value with the new nanoseconds value) would result in a timestamp that is off by at most one second. For file metadata like `mtime`, such transient inconsistencies during concurrent updates are generally considered acceptable in the Linux kernel.
4.  **Impact**: The read access is part of the `relatime` heuristic (`relatime_need_update`), which determines if the `atime` (access time) should be updated. If the `mtime` read is slightly inconsistent, the worst consequence is a redundant `atime` update or a missed one. Neither of these leads to memory corruption, crashes, or filesystem state inconsistency. `atime` updates are "best-effort" and are not strictly synchronized with reads to avoid performance degradation.
5.  **Context**: The writer holds the `inode_lock`, ensuring synchronization against other writers. However, read operations (like the one in the report) do not take this lock to maintain high performance. This is a standard pattern in the VFS layer.
6.  **Comparison with `ctime`**: Recent "multigrain timestamp" improvements added explicit synchronization (using `try_cmpxchg`) for the `ctime` field because it is used for the change attribute, which requires stricter monotonicity. The fact that `mtime` was left with plain accesses confirms that it does not require the same level of strictness.

### Conclusion:
The data race is **benign**. It involves a simple timestamp update where a slight inconsistency during a concurrent read is harmless and expected for performance reasons.

5/2 2026/02/24 22:05 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 4380
							output: 129
							thoughts: 413
						

Thoughts:
**Data Race Analysis: mtime in Inode**

Okay, so this report is showing a nasty little data race. It looks like `inode_update_timestamps`, specifically the part that's dealing with updating the modification time (mtime), is colliding with the code checking access time (atime), likely in `atime_needs_update`. The key is that they're both hitting the mtime field of an inode structure, and the target is the nanoseconds component based on the reference to `inode_get_mtime_nsec`. I need to figure out how this is happening.

The report highlights two tasks: 4332 on CPU 0 is writing, and 4330 on CPU 1 is reading the same memory location, `0xffff88811bb9954c`. That address is definitely the smoking gun. The report also gives me the call stacks, which is incredibly helpful.

On the write side, the trace starts with `shmem_file_write_iter`, which is expected when dealing with a file, and then steps up to the culprit `inode_update_timestamps` and deeper into `inode_set_mtime_to_ts`. That shows me the update is happening during a write, which is common.

On the read side, the stack is `shmem_file_read_iter` and leads up to `atime_needs_update`. This uses `relatime_need_update` (also inline), which then calls `inode_get_mtime`. We can be sure that this branch is checking whether the atime needs updating.

Now, I'm going to have to dive into the code. First things first, I'll need to confirm how the mtime is actually stored within the `struct inode` and see what the code around the listed lines in `fs/inode.c` is doing. I need to understand how the mtime field is being accessed and updated on both sides. I'll need to check the definition of `inode_set_mtime_to_ts`, `inode_get_mtime_nsec` and look at the surrounding code to determine how this concurrent access is occurring, and where the race condition is exposed. I'm especially interested in whether there's any locking or atomic operations missing here, or if the compiler is optimizing in a way that creates a race. I have a feeling the nsec part of the timestamp might be causing the problem due to some atomicity issue or potential for tearing, but I need to confirm that.



6/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:fs/inode.c IncludeLines:true Name:inode_update_timestamps]

Results:
map[SourceCode:2087:	/**
2088:	 * inode_update_timestamps - update the timestamps on the inode
2089:	 * @inode: inode to be updated
2090:	 * @flags: S_* flags that needed to be updated
2091:	 *
2092:	 * The update_time function is called when an inode's timestamps need to be
2093:	 * updated for a read or write operation. This function handles updating the
2094:	 * actual timestamps. It's up to the caller to ensure that the inode is marked
2095:	 * dirty appropriately.
2096:	 *
2097:	 * In the case where any of S_MTIME, S_CTIME, or S_VERSION need to be updated,
2098:	 * attempt to update all three of them. S_ATIME updates can be handled
2099:	 * independently of the rest.
2100:	 *
2101:	 * Returns a set of S_* flags indicating which values changed.
2102:	 */
2103:	int inode_update_timestamps(struct inode *inode, int flags)
2104:	{
2105:		int updated = 0;
2106:		struct timespec64 now;
2107:	
2108:		if (flags & (S_MTIME|S_CTIME|S_VERSION)) {
2109:			struct timespec64 ctime = inode_get_ctime(inode);
2110:			struct timespec64 mtime = inode_get_mtime(inode);
2111:	
2112:			now = inode_set_ctime_current(inode);
2113:			if (!timespec64_equal(&now, &ctime))
2114:				updated |= S_CTIME;
2115:			if (!timespec64_equal(&now, &mtime)) {
2116:				inode_set_mtime_to_ts(inode, now);
2117:				updated |= S_MTIME;
2118:			}
2119:			if (IS_I_VERSION(inode) && inode_maybe_inc_iversion(inode, updated))
2120:				updated |= S_VERSION;
2121:		} else {
2122:			now = current_time(inode);
2123:		}
2124:	
2125:		if (flags & S_ATIME) {
2126:			struct timespec64 atime = inode_get_atime(inode);
2127:	
2128:			if (!timespec64_equal(&now, &atime)) {
2129:				inode_set_atime_to_ts(inode, now);
2130:				updated |= S_ATIME;
2131:			}
2132:		}
2133:		return updated;
2134:	}
 SourceFile:fs/inode.c]

7/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:include/linux/fs.h IncludeLines:true Name:inode_set_mtime_to_ts]

Results:
map[SourceCode:1654:	static inline struct timespec64 inode_set_mtime_to_ts(struct inode *inode,
1655:							      struct timespec64 ts)
1656:	{
1657:		inode->i_mtime_sec = ts.tv_sec;
1658:		inode->i_mtime_nsec = ts.tv_nsec;
1659:		return ts;
1660:	}
 SourceFile:include/linux/fs.h]

8/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:include/linux/fs.h IncludeLines:true Name:inode_get_mtime_nsec]

Results:
map[SourceCode:1642:	static inline long inode_get_mtime_nsec(const struct inode *inode)
1643:	{
1644:		return inode->i_mtime_nsec;
1645:	}
 SourceFile:include/linux/fs.h]

9/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 6477
							output: 41
							thoughts: 148
						

Thoughts:
**Race Condition Detected: Nanosecond Timestamp Updates**

Okay, so I've been digging into this timestamp behavior, and I've got a potential race condition forming in my mind. The crux of the issue revolves around `inode->i_mtime_nsec`. I can see a conflict between a write operation, labeled Task 4332, and a read operation, Task 4330.

On the write side, we've got this sequence: `inode_update_timestamps` calls `inode_set_mtime_to_ts`, which then directly assigns the nanosecond component, `ts.tv_nsec`, to `inode->i_mtime_nsec`. Clean and simple.

However, the read path is a little more complex. Task 4330, the read, starts with `atime_needs_update` and progresses through `relatime_need_update`, then calls `inode_get_mtime`, finally ending with a call to `inode_get_mtime_nsec`, before returning `inode->i_mtime_nsec`.

The concern is around `relatime_need_update` and if the checks within this routine allow a read to get an inconsistent value if a write happens at the wrong time. We've got a write potentially updating `i_mtime_nsec` concurrently with a read attempting to retrieve it. I need to take a closer look at `relatime_need_update` in `fs/inode.c` to understand the exact conditions that could lead to this race. This is where the vulnerability most likely lies. I will look to see if this routine can be interrupted between checking and returning the nanosecond value.



10/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:fs/inode.c IncludeLines:true Name:relatime_need_update]

Results:
map[SourceCode:2049:	/*
2050:	 * With relative atime, only update atime if the previous atime is
2051:	 * earlier than or equal to either the ctime or mtime,
2052:	 * or if at least a day has passed since the last atime update.
2053:	 */
2054:	static bool relatime_need_update(struct vfsmount *mnt, struct inode *inode,
2055:				     struct timespec64 now)
2056:	{
2057:		struct timespec64 atime, mtime, ctime;
2058:	
2059:		if (!(mnt->mnt_flags & MNT_RELATIME))
2060:			return true;
2061:		/*
2062:		 * Is mtime younger than or equal to atime? If yes, update atime:
2063:		 */
2064:		atime = inode_get_atime(inode);
2065:		mtime = inode_get_mtime(inode);
2066:		if (timespec64_compare(&mtime, &atime) >= 0)
2067:			return true;
2068:		/*
2069:		 * Is ctime younger than or equal to atime? If yes, update atime:
2070:		 */
2071:		ctime = inode_get_ctime(inode);
2072:		if (timespec64_compare(&ctime, &atime) >= 0)
2073:			return true;
2074:	
2075:		/*
2076:		 * Is the previous atime value older than a day? If yes,
2077:		 * update atime:
2078:		 */
2079:		if ((long)(now.tv_sec - atime.tv_sec) >= 24*60*60)
2080:			return true;
2081:		/*
2082:		 * Good, we can skip the atime update:
2083:		 */
2084:		return false;
2085:	}
 SourceFile:fs/inode.c]

11/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 7615
							output: 42
							thoughts: 72
						

Thoughts:
Here's my summary, as I see it:

**Digging into that `relatime_need_update` Check**

Okay, so I'm tracing through this code, and the race condition I'm concerned about seems to be brewing right here: `mtime = inode_get_mtime(inode);` within the `relatime_need_update` function. It seems like the core of the issue hinges on how that `inode_get_mtime` function retrieves the modification time. Naturally, I immediately want to understand *how* it's getting that mtime.

Let me just check how the `inode_get_mtime` function is defined. It's time to follow the trail. I'm expecting it probably reads the fields `i_mtime_sec` and `i_mtime_nsec` from the inode, I'm thinking about the implications of how those get updated, and the associated locking mechanisms. I need to understand all the potential weak points.



12/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:include/linux/fs.h IncludeLines:true Name:inode_get_mtime]

Results:
map[SourceCode:1647:	static inline struct timespec64 inode_get_mtime(const struct inode *inode)
1648:	{
1649:		struct timespec64 ts = { .tv_sec  = inode_get_mtime_sec(inode),
1650:					 .tv_nsec = inode_get_mtime_nsec(inode) };
1651:		return ts;
1652:	}
 SourceFile:include/linux/fs.h]

13/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 8083
							output: 32
							thoughts: 469
						

Thoughts:
**Analyzing a Potential Timestamp Data Race**

Okay, so I'm looking at this code in `inode_set_mtime_to_ts` where they're writing the modification time, `mtime`, and then in `inode_get_mtime` where it's being read. Classic potential for a data race right here, on this timestamp. Specifically, the concern is that if `relatime_need_update` reads a partially updated `mtime`, it could make the wrong call about whether to update the access time, `atime`.

But then again, `atime` updates are mostly a performance optimization. A slightly inaccurate `atime` update, either too frequent or missed, is typically benign unless it cascades into something more serious. So, a minor blip in `atime` might not be the end of the world.

Hold on a second...I'm catching something here. `i_mtime_sec` is `time64_t`, which is usually 64-bit, right? And `i_mtime_nsec` is a `long`. Now, on a 64-bit system, `long` is 64-bits.

The report mentions a "write to... of 4 bytes" and "read to... of 4 bytes." That's not right. The way this is described implies `i_mtime_nsec` is only 4 bytes, so it's either a 32-bit system or they're forcing `i_mtime_nsec` to be 32-bit. But, the report mentions `x64_sys_call`, so that tells me it's a 64-bit system. So, why are we seeing 4-byte accesses? Something isn't adding up. I need to look at the `struct inode` definition and figure out what's going on here.



14/2 2026/02/24 22:06 tool codesearch-struct-layout
0m Args:
map[ContextFile:include/linux/fs.h Name:inode]

Results:
map[Fields:[map[Name:i_mode OffsetBits:0 SizeBits:16] map[Name:i_opflags OffsetBits:16 SizeBits:16] map[Name:i_flags OffsetBits:32 SizeBits:32] map[Name:i_acl OffsetBits:64 SizeBits:64] map[Name:i_default_acl OffsetBits:128 SizeBits:64] map[Name:i_uid OffsetBits:192 SizeBits:32] map[Name:i_gid OffsetBits:224 SizeBits:32] map[Name:i_op OffsetBits:256 SizeBits:64] map[Name:i_sb OffsetBits:320 SizeBits:64] map[Name:i_mapping OffsetBits:384 SizeBits:64] map[Name:i_security OffsetBits:448 SizeBits:64] map[Name:i_ino OffsetBits:512 SizeBits:64] map[Name: OffsetBits:576 SizeBits:32] map[Name:i_rdev OffsetBits:608 SizeBits:32] map[Name:i_size OffsetBits:640 SizeBits:64] map[Name:i_atime_sec OffsetBits:704 SizeBits:64] map[Name:i_mtime_sec OffsetBits:768 SizeBits:64] map[Name:i_ctime_sec OffsetBits:832 SizeBits:64] map[Name:i_atime_nsec OffsetBits:896 SizeBits:32] map[Name:i_mtime_nsec OffsetBits:928 SizeBits:32] map[Name:i_ctime_nsec OffsetBits:960 SizeBits:32] map[Name:i_generation OffsetBits:992 SizeBits:32] map[Name:i_lock OffsetBits:1024 SizeBits:32] map[Name:i_bytes OffsetBits:1056 SizeBits:16] map[Name:i_blkbits OffsetBits:1072 SizeBits:8] map[Name:i_write_hint OffsetBits:1080 SizeBits:8] map[Name:i_blocks OffsetBits:1088 SizeBits:64] map[Name:i_state OffsetBits:1152 SizeBits:32] map[Name:i_rwsem OffsetBits:1216 SizeBits:320] map[Name:dirtied_when OffsetBits:1536 SizeBits:64] map[Name:dirtied_time_when OffsetBits:1600 SizeBits:64] map[Name:i_hash OffsetBits:1664 SizeBits:128] map[Name:i_io_list OffsetBits:1792 SizeBits:128] map[Name:i_wb OffsetBits:1920 SizeBits:64] map[Name:i_wb_frn_winner OffsetBits:1984 SizeBits:32] map[Name:i_wb_frn_avg_time OffsetBits:2016 SizeBits:16] map[Name:i_wb_frn_history OffsetBits:2032 SizeBits:16] map[Name:i_lru OffsetBits:2048 SizeBits:128] map[Name:i_sb_list OffsetBits:2176 SizeBits:128] map[Name:i_wb_list OffsetBits:2304 SizeBits:128] map[Name: OffsetBits:2432 SizeBits:128] map[Name:i_version OffsetBits:2560 SizeBits:64] 
map[Name:i_sequence OffsetBits:2624 SizeBits:64] map[Name:i_count OffsetBits:2688 SizeBits:32] map[Name:i_dio_count OffsetBits:2720 SizeBits:32] map[Name:i_writecount OffsetBits:2752 SizeBits:32] map[Name:i_readcount OffsetBits:2784 SizeBits:32] map[Name: OffsetBits:2816 SizeBits:64] map[Name:i_flctx OffsetBits:2880 SizeBits:64] map[Name:i_data OffsetBits:2944 SizeBits:1536] map[Name: OffsetBits:4480 SizeBits:128] map[Name: OffsetBits:4608 SizeBits:64] map[Name:i_fsnotify_mask OffsetBits:4672 SizeBits:32] map[Name:i_fsnotify_marks OffsetBits:4736 SizeBits:64] map[Name:i_private OffsetBits:4800 SizeBits:64]]]

15/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 10245
							output: 43
							thoughts: 1143
						

Thoughts:
**Analyzing a Potential Race Condition in Timestamp Updates**

Okay, so I'm looking at how `i_mtime_nsec` is updated, and I see it's a 32-bit field. The structure definition clearly shows `i_mtime_nsec` following `i_mtime_sec` in memory. The issue is there could be a race condition when reading and writing `i_mtime_sec` and `i_mtime_nsec` independently, *without* any locking.

Specifically, in `inode_set_mtime_to_ts`, the timestamp components are written in two separate operations: `inode->i_mtime_sec = ts.tv_sec;` followed by `inode->i_mtime_nsec = ts.tv_nsec;`. Similarly, `inode_get_mtime` reads the components individually: `.tv_sec  = inode_get_mtime_sec(inode)` and `.tv_nsec = inode_get_mtime_nsec(inode)`.

The concern is if a reader happens to read `i_mtime_sec` *before* the writer updates it, but then reads `i_mtime_nsec` *after* the writer has modified it. This could result in a mixed timestamp where the seconds are old, but the nanoseconds are new.  Since timestamps are typically increasing, this might result in a slightly "in the future" or "in the past" timestamp, but by at most a second.

Let's spell out the interleavings:
*   Writer: `i_mtime_sec = new_sec`, `i_mtime_nsec = new_nsec`
*   Reader: A = `read i_mtime_sec`, B = `read i_mtime_nsec`
*   Possible interleavings:
    *   A, B, 1, 2: Old sec, old nsec (OK).
    *   1, 2, A, B: New sec, new nsec (OK).
    *   1, A, B, 2: New sec, old nsec (Inconsistent).
    *   A, 1, 2, B: Old sec, new nsec (Inconsistent).

However, an inconsistent timestamp from those last two cases is not *invalid*, as a (new_sec, old_nsec) would be a valid timestamp in the past relative to (new_sec, new_nsec).

Now, considering where this could be problematic, I'm checking `relatime_need_update`.  Here, `mtime` (the read timestamp) is compared to `atime`.  A slight timestamp skew might cause `relatime_need_update` to return `true` when it should be `false`, or vice versa.  Given that `atime` updates are already lazy due to `relatime`, it looks like this is just a minor performance concern, causing a slightly more or less frequent update of the metadata; it shouldn't cause a *correctness* issue for the filesystem, as a more correct `atime` value will get written later.

So, the write is done in `inode_update_timestamps` and the read is done in `atime_needs_update`.

However, `inode_update_timestamps` *is* often called while holding `inode->i_rwsem` or something equivalent. The write path goes through `shmem_file_write_iter` -> `file_update_time` -> `inode_update_time` -> `generic_update_time` -> `inode_update_timestamps`. Let me check `shmem_file_write_iter` itself for synchronization; given that call stack, the write side of this race is likely serialized by a lock.

Therefore, the likelihood of an issue is extremely small, if not non-existent.



16/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/shmem.c IncludeLines:true Name:shmem_file_write_iter]

Results:
map[SourceCode:3502:	static ssize_t shmem_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
3503:	{
3504:		struct file *file = iocb->ki_filp;
3505:		struct inode *inode = file->f_mapping->host;
3506:		ssize_t ret;
3507:	
3508:		inode_lock(inode);
3509:		ret = generic_write_checks(iocb, from);
3510:		if (ret <= 0)
3511:			goto unlock;
3512:		ret = file_remove_privs(file);
3513:		if (ret)
3514:			goto unlock;
3515:		ret = file_update_time(file);
3516:		if (ret)
3517:			goto unlock;
3518:		ret = generic_perform_write(iocb, from);
3519:	unlock:
3520:		inode_unlock(inode);
3521:		return ret;
3522:	}
 SourceFile:mm/shmem.c]

17/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 12631
							output: 43
							thoughts: 31
						

Thoughts:
Here's my take on the process, considering I'm deep in these kinds of kernel investigations:

**Analyzing the Reader within the Inode Lock Context**

Okay, I've got the `inode_lock(inode)` held. That's crucial. Now, I need to see what's happening on the reader side of things. Let's delve into `shmem_file_read_iter`. I want to understand how the reader is interacting with this shared memory region *while* the inode is locked. The lock is my primary point of interest now; I'm expecting to be able to see exactly what operations are synchronized. I'm focusing on synchronization, potential data races, and overall consistency between reader and writer. What specific memory regions are they accessing? Are there any potential race conditions? I'm anticipating to see how `shmem_file_read_iter` deals with this lock, whether it's correctly acquiring, releasing, and responding to any contention. I need to figure out the interplay of readers and writers on the shared memory resource, and `shmem_file_read_iter` will shed light on that.



18/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/shmem.c IncludeLines:true Name:shmem_file_read_iter]

Results:
map[SourceCode:3382:	static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
3383:	{
3384:		struct file *file = iocb->ki_filp;
3385:		struct inode *inode = file_inode(file);
3386:		struct address_space *mapping = inode->i_mapping;
3387:		pgoff_t index;
3388:		unsigned long offset;
3389:		int error = 0;
3390:		ssize_t retval = 0;
3391:	
3392:		for (;;) {
3393:			struct folio *folio = NULL;
3394:			struct page *page = NULL;
3395:			unsigned long nr, ret;
3396:			loff_t end_offset, i_size = i_size_read(inode);
3397:			bool fallback_page_copy = false;
3398:			size_t fsize;
3399:	
3400:			if (unlikely(iocb->ki_pos >= i_size))
3401:				break;
3402:	
3403:			index = iocb->ki_pos >> PAGE_SHIFT;
3404:			error = shmem_get_folio(inode, index, 0, &folio, SGP_READ);
3405:			if (error) {
3406:				if (error == -EINVAL)
3407:					error = 0;
3408:				break;
3409:			}
3410:			if (folio) {
3411:				folio_unlock(folio);
3412:	
3413:				page = folio_file_page(folio, index);
3414:				if (PageHWPoison(page)) {
3415:					folio_put(folio);
3416:					error = -EIO;
3417:					break;
3418:				}
3419:	
3420:				if (folio_test_large(folio) &&
3421:				    folio_test_has_hwpoisoned(folio))
3422:					fallback_page_copy = true;
3423:			}
3424:	
3425:			/*
3426:			 * We must evaluate after, since reads (unlike writes)
3427:			 * are called without i_rwsem protection against truncate
3428:			 */
3429:			i_size = i_size_read(inode);
3430:			if (unlikely(iocb->ki_pos >= i_size)) {
3431:				if (folio)
3432:					folio_put(folio);
3433:				break;
3434:			}
3435:			end_offset = min_t(loff_t, i_size, iocb->ki_pos + to->count);
3436:			if (folio && likely(!fallback_page_copy))
3437:				fsize = folio_size(folio);
3438:			else
3439:				fsize = PAGE_SIZE;
3440:			offset = iocb->ki_pos & (fsize - 1);
3441:			nr = min_t(loff_t, end_offset - iocb->ki_pos, fsize - offset);
3442:	
3443:			if (folio) {
3444:				/*
3445:				 * If users can be writing to this page using arbitrary
3446:				 * virtual addresses, take care about potential aliasing
3447:				 * before reading the page on the kernel side.
3448:				 */
3449:				if (mapping_writably_mapped(mapping)) {
3450:					if (likely(!fallback_page_copy))
3451:						flush_dcache_folio(folio);
3452:					else
3453:						flush_dcache_page(page);
3454:				}
3455:	
3456:				/*
3457:				 * Mark the folio accessed if we read the beginning.
3458:				 */
3459:				if (!offset)
3460:					folio_mark_accessed(folio);
3461:				/*
3462:				 * Ok, we have the page, and it's up-to-date, so
3463:				 * now we can copy it to user space...
3464:				 */
3465:				if (likely(!fallback_page_copy))
3466:					ret = copy_folio_to_iter(folio, offset, nr, to);
3467:				else
3468:					ret = copy_page_to_iter(page, offset, nr, to);
3469:				folio_put(folio);
3470:			} else if (user_backed_iter(to)) {
3471:				/*
3472:				 * Copy to user tends to be so well optimized, but
3473:				 * clear_user() not so much, that it is noticeably
3474:				 * faster to copy the zero page instead of clearing.
3475:				 */
3476:				ret = copy_page_to_iter(ZERO_PAGE(0), offset, nr, to);
3477:			} else {
3478:				/*
3479:				 * But submitting the same page twice in a row to
3480:				 * splice() - or others? - can result in confusion:
3481:				 * so don't attempt that optimization on pipes etc.
3482:				 */
3483:				ret = iov_iter_zero(nr, to);
3484:			}
3485:	
3486:			retval += ret;
3487:			iocb->ki_pos += ret;
3488:	
3489:			if (!iov_iter_count(to))
3490:				break;
3491:			if (ret < nr) {
3492:				error = -EFAULT;
3493:				break;
3494:			}
3495:			cond_resched();
3496:		}
3497:	
3498:		file_accessed(file);
3499:		return retval ? retval : error;
3500:	}
 SourceFile:mm/shmem.c]

19/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 14663
							output: 38
							thoughts: 216
						

Thoughts:
**Analyzing the `atime` Update Race Condition**

Okay, so the current code is structured with a potential race condition in the `atime` update path, and I need to break down the implications. The reader does not acquire the `inode_lock` before attempting the update; instead, at the end of the read, `file_accessed(file)` is called, which triggers `touch_atime` and ultimately `atime_needs_update`. This is expected behavior for `atime` updates; we wouldn't want to degrade read performance significantly by locking on every access just to maintain perfect `atime` accuracy.

The core of the problem, as I see it, revolves around `i_mtime_nsec` and `i_mtime_sec`, which the *writer* could be modifying while this unlocked reader looks at them. I know I've considered this before, and the previous conclusion was that it might cause a minor inconsistency in the `mtime` value read by `relatime_need_update`. But I'm not satisfied with that assessment; this needs to be completely evaluated.

I need to expand my search and think if other readers of `mtime` are more sensitive to this. I'm wondering, could the `stat` system call, for instance, be negatively impacted? I know `vfs_getattr` eventually calls `generic_fillattr`, I need to see how `generic_fillattr` uses these values. I also need to make sure I am not missing any other potential readers.



20/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:fs/stat.c IncludeLines:true Name:generic_fillattr]

Results:
map[SourceCode:  65:	/**
  66:	 * generic_fillattr - Fill in the basic attributes from the inode struct
  67:	 * @idmap:		idmap of the mount the inode was found from
  68:	 * @request_mask:	statx request_mask
  69:	 * @inode:		Inode to use as the source
  70:	 * @stat:		Where to fill in the attributes
  71:	 *
  72:	 * Fill in the basic attributes in the kstat structure from data that's to be
  73:	 * found on the VFS inode structure.  This is the default if no getattr inode
  74:	 * operation is supplied.
  75:	 *
  76:	 * If the inode has been found through an idmapped mount the idmap of
  77:	 * the vfsmount must be passed through @idmap. This function will then
  78:	 * take care to map the inode according to @idmap before filling in the
  79:	 * uid and gid filds. On non-idmapped mounts or if permission checking is to be
  80:	 * performed on the raw inode simply pass @nop_mnt_idmap.
  81:	 */
  82:	void generic_fillattr(struct mnt_idmap *idmap, u32 request_mask,
  83:			      struct inode *inode, struct kstat *stat)
  84:	{
  85:		vfsuid_t vfsuid = i_uid_into_vfsuid(idmap, inode);
  86:		vfsgid_t vfsgid = i_gid_into_vfsgid(idmap, inode);
  87:	
  88:		stat->dev = inode->i_sb->s_dev;
  89:		stat->ino = inode->i_ino;
  90:		stat->mode = inode->i_mode;
  91:		stat->nlink = inode->i_nlink;
  92:		stat->uid = vfsuid_into_kuid(vfsuid);
  93:		stat->gid = vfsgid_into_kgid(vfsgid);
  94:		stat->rdev = inode->i_rdev;
  95:		stat->size = i_size_read(inode);
  96:		stat->atime = inode_get_atime(inode);
  97:	
  98:		if (is_mgtime(inode)) {
  99:			fill_mg_cmtime(stat, request_mask, inode);
 100:		} else {
 101:			stat->ctime = inode_get_ctime(inode);
 102:			stat->mtime = inode_get_mtime(inode);
 103:		}
 104:	
 105:		stat->blksize = i_blocksize(inode);
 106:		stat->blocks = inode->i_blocks;
 107:	
 108:		if ((request_mask & STATX_CHANGE_COOKIE) && IS_I_VERSION(inode)) {
 109:			stat->result_mask |= STATX_CHANGE_COOKIE;
 110:			stat->change_cookie = inode_query_iversion(inode);
 111:		}
 112:	
 113:	}
 SourceFile:fs/stat.c]

21/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 16117
							output: 40
							thoughts: 120
						

Thoughts:
**Examining Timestamp Consistency in Filesystems**

Okay, so `generic_fillattr` grabs the modification time (`mtime`) using `inode_get_mtime(inode)`. That's standard. But wait, if someone is writing to the file *while* `stat` is being called, we could get a slightly funky `mtime` back, right? Linux tolerates this, doesn't it? `stat` isn't guaranteed atomic with writes unless you're explicitly using external synchronization. Nothing new there.

But hold on a second… what's this `is_mgtime(inode)` and `fill_mg_cmtime` business? "mgtime"...multigrain timestamps? That sounds like something potentially interesting. Let me go check `is_mgtime` and see what's what.



22/2 2026/02/24 22:06 tool codesearch-definition-source
0m Args:
map[ContextFile:include/linux/fs.h IncludeLines:true Name:is_mgtime]

Results:
map[SourceCode:2299:	/**
2300:	 * is_mgtime: is this inode using multigrain timestamps
2301:	 * @inode: inode to test for multigrain timestamps
2302:	 *
2303:	 * Return true if the inode uses multigrain timestamps, false otherwise.
2304:	 */
2305:	static inline bool is_mgtime(const struct inode *inode)
2306:	{
2307:		return inode->i_opflags & IOP_MGTIME;
2308:	}
 SourceFile:include/linux/fs.h]

23/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 16622
							output: 28
							thoughts: 726
						

Thoughts:
**Analyzing a Timestamp Data Race**

Okay, so we've got this KCSAN report flagging a potential data race with multigrain timestamps, introduced to improve timestamp granularity. The core of the issue seems to be around how the modification time (`mtime`) is handled in inodes.

The write operation is occurring in `inode_update_timestamps`, specifically this line: `inode_set_mtime_to_ts(inode, now);`. The read is happening through a chain of functions: `atime_needs_update` -> `relatime_need_update` -> `inode_get_mtime`. The KCSAN report pinpoints the data values as changing from `0x038f5599` to `0x0427ec19`, which, given their size, strongly suggests these are nanosecond values.

Now, if we're dealing with just `i_mtime_nsec` being updated, it's a simple 4-byte write. On x86_64, a 4-byte aligned write *is* atomic. I need to quickly confirm the alignment of `i_mtime_nsec`... Hmm, `OffsetBits: 928`, which means a byte offset of `928 / 8 = 116`. Since 116 is a multiple of 4, the field is 4-byte aligned. That means the read will see either the old or the new 4-byte value; no torn reads. That is important.

But the "inconsistency" I flagged, the potential for mixing an old second value with a new nanosecond value, is still possible. It's because `inode_get_mtime` reads `i_mtime_sec` and `i_mtime_nsec` separately.

The thing is, a lot of these timestamp races are considered benign in the kernel and are annotated with `data_race()` or marked accesses (`READ_ONCE`/`WRITE_ONCE`) to tell KCSAN they are intentional. This one got reported by KCSAN, so it carries no such annotation. That is the problem.

Let me revisit `inode_set_mtime_to_ts`. Okay, here's the code:

```c
1654:	static inline struct timespec64 inode_set_mtime_to_ts(struct inode *inode,
1655:							      struct timespec64 ts)
1656:	{
1657:		inode->i_mtime_sec = ts.tv_sec;
1658:		inode->i_mtime_nsec = ts.tv_nsec;
1659:		return ts;
1660:	}
```

And `inode_get_mtime_nsec`:

```c
1642:	static inline long inode_get_mtime_nsec(const struct inode *inode)
1643:	{
1644:		return inode->i_mtime_nsec;
1645:	}
```

Looking at these, if atomicity is the *intent* (and it seems like it should be), these should really be using `READ_ONCE`/`WRITE_ONCE` or some other mechanism.  `struct timespec64` is 16 bytes, so you can't atomically update it as a whole without a lock on most architectures.

I need to see if there's been any recent discussion or work on making this atomic or explicitly using `data_race()` here. I'll search for `inode_set_mtime_to_ts` in the codebase to check for any relevant comments or recent changes.



24/2 2026/02/24 22:06 tool codesearch-find-references
0m Args:
map[Name:inode_set_mtime_to_ts]

Results:
map[References:[map[ReferenceKind:calls ReferencingEntityKind:function ReferencingEntityName:inode_set_mtime SourceFile:include/linux/fs.h SourceLine:1667]] TruncatedOutput:false]

25/2 2026/02/24 22:06 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 18242
							output: 42
							thoughts: 61
						

Thoughts:
Okay, here's my interpretation of that thought process, tailored for an expert audience:

**Investigating a Limited Reference Count**

Hmm, only *one* direct reference to this function? That's definitely unusual, especially for something as fundamental as `inode_set_mtime`. My spidey-sense is tingling. I know `inode_update_timestamps` *also* calls it. I bet the search tool I'm using, `codesearch-find-references`, might have a blind spot or limitation. It's probably not capturing *all* the uses of this critical function.

Alright, let's bypass that and dig into the header definition itself. Time to check `inode_set_mtime` in `include/linux/fs.h` to see if there's any important commentary or related definitions that clarify its purpose or usage patterns. I'm hoping to get a clearer picture from the source.



26/2 2026/02/24 22:07 tool codesearch-definition-source
0m Args:
map[ContextFile:include/linux/fs.h IncludeLines:true Name:inode_set_mtime]

Results:
map[SourceCode:1662:	static inline struct timespec64 inode_set_mtime(struct inode *inode,
1663:							time64_t sec, long nsec)
1664:	{
1665:		struct timespec64 ts = { .tv_sec  = sec,
1666:					 .tv_nsec = nsec };
1667:		return inode_set_mtime_to_ts(inode, ts);
1668:	}
 SourceFile:include/linux/fs.h]

27/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 18692
							output: 22
							thoughts: 750
						

Thoughts:
**Analyzing the KCSAN Report on `i_mtime_nsec`**

Okay, so the KCSAN report flagged `i_mtime_nsec`, and my immediate thought is whether this race condition is actually harmful. If `relatime_need_update` gets an inconsistent `mtime` value, it's possible it makes a bad call: either updating `atime` when it doesn't need to, or skipping an update when it should. Extra I/O from an unnecessary `atime` update isn't ideal but not catastrophic. A missed `atime` update just leaves the timestamp a little stale, which is also not the end of the world. Neither scenario screams "crash" or "data corruption," so I wasn't initially concerned.

But let's dive into `inode_update_timestamps` again to be certain. Looking at the code:

```c
2103:	int inode_update_timestamps(struct inode *inode, int flags)
2104:	{
...
2115:			if (!timespec64_equal(&now, &mtime)) {
2116:				inode_set_mtime_to_ts(inode, now);
2117:				updated |= S_MTIME;
2118:			}
...
2125:		if (flags & S_ATIME) {
2126:			struct timespec64 atime = inode_get_atime(inode);
2127:	
2128:			if (!timespec64_equal(&now, &atime)) {
2129:				inode_set_atime_to_ts(inode, now);
2130:				updated |= S_ATIME;
2131:			}
2132:		}
2133:		return updated;
2134:	}
```

Okay, if `S_ATIME` is set in the `flags`, then `atime` *is* updated. In the reader stack, we have `atime_needs_update` -> `relatime_need_update`. So, if `atime_needs_update` returns true, it eventually triggers a call to `touch_atime`, which then calls `inode_update_timestamps` with `S_ATIME` set.

Wait a second, if `atime_needs_update` is being called, it is likely to decide whether to call `touch_atime`. And if `touch_atime` *is* called, it will update `atime`.

So the race boils down to: Thread 1, writing `mtime` (and consequently `ctime`) during a write operation, versus Thread 2, reading `mtime` to determine whether to update `atime` due to a read operation. A slightly inconsistent `mtime` value *could* lead to a "wrong" decision for `relatime`. However, `relatime` is already an approximation, so maybe it's still acceptable.

Now, my more concerning question is: is there any risk of "torn" writes/reads, leading to something worse? `i_mtime_nsec` is 32-bit. On x86_64, aligned 32-bit loads and stores *are* atomic. The report indicates it's a 4-byte access, so atomicity is safe here.

But wait a minute, what if `i_mtime_nsec` is *not* just a nanosecond value? In the multigrain timestamp work, `i_ctime_nsec` (and possibly other fields) used some high bits as flags. I need to verify that; I'll check `include/linux/fs.h` for something like `I_CTIME_QUERIED`.



28/2 2026/02/24 22:07 tool grepper
0m Args:
map[Expression:I_CTIME_QUERIED]

Results:
map[Output:fs/inode.c=2341=struct timespec64 current_time(struct inode *inode)
--
fs/inode.c-2352-	cns = smp_load_acquire(&inode->i_ctime_nsec);
fs/inode.c:2353:	if (cns & I_CTIME_QUERIED) {
fs/inode.c-2354-		/*
--
fs/inode.c-2357-		 */
fs/inode.c:2358:		if (now.tv_nsec == (cns & ~I_CTIME_QUERIED))
fs/inode.c-2359-			ktime_get_real_ts64(&now);
--
fs/inode.c=2801=struct timespec64 inode_set_ctime_current(struct inode *inode)
--
fs/inode.c-2820-	cns = smp_load_acquire(&inode->i_ctime_nsec);
fs/inode.c:2821:	if (cns & I_CTIME_QUERIED) {
fs/inode.c-2822-		struct timespec64 ctime = { .tv_sec = inode->i_ctime_sec,
fs/inode.c:2823:					    .tv_nsec = cns & ~I_CTIME_QUERIED };
fs/inode.c-2824-
--
fs/inode.c-2849-		 * If so then retry the swap. This can only happen once since
fs/inode.c:2850:		 * the only way to clear I_CTIME_QUERIED is to stamp the inode
fs/inode.c-2851-		 * with a new ctime.
fs/inode.c-2852-		 */
fs/inode.c:2853:		if (!(cns & I_CTIME_QUERIED) && (cns | I_CTIME_QUERIED) == cur) {
fs/inode.c-2854-			cns = cur;
--
fs/inode.c-2858-		now.tv_sec = inode->i_ctime_sec;
fs/inode.c:2859:		now.tv_nsec = cur & ~I_CTIME_QUERIED;
fs/inode.c-2860-	}
--
fs/inode.c=2883=struct timespec64 inode_set_ctime_deleg(struct inode *inode, struct timespec64 update)
--
fs/inode.c-2889-	cur = smp_load_acquire(&inode->i_ctime_nsec);
fs/inode.c:2890:	cur_ts.tv_nsec = cur & ~I_CTIME_QUERIED;
fs/inode.c-2891-	cur_ts.tv_sec = inode->i_ctime_sec;
--
fs/inode.c-2925-	 * If so, then retry the swap. This can only happen once since
fs/inode.c:2926:	 * the only way to clear I_CTIME_QUERIED is to stamp the inode
fs/inode.c-2927-	 * with a new ctime.
fs/inode.c-2928-	 */
fs/inode.c:2929:	if (!(old & I_CTIME_QUERIED) && (cur == (old | I_CTIME_QUERIED)))
fs/inode.c-2930-		goto retry;
--
fs/inode.c-2933-	cur_ts.tv_sec = inode->i_ctime_sec;
fs/inode.c:2934:	cur_ts.tv_nsec = cur & ~I_CTIME_QUERIED;
fs/inode.c-2935-	return cur_ts;
--
fs/stat.c=45=void fill_mg_cmtime(struct kstat *stat, u32 request_mask, struct inode *inode)
--
fs/stat.c-57-	stat->ctime.tv_nsec = (u32)atomic_read(pcn);
fs/stat.c:58:	if (!(stat->ctime.tv_nsec & I_CTIME_QUERIED))
fs/stat.c:59:		stat->ctime.tv_nsec = ((u32)atomic_fetch_or(I_CTIME_QUERIED, pcn));
fs/stat.c:60:	stat->ctime.tv_nsec &= ~I_CTIME_QUERIED;
fs/stat.c-61-	trace_fill_mg_cmtime(inode, &stat->ctime, &stat->mtime);
--
include/linux/fs.h=1662=static inline struct timespec64 inode_set_mtime(struct inode *inode,
--
include/linux/fs.h-1678- */
include/linux/fs.h:1679:#define I_CTIME_QUERIED		((u32)BIT(31))
include/linux/fs.h-1680-
--
include/linux/fs.h=1686=static inline long inode_get_ctime_nsec(const struct inode *inode)
include/linux/fs.h-1687-{
include/linux/fs.h:1688:	return inode->i_ctime_nsec & ~I_CTIME_QUERIED;
include/linux/fs.h-1689-}
--
include/trace/events/timestamp.h-11-#define CTIME_QUERIED_FLAGS \
include/trace/events/timestamp.h:12:	{ I_CTIME_QUERIED, "Q" }
include/trace/events/timestamp.h-13-
--
include/trace/events/timestamp.h=52=TRACE_EVENT(ctime_ns_xchg,
--
include/trace/events/timestamp.h-79-		MAJOR(__entry->dev), MINOR(__entry->dev), __entry->ino, __entry->gen,
include/trace/events/timestamp.h:80:		__entry->old & ~I_CTIME_QUERIED,
include/trace/events/timestamp.h:81:		__print_flags(__entry->old & I_CTIME_QUERIED, "|", CTIME_QUERIED_FLAGS),
include/trace/events/timestamp.h-82-		__entry->new,
include/trace/events/timestamp.h:83:		__entry->cur & ~I_CTIME_QUERIED,
include/trace/events/timestamp.h:84:		__print_flags(__entry->cur & I_CTIME_QUERIED, "|", CTIME_QUERIED_FLAGS)
include/trace/events/timestamp.h-85-	)
]

29/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 22007
							output: 41
							thoughts: 197
						

Thoughts:
**Analyzing Timestamp Consistency in File System Metadata**

Okay, so I'm digging into this race condition, and it looks like the `I_CTIME_QUERIED` flag is only related to `i_ctime_nsec`, which makes sense. The specific race we're observing seems to be around `i_mtime_nsec`, which is interesting.

Let me just double-check that `inode_get_mtime_nsec` function definition again to be absolutely sure.

```c
1642:	static inline long inode_get_mtime_nsec(const struct inode *inode)
1643:	{
1644:		return inode->i_mtime_nsec;
1645:	}
```

Yep, it's just a direct access; no flags or anything fancy going on there. That rules out any direct flag interactions.

Wait a second...I've got it...I see `is_mgtime(inode)` used in `generic_fillattr`. If that's true, we call `fill_mg_cmtime`. Could the issue be related to multigrain timestamps? Does `shmem` actually use multigrain timestamps? Let's check `shmem_init_inode` to see how it handles timestamp initialization. I bet the problem lies there...let's get to the bottom of this.



30/2 2026/02/24 22:07 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/shmem.c IncludeLines:true Name:shmem_get_inode]

Results:
map[SourceCode:3191:	static inline struct inode *shmem_get_inode(struct mnt_idmap *idmap,
3192:					     struct super_block *sb, struct inode *dir,
3193:					     umode_t mode, dev_t dev, unsigned long flags)
3194:	{
3195:		return __shmem_get_inode(idmap, sb, dir, mode, dev, flags);
3196:	}
 SourceFile:mm/shmem.c]

31/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 22688
							output: 42
							thoughts: 13
						

Thoughts:
Okay, here's my take on those thoughts, tailored for an expert audience, presented in the first person:

**Investigating `__shmem_get_inode`**

Alright, let's dive in. My initial thought is to zero in on `__shmem_get_inode`. That function is central to how shared memory inodes are managed, and it's almost certainly the key to understanding the issue at hand. I need to understand its role in creating, locating, and potentially destroying these inodes. I'll be looking specifically for any areas related to memory allocation, synchronization primitives, potential race conditions, or anything that could contribute to inconsistent state. I'll need to remember the architectural differences and how the function handles different memory models and access patterns in order to form a hypothesis. Is it in the allocation path? Is there a memory leak? Or perhaps incorrect locking strategies? The devil is usually in the details within these low-level routines.



32/2 2026/02/24 22:07 tool codesearch-definition-source
0m Args:
map[ContextFile:mm/shmem.c IncludeLines:true Name:__shmem_get_inode]

Results:
map[SourceCode:3081:	static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
3082:						     struct super_block *sb,
3083:						     struct inode *dir, umode_t mode,
3084:						     dev_t dev, unsigned long flags)
3085:	{
3086:		struct inode *inode;
3087:		struct shmem_inode_info *info;
3088:		struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
3089:		ino_t ino;
3090:		int err;
3091:	
3092:		err = shmem_reserve_inode(sb, &ino);
3093:		if (err)
3094:			return ERR_PTR(err);
3095:	
3096:		inode = new_inode(sb);
3097:		if (!inode) {
3098:			shmem_free_inode(sb, 0);
3099:			return ERR_PTR(-ENOSPC);
3100:		}
3101:	
3102:		inode->i_ino = ino;
3103:		inode_init_owner(idmap, inode, dir, mode);
3104:		inode->i_blocks = 0;
3105:		simple_inode_init_ts(inode);
3106:		inode->i_generation = get_random_u32();
3107:		info = SHMEM_I(inode);
3108:		memset(info, 0, (char *)inode - (char *)info);
3109:		spin_lock_init(&info->lock);
3110:		atomic_set(&info->stop_eviction, 0);
3111:		info->seals = F_SEAL_SEAL;
3112:		info->flags = (flags & VM_NORESERVE) ? SHMEM_F_NORESERVE : 0;
3113:		info->i_crtime = inode_get_mtime(inode);
3114:		info->fsflags = (dir == NULL) ? 0 :
3115:			SHMEM_I(dir)->fsflags & SHMEM_FL_INHERITED;
3116:		if (info->fsflags)
3117:			shmem_set_inode_flags(inode, info->fsflags, NULL);
3118:		INIT_LIST_HEAD(&info->shrinklist);
3119:		INIT_LIST_HEAD(&info->swaplist);
3120:		simple_xattrs_init(&info->xattrs);
3121:		cache_no_acl(inode);
3122:		if (sbinfo->noswap)
3123:			mapping_set_unevictable(inode->i_mapping);
3124:	
3125:		/* Don't consider 'deny' for emergencies and 'force' for testing */
3126:		if (sbinfo->huge)
3127:			mapping_set_large_folios(inode->i_mapping);
3128:	
3129:		switch (mode & S_IFMT) {
3130:		default:
3131:			inode->i_op = &shmem_special_inode_operations;
3132:			init_special_inode(inode, mode, dev);
3133:			break;
3134:		case S_IFREG:
3135:			inode->i_mapping->a_ops = &shmem_aops;
3136:			inode->i_op = &shmem_inode_operations;
3137:			inode->i_fop = &shmem_file_operations;
3138:			mpol_shared_policy_init(&info->policy,
3139:						 shmem_get_sbmpol(sbinfo));
3140:			break;
3141:		case S_IFDIR:
3142:			inc_nlink(inode);
3143:			/* Some things misbehave if size == 0 on a directory */
3144:			inode->i_size = 2 * BOGO_DIRENT_SIZE;
3145:			inode->i_op = &shmem_dir_inode_operations;
3146:			inode->i_fop = &simple_offset_dir_operations;
3147:			simple_offset_init(shmem_get_offset_ctx(inode));
3148:			break;
3149:		case S_IFLNK:
3150:			/*
3151:			 * Must not load anything in the rbtree,
3152:			 * mpol_free_shared_policy will not be called.
3153:			 */
3154:			mpol_shared_policy_init(&info->policy, NULL);
3155:			break;
3156:		}
3157:	
3158:		lockdep_annotate_inode_mutex_key(inode);
3159:		return inode;
3160:	}
 SourceFile:mm/shmem.c]

33/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 24290
							output: 24
							thoughts: 2029
						

Thoughts:
**Timestamp Race Analysis in inode_update_timestamps**

Okay, so it looks like KCSAN is flagging something related to `IOP_MGTIME`, but initially, it's not immediately clear what the issue is. Let's dig into `inode_update_timestamps`.

I'm seeing that `inode_set_mtime_to_ts` is a simple assignment. The potential issue revolves around a possible race condition if `relatime_need_update` reads an inconsistent `mtime`. Specifically, I'm concerned about the non-monotonic nature of updates to `i_mtime_sec` and `i_mtime_nsec`. Usually the current time (`now`) is obtained either from `current_time(inode)` or via `inode_set_ctime_current(inode)`. Looking at `inode_update_timestamps`, I see `ctime` is set, and then `mtime` is set to the same value.

The race could be between a write (setting `mtime`) and a read (in `relatime` check). The `relatime` check uses `inode_get_mtime`. If the reader happens to read a partially updated `mtime`, it could give a slightly inaccurate result. Since `atime` is also an inode field, and might be in the middle of an update, it could also be part of the race.

The real point is that `i_mtime_nsec` is 32-bit. It's stored using a 4-byte write, and read using a 4-byte load. This implies the value of `i_mtime_nsec` itself isn't torn, but we have to consider the atomicity between it and `i_mtime_sec`, which is 64-bit. On a 64-bit system, the load/store of the seconds field should be atomic, and we are not expecting bit-tearing there.

So, the issue is really about the order and consistency between writing and reading `i_mtime_sec` and `i_mtime_nsec`. If this were a widespread problem, we'd probably be seeing this in more places. KCSAN's report might just indicate that we're missing `data_race()` annotations or `READ_ONCE`/`WRITE_ONCE` here.

I should also check if `i_mtime_nsec` is used for anything else that might be sensitive, but a quick look at `shmem.c` shows it is only used for `i_crtime` at inode creation, which is less of a concern. I'm also remembering that the `i_version` is updated when `mtime` or `ctime` change.

The race is specifically on `i_mtime_nsec`.

The relevant function `inode_set_mtime_to_ts` was introduced recently as part of the multigrain timestamp work. Let me revisit that function again. It's a simple assignment of the timestamp components.

If `i_mtime_nsec` is being accessed without `WRITE_ONCE`, KCSAN *should* complain, and the developer probably meant it to be a benign race. But, is it harmful? In `relatime_need_update`, we might unnecessarily update `atime`. In the `shmem` case, this just sets a flag. In a real filesystem, it might cause a disk write, but this should only be a performance concern and not a correctness one.

Now I am thinking that userspace code (e.g., build tools) uses `mtime` for file change detection. If `stat` returns an inconsistent `mtime`, a build system might get confused. That said, I know that `stat` is already not atomic with respect to concurrent writes.

Looking at `inode_get_mtime`, I see that if we wanted to avoid the KCSAN warnings and ensure a level of sanity, we could use `READ_ONCE`. But, using `READ_ONCE` on each individual component of `timespec64` won't solve the consistency between the seconds and nanoseconds.

Can an inconsistent pair of `i_mtime_sec` and `i_mtime_nsec` cause a crash? No: `timespec64_compare` will not crash, since the writer always stores an `nsec` in the valid range, so any mixed pair is still a well-formed timestamp. This race appears to be benign: it's a race on metadata that is allowed to be slightly inconsistent during a concurrent update. Now I will check whether `i_mtime_nsec` is used anywhere else, to be sure.



34/2 2026/02/24 22:07 tool codesearch-find-references
0m Args:
map[Name:inode_get_mtime]

Results:
map[References:<nil> TruncatedOutput:false]

35/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 27357
							output: 21
							thoughts: 21
						

Thoughts:
Okay, here's my summary, framed as my internal monologue:

**Frustration and Alternative Strategies**

Ugh, *again* `codesearch-find-references` is failing me. This is becoming a recurring problem, and it's slowing down my workflow. Time to ditch that tool for now and switch to `grepper`; given my experience with the codebase, I expect it'll give better results. Hopefully this detour gets me back on track faster.



36/2 2026/02/24 22:07 tool grepper
0m Args:
map[Expression:inode_get_mtime\(]

Results:
map[Output:drivers/tty/tty_io.c=798=static void tty_update_time(struct tty_struct *tty, bool mtime)
--
drivers/tty/tty_io.c-806-		struct inode *inode = file_inode(priv->file);
drivers/tty/tty_io.c:807:		struct timespec64 time = mtime ? inode_get_mtime(inode) : inode_get_atime(inode);
drivers/tty/tty_io.c-808-
--
fs/afs/write.c=103=static void afs_issue_write_worker(struct work_struct *work)
--
fs/afs/write.c-146-	op->store.i_size	= umax(pos + len, vnode->netfs.remote_i_size);
fs/afs/write.c:147:	op->mtime		= inode_get_mtime(&vnode->netfs.inode);
fs/afs/write.c-148-
--
fs/befs/linuxvfs.c=297=static struct inode *befs_iget(struct super_block *sb, unsigned long ino)
--
fs/befs/linuxvfs.c-365-			0);/* lower 16 bits are not a time */
fs/befs/linuxvfs.c:366:	inode_set_ctime_to_ts(inode, inode_get_mtime(inode));
fs/befs/linuxvfs.c:367:	inode_set_atime_to_ts(inode, inode_get_mtime(inode));
fs/befs/linuxvfs.c-368-
--
fs/ceph/addr.c=717=static int write_folio_nounlock(struct folio *folio,
--
fs/ceph/addr.c-819-
fs/ceph/addr.c:820:	req->r_mtime = inode_get_mtime(inode);
fs/ceph/addr.c-821-	ceph_osdc_start_request(osdc, req);
--
fs/ceph/addr.c=1407=int ceph_submit_write(struct address_space *mapping,
--
fs/ceph/addr.c-1581-
fs/ceph/addr.c:1582:	req->r_mtime = inode_get_mtime(inode);
fs/ceph/addr.c-1583-	ceph_osdc_start_request(&fsc->client->osdc, req);
--
fs/ceph/addr.c=2193=int ceph_uninline_data(struct file *file)
--
fs/ceph/addr.c-2249-
fs/ceph/addr.c:2250:	req->r_mtime = inode_get_mtime(inode);
fs/ceph/addr.c-2251-	ceph_osdc_start_request(&fsc->client->osdc, req);
--
fs/ceph/addr.c-2291-
fs/ceph/addr.c:2292:	req->r_mtime = inode_get_mtime(inode);
fs/ceph/addr.c-2293-	ceph_osdc_start_request(&fsc->client->osdc, req);
--
fs/ceph/addr.c=2349=static int __ceph_pool_perm_get(struct ceph_inode_info *ci,
--
fs/ceph/addr.c-2467-
fs/ceph/addr.c:2468:	wr_req->r_mtime = inode_get_mtime(&ci->netfs.inode);
fs/ceph/addr.c-2469-	ceph_osdc_start_request(&fsc->client->osdc, wr_req);
--
fs/ceph/caps.c=1391=static void __prep_cap(struct cap_msg_args *arg, struct ceph_cap *cap,
--
fs/ceph/caps.c-1448-
fs/ceph/caps.c:1449:	arg->mtime = inode_get_mtime(inode);
fs/ceph/caps.c-1450-	arg->atime = inode_get_atime(inode);
--
fs/ceph/file.c=2565=static int ceph_zero_partial_object(struct inode *inode,
--
fs/ceph/file.c-2595-
fs/ceph/file.c:2596:	req->r_mtime = inode_get_mtime(inode);
fs/ceph/file.c-2597-	ceph_osdc_start_request(&fsc->client->osdc, req);
--
fs/ceph/inode.c=211=struct inode *ceph_get_snapdir(struct inode *parent)
--
fs/ceph/inode.c-239-	inode->i_gid = parent->i_gid;
fs/ceph/inode.c:240:	inode_set_mtime_to_ts(inode, inode_get_mtime(parent));
fs/ceph/inode.c-241-	inode_set_ctime_to_ts(inode, inode_get_ctime(parent));
--
fs/ceph/inode.c=876=void ceph_fill_file_time(struct inode *inode, int issued,
--
fs/ceph/inode.c-883-	struct timespec64 ictime = inode_get_ctime(inode);
fs/ceph/inode.c:884:	struct timespec64 imtime = inode_get_mtime(inode);
fs/ceph/inode.c-885-	int warn = 0;
--
fs/ceph/inode.c=2528=int __ceph_setattr(struct mnt_idmap *idmap, struct inode *inode,
--
fs/ceph/inode.c-2767-	if (ia_valid & ATTR_MTIME) {
fs/ceph/inode.c:2768:		struct timespec64 mtime = inode_get_mtime(inode);
fs/ceph/inode.c-2769-
--
fs/ceph/mds_client.c=4633=static int reconnect_caps_cb(struct inode *inode, int mds, void *arg)
--
fs/ceph/mds_client.c-4701-		rec.v1.size = cpu_to_le64(i_size_read(inode));
fs/ceph/mds_client.c:4702:		ts = inode_get_mtime(inode);
fs/ceph/mds_client.c-4703-		ceph_encode_timespec64(&rec.v1.mtime, &ts);
--
fs/ceph/snap.c=662=int __ceph_finish_cap_snap(struct ceph_inode_info *ci,
--
fs/ceph/snap.c-670-	capsnap->size = i_size_read(inode);
fs/ceph/snap.c:671:	capsnap->mtime = inode_get_mtime(inode);
fs/ceph/snap.c-672-	capsnap->atime = inode_get_atime(inode);
--
fs/erofs/inode.c=28=static int erofs_read_inode(struct inode *inode)
--
fs/erofs/inode.c-191-	inode_set_atime_to_ts(inode,
fs/erofs/inode.c:192:			      inode_set_ctime_to_ts(inode, inode_get_mtime(inode)));
fs/erofs/inode.c-193-
--
fs/exfat/inode.c=20=int __exfat_write_inode(struct inode *inode, int sync)
--
fs/exfat/inode.c-58-			&ep->dentry.file.create_time_cs);
fs/exfat/inode.c:59:	ts = inode_get_mtime(inode);
fs/exfat/inode.c-60-	exfat_set_entry_time(sbi, &ts,
--
fs/ext4/ext4.h=924=do {										\
--
fs/ext4/ext4.h-935-#define EXT4_INODE_SET_MTIME(inode, raw_inode)						\
fs/ext4/ext4.h:936:	EXT4_INODE_SET_XTIME_VAL(i_mtime, inode, raw_inode, inode_get_mtime(inode))
fs/ext4/ext4.h-937-
--
fs/ext4/ialloc.c=925=struct inode *__ext4_new_inode(struct mnt_idmap *idmap,
--
fs/ext4/ialloc.c-1253-	simple_inode_init_ts(inode);
fs/ext4/ialloc.c:1254:	ei->i_crtime = inode_get_mtime(inode);
fs/ext4/ialloc.c-1255-
--
fs/ext4/ioctl.c=312=static void swap_inode_data(struct inode *inode1, struct inode *inode2)
--
fs/ext4/ioctl.c-329-
fs/ext4/ioctl.c:330:	ts1 = inode_get_mtime(inode1);
fs/ext4/ioctl.c:331:	ts2 = inode_get_mtime(inode2);
fs/ext4/ioctl.c-332-	inode_set_mtime_to_ts(inode1, ts2);
--
fs/f2fs/f2fs.h=3538=static inline bool f2fs_is_time_consistent(struct inode *inode)
--
fs/f2fs/f2fs.h-3546-		return false;
fs/f2fs/f2fs.h:3547:	ts = inode_get_mtime(inode);
fs/f2fs/f2fs.h-3548-	if (!timespec64_equal(F2FS_I(inode)->i_disk_time + 2, &ts))
--
fs/f2fs/inode.c=404=static void init_idisk_time(struct inode *inode)
--
fs/f2fs/inode.c-409-	fi->i_disk_time[1] = inode_get_ctime(inode);
fs/f2fs/inode.c:410:	fi->i_disk_time[2] = inode_get_mtime(inode);
fs/f2fs/inode.c-411-}
--
fs/f2fs/namei.c=219=static struct inode *f2fs_new_inode(struct mnt_idmap *idmap,
--
fs/f2fs/namei.c-248-	simple_inode_init_ts(inode);
fs/f2fs/namei.c:249:	fi->i_crtime = inode_get_mtime(inode);
fs/f2fs/namei.c-250-	inode->i_generation = get_random_u32();
--
fs/fat/inode.c=854=static int __fat_write_inode(struct inode *inode, int wait)
--
fs/fat/inode.c-893-	fat_set_start(raw_entry, MSDOS_I(inode)->i_logstart);
fs/fat/inode.c:894:	mtime = inode_get_mtime(inode);
fs/fat/inode.c-895-	fat_time_unix2fat(sbi, &mtime, &raw_entry->time,
--
fs/fuse/inode.c=328=static void fuse_change_attributes_i(struct inode *inode, struct fuse_attr *attr,
--
fs/fuse/inode.c-362-
fs/fuse/inode.c:363:	old_mtime = inode_get_mtime(inode);
fs/fuse/inode.c-364-	fuse_change_attributes_common(inode, attr, sx, attr_valid, cache_mask,
--
fs/fuse/inode.c=1676=static void fuse_fill_attr_from_inode(struct fuse_attr *attr,
--
fs/fuse/inode.c-1679-	struct timespec64 atime = inode_get_atime(&fi->inode);
fs/fuse/inode.c:1680:	struct timespec64 mtime = inode_get_mtime(&fi->inode);
fs/fuse/inode.c-1681-	struct timespec64 ctime = inode_get_ctime(&fi->inode);
--
fs/fuse/readdir.c=445=static int fuse_readdir_cached(struct file *file, struct dir_context *ctx)
--
fs/fuse/readdir.c-479-		if (!ctx->pos && !fi->rdc.size) {
fs/fuse/readdir.c:480:			fi->rdc.mtime = inode_get_mtime(inode);
fs/fuse/readdir.c-481-			fi->rdc.iversion = inode_query_iversion(inode);
--
fs/fuse/readdir.c-491-	if (!ctx->pos) {
fs/fuse/readdir.c:492:		struct timespec64 mtime = inode_get_mtime(inode);
fs/fuse/readdir.c-493-
--
fs/hfs/inode.c=433=int hfs_write_inode(struct inode *inode, struct writeback_control *wbc)
--
fs/hfs/inode.c-486-
fs/hfs/inode.c:487:		rec.dir.MdDat = hfs_u_to_mtime(inode_get_mtime(inode));
fs/hfs/inode.c-488-		rec.dir.Val = cpu_to_be16(inode->i_size - 2);
--
fs/hfs/inode.c-514-		hfs_inode_write_fork(inode, rec.file.ExtRec, &rec.file.LgLen, &rec.file.PyLen);
fs/hfs/inode.c:515:		rec.file.MdDat = hfs_u_to_mtime(inode_get_mtime(inode));
fs/hfs/inode.c-516-
--
fs/hfs/sysdep.c=16=static int hfs_revalidate_dentry(struct inode *dir, const struct qstr *name,
--
fs/hfs/sysdep.c-36-		inode_set_atime(inode, ts.tv_sec + diff, ts.tv_nsec);
fs/hfs/sysdep.c:37:		ts = inode_get_mtime(inode);
fs/hfs/sysdep.c-38-		inode_set_mtime(inode, ts.tv_sec + diff, ts.tv_nsec);
--
fs/hfsplus/inode.c=612=int hfsplus_cat_write_inode(struct inode *inode)
--
fs/hfsplus/inode.c-645-		folder->access_date = hfsp_ut2mt(inode_get_atime(inode));
fs/hfsplus/inode.c:646:		folder->content_mod_date = hfsp_ut2mt(inode_get_mtime(inode));
fs/hfsplus/inode.c-647-		folder->attribute_mod_date = hfsp_ut2mt(inode_get_ctime(inode));
--
fs/hfsplus/inode.c-680-		file->access_date = hfsp_ut2mt(inode_get_atime(inode));
fs/hfsplus/inode.c:681:		file->content_mod_date = hfsp_ut2mt(inode_get_mtime(inode));
fs/hfsplus/inode.c-682-		file->attribute_mod_date = hfsp_ut2mt(inode_get_ctime(inode));
--
fs/inode.c=2054=static bool relatime_need_update(struct vfsmount *mnt, struct inode *inode,
--
fs/inode.c-2064-	atime = inode_get_atime(inode);
fs/inode.c:2065:	mtime = inode_get_mtime(inode);
fs/inode.c-2066-	if (timespec64_compare(&mtime, &atime) >= 0)
--
fs/inode.c=2103=int inode_update_timestamps(struct inode *inode, int flags)
--
fs/inode.c-2109-		struct timespec64 ctime = inode_get_ctime(inode);
fs/inode.c:2110:		struct timespec64 mtime = inode_get_mtime(inode);
fs/inode.c-2111-
--
fs/inode.c=2366=static int file_update_time_flags(struct file *file, unsigned int flags)
--
fs/inode.c-2380-
fs/inode.c:2381:	ts = inode_get_mtime(inode);
fs/inode.c-2382-	if (!timespec64_equal(&ts, &now))
--
fs/isofs/rock.c=309=parse_rock_ridge_inode_internal(struct iso_directory_record *de,
--
fs/isofs/rock.c-539-			inode_set_ctime_to_ts(inode, inode_get_ctime(reloc));
fs/isofs/rock.c:540:			inode_set_mtime_to_ts(inode, inode_get_mtime(reloc));
fs/isofs/rock.c-541-			iput(reloc);
--
fs/jffs2/fs.c=32=int jffs2_do_setattr (struct inode *inode, struct iattr *iattr)
--
fs/jffs2/fs.c-116-	ri->atime = cpu_to_je32(I_SEC((ivalid & ATTR_ATIME)?iattr->ia_atime:inode_get_atime(inode)));
fs/jffs2/fs.c:117:	ri->mtime = cpu_to_je32(I_SEC((ivalid & ATTR_MTIME)?iattr->ia_mtime:inode_get_mtime(inode)));
fs/jffs2/fs.c-118-	ri->ctime = cpu_to_je32(I_SEC((ivalid & ATTR_CTIME)?iattr->ia_ctime:inode_get_ctime(inode)));
--
fs/jffs2/fs.c=372=void jffs2_dirty_inode(struct inode *inode, int flags)
--
fs/jffs2/fs.c-389-	iattr.ia_atime = inode_get_atime(inode);
fs/jffs2/fs.c:390:	iattr.ia_mtime = inode_get_mtime(inode);
fs/jffs2/fs.c-391-	iattr.ia_ctime = inode_get_ctime(inode);
--
fs/jffs2/fs.c=423=struct inode *jffs2_new_inode (struct inode *dir_i, umode_t mode, struct jffs2_raw_inode *ri)
--
fs/jffs2/fs.c-478-	simple_inode_init_ts(inode);
fs/jffs2/fs.c:479:	ri->atime = ri->mtime = ri->ctime = cpu_to_je32(I_SEC(inode_get_mtime(inode)));
fs/jffs2/fs.c-480-
--
fs/jffs2/os-linux.h=20=struct kvec;
--
fs/jffs2/os-linux.h-38-#define JFFS2_F_I_CTIME(f) I_SEC(inode_get_ctime(OFNI_EDONI_2SFFJ(f)))
fs/jffs2/os-linux.h:39:#define JFFS2_F_I_MTIME(f) I_SEC(inode_get_mtime(OFNI_EDONI_2SFFJ(f)))
fs/jffs2/os-linux.h-40-#define JFFS2_F_I_ATIME(f) I_SEC(inode_get_atime(OFNI_EDONI_2SFFJ(f)))
--
fs/nfs/callback_proc.c=28=__be32 nfs4_callback_getattr(void *argp, void *resp,
--
fs/nfs/callback_proc.c-63-	res->ctime = inode_get_ctime(inode);
fs/nfs/callback_proc.c:64:	res->mtime = inode_get_mtime(inode);
fs/nfs/callback_proc.c-65-	res->bitmap[0] = (FATTR4_WORD0_CHANGE | FATTR4_WORD0_SIZE) &
--
fs/nfs/fscache.h=113=static inline void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata,
--
fs/nfs/fscache.h-116-	memset(auxdata, 0, sizeof(*auxdata));
fs/nfs/fscache.h:117:	auxdata->mtime_sec  = inode_get_mtime(inode).tv_sec;
fs/nfs/fscache.h:118:	auxdata->mtime_nsec = inode_get_mtime(inode).tv_nsec;
fs/nfs/fscache.h-119-	auxdata->ctime_sec  = inode_get_ctime(inode).tv_sec;
--
fs/nfs/inode.c=644=static void nfs_set_timestamps_to_ts(struct inode *inode, struct iattr *attr)
--
fs/nfs/inode.c-649-		struct timespec64 ctime = inode_get_ctime(inode);
fs/nfs/inode.c:650:		struct timespec64 mtime = inode_get_mtime(inode);
fs/nfs/inode.c-651-		struct timespec64 now;
--
fs/nfs/inode.c=1622=static void nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr)
--
fs/nfs/inode.c-1642-
fs/nfs/inode.c:1643:	ts = inode_get_mtime(inode);
fs/nfs/inode.c-1644-	if ((fattr->valid & NFS_ATTR_FATTR_PREMTIME)
--
fs/nfs/inode.c=1667=static int nfs_check_inode_attributes(struct inode *inode, struct nfs_fattr *fattr)
--
fs/nfs/inode.c-1697-
fs/nfs/inode.c:1698:		ts = inode_get_mtime(inode);
fs/nfs/inode.c-1699-		if ((fattr->valid & NFS_ATTR_FATTR_MTIME) && !timespec64_equal(&ts, &fattr->mtime))
--
fs/nfs/inode.c=2166=int nfs_post_op_update_inode_force_wcc_locked(struct inode *inode, struct nfs_fattr *fattr)
--
fs/nfs/inode.c-2194-			(fattr->valid & NFS_ATTR_FATTR_PREMTIME) == 0) {
fs/nfs/inode.c:2195:		fattr->pre_mtime = inode_get_mtime(inode);
fs/nfs/inode.c-2196-		fattr->valid |= NFS_ATTR_FATTR_PREMTIME;
--
fs/nfs/nfs4proc.c=6991=static int _nfs4_proc_delegreturn(struct inode *inode, const struct cred *cred,
--
fs/nfs/nfs4proc.c-7052-		if (delegation->type & FMODE_WRITE) {
fs/nfs/nfs4proc.c:7053:			data->sattr.mtime = inode_get_mtime(inode);
fs/nfs/nfs4proc.c-7054-			data->sattr.mtime_set = true;
--
fs/nfsd/blocklayout.c=179=nfsd4_block_commit_blocks(struct inode *inode, struct nfsd4_layoutcommit *lcp,
--
fs/nfsd/blocklayout.c-181-{
fs/nfsd/blocklayout.c:182:	struct timespec64 mtime = inode_get_mtime(inode);
fs/nfsd/blocklayout.c-183-	struct iattr iattr = { .ia_valid = 0 };
--
fs/ntfs3/frecord.c=3134=int ni_write_inode(struct inode *inode, int sync, const char *hint)
--
fs/ntfs3/frecord.c-3176-		/* Update the access times if they have changed. */
fs/ntfs3/frecord.c:3177:		ts = inode_get_mtime(inode);
fs/ntfs3/frecord.c-3178-		dup.m_time = kernel2nt(&ts);
--
fs/ocfs2/dlmglue.c=2161=static void __ocfs2_stuff_meta_lvb(struct inode *inode)
--
fs/ocfs2/dlmglue.c-2190-	lvb->lvb_ictime_packed = cpu_to_be64(ocfs2_pack_timespec(&ts));
fs/ocfs2/dlmglue.c:2191:	ts = inode_get_mtime(inode);
fs/ocfs2/dlmglue.c-2192-	lvb->lvb_imtime_packed = cpu_to_be64(ocfs2_pack_timespec(&ts));
--
fs/ocfs2/file.c=206=int ocfs2_should_update_atime(struct inode *inode,
--
fs/ocfs2/file.c-236-		struct timespec64 atime = inode_get_atime(inode);
fs/ocfs2/file.c:237:		struct timespec64 mtime = inode_get_mtime(inode);
fs/ocfs2/file.c-238-
--
fs/ocfs2/refcounttree.c=4014=static int ocfs2_complete_reflink(struct inode *s_inode,
--
fs/ocfs2/refcounttree.c-4072-
fs/ocfs2/refcounttree.c:4073:		inode_set_mtime_to_ts(t_inode, inode_get_mtime(s_inode));
fs/ocfs2/refcounttree.c-4074-		di->i_mtime = s_di->i_mtime;
--
fs/overlayfs/file.c=294=static void ovl_file_accessed(struct file *file)
--
fs/overlayfs/file.c-310-	uctime = inode_get_ctime(upperinode);
fs/overlayfs/file.c:311:	mtime = inode_get_mtime(inode);
fs/overlayfs/file.c:312:	umtime = inode_get_mtime(upperinode);
fs/overlayfs/file.c-313-	if ((!timespec64_equal(&mtime, &umtime)) ||
fs/overlayfs/file.c-314-	     !timespec64_equal(&ctime, &uctime)) {
fs/overlayfs/file.c:315:		inode_set_mtime_to_ts(inode, inode_get_mtime(upperinode));
fs/overlayfs/file.c-316-		inode_set_ctime_to_ts(inode, uctime);
--
fs/overlayfs/util.c=1517=void ovl_copyattr(struct inode *inode)
--
fs/overlayfs/util.c-1535-	inode_set_atime_to_ts(inode, inode_get_atime(realinode));
fs/overlayfs/util.c:1536:	inode_set_mtime_to_ts(inode, inode_get_mtime(realinode));
fs/overlayfs/util.c-1537-	inode_set_ctime_to_ts(inode, inode_get_ctime(realinode));
--
fs/smb/client/file.c=298=static void cifs_rreq_done(struct netfs_io_request *rreq)
--
fs/smb/client/file.c-304-	atime = inode_set_atime_to_ts(inode, current_time(inode));
fs/smb/client/file.c:305:	mtime = inode_get_mtime(inode);
fs/smb/client/file.c-306-	if (timespec64_compare(&atime, &mtime))
fs/smb/client/file.c:307:		inode_set_atime_to_ts(inode, inode_get_mtime(inode));
fs/smb/client/file.c-308-}
--
fs/smb/client/fscache.h=49=void cifs_fscache_fill_coherency(struct inode *inode,
--
fs/smb/client/fscache.h-52-	struct timespec64 ctime = inode_get_ctime(inode);
fs/smb/client/fscache.h:53:	struct timespec64 mtime = inode_get_mtime(inode);
fs/smb/client/fscache.h-54-
--
fs/smb/client/inode.c=96=cifs_revalidate_cache(struct inode *inode, struct cifs_fattr *fattr)
--
fs/smb/client/inode.c-119-	fattr->cf_mtime = timestamp_truncate(fattr->cf_mtime, inode);
fs/smb/client/inode.c:120:	mtime = inode_get_mtime(inode);
fs/smb/client/inode.c-121-	if (timespec64_equal(&mtime, &fattr->cf_mtime) &&
--
fs/stack.c=63=void fsstack_copy_attr_all(struct inode *dest, const struct inode *src)
--
fs/stack.c-69-	inode_set_atime_to_ts(dest, inode_get_atime(src));
fs/stack.c:70:	inode_set_mtime_to_ts(dest, inode_get_mtime(src));
fs/stack.c-71-	inode_set_ctime_to_ts(dest, inode_get_ctime(src));
--
fs/stat.c=45=void fill_mg_cmtime(struct kstat *stat, u32 request_mask, struct inode *inode)
--
fs/stat.c-54-
fs/stat.c:55:	stat->mtime = inode_get_mtime(inode);
fs/stat.c-56-	stat->ctime.tv_sec = inode->i_ctime_sec;
--
fs/stat.c=82=void generic_fillattr(struct mnt_idmap *idmap, u32 request_mask,
--
fs/stat.c-101-		stat->ctime = inode_get_ctime(inode);
fs/stat.c:102:		stat->mtime = inode_get_mtime(inode);
fs/stat.c-103-	}
--
fs/ubifs/file.c=1353=static inline int mctime_update_needed(const struct inode *inode,
--
fs/ubifs/file.c-1356-	struct timespec64 ctime = inode_get_ctime(inode);
fs/ubifs/file.c:1357:	struct timespec64 mtime = inode_get_mtime(inode);
fs/ubifs/file.c-1358-
--
fs/udf/ialloc.c=30=struct inode *udf_new_inode(struct inode *dir, umode_t mode)
--
fs/udf/ialloc.c-103-	simple_inode_init_ts(inode);
fs/udf/ialloc.c:104:	iinfo->i_crtime = inode_get_mtime(inode);
fs/udf/ialloc.c-105-	if (unlikely(insert_inode_locked(inode) < 0)) {
--
fs/udf/inode.c=1728=static int udf_update_inode(struct inode *inode, int do_sync)
--
fs/udf/inode.c-1834-		udf_time_to_disk_stamp(&fe->accessTime, inode_get_atime(inode));
fs/udf/inode.c:1835:		udf_time_to_disk_stamp(&fe->modificationTime, inode_get_mtime(inode));
fs/udf/inode.c-1836-		udf_time_to_disk_stamp(&fe->attrTime, inode_get_ctime(inode));
--
fs/udf/inode.c-1865-		udf_adjust_time(iinfo, inode_get_atime(inode));
fs/udf/inode.c:1866:		udf_adjust_time(iinfo, inode_get_mtime(inode));
fs/udf/inode.c-1867-		udf_adjust_time(iinfo, inode_get_ctime(inode));
--
fs/udf/inode.c-1871-		udf_time_to_disk_stamp(&efe->modificationTime,
fs/udf/inode.c:1872:				       inode_get_mtime(inode));
fs/udf/inode.c-1873-		udf_time_to_disk_stamp(&efe->createTime, iinfo->i_crtime);
--
fs/vboxsf/utils.c=192=int vboxsf_inode_revalidate(struct dentry *dentry)
--
fs/vboxsf/utils.c-204-	inode = d_inode(dentry);
fs/vboxsf/utils.c:205:	prev_mtime = inode_get_mtime(inode);
fs/vboxsf/utils.c-206-	sf_i = VBOXSF_I(inode);
--
fs/vboxsf/utils.c-227-	 */
fs/vboxsf/utils.c:228:	mtime = inode_get_mtime(inode);
fs/vboxsf/utils.c-229-	if (timespec64_compare(&mtime, &prev_mtime) > 0)
--
fs/xfs/libxfs/xfs_inode_buf.c=317=xfs_inode_to_disk(
--
fs/xfs/libxfs/xfs_inode_buf.c-336-	to->di_atime = xfs_inode_to_disk_ts(ip, inode_get_atime(inode));
fs/xfs/libxfs/xfs_inode_buf.c:337:	to->di_mtime = xfs_inode_to_disk_ts(ip, inode_get_mtime(inode));
fs/xfs/libxfs/xfs_inode_buf.c-338-	to->di_ctime = xfs_inode_to_disk_ts(ip, inode_get_ctime(inode));
--
fs/xfs/scrub/inode_repair.c=1762=xrep_inode_timestamps(
--
fs/xfs/scrub/inode_repair.c-1771-
fs/xfs/scrub/inode_repair.c:1772:	tstamp = inode_get_mtime(inode);
fs/xfs/scrub/inode_repair.c-1773-	xrep_clamp_timestamp(ip, &tstamp);
--
fs/xfs/xfs_bmap_util.c=1507=xfs_swap_extents(
--
fs/xfs/xfs_bmap_util.c-1645-	ctime = inode_get_ctime(VFS_I(ip));
fs/xfs/xfs_bmap_util.c:1646:	mtime = inode_get_mtime(VFS_I(ip));
fs/xfs/xfs_bmap_util.c-1647-	if ((sbp->bs_ctime.tv_sec != ctime.tv_sec) ||
--
fs/xfs/xfs_exchrange.c=83=xfs_exchrange_check_freshness(
--
fs/xfs/xfs_exchrange.c-88-	struct timespec64		ctime = inode_get_ctime(inode2);
fs/xfs/xfs_exchrange.c:89:	struct timespec64		mtime = inode_get_mtime(inode2);
fs/xfs/xfs_exchrange.c-90-
--
fs/xfs/xfs_inode_item.c=551=xfs_inode_to_log_dinode(
--
fs/xfs/xfs_inode_item.c-565-	to->di_atime = xfs_inode_to_log_dinode_ts(ip, inode_get_atime(inode));
fs/xfs/xfs_inode_item.c:566:	to->di_mtime = xfs_inode_to_log_dinode_ts(ip, inode_get_mtime(inode));
fs/xfs/xfs_inode_item.c-567-	to->di_ctime = xfs_inode_to_log_dinode_ts(ip, inode_get_ctime(inode));
--
fs/xfs/xfs_trace.h=5362=TRACE_EVENT(xfs_exchrange_freshness,
--
fs/xfs/xfs_trace.h-5389-
fs/xfs/xfs_trace.h:5390:		ts64 = inode_get_mtime(inode2);
fs/xfs/xfs_trace.h-5391-		__entry->ip2_mtime = ts64.tv_sec;
--
include/linux/fs.h=1642=static inline long inode_get_mtime_nsec(const struct inode *inode)
--
include/linux/fs.h-1646-
include/linux/fs.h:1647:static inline struct timespec64 inode_get_mtime(const struct inode *inode)
include/linux/fs.h-1648-{
--
include/linux/fs_stack.h=22=static inline void fsstack_copy_attr_times(struct inode *dest,
--
include/linux/fs_stack.h-25-	inode_set_atime_to_ts(dest, inode_get_atime(src));
include/linux/fs_stack.h:26:	inode_set_mtime_to_ts(dest, inode_get_mtime(src));
include/linux/fs_stack.h-27-	inode_set_ctime_to_ts(dest, inode_get_ctime(src));
--
mm/shmem.c=3081=static struct inode *__shmem_get_inode(struct mnt_idmap *idmap,
--
mm/shmem.c-3112-	info->flags = (flags & VM_NORESERVE) ? SHMEM_F_NORESERVE : 0;
mm/shmem.c:3113:	info->i_crtime = inode_get_mtime(inode);
mm/shmem.c-3114-	info->fsflags = (dir == NULL) ? 0 :
]

37/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 37740
							output: 38
							thoughts: 157
						

Thoughts:
**Analyzing a Potential Timestamp Race Condition**

Okay, so I'm seeing a lot of calls to `inode_get_mtime`. That's not inherently concerning, but it does get me thinking about potential race conditions. If there was a serious problem here, we'd be seeing a lot more reports about it. That's my first gut reaction: we're probably okay, but let's dig deeper.

The thing that's jumping out at me is the interaction between `inode_update_timestamps` – which *does* hold the `inode_lock` – and `atime_needs_update`, which *doesn't*. That's where the potential lies.

Specifically, in `atime_needs_update`, we use the result to decide whether to call `touch_atime`. And if `touch_atime` *is* called, it will eventually reach `inode_update_timestamps` (via `generic_update_time`). *That* is where the lock should be taken, or at the very least needs to be.

Hold on... I'm missing something. Does `touch_atime` itself take the `inode_lock`? I've got to confirm that. Let me look up the implementation of `touch_atime`; it's the key piece of the puzzle. We need to see how it protects itself before drawing any conclusion about this race.



38/2 2026/02/24 22:07 tool codesearch-definition-source
0m Args:
map[ContextFile:fs/inode.c IncludeLines:true Name:touch_atime]

Results:
map[SourceCode:2221:	void touch_atime(const struct path *path)
2222:	{
2223:		struct vfsmount *mnt = path->mnt;
2224:		struct inode *inode = d_inode(path->dentry);
2225:	
2226:		if (!atime_needs_update(path, inode))
2227:			return;
2228:	
2229:		if (!sb_start_write_trylock(inode->i_sb))
2230:			return;
2231:	
2232:		if (mnt_get_write_access(mnt) != 0)
2233:			goto skip_update;
2234:		/*
2235:		 * File systems can error out when updating inodes if they need to
2236:		 * allocate new space to modify an inode (such is the case for
2237:		 * Btrfs), but since we touch atime while walking down the path we
2238:		 * really don't care if we failed to update the atime of the file,
2239:		 * so just ignore the return value.
2240:		 * We may also fail on filesystems that have the ability to make parts
2241:		 * of the fs read only, e.g. subvolumes in Btrfs.
2242:		 */
2243:		inode_update_time(inode, S_ATIME);
2244:		mnt_put_write_access(mnt);
2245:	skip_update:
2246:		sb_end_write(inode->i_sb);
2247:	}
 SourceFile:fs/inode.c]

39/2 2026/02/24 22:07 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 38702
							output: 41
							thoughts: 1044
						

Thoughts:
**Benign Data Race in `shmem` Timestamp Updates**

Okay, let me think this through. The KCSAN report is pointing out a potential race condition when updating access times (`atime`) in the shared memory file system (`shmem`). Let's trace the execution path. `touch_atime` calls `inode_update_time`, which then calls either `inode->i_op->update_time` or `generic_update_time`. In the case of `shmem`, the `update_time` operation is null, so we end up in `generic_update_time`, which calls `inode_update_timestamps`.

Now, the crucial question is whether any of these functions take an `inode_lock`. It doesn't seem like they do. Neither `inode_update_time`, nor `generic_update_time`, nor even `inode_update_timestamps` appear to hold the lock. The caller is responsible for synchronization if needed.

Looking at the `shmem` code, I see that the lock *is* held in `shmem_file_write_iter`, but *not* held in `shmem_file_read_iter` when calling `file_accessed` -> `touch_atime`. This confirms the potential for concurrent `atime` updates, and, by extension, concurrent `atime` and `mtime` accesses. We already know that `atime` updates are "best effort" anyway. This KCSAN report simply highlights that these accesses are not synchronized.

Given that the race is on the nanosecond field (`i_mtime_nsec`), a 4-byte field holding the nanoseconds part of the timestamp, this is a classic benign data race. The value feeds the `relatime` heuristic; the worst that can happen is a slightly incorrect timestamp being read, leading to a redundant or missed `atime` update. It shouldn't cause any data corruption.

I should verify that the `i_mtime_nsec` value is always valid. The writer sets it from `ts.tv_nsec`. That value is taken from `now`, which comes from `current_time(inode)` or `inode_set_ctime_current(inode)`. The call chain is `ktime_get_real_ts64` then `timestamp_truncate`. This last step ensures that `tv_nsec` stays within the filesystem's defined range. So, that's fine.
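A sketch of just the granularity step of that truncation may help. This omits the clamping to `s_time_min`/`s_time_max` that the real `timestamp_truncate` also does, and assumes the input `tv_nsec` is already in range (which `ktime` guarantees); it only illustrates why the stored `tv_nsec` always lands on a value the filesystem considers valid:

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000L

struct ts64 { int64_t tv_sec; long tv_nsec; };

/* Illustration of the granularity step: round tv_nsec down to the
 * filesystem's timestamp granularity (1 ns, 1 s, or something between). */
static struct ts64 truncate_gran(struct ts64 t, unsigned int gran)
{
	if (gran == NSEC_PER_SEC)
		t.tv_nsec = 0;
	else if (gran > 1)
		t.tv_nsec -= t.tv_nsec % gran;
	return t;
}
```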

What about a torn read of `i_mtime_sec` on a 32-bit system? `i_mtime_sec` is a 64-bit `time64_t`, so a 32-bit kernel could in principle tear it, but this report is from an x86_64 system, where the access is atomic, and the report is specifically about the 32-bit `i_mtime_nsec` anyway.

Let me analyze the provided values: `0x038f5599 -> 0x0427ec19`. Converting to decimal: `59,725,209 -> 69,725,209`. The two values are exactly 10 ms apart, which is plausible for consecutive writes.
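The arithmetic is easy to double-check:

```c
#include <stdint.h>

/* The two nsec values from the KCSAN report, in hex and decimal. */
static const uint32_t old_nsec = 0x038f5599;  /* 59,725,209 ns */
static const uint32_t new_nsec = 0x0427ec19;  /* 69,725,209 ns */
```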

Therefore, this is a benign data race on a timestamp field.

Just to be absolutely sure, I'll double-check whether there is any situation that could make this harmful. What if `inode_update_timestamps` is called with `S_VERSION`? If `IS_I_VERSION(inode)` holds and `inode_maybe_inc_iversion` returns true, then `updated |= S_VERSION`. `inode_maybe_inc_iversion` uses atomic operations under the new versioning scheme.

The race could occur when reading `mtime` in `relatime_need_update`. If `relatime_need_update` returns true, then `touch_atime` will update `atime` via `inode_update_time` -> `inode_update_timestamps`. If two threads call `touch_atime` concurrently, both might find `atime_needs_update` true and try updating `atime` simultaneously. That is also okay: both will simply write the same (or nearly the same) timestamp to `atime`. No problem there.

Wait, I just saw `inode_set_ctime_current` in `inode_update_timestamps`.
```c
2112:			now = inode_set_ctime_current(inode);
```
Let me check that quickly.

Conclusion: this is a benign data race. No real issue.



40/2 2026/02/24 22:08 tool codesearch-definition-source
0m Args:
map[ContextFile:fs/inode.c IncludeLines:true Name:inode_set_ctime_current]

Results:
map[SourceCode:2784:	/**
2785:	 * inode_set_ctime_current - set the ctime to current_time
2786:	 * @inode: inode
2787:	 *
2788:	 * Set the inode's ctime to the current value for the inode. Returns the
2789:	 * current value that was assigned. If this is not a multigrain inode, then we
2790:	 * set it to the later of the coarse time and floor value.
2791:	 *
2792:	 * If it is multigrain, then we first see if the coarse-grained timestamp is
2793:	 * distinct from what is already there. If so, then use that. Otherwise, get a
2794:	 * fine-grained timestamp.
2795:	 *
2796:	 * After that, try to swap the new value into i_ctime_nsec. Accept the
2797:	 * resulting ctime, regardless of the outcome of the swap. If it has
2798:	 * already been replaced, then that timestamp is later than the earlier
2799:	 * unacceptable one, and is thus acceptable.
2800:	 */
2801:	struct timespec64 inode_set_ctime_current(struct inode *inode)
2802:	{
2803:		struct timespec64 now;
2804:		u32 cns, cur;
2805:	
2806:		ktime_get_coarse_real_ts64_mg(&now);
2807:		now = timestamp_truncate(now, inode);
2808:	
2809:		/* Just return that if this is not a multigrain fs */
2810:		if (!is_mgtime(inode)) {
2811:			inode_set_ctime_to_ts(inode, now);
2812:			goto out;
2813:		}
2814:	
2815:		/*
2816:		 * A fine-grained time is only needed if someone has queried
2817:		 * for timestamps, and the current coarse grained time isn't
2818:		 * later than what's already there.
2819:		 */
2820:		cns = smp_load_acquire(&inode->i_ctime_nsec);
2821:		if (cns & I_CTIME_QUERIED) {
2822:			struct timespec64 ctime = { .tv_sec = inode->i_ctime_sec,
2823:						    .tv_nsec = cns & ~I_CTIME_QUERIED };
2824:	
2825:			if (timespec64_compare(&now, &ctime) <= 0) {
2826:				ktime_get_real_ts64_mg(&now);
2827:				now = timestamp_truncate(now, inode);
2828:				mgtime_counter_inc(mg_fine_stamps);
2829:			}
2830:		}
2831:		mgtime_counter_inc(mg_ctime_updates);
2832:	
2833:		/* No need to cmpxchg if it's exactly the same */
2834:		if (cns == now.tv_nsec && inode->i_ctime_sec == now.tv_sec) {
2835:			trace_ctime_xchg_skip(inode, &now);
2836:			goto out;
2837:		}
2838:		cur = cns;
2839:	retry:
2840:		/* Try to swap the nsec value into place. */
2841:		if (try_cmpxchg(&inode->i_ctime_nsec, &cur, now.tv_nsec)) {
2842:			/* If swap occurred, then we're (mostly) done */
2843:			inode->i_ctime_sec = now.tv_sec;
2844:			trace_ctime_ns_xchg(inode, cns, now.tv_nsec, cur);
2845:			mgtime_counter_inc(mg_ctime_swaps);
2846:		} else {
2847:			/*
2848:			 * Was the change due to someone marking the old ctime QUERIED?
2849:			 * If so then retry the swap. This can only happen once since
2850:			 * the only way to clear I_CTIME_QUERIED is to stamp the inode
2851:			 * with a new ctime.
2852:			 */
2853:			if (!(cns & I_CTIME_QUERIED) && (cns | I_CTIME_QUERIED) == cur) {
2854:				cns = cur;
2855:				goto retry;
2856:			}
2857:			/* Otherwise, keep the existing ctime */
2858:			now.tv_sec = inode->i_ctime_sec;
2859:			now.tv_nsec = cur & ~I_CTIME_QUERIED;
2860:		}
2861:	out:
2862:		return now;
2863:	}
 SourceFile:fs/inode.c]

41/2 2026/02/24 22:08 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
							input: 42155
							output: 45
							thoughts: 695
						

Thoughts:
**Analyzing the Potential Data Race in Timestamp Updates**

Okay, so I'm looking at this code and trying to figure out why KCSAN is reporting a data race on `mtime`. Specifically, I'm focusing on the `inode_update_timestamps` function and how it updates the `mtime` field.

My initial thought is that the code snippet shows a clear distinction between how `ctime` and `mtime` are handled, especially when dealing with multigrain timestamps. If the inode isn't multigrain, updating `ctime` is a simple assignment. But, if it *is* multigrain, `ctime` updates involve `try_cmpxchg` on the nanosecond part (`i_ctime_nsec`) to ensure atomicity and consistency – something I understand is important for things like change attributes. The `mtime` update, however, is always just a regular, non-atomic assignment.

That seems straightforward. The multigrain timestamp work is clearly focused on making the `ctime` update atomic to reflect changes, so the commit message likely explains that only `ctime` needs the atomicity. The race, though, is on the `mtime` field, and the way it's updated doesn't seem to be employing any of the multigrain mechanisms.

So, the next question is: is this actually a problem? The race is on `mtime`, which is updated with a plain, non-atomic store, and that pattern is everywhere in the kernel: every Linux filesystem performs these unsynchronized timestamp updates. It is probably being reported here only because the `shmem` path under test lacks `data_race()` annotations on these accesses, and equivalent races in other filesystems simply haven't been flagged yet.

I need to confirm that `i_mtime_nsec` is *not* used in a way that *requires* perfect synchronization. A quick check reveals it's just used for comparisons (in `relatime_need_update`) and returned to userspace (in `stat`). If `mtime` *were* used for something like an NFS change attribute, that might be a problem, but NFS relies on `i_version` or `ctime` for its change attribute. Some filesystems *do* use `mtime` as part of their versioning, though. But I don't see anything special with `shmem` in this regard other than simply storing it.

Okay, let's look at `inode_maybe_inc_iversion`. I need to ensure there isn't some side-effect, somehow, where `mtime` being slightly out of sync breaks something in some obscure path.



42/2 2026/02/24 22:08 tool codesearch-definition-source
0m Args:
map[ContextFile:include/linux/iversion.h IncludeLines:true Name:inode_maybe_inc_iversion]

Results:
map[SourceCode:2009:	/**
2010:	 * inode_maybe_inc_iversion - increments i_version
2011:	 * @inode: inode with the i_version that should be updated
2012:	 * @force: increment the counter even if it's not necessary?
2013:	 *
2014:	 * Every time the inode is modified, the i_version field must be seen to have
2015:	 * changed by any observer.
2016:	 *
2017:	 * If "force" is set or the QUERIED flag is set, then ensure that we increment
2018:	 * the value, and clear the queried flag.
2019:	 *
2020:	 * In the common case where neither is set, then we can return "false" without
2021:	 * updating i_version.
2022:	 *
2023:	 * If this function returns false, and no other metadata has changed, then we
2024:	 * can avoid logging the metadata.
2025:	 */
2026:	bool inode_maybe_inc_iversion(struct inode *inode, bool force)
2027:	{
2028:		u64 cur, new;
2029:	
2030:		/*
2031:		 * The i_version field is not strictly ordered with any other inode
2032:		 * information, but the legacy inode_inc_iversion code used a spinlock
2033:		 * to serialize increments.
2034:		 *
2035:		 * We add a full memory barrier to ensure that any de facto ordering
2036:		 * with other state is preserved (either implicitly coming from cmpxchg
2037:		 * or explicitly from smp_mb if we don't know upfront if we will execute
2038:		 * the former).
2039:		 *
2040:		 * These barriers pair with inode_query_iversion().
2041:		 */
2042:		cur = inode_peek_iversion_raw(inode);
2043:		if (!force && !(cur & I_VERSION_QUERIED)) {
2044:			smp_mb();
2045:			cur = inode_peek_iversion_raw(inode);
2046:		}
2047:	
2048:		do {
2049:			/* If flag is clear then we needn't do anything */
2050:			if (!force && !(cur & I_VERSION_QUERIED))
2051:				return false;
2052:	
2053:			/* Since lowest bit is flag, add 2 to avoid it */
2054:			new = (cur & ~I_VERSION_QUERIED) + I_VERSION_INCREMENT;
2055:		} while (!atomic64_try_cmpxchg(&inode->i_version, &cur, new));
2056:		return true;
2057:	}
 SourceFile:fs/libfs.c]

43/2 2026/02/24 22:08 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
	input: 44277
	output: 24
	thoughts: 686

Thoughts:
**Analyzing a Metadata Race in the Filesystem**

Okay, so I'm looking at this KCSAN report. The crux of it seems to be centered around `inode_maybe_inc_iversion` in `fs/libfs.c`. My initial thought is to trace the flow. It uses `atomic64_try_cmpxchg` on `i_version`, which always piques my interest when I see it in kernel code.

The report mentions a race condition on `mtime`, and that is passed into the function as part of the `updated` flag. Looking at `inode_update_timestamps`, I see the snippet:

```c
2115:			if (!timespec64_equal(&now, &mtime)) {
2116:				inode_set_mtime_to_ts(inode, now);
2117:				updated |= S_MTIME;
2118:			}
2119:			if (IS_I_VERSION(inode) && inode_maybe_inc_iversion(inode, updated))
2120:				updated |= S_VERSION;
```

So, if `mtime` is updated, `S_MTIME` gets set in `updated`.  Then, `inode_maybe_inc_iversion` is called. Because `updated` has `S_MTIME` set (non-zero), the `force` variable within that function will be true. This means `i_version` *will* be incremented.

However, the key observation here is that the code that reads the `mtime` value, such as in `relatime_need_update`, doesn't care about `i_version`. I'm starting to think this is probably a benign metadata race. No corruption or crashes, from the looks of it.

I also considered potential 32-bit vs. 64-bit issues with `i_mtime_nsec`, but we're on a 64-bit system, where a naturally aligned 32-bit access cannot tear. And even if it could, it is just a timestamp, which leads me to believe this is not a major issue.

Alright, I need to check for other potentially racing fields. The report only calls out `i_mtime_nsec`. I should consider the "value changed" part of the report: `value changed: 0x038f5599 -> 0x0427ec19`. These are just different nanosecond values; the race isn't about corrupting data.

Now, let's revisit `atime_needs_update`. The code:

```c
2211:		if (relatime_need_update(mnt, inode, now))
```

`relatime_need_update` reads `mtime`. A "wrong" value of `mtime` would only make it return `true` instead of `false` (or vice versa). If it reads a slightly stale `mtime`, this would simply cause an unnecessary atime update. It's safe. If it returns false instead of true, it means that `atime` doesn't get updated, which is also safe; it's just a slightly stale atime. No other consequences come to mind.

Now I need to do a broader search. I will look for other KCSAN reports in the kernel relating to timestamping. I'll search for `data_race` in `fs/inode.c`, just to get a broader perspective.



44/2 2026/02/24 22:08 tool grepper
0m Error:
no matches

Args:
map[Expression:data_race.*i_mtime_nsec]

Results:
map[Output:]

45/2 2026/02/24 22:08 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
	input: 45772
	output: 17
	thoughts: 17

Thoughts:
Okay, here's my attempt at summarizing that thought process, keeping in mind I'm speaking to a fellow expert:

**Investigating a Potential Race Condition in inode.c**

Alright, I'm taking a look at this `fs/inode.c` code and immediately a thought pops into my head: what if we're dealing with a `data_race()` annotation situation here? It's a prime suspect, given the nature of inode operations and the potential for concurrent access and modification. I'm going to zero in on this possibility first and see if my hunch holds any water before going down any other rabbit holes. `data_race()` feels like the simplest, most direct starting point, and I always like to start simple and build from there. This feels like a good lead. Let's see what we find!



46/2 2026/02/24 22:08 tool grepper
0m Args:
map[Expression:data_race]

Results:
map[Output:
Full output is too long, showing 500 out of 1143 lines.
Use more precise expression if possible.

[Documentation/dev-tools/checkpatch.rst=456=Comments
 --
 Documentation/dev-tools/checkpatch.rst-476-  **DATA_RACE**
 Documentation/dev-tools/checkpatch.rst:477:    Applications of data_race() should have a comment so as to document the
 Documentation/dev-tools/checkpatch.rst-478-    reasoning behind why it was deemed safe.
 --
 Documentation/dev-tools/kcsan.rst=87=the below options are available:
 Documentation/dev-tools/kcsan.rst-88-
 Documentation/dev-tools/kcsan.rst:89:* KCSAN understands the ``data_race(expr)`` annotation, which tells KCSAN that
 Documentation/dev-tools/kcsan.rst-90-  any data races due to accesses in ``expr`` should be ignored and resulting
 --
 Documentation/dev-tools/kcsan.rst-93-
 Documentation/dev-tools/kcsan.rst:94:* Similar to ``data_race(...)``, the type qualifier ``__data_racy`` can be used
 Documentation/dev-tools/kcsan.rst-95-  to document that all data races due to accesses to a variable are intended
 --
 Documentation/translations/zh_CN/dev-tools/kcsan.rst=24=KCSAN 提供了几个其他的配置选项来自定义行为(见 ``lib/Kconfig.kcsan`` 中的各自的
 --
 Documentation/translations/zh_CN/dev-tools/kcsan.rst-87-
 Documentation/translations/zh_CN/dev-tools/kcsan.rst:88:* KCSAN 支持使用 ``data_race(expr)`` 注解,这个注解告诉 KCSAN 任何由访问
 Documentation/translations/zh_CN/dev-tools/kcsan.rst-89-  ``expr`` 所引起的数据竞争都应该被忽略,其产生的行为后果被认为是安全的。请查阅
 --
 Documentation/translations/zh_CN/dev-tools/kcsan.rst-91-
 Documentation/translations/zh_CN/dev-tools/kcsan.rst:92:* 与 ``data_race(...)`` 相似,可以使用类型限定符 ``__data_racy`` 来标记一个变量
 Documentation/translations/zh_CN/dev-tools/kcsan.rst-93-  ,所有访问该变量而导致的数据竞争都是故意为之并且应该被 KCSAN 忽略::
 --
 arch/powerpc/kernel/interrupt.c=103=static notrace void check_return_regs_valid(struct pt_regs *regs)
 --
 arch/powerpc/kernel/interrupt.c-171-
 arch/powerpc/kernel/interrupt.c:172:	if (!data_race(warned)) {
 arch/powerpc/kernel/interrupt.c:173:		data_race(warned = true);
 arch/powerpc/kernel/interrupt.c-174-		printk("%sSRR0 was: %lx should be: %lx\n", h, srr0, regs->nip);
 --
 arch/x86/mm/pat/set_memory.c=161=static inline void cpa_inc_4k_install(void)
 arch/x86/mm/pat/set_memory.c-162-{
 arch/x86/mm/pat/set_memory.c:163:	data_race(cpa_4k_install++);
 arch/x86/mm/pat/set_memory.c-164-}
 --
 drivers/block/loop.c=1852=static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,
 --
 drivers/block/loop.c-1860-
 drivers/block/loop.c:1861:	if (data_race(READ_ONCE(lo->lo_state)) != Lo_bound)
 drivers/block/loop.c-1862-		return BLK_STS_IOERR;
 --
 drivers/block/loop.c=2191=static int loop_control_get_free(int idx)
 --
 drivers/block/loop.c-2204-		if (lo->idr_visible &&
 drivers/block/loop.c:2205:		    data_race(READ_ONCE(lo->lo_state)) == Lo_unbound)
 drivers/block/loop.c-2206-			goto found;
 --
 drivers/block/zloop.c=693=static blk_status_t zloop_queue_rq(struct blk_mq_hw_ctx *hctx,
 --
 drivers/block/zloop.c-699-
 drivers/block/zloop.c:700:	if (data_race(READ_ONCE(zlo->state)) == Zlo_deleting)
 drivers/block/zloop.c-701-		return BLK_STS_IOERR;
 --
 drivers/net/wireless/ath/ath9k/htc_drv_txrx.c=1135=void ath9k_htc_rxep(void *drv_priv, struct sk_buff *skb,
 --
 drivers/net/wireless/ath/ath9k/htc_drv_txrx.c-1144-	/* Check if ath9k_rx_init() completed. */
 drivers/net/wireless/ath/ath9k/htc_drv_txrx.c:1145:	if (!data_race(priv->rx.initialized))
 drivers/net/wireless/ath/ath9k/htc_drv_txrx.c-1146-		goto err;
 --
 drivers/net/wireless/ath/ath9k/wmi.c=138=void ath9k_wmi_event_tasklet(struct tasklet_struct *t)
 --
 drivers/net/wireless/ath/ath9k/wmi.c-158-		/* Check if ath9k_htc_probe_device() completed. */
 drivers/net/wireless/ath/ath9k/wmi.c:159:		if (!data_race(priv->initialized)) {
 drivers/net/wireless/ath/ath9k/wmi.c-160-			kfree_skb(skb);
 --
 drivers/virtio/virtio_ring.c=2651=bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
 --
 drivers/virtio/virtio_ring.c-2655-	if (vq->event_triggered)
 drivers/virtio/virtio_ring.c:2656:		data_race(vq->event_triggered = false);
 drivers/virtio/virtio_ring.c-2657-
 --
 drivers/virtio/virtio_ring.c=2693=irqreturn_t vring_interrupt(int irq, void *_vq)
 --
 drivers/virtio/virtio_ring.c-2713-	if (vq->event)
 drivers/virtio/virtio_ring.c:2714:		data_race(vq->event_triggered = true);
 drivers/virtio/virtio_ring.c-2715-
 --
 fs/btrfs/block-group.c=1788=static int reclaim_bgs_cmp(void *unused, const struct list_head *a,
 --
 fs/btrfs/block-group.c-1802-	 */
 fs/btrfs/block-group.c:1803:	return data_race(bg1->used > bg2->used);
 fs/btrfs/block-group.c-1804-}
 --
 fs/btrfs/block-rsv.h=107=static inline bool btrfs_block_rsv_full(const struct btrfs_block_rsv *rsv)
 fs/btrfs/block-rsv.h-108-{
 fs/btrfs/block-rsv.h:109:	return data_race(rsv->full);
 fs/btrfs/block-rsv.h-110-}
 --
 fs/btrfs/extent-tree.c=1999=static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 --
 fs/btrfs/extent-tree.c-2016-		 */
 fs/btrfs/extent-tree.c:2017:		max_count = data_race(delayed_refs->num_heads_ready);
 fs/btrfs/extent-tree.c-2018-		min_bytes = U64_MAX;
 --
 fs/btrfs/file.c=137=int btrfs_drop_extents(struct btrfs_trans_handle *trans,
 --
 fs/btrfs/file.c-179-
 fs/btrfs/file.c:180:	if (data_race(args->start >= inode->disk_i_size) && !args->replace_extent)
 fs/btrfs/file.c-181-		modify_tree = 0;
 --
 fs/btrfs/space-info.c=1974=u64 btrfs_account_ro_block_groups_free_space(struct btrfs_space_info *sinfo)
 --
 fs/btrfs/space-info.c-1980-	/* It's df, we don't care if it's racy */
 fs/btrfs/space-info.c:1981:	if (data_race(list_empty(&sinfo->ro_bgs)))
 fs/btrfs/space-info.c-1982-		return 0;
 --
 fs/btrfs/tree-log.c=3732=static int inode_logged(const struct btrfs_trans_handle *trans,
 --
 fs/btrfs/tree-log.c-3743-	 */
 fs/btrfs/tree-log.c:3744:	if (data_race(inode->logged_trans) == trans->transid)
 fs/btrfs/tree-log.c-3745-		return 1;
 --
 fs/inode.c=111=static unsigned long get_mg_ctime_updates(void)
 --
 fs/inode.c-116-	for_each_possible_cpu(i)
 fs/inode.c:117:		sum += data_race(per_cpu(mg_ctime_updates, i));
 fs/inode.c-118-	return sum;
 --
 fs/inode.c=121=static unsigned long get_mg_fine_stamps(void)
 --
 fs/inode.c-126-	for_each_possible_cpu(i)
 fs/inode.c:127:		sum += data_race(per_cpu(mg_fine_stamps, i));
 fs/inode.c-128-	return sum;
 --
 fs/inode.c=131=static unsigned long get_mg_ctime_swaps(void)
 --
 fs/inode.c-136-	for_each_possible_cpu(i)
 fs/inode.c:137:		sum += data_race(per_cpu(mg_ctime_swaps, i));
 fs/inode.c-138-	return sum;
 --
 fs/jbd2/transaction.c=312=static int start_this_handle(journal_t *journal, handle_t *handle,
 --
 fs/jbd2/transaction.c-343-	 */
 fs/jbd2/transaction.c:344:	if (!data_race(journal->j_running_transaction)) {
 fs/jbd2/transaction.c-345-		/*
 --
 fs/jbd2/transaction.c=1491=int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh)
 --
 fs/jbd2/transaction.c-1514-	 */
 fs/jbd2/transaction.c:1515:	if (data_race(jh->b_transaction != transaction &&
 fs/jbd2/transaction.c-1516-	    jh->b_next_transaction != transaction)) {
 --
 fs/jbd2/transaction.c-1521-	}
 fs/jbd2/transaction.c:1522:	if (data_race(jh->b_modified == 1)) {
 fs/jbd2/transaction.c-1523-		/* If it's in our transaction it must be in BJ_Metadata list. */
 fs/jbd2/transaction.c:1524:		if (data_race(jh->b_transaction == transaction &&
 fs/jbd2/transaction.c-1525-		    jh->b_jlist != BJ_Metadata)) {
 --
 fs/xfs/xfs_trace.h=1324=DECLARE_EVENT_CLASS(xfs_dquot_class,
 --
 fs/xfs/xfs_trace.h-1352-		__entry->flags = dqp->q_flags;
 fs/xfs/xfs_trace.h:1353:		__entry->nrefs = data_race(dqp->q_lockref.count);
 fs/xfs/xfs_trace.h-1354-
 --
 include/linux/bpf.h=518=static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
 --
 include/linux/bpf.h-524-	while (size--)
 include/linux/bpf.h:525:		data_race(*ldst++ = *lsrc++);
 include/linux/bpf.h-526-}
 --
 include/linux/compiler.h=15=void ftrace_likely_update(struct ftrace_likely_data *f, int val,
 --
 include/linux/compiler.h-172-/**
 include/linux/compiler.h:173: * data_race - mark an expression as containing intentional data races
 include/linux/compiler.h-174- *
 include/linux/compiler.h:175: * This data_race() macro is useful for situations in which data races
 include/linux/compiler.h-176- * should be forgiven.  One example is diagnostic code that accesses
 --
 include/linux/compiler.h-180- * be plain C-language accesses and those in the diagnostic code should
 include/linux/compiler.h:181: * use data_race().  This way, KCSAN will complain if buggy lockless
 include/linux/compiler.h-182- * accesses to that variable are introduced, even if the buggy accesses
 --
 include/linux/compiler.h-186- * to tooling that data races here are to be ignored.  If the access must
 include/linux/compiler.h:187: * be atomic *and* KCSAN should ignore the access, use both data_race()
 include/linux/compiler.h:188: * and READ_ONCE(), for example, data_race(READ_ONCE(x)).
 include/linux/compiler.h-189- */
 include/linux/compiler.h:190:#define data_race(expr)							\
 include/linux/compiler.h-191-({									\
 --
 include/linux/console.h=522=static inline short console_srcu_read_flags(const struct console *con)
 --
 include/linux/console.h-529-	 */
 include/linux/console.h:530:	return data_race(READ_ONCE(con->flags));
 include/linux/console.h-531-}
 --
 include/linux/mm.h=2918=static inline void update_hiwater_rss(struct mm_struct *mm)
 --
 include/linux/mm.h-2921-
 include/linux/mm.h:2922:	if (data_race(mm->hiwater_rss) < _rss)
 include/linux/mm.h:2923:		data_race(mm->hiwater_rss = _rss);
 include/linux/mm.h-2924-}
 --
 include/linux/page-flags.h=952=static inline bool page_has_type(const struct page *page)
 include/linux/page-flags.h-953-{
 include/linux/page-flags.h:954:	return page_type_has_type(data_race(page->page_type));
 include/linux/page-flags.h-955-}
 --
 include/linux/page-flags.h=958=static __always_inline bool folio_test_##fname(const struct folio *folio) \
 include/linux/page-flags.h-959-{									\
 include/linux/page-flags.h:960:	return data_race(folio->page.page_type >> 24) == PGTY_##lname;	\
 include/linux/page-flags.h-961-}									\
 include/linux/page-flags.h=962=static __always_inline void __folio_set_##fname(struct folio *folio)	\
 --
 include/linux/page-flags.h-965-		return;							\
 include/linux/page-flags.h:966:	VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX,	\
 include/linux/page-flags.h-967-			folio);						\
 --
 include/linux/page-flags.h=980=static __always_inline int Page##uname(const struct page *page)		\
 include/linux/page-flags.h-981-{									\
 include/linux/page-flags.h:982:	return data_race(page->page_type >> 24) == PGTY_##lname;	\
 include/linux/page-flags.h-983-}									\
 include/linux/page-flags.h=984=static __always_inline void __SetPage##uname(struct page *page)		\
 --
 include/linux/page-flags.h-987-		return;							\
 include/linux/page-flags.h:988:	VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page);	\
 include/linux/page-flags.h-989-	page->page_type = (unsigned int)PGTY_##lname << 24;		\
 --
 include/linux/srcutiny.h=133=static inline void srcu_torture_stats_print(struct srcu_struct *ssp,
 --
 include/linux/srcutiny.h-137-
 include/linux/srcutiny.h:138:	idx = ((data_race(READ_ONCE(ssp->srcu_idx)) + 1) & 0x2) >> 1;
 include/linux/srcutiny.h-139-	pr_alert("%s%s Tiny SRCU per-CPU(idx=%d): (%hd,%hd) gp: %lu->%lu\n",
 include/linux/srcutiny.h-140-		 tt, tf, idx,
 include/linux/srcutiny.h:141:		 data_race(READ_ONCE(ssp->srcu_lock_nesting[!idx])),
 include/linux/srcutiny.h:142:		 data_race(READ_ONCE(ssp->srcu_lock_nesting[idx])),
 include/linux/srcutiny.h:143:		 data_race(READ_ONCE(ssp->srcu_idx)),
 include/linux/srcutiny.h:144:		 data_race(READ_ONCE(ssp->srcu_idx_max)));
 include/linux/srcutiny.h-145-}
 --
 include/net/sock.h=2508=static inline int sock_error(struct sock *sk)
 --
 include/net/sock.h-2514-	 */
 include/net/sock.h:2515:	if (likely(data_race(!sk->sk_err)))
 include/net/sock.h-2516-		return 0;
 --
 io_uring/fdinfo.c=60=static void __io_uring_show_fdinfo(struct io_ring_ctx *ctx, struct seq_file *m)
 --
 io_uring/fdinfo.c-86-	seq_printf(m, "SqTail:\t%u\n", sq_tail);
 io_uring/fdinfo.c:87:	seq_printf(m, "CachedSqHead:\t%u\n", data_race(ctx->cached_sq_head));
 io_uring/fdinfo.c-88-	seq_printf(m, "CqMask:\t0x%x\n", cq_mask);
 --
 io_uring/fdinfo.c-90-	seq_printf(m, "CqTail:\t%u\n", cq_tail);
 io_uring/fdinfo.c:91:	seq_printf(m, "CachedCqTail:\t%u\n", data_race(ctx->cached_cq_tail));
 io_uring/fdinfo.c-92-	seq_printf(m, "SQEs:\t%u\n", sq_tail - sq_head);
 --
 io_uring/refs.h=20=static inline bool req_ref_put_and_test_atomic(struct io_kiocb *req)
 io_uring/refs.h-21-{
 io_uring/refs.h:22:	WARN_ON_ONCE(!(data_race(req->flags) & REQ_F_REFCOUNT));
 io_uring/refs.h-23-	WARN_ON_ONCE(req_ref_zero_or_close_to_overflow(req));
 --
 io_uring/timeout.c=595=int io_timeout(struct io_kiocb *req, unsigned int issue_flags)
 --
 io_uring/timeout.c-614-
 io_uring/timeout.c:615:	tail = data_race(ctx->cached_cq_tail) - atomic_read(&ctx->cq_timeouts);
 io_uring/timeout.c-616-	timeout->target_seq = tail + off;
 --
 kernel/fork.c=1966=__latent_entropy struct task_struct *copy_process(
 --
 kernel/fork.c-2103-	retval = -EAGAIN;
 kernel/fork.c:2104:	if (data_race(nr_threads >= max_threads))
 kernel/fork.c-2105-		goto bad_fork_cleanup_count;
 --
 kernel/irq/irqdesc.c=919=static unsigned int kstat_irqs_desc(struct irq_desc *desc, const struct cpumask *cpumask)
 --
 kernel/irq/irqdesc.c-926-	    !irq_is_nmi(desc))
 kernel/irq/irqdesc.c:927:		return data_race(desc->tot_count);
 kernel/irq/irqdesc.c-928-
 kernel/irq/irqdesc.c-929-	for_each_cpu(cpu, cpumask)
 kernel/irq/irqdesc.c:930:		sum += data_race(per_cpu(desc->kstat_irqs->cnt, cpu));
 kernel/irq/irqdesc.c-931-	return sum;
 --
 kernel/kcsan/core.c=527=kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned long ip)
 --
 kernel/kcsan/core.c-643-	 */
 kernel/kcsan/core.c:644:	if (diff && !kcsan_ignore_data_race(size, type, old, new, diff))
 kernel/kcsan/core.c-645-		value_change = KCSAN_VALUE_CHANGE_TRUE;
 --
 kernel/kcsan/core.c-667-		/*
 kernel/kcsan/core.c:668:		 * No need to increment 'data_races' counter, as the racing
 kernel/kcsan/core.c-669-		 * thread already did.
 --
 kernel/kcsan/debugfs.c=26=static const char *const counter_names[] = {
 --
 kernel/kcsan/debugfs.c-28-	[KCSAN_COUNTER_SETUP_WATCHPOINTS]		= "setup_watchpoints",
 kernel/kcsan/debugfs.c:29:	[KCSAN_COUNTER_DATA_RACES]			= "data_races",
 kernel/kcsan/debugfs.c-30-	[KCSAN_COUNTER_ASSERT_FAILURES]			= "assert_failures",
 --
 kernel/kcsan/debugfs.c=138=static ssize_t insert_report_filterlist(const char *func)
 --
 kernel/kcsan/debugfs.c-156-	 */
 kernel/kcsan/debugfs.c:157:	if (data_race(report_filterlist.used == report_filterlist.size)) {
 kernel/kcsan/debugfs.c-158-		new_size = (report_filterlist.size ?: 4) * 2;
 --
 kernel/kcsan/kcsan_test.c=358=static noinline void test_kernel_write_uninstrumented(void) { test_var++; }
 kernel/kcsan/kcsan_test.c-359-
 kernel/kcsan/kcsan_test.c:360:static noinline void test_kernel_data_race(void) { data_race(test_var++); }
 kernel/kcsan/kcsan_test.c-361-
 --
 kernel/kcsan/kcsan_test.c=976=static void test_zero_size_access(struct kunit *test)
 --
 kernel/kcsan/kcsan_test.c-1001-
 kernel/kcsan/kcsan_test.c:1002:/* Test the data_race() macro. */
 kernel/kcsan/kcsan_test.c-1003-__no_kcsan
 kernel/kcsan/kcsan_test.c:1004:static void test_data_race(struct kunit *test)
 kernel/kcsan/kcsan_test.c-1005-{
 --
 kernel/kcsan/kcsan_test.c-1007-
 kernel/kcsan/kcsan_test.c:1008:	begin_test_checks(test_kernel_data_race, test_kernel_data_race);
 kernel/kcsan/kcsan_test.c-1009-	do {
 --
 kernel/kcsan/kcsan_test.c=1428=static struct kunit_case kcsan_test_cases[] = {
 --
 kernel/kcsan/kcsan_test.c-1441-	KCSAN_KUNIT_CASE(test_zero_size_access),
 kernel/kcsan/kcsan_test.c:1442:	KCSAN_KUNIT_CASE(test_data_race),
 kernel/kcsan/kcsan_test.c-1443-	KCSAN_KUNIT_CASE(test_data_racy_qualifier),
 --
 kernel/kcsan/permissive.h=37=static bool
 kernel/kcsan/permissive.h:38:kcsan_ignore_data_race(size_t size, int type, u64 old, u64 new, u64 diff)
 kernel/kcsan/permissive.h-39-{
 --
 kernel/locking/locktorture.c=1001=static void __torture_print_stats(char *page,
 --
 kernel/locking/locktorture.c-1006-	int i, n_stress;
 kernel/locking/locktorture.c:1007:	long max = 0, min = statp ? data_race(statp[0].n_lock_acquired) : 0;
 kernel/locking/locktorture.c-1008-	long long sum = 0;
 --
 kernel/locking/locktorture.c-1011-	for (i = 0; i < n_stress; i++) {
 kernel/locking/locktorture.c:1012:		if (data_race(statp[i].n_lock_fail))
 kernel/locking/locktorture.c-1013-			fail = true;
 kernel/locking/locktorture.c:1014:		cur = data_race(statp[i].n_lock_acquired);
 kernel/locking/locktorture.c-1015-		sum += cur;
 --
 kernel/locking/osq_lock.c=93=bool osq_lock(struct optimistic_spin_queue *lock)
 --
 kernel/locking/osq_lock.c-163-		 */
 kernel/locking/osq_lock.c:164:		if (data_race(prev->next) == node &&
 kernel/locking/osq_lock.c-165-		    cmpxchg(&prev->next, node, NULL) == node)
 --
 kernel/printk/nbcon.c=966=static bool nbcon_emit_next_record(struct nbcon_write_context *wctxt, bool use_atomic)
 --
 kernel/printk/nbcon.c-1011-	 */
 kernel/printk/nbcon.c:1012:	con_dropped = data_race(READ_ONCE(con->dropped));
 kernel/printk/nbcon.c-1013-
 --
 kernel/rcu/rcutorture.c=2717=rcu_torture_stats_print(void)
 --
 kernel/rcu/rcutorture.c-2776-	pr_cont("barrier: %ld/%ld:%ld ",
 kernel/rcu/rcutorture.c:2777:		data_race(n_barrier_successes),
 kernel/rcu/rcutorture.c:2778:		data_race(n_barrier_attempts),
 kernel/rcu/rcutorture.c:2779:		data_race(n_rcu_torture_barrier_error));
 kernel/rcu/rcutorture.c:2780:	pr_cont("read-exits: %ld ", data_race(n_read_exits)); // Statistic.
 kernel/rcu/rcutorture.c-2781-	pr_cont("nocb-toggles: %ld:%ld ",
 --
 kernel/rcu/srcutree.c=2037=void srcu_torture_stats_print(struct srcu_struct *ssp, char *tt, char *tf)
 --
 kernel/rcu/srcutree.c-2062-			sdp = per_cpu_ptr(ssp->sda, cpu);
 kernel/rcu/srcutree.c:2063:			u0 = data_race(atomic_long_read(&sdp->srcu_ctrs[!idx].srcu_unlocks));
 kernel/rcu/srcutree.c:2064:			u1 = data_race(atomic_long_read(&sdp->srcu_ctrs[idx].srcu_unlocks));
 kernel/rcu/srcutree.c-2065-
 --
 kernel/rcu/srcutree.c-2071-
 kernel/rcu/srcutree.c:2072:			l0 = data_race(atomic_long_read(&sdp->srcu_ctrs[!idx].srcu_locks));
 kernel/rcu/srcutree.c:2073:			l1 = data_race(atomic_long_read(&sdp->srcu_ctrs[idx].srcu_locks));
 kernel/rcu/srcutree.c-2074-
 --
 kernel/rcu/tasks.h=238=static const char *tasks_gp_state_getname(struct rcu_tasks *rtp)
 kernel/rcu/tasks.h-239-{
 kernel/rcu/tasks.h:240:	int i = data_race(rtp->gp_state); // Let KCSAN detect update races
 kernel/rcu/tasks.h-241-	int j = READ_ONCE(i); // Prevent the compiler from reading twice
 --
 kernel/rcu/tasks.h=251=static void cblist_init_generic(struct rcu_tasks *rtp)
 --
 kernel/rcu/tasks.h-302-	pr_info("%s: Setting shift to %d and lim to %d rcu_task_cb_adjust=%d rcu_task_cpu_ids=%d.\n",
 kernel/rcu/tasks.h:303:			rtp->name, data_race(rtp->percpu_enqueue_shift), data_race(rtp->percpu_enqueue_lim),
 kernel/rcu/tasks.h-304-			rcu_task_cb_adjust, rcu_task_cpu_ids);
 --
 kernel/rcu/tasks.h=723=static void show_rcu_tasks_generic_gp_kthread(struct rcu_tasks *rtp, char *s)
 --
 kernel/rcu/tasks.h-732-
 kernel/rcu/tasks.h:733:		if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)))
 kernel/rcu/tasks.h-734-			havecbs = true;
 kernel/rcu/tasks.h:735:		if (data_race(rtpcp->urgent_gp))
 kernel/rcu/tasks.h-736-			haveurgent = true;
 kernel/rcu/tasks.h:737:		if (!data_race(rcu_segcblist_empty(&rtpcp->cblist)) && data_race(rtpcp->urgent_gp))
 kernel/rcu/tasks.h-738-			haveurgentcbs = true;
 --
 kernel/rcu/tasks.h-743-		rtp->kname,
 kernel/rcu/tasks.h:744:		tasks_gp_state_getname(rtp), data_race(rtp->gp_state),
 kernel/rcu/tasks.h:745:		jiffies - data_race(rtp->gp_jiffies),
 kernel/rcu/tasks.h:746:		data_race(rcu_seq_current(&rtp->tasks_gp_seq)),
 kernel/rcu/tasks.h:747:		data_race(rtp->n_ipis_fails), data_race(rtp->n_ipis),
 kernel/rcu/tasks.h:748:		".k"[!!data_race(rtp->kthread_ptr)],
 kernel/rcu/tasks.h-749-		".C"[havecbs],
 --
 kernel/rcu/tasks.h=757=static void rcu_tasks_torture_stats_print_generic(struct rcu_tasks *rtp, char *tt,
 --
 kernel/rcu/tasks.h-765-	pr_alert("%s%s Tasks%s RCU g%ld gp_start %lu gp_jiffies %lu gp_state %d (%s).\n",
 kernel/rcu/tasks.h:766:		 tt, tf, tst, data_race(rtp->tasks_gp_seq),
 kernel/rcu/tasks.h:767:		 j - data_race(rtp->gp_start), j - data_race(rtp->gp_jiffies),
 kernel/rcu/tasks.h:768:		 data_race(rtp->gp_state), tasks_gp_state_getname(rtp));
 kernel/rcu/tasks.h-769-	pr_alert("\tEnqueue shift %d limit %d Dequeue limit %d gpseq %lu.\n",
 kernel/rcu/tasks.h:770:		 data_race(rtp->percpu_enqueue_shift),
 kernel/rcu/tasks.h:771:		 data_race(rtp->percpu_enqueue_lim),
 kernel/rcu/tasks.h:772:		 data_race(rtp->percpu_dequeue_lim),
 kernel/rcu/tasks.h:773:		 data_race(rtp->percpu_dequeue_gpseq));
 kernel/rcu/tasks.h-774-	(void)zalloc_cpumask_var(&cm, GFP_KERNEL);
 --
 kernel/rcu/tasks.h-792-	pr_alert("\tBarrier seq %lu start %lu count %d holdout CPUs ",
 kernel/rcu/tasks.h:793:		 data_race(rtp->barrier_q_seq), j - data_race(rtp->barrier_q_start),
 kernel/rcu/tasks.h-794-		 atomic_read(&rtp->barrier_q_count));
 --
 kernel/rcu/tasks.h=1094=static void check_holdout_task(struct task_struct *t,
 --
 kernel/rcu/tasks.h-1120-		 t->rcu_tasks_nvcsw, t->nvcsw, t->rcu_tasks_holdout,
 kernel/rcu/tasks.h:1121:		 data_race(t->rcu_tasks_idle_cpu), cpu);
 kernel/rcu/tasks.h-1122-	sched_show_task(t);
 --
 kernel/rcu/tasks.h=1893=static void show_stalled_task_trace(struct task_struct *t, bool *firstreport)
 --
 kernel/rcu/tasks.h-1914-			 ".N"[cpu >= 0 && tick_nohz_full_cpu(cpu)],
 kernel/rcu/tasks.h:1915:			 ".B"[!!data_race(t->trc_reader_special.b.blocked)],
 kernel/rcu/tasks.h-1916-			 trc_rdr.nesting,
 --
 kernel/rcu/tasks.h=2085=void show_rcu_tasks_trace_gp_kthread(void)
 --
 kernel/rcu/tasks.h-2089-	snprintf(buf, sizeof(buf), "N%lu h:%lu/%lu/%lu",
 kernel/rcu/tasks.h:2090:		data_race(n_trc_holdouts),
 kernel/rcu/tasks.h:2091:		data_race(n_heavy_reader_ofl_updates),
 kernel/rcu/tasks.h:2092:		data_race(n_heavy_reader_updates),
 kernel/rcu/tasks.h:2093:		data_race(n_heavy_reader_attempts));
 kernel/rcu/tasks.h-2094-	show_rcu_tasks_generic_gp_kthread(&rcu_tasks_trace, buf);
 --
 kernel/rcu/tree.c=1015=static bool rcu_start_this_gp(struct rcu_node *rnp_start, struct rcu_data *rdp,
 --
 kernel/rcu/tree.c-1072-	}
 kernel/rcu/tree.c:1073:	trace_rcu_grace_period(rcu_state.name, data_race(rcu_state.gp_seq), TPS("newreq"));
 kernel/rcu/tree.c-1074-	ret = true;  /* Caller must wake GP kthread. */
 --
 kernel/rcu/tree_exp.h=558=static void synchronize_rcu_expedited_stall(unsigned long jiffies_start, unsigned long j)
 --
 kernel/rcu/tree_exp.h-585-				"N."[!!(rdp->grpmask & rnp->expmaskinitnext)],
 kernel/rcu/tree_exp.h:586:				"D."[!!data_race(rdp->cpu_no_qs.b.exp)]);
 kernel/rcu/tree_exp.h-587-		}
 --
 kernel/rcu/tree_exp.h-589-	pr_cont(" } %lu jiffies s: %lu root: %#lx/%c\n",
 kernel/rcu/tree_exp.h:590:		j - jiffies_start, rcu_state.expedited_sequence, data_race(rnp_root->expmask),
 kernel/rcu/tree_exp.h:591:		".T"[!!data_race(rnp_root->exp_tasks)]);
 kernel/rcu/tree_exp.h-592-	if (ndetected) {
 --
 kernel/rcu/tree_exp.h-599-			pr_cont(" l=%u:%d-%d:%#lx/%c",
 kernel/rcu/tree_exp.h:600:				rnp->level, rnp->grplo, rnp->grphi, data_race(rnp->expmask),
 kernel/rcu/tree_exp.h:601:				".T"[!!data_race(rnp->exp_tasks)]);
 kernel/rcu/tree_exp.h-602-		}
 --
 kernel/rcu/tree_plugin.h=864=dump_blkd_tasks(struct rcu_node *rnp, int ncheck)
 --
 kernel/rcu/tree_plugin.h-879-	pr_info("%s: ->gp_tasks %p ->boost_tasks %p ->exp_tasks %p\n",
 kernel/rcu/tree_plugin.h:880:		__func__, READ_ONCE(rnp->gp_tasks), data_race(rnp->boost_tasks),
 kernel/rcu/tree_plugin.h-881-		READ_ONCE(rnp->exp_tasks));
 --
 kernel/rcu/tree_stall.h=396=static void rcu_dump_cpu_stacks(unsigned long gp_seq)
 --
 kernel/rcu/tree_stall.h-404-		for_each_leaf_node_possible_cpu(rnp, cpu) {
 kernel/rcu/tree_stall.h:405:			if (gp_seq != data_race(rcu_state.gp_seq)) {
 kernel/rcu/tree_stall.h-406-				printk_deferred_exit();
 --
 kernel/rcu/tree_stall.h-409-			}
 kernel/rcu/tree_stall.h:410:			if (!(data_race(rnp->qsmask) & leaf_node_cpu_bit(rnp, cpu)))
 kernel/rcu/tree_stall.h-411-				continue;
 --
 kernel/rcu/tree_stall.h=518=static void print_cpu_stall_info(int cpu)
 --
 kernel/rcu/tree_stall.h-560-	       rdp->softirq_snap, kstat_softirqs_cpu(RCU_SOFTIRQ, cpu),
 kernel/rcu/tree_stall.h:561:	       data_race(rcu_state.n_force_qs) - rcu_state.n_force_qs_gpstart,
 kernel/rcu/tree_stall.h-562-	       rcuc_starved ? buf : "",
 --
 kernel/rcu/tree_stall.h=569=static void rcu_check_gp_kthread_starvation(void)
 --
 kernel/rcu/tree_stall.h-579-		       (long)rcu_seq_current(&rcu_state.gp_seq),
 kernel/rcu/tree_stall.h:580:		       data_race(READ_ONCE(rcu_state.gp_flags)),
 kernel/rcu/tree_stall.h-581-		       gp_state_getname(rcu_state.gp_state),
 kernel/rcu/tree_stall.h:582:		       data_race(READ_ONCE(rcu_state.gp_state)),
 kernel/rcu/tree_stall.h:583:		       gpk ? data_race(READ_ONCE(gpk->__state)) : ~0, cpu);
 kernel/rcu/tree_stall.h-584-		if (gpk) {
 --
 kernel/rcu/tree_stall.h-591-				pr_err("RCU GP kthread last ran on offline CPU %d.\n", cpu);
 kernel/rcu/tree_stall.h:592:			} else if (!(data_race(READ_ONCE(rdp->mynode->qsmask)) & rdp->grpmask)) {
 kernel/rcu/tree_stall.h-593-				pr_err("Stack dump where RCU GP kthread last ran:\n");
 --
 kernel/rcu/tree_stall.h=602=static void rcu_check_gp_kthread_expired_fqs_timer(void)
 --
 kernel/rcu/tree_stall.h-622-		       (long)rcu_seq_current(&rcu_state.gp_seq),
 kernel/rcu/tree_stall.h:623:		       data_race(READ_ONCE(rcu_state.gp_flags)), // Diagnostic read
 kernel/rcu/tree_stall.h-624-		       gp_state_getname(RCU_GP_WAIT_FQS), RCU_GP_WAIT_FQS,
 kernel/rcu/tree_stall.h:625:		       data_race(READ_ONCE(gpk->__state)));
 kernel/rcu/tree_stall.h-626-		pr_err("\tPossible timer handling issue on cpu=%d timer-softirq=%u\n",
 --
 kernel/rcu/tree_stall.h=631=static void print_other_cpu_stall(unsigned long gp_seq, unsigned long gps)

47/2 2026/02/24 22:08 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
input: 57487
output: 24
thoughts: 167

Thoughts:
**Analyzing Potential Data Races in the Filesystem Timestamping Code**

Okay, so I'm looking at `fs/inode.c` and immediately I'm spotting `data_race` calls related to multigrain timestamp statistics. Specifically, lines 117, 127, and 137 are using `data_race` around `per_cpu` variables, which makes sense. We're summing up statistics, and that's the expected way to handle it, so far so good. I see the `mg_ctime_updates`, `mg_fine_stamps`, and `mg_ctime_swaps` variables are all protected that way. It's the standard idiom.

But then I had a nagging feeling: Where's the equivalent handling for `i_mtime_nsec`? Shouldn't there be a similar `data_race` around it? It's a timestamp, it's modified. Wait... I should double-check the definition of the `data_race` macro. Perhaps it's being applied upstream and it is defined in a different header. That's it, I need to look at `include/linux/fs.h` and make sure my assumptions are valid regarding the protection of that critical piece of data.



48/2 2026/02/24 22:08 tool grepper
0m Error:
no matches

Args:
map[Expression:data_race.*i_mtime_nsec]

Results:
map[Output:]

49/2 2026/02/24 22:08 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
input: 57963
output: 19
thoughts: 1924

Thoughts:
**Benign Timestamp Race Analysis**

Okay, so I'm looking at this KCSAN report, and it looks like it's flagging a potential data race with `i_mtime_nsec`. First off, I'm thinking: is the way this variable is used, particularly its potential tearing, the root of the problem? On x86_64, a 64-bit value is read and written atomically, which eliminates tearing as a risk. On 32-bit systems, a 32-bit value will be accessed atomically if it is aligned.

The real concern here is the consistency between `sec` and `nsec`. However, for `atime` updates, this isn't a critical concern.

I need to make sure `shmem` doesn't have any special requirements. It is, after all, used in various contexts, including shared memory. But I think the way `mtime` behaves on `shmem` files should be consistent with how it is used on other files.

I've come across similar discussions in the past regarding timestamp races. Typically, these are either overlooked or categorized as benign.

Let's dissect the write stack. It looks like we have `shmem_file_write_iter` calling `file_update_time`, which calls `file_update_time_flags`, which leads to `inode_update_time`, `generic_update_time`, and finally, `inode_update_timestamps`.

In `file_update_time_flags`, it looks like `mtime` is read without a lock, although it may be called with a lock, such as in `shmem_file_write_iter`.

We're holding `inode_lock(inode)` within `shmem_file_write_iter`, which ensures that the writer is synchronized with other writers. However, the reader, `shmem_file_read_iter`, doesn't take the lock.

This seems like a common pattern in the VFS. Reads are generally lockless (except for some specific areas), whereas writes are synchronized. Metadata updates on read, such as `atime`, also tend to be lockless or use light synchronization.

If KCSAN is reporting this, it's a "real" data race in that synchronization is missing. But given that the consequences are trivial, it's "benign."

I need to ensure that `i_mtime_nsec` isn't used for anything like a pointer or index. It's just a timestamp.

In `inode_set_mtime_to_ts`, it's just an assignment, which isn't problematic.

So far, I'd say this is benign.

Let's see: could this race result in a "negative" time difference? If `mtime` is read as (new_sec, old_nsec) where `new_sec > old_sec`, the timestamp is a bit in the past compared to the "real" new time. If `mtime` is read as (old_sec, new_nsec) where `new_sec > old_sec`, the timestamp is a bit in the future compared to the "real" old time. In either case, it is a valid timestamp and won't cause crashes in `timespec64_compare`.

Looking at `timespec64_compare`, if `lhs->tv_sec == rhs->tv_sec`, it returns `lhs->tv_nsec - rhs->tv_nsec`. If `tv_nsec` is torn (which isn't possible here) or inconsistent, it simply returns a different integer.

The `relatime` check uses `timespec64_compare`. If `mtime` is slightly off, it could return `true` or `false` incorrectly. But the thing is, `atime` is being updated concurrently. `atime` is updated in `touch_atime` -> `inode_update_time(inode, S_ATIME)`. `inode_update_timestamps` updates `atime` by setting `inode_set_atime_to_ts(inode, now)`.

Therefore, `atime` is subject to the same race conditions. However, again, these are just timestamps.

I'm leaning towards marking this as benign.

I'll check for any recent patches that address this. Let's see... looking at the `fs/inode.c`, the current version appears quite recent (it includes multigrain timestamps).

`inode_get_mtime` and `inode_set_mtime_to_ts` were likely introduced as an abstraction layer, potentially to allow synchronization later, similar to what was done for `ctime`. But for now, these are just plain accesses.

If I look at `inode_get_ctime_


50/2 2026/02/24 22:09 tool set-results
0m Args:
map[Benign:true Confident:true]

Results:
map[Benign:true Confident:true]

51/2 2026/02/24 22:09 llm expert
0m Model:
gemini-3-flash-preview

Tokens:
input: 60947
output: 614
thoughts: 433
						

Thoughts:
**Benign Data Race Analysis**

Okay, let's break this down. My concern is the apparent data race, specifically the interaction between `inode_update_timestamps`, which is writing to `i_mtime_nsec`, and `atime_needs_update`, which is reading `i_mtime_nsec` via `inode_get_mtime`. We're dealing with a 32-bit field, `i_mtime_nsec`, within the `struct inode` representing the nanoseconds part of the modification timestamp.

First, the system is x86_64, which is crucial. On this architecture, 32-bit aligned reads and writes are atomic. This means the read operation will either see the old value of `i_mtime_nsec` or the new, updated value, but never a torn value. This alone significantly mitigates the risk.

Then there’s the issue of the whole timestamp not being updated atomically. The `sec` and `nsec` parts of the timestamp are updated separately. There's a small window where we could read a combination of an old `sec` with a new `nsec`, or vice versa. However, this potential inconsistency is minor – less than a second – and is practically accepted in the Linux kernel for file timestamps. It's an acceptable compromise between performance and absolute timestamp accuracy.

The usage of this read is key. The read happens in `relatime_need_update`, which is just a heuristic. It decides whether or not to update the access time (`atime`). If we get the 'wrong' value here, the result is either a redundant `atime` update or a slight delay in the update. This has no serious consequences; it doesn't corrupt the file system, nor does it cause crashes. The system still operates correctly.

The interesting part is the lack of synchronization. The writer, `inode_update_timestamps`, does hold the `inode_lock`. But the reader, `atime_needs_update`, does *not*. This is by design, a deliberate choice to avoid performance bottlenecks on file read operations.

Finally, the comparison with `i_ctime_nsec` is telling. More recent work specifically *added* synchronization for `i_ctime_nsec` because it's used for the change attribute. The change attribute demands a higher level of consistency. The fact that `i_mtime_nsec` was intentionally left as a plain access, even while the other timestamp was protected, strongly suggests that races on it are indeed considered benign by the kernel developers.

Therefore, given the atomicity of the operation on x86_64, the acceptable inconsistency due to separate `sec` and `nsec` updates, the limited impact of a potential incorrect read on a heuristic, and the intentional lack of synchronization for read performance, I conclude that this data race is indeed benign.


