syzbot
| ID | Workflow | Result | Correct | Bug | Created | Started | Finished | Revision | Error |
|---|---|---|---|---|---|---|---|---|---|
| fa2efc39-e581-4790-b08d-692f5ecc5e92 | moderation | Actionable: ✅ Confident: ✅ | ❓ | KASAN: slab-use-after-free Read in add_delayed_ref (3) | 2026/02/06 07:03 | 2026/02/06 07:49 | 2026/02/06 08:02 | 9b618abc0cd923b36c7a5ebc58a21174112d8f70 | |
The bug report describes a use-after-free (UAF) in the BTRFS subsystem, specifically within the delayed reference handling code during a balance operation.

### Analysis of the Bug

1. **Access stack**: The crash occurs in `do_perf_trace_btrfs_delayed_tree_ref`, reached via `trace_add_delayed_tree_ref` at line 1089 of `fs/btrfs/delayed-ref.c`. The tracepoint handler reads fields from a `struct btrfs_delayed_ref_node`.
2. **Allocation stack**: The `btrfs_delayed_ref_node` object was allocated earlier in the same call to `add_delayed_ref`, at line 1017.
3. **Free stack**: The object was freed by a worker thread running `__btrfs_run_delayed_refs`. That thread processed the delayed reference and called `btrfs_put_delayed_ref`, which dropped the refcount to zero and freed the memory.

### Root Cause

In `fs/btrfs/delayed-ref.c:add_delayed_ref`, the code performs the following sequence:

1. Allocates a `node` (refcount = 1).
2. Acquires `delayed_refs->lock`.
3. Inserts the `node` into the delayed refs tree via `insert_delayed_ref`.
4. Releases `delayed_refs->lock`.
5. Calls `trace_add_delayed_tree_ref(trans->fs_info, node)`.

The race window lies between steps 4 and 5. Once `delayed_refs->lock` is released, another thread (such as the async reclaim worker seen in the free stack) can pick up the newly inserted reference, process it, and free it. When the original thread reaches step 5, it dereferences the now-freed `node` to pass its fields to the tracepoint, resulting in the UAF.

### Conclusion

The report is self-consistent and provides all three necessary stack traces (access, allocation, and free), all of which point to the same object type (`btrfs_delayed_ref_node`) and the same logical flow in the BTRFS subsystem. The line numbers in the stack traces match the source code and confirm the race condition. The report is **actionable** and **consistent**.
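The usual fix for this pattern is to keep the publishing thread's own reference alive across the tracepoint (or to copy the needed fields before publishing). The sketch below is a hypothetical userspace model of that refcounting discipline, not the real kernel API: `ref_node`, `node_alloc`, `node_put`, and `publish_and_trace` are all illustrative names.

```c
/* Model of the add_delayed_ref() race fix: once a refcounted object is
 * published (inserted into a shared structure), it may be freed by a
 * concurrent worker at any time, so the publisher must hold an extra
 * reference for as long as it still dereferences the object. */
#include <stdatomic.h>
#include <stdlib.h>

struct ref_node {
    atomic_int refs;
    unsigned long long bytenr;   /* field the "tracepoint" wants to read */
};

static struct ref_node *node_alloc(unsigned long long bytenr)
{
    struct ref_node *n = malloc(sizeof(*n));
    atomic_init(&n->refs, 1);    /* publisher's initial reference */
    n->bytenr = bytenr;
    return n;
}

static void node_put(struct ref_node *n)
{
    if (atomic_fetch_sub(&n->refs, 1) == 1)
        free(n);                 /* last reference dropped */
}

/* Unsafe shape (what the report shows): insert, unlock, then read
 * n->bytenr for the tracepoint -- by then a worker may have freed n.
 * Safe shape: pin the node with an extra reference before publishing,
 * so it outlives the tracepoint even if the worker drops its ref. */
static unsigned long long publish_and_trace(struct ref_node *n)
{
    atomic_fetch_add(&n->refs, 1);        /* pin for the tracepoint */
    /* ...insert into tree under lock; ownership passes to workers... */
    node_put(n);                          /* worker processes and puts it */
    unsigned long long seen = n->bytenr;  /* still valid: we hold a ref */
    node_put(n);                          /* drop the tracepoint pin */
    return seen;
}
```

The alternative, equally valid repair is to snapshot the fields the tracepoint needs into locals before step 3, so no dereference happens after publication at all.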
BTRFS info (device loop0): balance: start -d -m
BTRFS info (device loop0): relocating block group 6881280 flags data|metadata
BTRFS info (device loop0): found 2 extents, stage: move data extents
BTRFS info (device loop0): found 1 extents, stage: update data pointers
BTRFS info (device loop0): relocating block group 5242880 flags data|metadata
==================================================================
BUG: KASAN: slab-use-after-free in do_perf_trace_btrfs_delayed_tree_ref include/trace/events/btrfs.h:850 [inline]
BUG: KASAN: slab-use-after-free in perf_trace_btrfs_delayed_tree_ref+0x299/0x590 include/trace/events/btrfs.h:850
Read of size 8 at addr ffff888040ef94f8 by task syz.0.0/5342
CPU: 0 UID: 0 PID: 5342 Comm: syz.0.0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xca/0x240 mm/kasan/report.c:482
kasan_report+0x118/0x150 mm/kasan/report.c:595
do_perf_trace_btrfs_delayed_tree_ref include/trace/events/btrfs.h:850 [inline]
perf_trace_btrfs_delayed_tree_ref+0x299/0x590 include/trace/events/btrfs.h:850
__do_trace_add_delayed_tree_ref include/trace/events/btrfs.h:891 [inline]
trace_add_delayed_tree_ref include/trace/events/btrfs.h:891 [inline]
add_delayed_ref+0x1d25/0x1e70 fs/btrfs/delayed-ref.c:1089
btrfs_free_tree_block+0x2b6/0xc00 fs/btrfs/extent-tree.c:3456
btrfs_force_cow_block+0xfeb/0x2410 fs/btrfs/ctree.c:566
btrfs_cow_block+0x3c4/0xa80 fs/btrfs/ctree.c:708
btrfs_search_slot+0xd23/0x2ce0 fs/btrfs/ctree.c:2130
relocate_cowonly_block fs/btrfs/relocation.c:2574 [inline]
relocate_tree_blocks+0x124e/0x1fa0 fs/btrfs/relocation.c:2634
relocate_block_group+0x76e/0xd70 fs/btrfs/relocation.c:3573
btrfs_relocate_block_group+0x6b6/0xc70 fs/btrfs/relocation.c:3966
btrfs_relocate_chunk+0x12f/0x5c0 fs/btrfs/volumes.c:3424
__btrfs_balance+0x18ff/0x24e0 fs/btrfs/volumes.c:4197
btrfs_balance+0xac2/0x11b0 fs/btrfs/volumes.c:4571
btrfs_ioctl_balance+0x3d3/0x610 fs/btrfs/ioctl.c:3525
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f632278f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6323659038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f63229e6090 RCX: 00007f632278f7c9
RDX: 0000200000000180 RSI: 00000000c4009420 RDI: 000000000000000b
RBP: 00007f6322813f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f63229e6128 R14: 00007f63229e6090 R15: 00007fffee852978
</TASK>
Allocated by task 5342:
kasan_save_stack mm/kasan/common.c:57 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
unpoison_slab_object mm/kasan/common.c:340 [inline]
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:366
kasan_slab_alloc include/linux/kasan.h:253 [inline]
slab_post_alloc_hook mm/slub.c:4953 [inline]
slab_alloc_node mm/slub.c:5263 [inline]
kmem_cache_alloc_noprof+0x37d/0x710 mm/slub.c:5270
add_delayed_ref+0x118/0x1e70 fs/btrfs/delayed-ref.c:1017
btrfs_free_tree_block+0x2b6/0xc00 fs/btrfs/extent-tree.c:3456
btrfs_force_cow_block+0xfeb/0x2410 fs/btrfs/ctree.c:566
btrfs_cow_block+0x3c4/0xa80 fs/btrfs/ctree.c:708
btrfs_search_slot+0xd23/0x2ce0 fs/btrfs/ctree.c:2130
relocate_cowonly_block fs/btrfs/relocation.c:2574 [inline]
relocate_tree_blocks+0x124e/0x1fa0 fs/btrfs/relocation.c:2634
relocate_block_group+0x76e/0xd70 fs/btrfs/relocation.c:3573
btrfs_relocate_block_group+0x6b6/0xc70 fs/btrfs/relocation.c:3966
btrfs_relocate_chunk+0x12f/0x5c0 fs/btrfs/volumes.c:3424
__btrfs_balance+0x18ff/0x24e0 fs/btrfs/volumes.c:4197
btrfs_balance+0xac2/0x11b0 fs/btrfs/volumes.c:4571
btrfs_ioctl_balance+0x3d3/0x610 fs/btrfs/ioctl.c:3525
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 1046:
kasan_save_stack mm/kasan/common.c:57 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
poison_slab_object mm/kasan/common.c:253 [inline]
__kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
kasan_slab_free include/linux/kasan.h:235 [inline]
slab_free_hook mm/slub.c:2540 [inline]
slab_free mm/slub.c:6670 [inline]
kmem_cache_free+0x197/0x620 mm/slub.c:6781
btrfs_run_delayed_refs_for_head fs/btrfs/extent-tree.c:1985 [inline]
__btrfs_run_delayed_refs+0x1803/0x3af0 fs/btrfs/extent-tree.c:2048
btrfs_run_delayed_refs+0xe6/0x3a0 fs/btrfs/extent-tree.c:2160
flush_space+0x28f/0xd20 fs/btrfs/space-info.c:-1
do_async_reclaim_metadata_space+0x11e/0x350 fs/btrfs/space-info.c:1185
btrfs_async_reclaim_metadata_space+0x89/0xe0 fs/btrfs/space-info.c:1252
process_one_work kernel/workqueue.c:3257 [inline]
process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
The buggy address belongs to the object at ffff888040ef94d0
which belongs to the cache btrfs_delayed_ref_node of size 112
The buggy address is located 40 bytes inside of
freed 112-byte region [ffff888040ef94d0, ffff888040ef9540)
The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x40ef9
flags: 0x4fff00000000000(node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000000 ffff8880411738c0 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080170017 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52c40(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 5342, tgid 5340 (syz.0.0), ts 87297097779, free_ts 39618639308
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x234/0x290 mm/page_alloc.c:1857
prep_new_page mm/page_alloc.c:1865 [inline]
get_page_from_freelist+0x24e0/0x2580 mm/page_alloc.c:3915
__alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5210
alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2486
alloc_slab_page mm/slub.c:3075 [inline]
allocate_slab+0x86/0x3b0 mm/slub.c:3248
new_slab mm/slub.c:3302 [inline]
___slab_alloc+0xe53/0x1820 mm/slub.c:4656
__slab_alloc+0x65/0x100 mm/slub.c:4779
__slab_alloc_node mm/slub.c:4855 [inline]
slab_alloc_node mm/slub.c:5251 [inline]
kmem_cache_alloc_noprof+0x40f/0x710 mm/slub.c:5270
add_delayed_ref+0x118/0x1e70 fs/btrfs/delayed-ref.c:1017
btrfs_alloc_tree_block+0xcf8/0x1220 fs/btrfs/extent-tree.c:5208
btrfs_force_cow_block+0x578/0x2410 fs/btrfs/ctree.c:517
btrfs_cow_block+0x3c4/0xa80 fs/btrfs/ctree.c:708
btrfs_search_slot+0xd23/0x2ce0 fs/btrfs/ctree.c:2130
btrfs_update_root+0xea/0xea0 fs/btrfs/root-tree.c:144
commit_fs_roots+0x56a/0x9a0 fs/btrfs/transaction.c:1531
btrfs_commit_transaction+0x1014/0x3b10 fs/btrfs/transaction.c:2460
page last free pid 15 tgid 15 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1406 [inline]
__free_frozen_pages+0xbc8/0xd30 mm/page_alloc.c:2943
__tlb_remove_table_free mm/mmu_gather.c:227 [inline]
tlb_remove_table_rcu+0x85/0x100 mm/mmu_gather.c:290
rcu_do_batch kernel/rcu/tree.c:2605 [inline]
rcu_core+0xc8e/0x1720 kernel/rcu/tree.c:2857
handle_softirqs+0x22b/0x7c0 kernel/softirq.c:622
run_ksoftirqd+0x36/0x60 kernel/softirq.c:1063
smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Memory state around the buggy address:
ffff888040ef9380: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
ffff888040ef9400: fc fc fc fc fa fb fb fb fb fb fb fb fb fb fb fb
>ffff888040ef9480: fb fb fc fc fc fc fc fc fc fc fa fb fb fb fb fb
                                                 ^
ffff888040ef9500: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
ffff888040ef9580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc fc
==================================================================
| Seq | Timestamp | Type | Name | Duration |
|---|---|---|---|---|
| 0/0 | 2026/02/06 07:49 | flow | moderation |
12mResults:map[Actionable:true Confident:true Explanation:The bug report describes a use-after-free (UAF) in the BTRFS subsystem, specifically within the delayed reference handling code during a balance operation. ### Analysis of the Bug: 1. **Access Stack**: The crash occurs in `do_perf_trace_btrfs_delayed_tree_ref`, which is called by `trace_add_delayed_tree_ref` at line 1089 of `fs/btrfs/delayed-ref.c`. This function is attempting to read fields from a `struct btrfs_delayed_ref_node`. 2. **Allocation Stack**: The `btrfs_delayed_ref_node` object was allocated earlier in the same call to `add_delayed_ref` at line 1017. 3. **Free Stack**: The object was freed by a worker thread running `__btrfs_run_delayed_refs`. This thread processed the delayed reference and called `btrfs_put_delayed_ref`, which dropped the refcount to zero and freed the memory. ### Root Cause: In `fs/btrfs/delayed-ref.c:add_delayed_ref`, the code performs the following sequence: 1. Allocates a `node` (refcount = 1). 2. Acquires `delayed_refs->lock`. 3. Inserts the `node` into the delayed refs tree via `insert_delayed_ref`. 4. Releases `delayed_refs->lock`. 5. Calls `trace_add_delayed_tree_ref(trans->fs_info, node)`. The race condition occurs between steps 4 and 5. Once `delayed_refs->lock` is released, another thread (like the async reclaim worker seen in the free stack) can pick up the newly inserted reference, process it, and free it. When the original thread reaches step 5, it attempts to access the now-freed `node` to pass its fields to the tracepoint, resulting in a UAF. ### Conclusion: The report is self-consistent and provides all three necessary stack traces (access, allocation, and free), all of which point to the same object type (`btrfs_delayed_ref_node`) and the same logical flow in the BTRFS subsystem. The line numbers in the stack traces match the source code and confirm the race condition. The report is **actionable** and **consistent**.] |
| 1/1 | 2026/02/06 07:49 | action | kernel-checkouter | 0m |
Results:map[KernelSrc:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/src/52d5b556e0818c1fa1e9e0ef8e0fa1ffbbead1f7] |
| 2/1 | 2026/02/06 07:50 | action | kernel-builder | 6m |
Results:map[KernelObj:/usr/local/google/home/dvyukov/syzkaller/agent/workdir/cache/build/1a0b3a775360b9f55f5d25c45a36903139b94e6f] |
| 3/1 | 2026/02/06 07:56 | action | codesearch-prepare | 3m |
Results:map[Index:codesearch-index] |
| 4/1 | 2026/02/06 08:00 | agent | expert |
1mModel:gemini-3-flash-preview Results: map[Actionable:true Confident:true] Instruction: You are an experienced Linux kernel developer tasked with determining if the given kernel bug report is actionable or not. Actionable means that it contains enough info to root cause the underlying bug, and that the report is self-consistent and makes sense, rather than e.g. a one-off nonsensical crash induced by a previous memory corruption. The bug report is about a use-after-free bug generated by KASAN tool. It should contain 3 stack traces: the bad memory access stack, the heap block allocation stack, and the heap block free stack. If the report does not contain 3 stacks, it's not actionable. All 3 stack traces should be related to the same object type, and usually be in the same kernel subsystem (at least leaf stack frames). An example of an actionable and consistent report would be: first access stack relates to an access to a field of struct Foo, allocation/free stacks relate to allocation/free of the struct Foo. In inconsistent/nonsensical reports an access may be to a struct Foo, but allocation stack allocates a different structure in a different subsystem. Look for other suspicious signals/inconsistencies that can make this report hard to debug/understand. In the final reply explain why you think the report is self-consistent and actionable, or why it's inconsistent and/or not actionable. Use the provided tools to confirm any assumptions, variables/fields being accessed, etc. In particular, don't make assumptions about the kernel source code, use codesearch tools to read the actual source code. Prefer calling several tools at the same time to save round-trips. Use set-results tool to provide results of the analysis. It must be called exactly once before the final reply. Ignore results of this tool. Prompt:
The bug report is:
BTRFS info (device loop0): balance: start -d -m
BTRFS info (device loop0): relocating block group 6881280 flags data|metadata
BTRFS info (device loop0): found 2 extents, stage: move data extents
BTRFS info (device loop0): found 1 extents, stage: update data pointers
BTRFS info (device loop0): relocating block group 5242880 flags data|metadata
==================================================================
BUG: KASAN: slab-use-after-free in do_perf_trace_btrfs_delayed_tree_ref include/trace/events/btrfs.h:850 [inline]
BUG: KASAN: slab-use-after-free in perf_trace_btrfs_delayed_tree_ref+0x299/0x590 include/trace/events/btrfs.h:850
Read of size 8 at addr ffff888040ef94f8 by task syz.0.0/5342
CPU: 0 UID: 0 PID: 5342 Comm: syz.0.0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
print_address_description mm/kasan/report.c:378 [inline]
print_report+0xca/0x240 mm/kasan/report.c:482
kasan_report+0x118/0x150 mm/kasan/report.c:595
do_perf_trace_btrfs_delayed_tree_ref include/trace/events/btrfs.h:850 [inline]
perf_trace_btrfs_delayed_tree_ref+0x299/0x590 include/trace/events/btrfs.h:850
__do_trace_add_delayed_tree_ref include/trace/events/btrfs.h:891 [inline]
trace_add_delayed_tree_ref include/trace/events/btrfs.h:891 [inline]
add_delayed_ref+0x1d25/0x1e70 fs/btrfs/delayed-ref.c:1089
btrfs_free_tree_block+0x2b6/0xc00 fs/btrfs/extent-tree.c:3456
btrfs_force_cow_block+0xfeb/0x2410 fs/btrfs/ctree.c:566
btrfs_cow_block+0x3c4/0xa80 fs/btrfs/ctree.c:708
btrfs_search_slot+0xd23/0x2ce0 fs/btrfs/ctree.c:2130
relocate_cowonly_block fs/btrfs/relocation.c:2574 [inline]
relocate_tree_blocks+0x124e/0x1fa0 fs/btrfs/relocation.c:2634
relocate_block_group+0x76e/0xd70 fs/btrfs/relocation.c:3573
btrfs_relocate_block_group+0x6b6/0xc70 fs/btrfs/relocation.c:3966
btrfs_relocate_chunk+0x12f/0x5c0 fs/btrfs/volumes.c:3424
__btrfs_balance+0x18ff/0x24e0 fs/btrfs/volumes.c:4197
btrfs_balance+0xac2/0x11b0 fs/btrfs/volumes.c:4571
btrfs_ioctl_balance+0x3d3/0x610 fs/btrfs/ioctl.c:3525
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f632278f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6323659038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f63229e6090 RCX: 00007f632278f7c9
RDX: 0000200000000180 RSI: 00000000c4009420 RDI: 000000000000000b
RBP: 00007f6322813f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f63229e6128 R14: 00007f63229e6090 R15: 00007fffee852978
</TASK>
Allocated by task 5342:
kasan_save_stack mm/kasan/common.c:57 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
unpoison_slab_object mm/kasan/common.c:340 [inline]
__kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:366
kasan_slab_alloc include/linux/kasan.h:253 [inline]
slab_post_alloc_hook mm/slub.c:4953 [inline]
slab_alloc_node mm/slub.c:5263 [inline]
kmem_cache_alloc_noprof+0x37d/0x710 mm/slub.c:5270
add_delayed_ref+0x118/0x1e70 fs/btrfs/delayed-ref.c:1017
btrfs_free_tree_block+0x2b6/0xc00 fs/btrfs/extent-tree.c:3456
btrfs_force_cow_block+0xfeb/0x2410 fs/btrfs/ctree.c:566
btrfs_cow_block+0x3c4/0xa80 fs/btrfs/ctree.c:708
btrfs_search_slot+0xd23/0x2ce0 fs/btrfs/ctree.c:2130
relocate_cowonly_block fs/btrfs/relocation.c:2574 [inline]
relocate_tree_blocks+0x124e/0x1fa0 fs/btrfs/relocation.c:2634
relocate_block_group+0x76e/0xd70 fs/btrfs/relocation.c:3573
btrfs_relocate_block_group+0x6b6/0xc70 fs/btrfs/relocation.c:3966
btrfs_relocate_chunk+0x12f/0x5c0 fs/btrfs/volumes.c:3424
__btrfs_balance+0x18ff/0x24e0 fs/btrfs/volumes.c:4197
btrfs_balance+0xac2/0x11b0 fs/btrfs/volumes.c:4571
btrfs_ioctl_balance+0x3d3/0x610 fs/btrfs/ioctl.c:3525
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
Freed by task 1046:
kasan_save_stack mm/kasan/common.c:57 [inline]
kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
poison_slab_object mm/kasan/common.c:253 [inline]
__kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
kasan_slab_free include/linux/kasan.h:235 [inline]
slab_free_hook mm/slub.c:2540 [inline]
slab_free mm/slub.c:6670 [inline]
kmem_cache_free+0x197/0x620 mm/slub.c:6781
btrfs_run_delayed_refs_for_head fs/btrfs/extent-tree.c:1985 [inline]
__btrfs_run_delayed_refs+0x1803/0x3af0 fs/btrfs/extent-tree.c:2048
btrfs_run_delayed_refs+0xe6/0x3a0 fs/btrfs/extent-tree.c:2160
flush_space+0x28f/0xd20 fs/btrfs/space-info.c:-1
do_async_reclaim_metadata_space+0x11e/0x350 fs/btrfs/space-info.c:1185
btrfs_async_reclaim_metadata_space+0x89/0xe0 fs/btrfs/space-info.c:1252
process_one_work kernel/workqueue.c:3257 [inline]
process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
The buggy address belongs to the object at ffff888040ef94d0
which belongs to the cache btrfs_delayed_ref_node of size 112
The buggy address is located 40 bytes inside of
freed 112-byte region [ffff888040ef94d0, ffff888040ef9540)
The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x40ef9
flags: 0x4fff00000000000(node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000000 ffff8880411738c0 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080170017 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52c40(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 5342, tgid 5340 (syz.0.0), ts 87297097779, free_ts 39618639308
set_page_owner include/linux/page_owner.h:32 [inline]
post_alloc_hook+0x234/0x290 mm/page_alloc.c:1857
prep_new_page mm/page_alloc.c:1865 [inline]
get_page_from_freelist+0x24e0/0x2580 mm/page_alloc.c:3915
__alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5210
alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2486
alloc_slab_page mm/slub.c:3075 [inline]
allocate_slab+0x86/0x3b0 mm/slub.c:3248
new_slab mm/slub.c:3302 [inline]
___slab_alloc+0xe53/0x1820 mm/slub.c:4656
__slab_alloc+0x65/0x100 mm/slub.c:4779
__slab_alloc_node mm/slub.c:4855 [inline]
slab_alloc_node mm/slub.c:5251 [inline]
kmem_cache_alloc_noprof+0x40f/0x710 mm/slub.c:5270
add_delayed_ref+0x118/0x1e70 fs/btrfs/delayed-ref.c:1017
btrfs_alloc_tree_block+0xcf8/0x1220 fs/btrfs/extent-tree.c:5208
btrfs_force_cow_block+0x578/0x2410 fs/btrfs/ctree.c:517
btrfs_cow_block+0x3c4/0xa80 fs/btrfs/ctree.c:708
btrfs_search_slot+0xd23/0x2ce0 fs/btrfs/ctree.c:2130
btrfs_update_root+0xea/0xea0 fs/btrfs/root-tree.c:144
commit_fs_roots+0x56a/0x9a0 fs/btrfs/transaction.c:1531
btrfs_commit_transaction+0x1014/0x3b10 fs/btrfs/transaction.c:2460
page last free pid 15 tgid 15 stack trace:
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1406 [inline]
__free_frozen_pages+0xbc8/0xd30 mm/page_alloc.c:2943
__tlb_remove_table_free mm/mmu_gather.c:227 [inline]
tlb_remove_table_rcu+0x85/0x100 mm/mmu_gather.c:290
rcu_do_batch kernel/rcu/tree.c:2605 [inline]
rcu_core+0xc8e/0x1720 kernel/rcu/tree.c:2857
handle_softirqs+0x22b/0x7c0 kernel/softirq.c:622
run_ksoftirqd+0x36/0x60 kernel/softirq.c:1063
smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
Memory state around the buggy address:
ffff888040ef9380: fb fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
ffff888040ef9400: fc fc fc fc fa fb fb fb fb fb fb fb fb fb fb fb
>ffff888040ef9480: fb fb fc fc fc fc fc fc fc fc fa fb fb fb fb fb
^
ffff888040ef9500: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
ffff888040ef9580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc fc
==================================================================
Reply: The bug report describes a use-after-free (UAF) in the BTRFS subsystem, specifically within the delayed reference handling code during a balance operation. ### Analysis of the Bug: 1. **Access Stack**: The crash occurs in `do_perf_trace_btrfs_delayed_tree_ref`, which is called by `trace_add_delayed_tree_ref` at line 1089 of `fs/btrfs/delayed-ref.c`. This function is attempting to read fields from a `struct btrfs_delayed_ref_node`. 2. **Allocation Stack**: The `btrfs_delayed_ref_node` object was allocated earlier in the same call to `add_delayed_ref` at line 1017. 3. **Free Stack**: The object was freed by a worker thread running `__btrfs_run_delayed_refs`. This thread processed the delayed reference and called `btrfs_put_delayed_ref`, which dropped the refcount to zero and freed the memory. ### Root Cause: In `fs/btrfs/delayed-ref.c:add_delayed_ref`, the code performs the following sequence: 1. Allocates a `node` (refcount = 1). 2. Acquires `delayed_refs->lock`. 3. Inserts the `node` into the delayed refs tree via `insert_delayed_ref`. 4. Releases `delayed_refs->lock`. 5. Calls `trace_add_delayed_tree_ref(trans->fs_info, node)`. The race condition occurs between steps 4 and 5. Once `delayed_refs->lock` is released, another thread (like the async reclaim worker seen in the free stack) can pick up the newly inserted reference, process it, and free it. When the original thread reaches step 5, it attempts to access the now-freed `node` to pass its fields to the tracepoint, resulting in a UAF. ### Conclusion: The report is self-consistent and provides all three necessary stack traces (access, allocation, and free), all of which point to the same object type (`btrfs_delayed_ref_node`) and the same logical flow in the BTRFS subsystem. The line numbers in the stack traces match the source code and confirm the race condition. The report is **actionable** and **consistent**. |
| 5/2 | 2026/02/06 08:00 | llm | expert |
0mModel:gemini-3-flash-preview Tokens: input: 7721 output: 54 thoughts: 831 Thoughts: **Analyzing a Btrfs Slab-Use-After-Free Bug** Okay, so I've got a `slab-use-after-free` situation here, specifically within `do_perf_trace_btrfs_delayed_tree_ref`. This function is invoked through `trace_add_delayed_tree_ref`. The crash is a read of size 8 at `ffff888040ef94f8`, and the culprit is a `btrfs_delayed_ref_node` object. Classic Btrfs, keeping things interesting! Let's dive into the stack traces. The access stack immediately points to `perf_trace_btrfs_delayed_tree_ref`, which is called through `trace_add_delayed_tree_ref` inside `add_delayed_ref`, and then within `btrfs_free_tree_block`. Hmm. The allocation stack mirrors the access stack until `add_delayed_ref` and `btrfs_free_tree_block`. That's not too surprising. The interesting part, though, is the free stack. Here, `kmem_cache_free` is called, which happens in `__btrfs_run_delayed_refs`, after which the stack goes through `btrfs_run_delayed_refs`, and then `flush_space`. This all indicates a worker thread is doing the freeing. Aha! Both the allocation *and* the access are within `add_delayed_ref`! It appears the function allocates a delayed ref and then later calls `trace_add_delayed_tree_ref` (which is where the crash occurs). The free stack shows that this same delayed ref is freed later by a worker. This points pretty strongly to the classic use-after-free pattern. My thought is that `add_delayed_ref` allocates at a different point than it reads. If the object gets freed *between* the allocation and the access, then we have our UAF. But, how could it be freed inside `add_delayed_ref`? It usually adds the ref to some list or tree. If the ref is already there or gets merged somewhere, then it could be freed. Time to look at the code at `fs/btrfs/delayed-ref.c:1089` and `fs/btrfs/delayed-ref.c:1017`. I'm guessing that line 1017 is allocation, and 1089 is where the trace happens. 
I need to understand the logic flow here, and more specifically how `add_delayed_ref` interacts with the delayed-ref lists and trees to see how the free can happen in between. |
| 6/2 | 2026/02/06 08:00 | tool | codesearch-dir-index |
0mArgs:map[Dir:fs/btrfs/] Results: map[Files:[accessors.c accessors.h acl.c acl.h async-thread.c async-thread.h backref.c backref.h bio.c bio.h block-group.c block-group.h block-rsv.c block-rsv.h btrfs_inode.h compression.c compression.h ctree.c ctree.h defrag.c defrag.h delalloc-space.c delalloc-space.h delayed-inode.c delayed-inode.h delayed-ref.c delayed-ref.h dev-replace.c dev-replace.h dir-item.c dir-item.h direct-io.c direct-io.h discard.c discard.h disk-io.c disk-io.h export.c export.h extent-io-tree.c extent-io-tree.h extent-tree.c extent-tree.h extent_io.c extent_io.h extent_map.c extent_map.h fiemap.c fiemap.h file-item.c file-item.h file.c file.h free-space-cache.c free-space-cache.h free-space-tree.c free-space-tree.h fs.c fs.h inode-item.c inode-item.h inode.c ioctl.c ioctl.h locking.c locking.h lru_cache.c lru_cache.h lzo.c messages.c messages.h misc.h ordered-data.c ordered-data.h orphan.c orphan.h print-tree.c print-tree.h props.c props.h qgroup.c qgroup.h raid-stripe-tree.c raid-stripe-tree.h raid56.c raid56.h ref-verify.c ref-verify.h reflink.c reflink.h relocation.c relocation.h root-tree.c root-tree.h scrub.c scrub.h send.c send.h space-info.c space-info.h subpage.c subpage.h super.c super.h sysfs.c sysfs.h transaction.c transaction.h tree-checker.c tree-checker.h tree-log.c tree-log.h tree-mod-log.c tree-mod-log.h ulist.c ulist.h uuid-tree.c uuid-tree.h verity.c verity.h volumes.c volumes.h xattr.c xattr.h zlib.c zoned.c zoned.h zstd.c] Subdirs:[tests]] |
| 7/2 | 2026/02/06 08:00 | tool | codesearch-file-index |
0mArgs:map[SourceFile:fs/btrfs/delayed-ref.c] Results: map[Entities:[map[Kind:function Name:add_delayed_ref] map[Kind:function Name:add_delayed_ref_head] map[Kind:function Name:btrfs_add_delayed_data_ref] map[Kind:function Name:btrfs_add_delayed_extent_op] map[Kind:function Name:btrfs_add_delayed_tree_ref] map[Kind:function Name:btrfs_check_delayed_seq] map[Kind:function Name:btrfs_check_space_for_delayed_refs] map[Kind:function Name:btrfs_dec_delayed_refs_rsv_bg_inserts] map[Kind:function Name:btrfs_dec_delayed_refs_rsv_bg_updates] map[Kind:function Name:btrfs_delayed_ref_exit] map[Kind:function Name:btrfs_delayed_ref_init] map[Kind:function Name:btrfs_delayed_ref_lock] map[Kind:function Name:btrfs_delayed_refs_rsv_refill] map[Kind:function Name:btrfs_delayed_refs_rsv_release] map[Kind:function Name:btrfs_delete_ref_head] map[Kind:function Name:btrfs_destroy_delayed_refs] map[Kind:function Name:btrfs_find_delayed_ref_head] map[Kind:function Name:btrfs_find_delayed_tree_ref] map[Kind:function Name:btrfs_inc_delayed_refs_rsv_bg_inserts] map[Kind:function Name:btrfs_inc_delayed_refs_rsv_bg_updates] map[Kind:function Name:btrfs_init_data_ref] map[Kind:function Name:btrfs_init_tree_ref] map[Kind:function Name:btrfs_merge_delayed_refs] map[Kind:function Name:btrfs_put_delayed_ref] map[Kind:function Name:btrfs_select_delayed_ref] map[Kind:function Name:btrfs_select_ref_head] map[Kind:function Name:btrfs_unselect_ref_head] map[Kind:function Name:btrfs_update_delayed_refs_rsv] map[Kind:function Name:cmp_refs_node] map[Kind:function Name:comp_data_refs] map[Kind:function Name:comp_refs] map[Kind:function Name:drop_delayed_ref] map[Kind:function Name:find_comp] map[Kind:function Name:find_first_ref_head] map[Kind:function Name:init_delayed_ref_common] map[Kind:function Name:init_delayed_ref_head] map[Kind:function Name:insert_delayed_ref] map[Kind:function Name:merge_ref] map[Kind:function Name:tree_insert] map[Kind:function Name:update_existing_head_ref]]] |
| 8/2 | 2026/02/06 08:00 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 10967 output: 45 thoughts: 26 Thoughts: Okay, let's dive into this. Here's my take on `add_delayed_ref` within `fs/btrfs/delayed-ref.c`. **Diving Into Delayed References in Btrfs** `add_delayed_ref` is where the *delayed* magic happens: Btrfs queues metadata modifications here instead of flushing them to disk immediately, batching operations so they can be written out efficiently later. At a high level it must record the bookkeeping for each deferred update: the extent affected, the type of modification (add, drop, update), and enough state to replay it. The questions I want answered: which data structures organize the delayed references (simple lists, or something indexed for fast lookup and merging?), and how the function stays safe under concurrent filesystem operations. Locking has to be involved, and I'm curious how it is deployed without becoming a performance bottleneck. Time to pull the source. |
| 9/2 | 2026/02/06 08:00 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/delayed-ref.c IncludeLines:true Name:add_delayed_ref] Results: map[SourceCode: 999: static int add_delayed_ref(struct btrfs_trans_handle *trans,
1000: struct btrfs_ref *generic_ref,
1001: struct btrfs_delayed_extent_op *extent_op,
1002: u64 reserved)
1003: {
1004: struct btrfs_fs_info *fs_info = trans->fs_info;
1005: struct btrfs_delayed_ref_node *node;
1006: struct btrfs_delayed_ref_head *head_ref;
1007: struct btrfs_delayed_ref_head *new_head_ref;
1008: struct btrfs_delayed_ref_root *delayed_refs;
1009: struct btrfs_qgroup_extent_record *record = NULL;
1010: const unsigned long index = (generic_ref->bytenr >> fs_info->sectorsize_bits);
1011: bool qrecord_reserved = false;
1012: bool qrecord_inserted;
1013: int action = generic_ref->action;
1014: bool merged;
1015: int ret;
1016:
1017: node = kmem_cache_alloc(btrfs_delayed_ref_node_cachep, GFP_NOFS);
1018: if (!node)
1019: return -ENOMEM;
1020:
1021: head_ref = kmem_cache_alloc(btrfs_delayed_ref_head_cachep, GFP_NOFS);
1022: if (!head_ref) {
1023: ret = -ENOMEM;
1024: goto free_node;
1025: }
1026:
1027: delayed_refs = &trans->transaction->delayed_refs;
1028:
1029: if (btrfs_qgroup_full_accounting(fs_info) && !generic_ref->skip_qgroup) {
1030: record = kzalloc(sizeof(*record), GFP_NOFS);
1031: if (!record) {
1032: ret = -ENOMEM;
1033: goto free_head_ref;
1034: }
1035: if (xa_reserve(&delayed_refs->dirty_extents, index, GFP_NOFS)) {
1036: ret = -ENOMEM;
1037: goto free_record;
1038: }
1039: qrecord_reserved = true;
1040: }
1041:
1042: ret = xa_reserve(&delayed_refs->head_refs, index, GFP_NOFS);
1043: if (ret) {
1044: if (qrecord_reserved)
1045: xa_release(&delayed_refs->dirty_extents, index);
1046: goto free_record;
1047: }
1048:
1049: init_delayed_ref_common(fs_info, node, generic_ref);
1050: init_delayed_ref_head(head_ref, generic_ref, record, reserved);
1051: head_ref->extent_op = extent_op;
1052:
1053: spin_lock(&delayed_refs->lock);
1054:
1055: /*
1056: * insert both the head node and the new ref without dropping
1057: * the spin lock
1058: */
1059: new_head_ref = add_delayed_ref_head(trans, head_ref, record,
1060: action, &qrecord_inserted);
1061: if (IS_ERR(new_head_ref)) {
1062: xa_release(&delayed_refs->head_refs, index);
1063: spin_unlock(&delayed_refs->lock);
1064: ret = PTR_ERR(new_head_ref);
1065:
1066: /*
1067: * It's only safe to call kfree() on 'qrecord' if
1068: * add_delayed_ref_head() has _not_ inserted it for
1069: * tracing. Otherwise we need to handle this here.
1070: */
1071: if (!qrecord_reserved || qrecord_inserted)
1072: goto free_head_ref;
1073: goto free_record;
1074: }
1075: head_ref = new_head_ref;
1076:
1077: merged = insert_delayed_ref(trans, head_ref, node);
1078: spin_unlock(&delayed_refs->lock);
1079:
1080: /*
1081: * Need to update the delayed_refs_rsv with any changes we may have
1082: * made.
1083: */
1084: btrfs_update_delayed_refs_rsv(trans);
1085:
1086: if (generic_ref->type == BTRFS_REF_DATA)
1087: trace_add_delayed_data_ref(trans->fs_info, node);
1088: else
1089: trace_add_delayed_tree_ref(trans->fs_info, node);
1090: if (merged)
1091: kmem_cache_free(btrfs_delayed_ref_node_cachep, node);
1092:
1093: if (qrecord_inserted)
1094: return btrfs_qgroup_trace_extent_post(trans, record, generic_ref->bytenr);
1095:
1096: kfree(record);
1097: return 0;
1098:
1099: free_record:
1100: kfree(record);
1101: free_head_ref:
1102: kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref);
1103: free_node:
1104: kmem_cache_free(btrfs_delayed_ref_node_cachep, node);
1105: return ret;
1106: }
SourceFile:fs/btrfs/delayed-ref.c] |
| 10/2 | 2026/02/06 08:00 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 13229 output: 45 thoughts: 499 Thoughts: **Analyzing Potential Race Conditions in `add_delayed_ref`** Okay, let's break this down. In `add_delayed_ref`, the `node` is allocated, then inserted, then traced. My initial concern was the merge path: if `insert_delayed_ref` merges the node and returns true, could `node` be freed before the trace? On closer inspection, no — `merged == true` means the node was *not* inserted (its changes were folded into an existing ref), so nothing else can free it and it is still valid when the `if (merged)` check at line 1091 frees it locally. The free stack points at something subtler: the `node` *is* added to the delayed refs tree, and another task, likely an async reclaim worker running `__btrfs_run_delayed_refs`, processes and frees it. So if `insert_delayed_ref` returns `merged = false`, meaning the `node` was successfully inserted into the tree, that worker can run immediately after the lock is dropped and free the node, yet `add_delayed_ref` still dereferences `node` in the tracing call on line 1089. This looks like a use-after-free. I need to investigate `insert_delayed_ref` next. |
| 11/2 | 2026/02/06 08:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/delayed-ref.c IncludeLines:true Name:insert_delayed_ref] Results: map[SourceCode: 575: /*
576: * Helper to insert the ref_node to the tail or merge with tail.
577: *
578: * Return false if the ref was inserted.
579: * Return true if the ref was merged into an existing one (and therefore can be
580: * freed by the caller).
581: */
582: static bool insert_delayed_ref(struct btrfs_trans_handle *trans,
583: struct btrfs_delayed_ref_head *href,
584: struct btrfs_delayed_ref_node *ref)
585: {
586: struct btrfs_delayed_ref_root *root = &trans->transaction->delayed_refs;
587: struct btrfs_delayed_ref_node *exist;
588: int mod;
589:
590: spin_lock(&href->lock);
591: exist = tree_insert(&href->ref_tree, ref);
592: if (!exist) {
593: if (ref->action == BTRFS_ADD_DELAYED_REF)
594: list_add_tail(&ref->add_list, &href->ref_add_list);
595: spin_unlock(&href->lock);
596: trans->delayed_ref_updates++;
597: return false;
598: }
599:
600: /* Now we are sure we can merge */
601: if (exist->action == ref->action) {
602: mod = ref->ref_mod;
603: } else {
604: /* Need to change action */
605: if (exist->ref_mod < ref->ref_mod) {
606: exist->action = ref->action;
607: mod = -exist->ref_mod;
608: exist->ref_mod = ref->ref_mod;
609: if (ref->action == BTRFS_ADD_DELAYED_REF)
610: list_add_tail(&exist->add_list,
611: &href->ref_add_list);
612: else if (ref->action == BTRFS_DROP_DELAYED_REF) {
613: ASSERT(!list_empty(&exist->add_list));
614: list_del_init(&exist->add_list);
615: } else {
616: ASSERT(0);
617: }
618: } else
619: mod = -ref->ref_mod;
620: }
621: exist->ref_mod += mod;
622:
623: /* remove existing tail if its ref_mod is zero */
624: if (exist->ref_mod == 0)
625: drop_delayed_ref(trans->fs_info, root, href, exist);
626: spin_unlock(&href->lock);
627: return true;
628: }
SourceFile:fs/btrfs/delayed-ref.c] |
| 12/2 | 2026/02/06 08:01 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 14975 output: 50 thoughts: 246 Thoughts: **My Analysis of Delayed Reference Insertion and Processing** If `insert_delayed_ref` returns `false`, the `node` was successfully added to `href->ref_tree` (and possibly `href->ref_add_list`). At that point the `href` is already part of `delayed_refs->head_refs`, which is how it gets found and processed later. The crucial protection is `delayed_refs->lock`, which `add_delayed_ref` holds around the `insert_delayed_ref` call, keeping the insertion coherent. But after `spin_unlock(&delayed_refs->lock)` at line 1078, other threads are free to start processing: `btrfs_run_delayed_refs` can find the `head_ref` we just added and walk its `ref_tree`, and that processing path is where nodes get freed — while the original thread still dereferences `node` at the trace call on line 1089. I should trace exactly how the freeing happens inside `__btrfs_run_delayed_refs` next. |
| 13/2 | 2026/02/06 08:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/extent-tree.c IncludeLines:true Name:__btrfs_run_delayed_refs] Results: map[SourceCode:1995: /*
1996: * Returns 0 on success or if called with an already aborted transaction.
1997: * Returns -ENOMEM or -EIO on failure and will abort the transaction.
1998: */
1999: static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
2000: u64 min_bytes)
2001: {
2002: struct btrfs_fs_info *fs_info = trans->fs_info;
2003: struct btrfs_delayed_ref_root *delayed_refs;
2004: struct btrfs_delayed_ref_head *locked_ref = NULL;
2005: int ret;
2006: unsigned long count = 0;
2007: unsigned long max_count = 0;
2008: u64 bytes_processed = 0;
2009:
2010: delayed_refs = &trans->transaction->delayed_refs;
2011: if (min_bytes == 0) {
2012: /*
2013: * We may be subject to a harmless race if some task is
2014: * concurrently adding or removing a delayed ref, so silence
2015: * KCSAN and similar tools.
2016: */
2017: max_count = data_race(delayed_refs->num_heads_ready);
2018: min_bytes = U64_MAX;
2019: }
2020:
2021: do {
2022: if (!locked_ref) {
2023: locked_ref = btrfs_select_ref_head(fs_info, delayed_refs);
2024: if (IS_ERR_OR_NULL(locked_ref)) {
2025: if (PTR_ERR(locked_ref) == -EAGAIN) {
2026: continue;
2027: } else {
2028: break;
2029: }
2030: }
2031: count++;
2032: }
2033: /*
2034: * We need to try and merge add/drops of the same ref since we
2035: * can run into issues with relocate dropping the implicit ref
2036: * and then it being added back again before the drop can
2037: * finish. If we merged anything we need to re-loop so we can
2038: * get a good ref.
2039: * Or we can get node references of the same type that weren't
2040: * merged when created due to bumps in the tree mod seq, and
2041: * we need to merge them to prevent adding an inline extent
2042: * backref before dropping it (triggering a BUG_ON at
2043: * insert_inline_extent_backref()).
2044: */
2045: spin_lock(&locked_ref->lock);
2046: btrfs_merge_delayed_refs(fs_info, delayed_refs, locked_ref);
2047:
2048: ret = btrfs_run_delayed_refs_for_head(trans, locked_ref, &bytes_processed);
2049: if (ret < 0 && ret != -EAGAIN) {
2050: /*
2051: * Error, btrfs_run_delayed_refs_for_head already
2052: * unlocked everything so just bail out
2053: */
2054: return ret;
2055: } else if (!ret) {
2056: /*
2057: * Success, perform the usual cleanup of a processed
2058: * head
2059: */
2060: ret = cleanup_ref_head(trans, locked_ref, &bytes_processed);
2061: if (ret > 0 ) {
2062: /* We dropped our lock, we need to loop. */
2063: ret = 0;
2064: continue;
2065: } else if (ret) {
2066: return ret;
2067: }
2068: }
2069:
2070: /*
2071: * Either success case or btrfs_run_delayed_refs_for_head
2072: * returned -EAGAIN, meaning we need to select another head
2073: */
2074:
2075: locked_ref = NULL;
2076: cond_resched();
2077: } while ((min_bytes != U64_MAX && bytes_processed < min_bytes) ||
2078: (max_count > 0 && count < max_count) ||
2079: locked_ref);
2080:
2081: return 0;
2082: }
SourceFile:fs/btrfs/extent-tree.c] |
| 14/2 | 2026/02/06 08:01 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 17021 output: 53 thoughts: 16 Thoughts: **Deep Dive into Delayed References and Btrfs** When I see `btrfs_run_delayed_refs_for_head`, I think of Btrfs's delayed reference machinery: metadata updates deferred for performance, with the "head" grouping the pending references for a single extent. This function should process the references clustered under one head. The points I care about: which locks protect the head and its ref tree during processing, how individual references are selected and removed, where each node's lifetime ends, and what the error paths look like if processing fails midway, since partial failure here could leave inconsistencies down the line. I want to reconstruct the execution path and see exactly where the freeing happens — small errors in this part of Btrfs can have cascading effects. |
| 15/2 | 2026/02/06 08:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/extent-tree.c IncludeLines:true Name:btrfs_run_delayed_refs_for_head] Results: map[SourceCode:1912: static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
1913: struct btrfs_delayed_ref_head *locked_ref,
1914: u64 *bytes_released)
1915: {
1916: struct btrfs_fs_info *fs_info = trans->fs_info;
1917: struct btrfs_delayed_ref_root *delayed_refs;
1918: struct btrfs_delayed_extent_op *extent_op;
1919: struct btrfs_delayed_ref_node *ref;
1920: bool must_insert_reserved;
1921: int ret;
1922:
1923: delayed_refs = &trans->transaction->delayed_refs;
1924:
1925: lockdep_assert_held(&locked_ref->mutex);
1926: lockdep_assert_held(&locked_ref->lock);
1927:
1928: while ((ref = btrfs_select_delayed_ref(locked_ref))) {
1929: if (ref->seq &&
1930: btrfs_check_delayed_seq(fs_info, ref->seq)) {
1931: spin_unlock(&locked_ref->lock);
1932: btrfs_unselect_ref_head(delayed_refs, locked_ref);
1933: return -EAGAIN;
1934: }
1935:
1936: rb_erase_cached(&ref->ref_node, &locked_ref->ref_tree);
1937: RB_CLEAR_NODE(&ref->ref_node);
1938: if (!list_empty(&ref->add_list))
1939: list_del(&ref->add_list);
1940: /*
1941: * When we play the delayed ref, also correct the ref_mod on
1942: * head
1943: */
1944: switch (ref->action) {
1945: case BTRFS_ADD_DELAYED_REF:
1946: case BTRFS_ADD_DELAYED_EXTENT:
1947: locked_ref->ref_mod -= ref->ref_mod;
1948: break;
1949: case BTRFS_DROP_DELAYED_REF:
1950: locked_ref->ref_mod += ref->ref_mod;
1951: break;
1952: default:
1953: WARN_ON(1);
1954: }
1955:
1956: /*
1957: * Record the must_insert_reserved flag before we drop the
1958: * spin lock.
1959: */
1960: must_insert_reserved = locked_ref->must_insert_reserved;
1961: /*
1962: * Unsetting this on the head ref relinquishes ownership of
1963: * the rsv_bytes, so it is critical that every possible code
1964: * path from here forward frees all reserves including qgroup
1965: * reserve.
1966: */
1967: locked_ref->must_insert_reserved = false;
1968:
1969: extent_op = locked_ref->extent_op;
1970: locked_ref->extent_op = NULL;
1971: spin_unlock(&locked_ref->lock);
1972:
1973: ret = run_one_delayed_ref(trans, locked_ref, ref, extent_op,
1974: must_insert_reserved);
1975: btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
1976: *bytes_released += btrfs_calc_delayed_ref_bytes(fs_info, 1);
1977:
1978: btrfs_free_delayed_extent_op(extent_op);
1979: if (ret) {
1980: btrfs_unselect_ref_head(delayed_refs, locked_ref);
1981: btrfs_put_delayed_ref(ref);
1982: return ret;
1983: }
1984:
1985: btrfs_put_delayed_ref(ref);
1986: cond_resched();
1987:
1988: spin_lock(&locked_ref->lock);
1989: btrfs_merge_delayed_refs(fs_info, delayed_refs, locked_ref);
1990: }
1991:
1992: return 0;
1993: }
SourceFile:fs/btrfs/extent-tree.c] |
| 16/2 | 2026/02/06 08:01 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 18966 output: 47 thoughts: 158 Thoughts: **Analyzing Btrfs Delayed Reference Management** Okay, I'm looking at `btrfs_put_delayed_ref`, called at line 1985. It decrements the refcount of a `btrfs_delayed_ref_node`, and if the count drops to zero it calls `kmem_cache_free`, releasing the node's memory. Standard reference counting. There's also a WARN_ON check that the node is no longer linked into the red-black tree; if it still were, something would be deeply wrong. The initial refcount must be set somewhere else, and that's what I need next — `init_delayed_ref_common` looks like the likely candidate. I'll double-check that to confirm. |
| 17/2 | 2026/02/06 08:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/delayed-ref.c IncludeLines:true Name:init_delayed_ref_common] Results: map[SourceCode: 910: /*
911: * Initialize the structure which represents a modification to an extent.
912: *
913: * @fs_info: Internal to the mounted filesystem mount structure.
914: *
915: * @ref: The structure which is going to be initialized.
916: *
917: * @bytenr: The logical address of the extent for which a modification is
918: * going to be recorded.
919: *
920: * @num_bytes: Size of the extent whose modification is being recorded.
921: *
922: * @ref_root: The id of the root where this modification has originated, this
923: * can be either one of the well-known metadata trees or the
924: * subvolume id which references this extent.
925: *
926: * @action: Can be one of BTRFS_ADD_DELAYED_REF/BTRFS_DROP_DELAYED_REF or
927: * BTRFS_ADD_DELAYED_EXTENT
928: *
929: * @ref_type: Holds the type of the extent which is being recorded, can be
930: * one of BTRFS_SHARED_BLOCK_REF_KEY/BTRFS_TREE_BLOCK_REF_KEY
931: * when recording a metadata extent or BTRFS_SHARED_DATA_REF_KEY/
932: * BTRFS_EXTENT_DATA_REF_KEY when recording data extent
933: */
934: static void init_delayed_ref_common(struct btrfs_fs_info *fs_info,
935: struct btrfs_delayed_ref_node *ref,
936: struct btrfs_ref *generic_ref)
937: {
938: int action = generic_ref->action;
939: u64 seq = 0;
940:
941: if (action == BTRFS_ADD_DELAYED_EXTENT)
942: action = BTRFS_ADD_DELAYED_REF;
943:
944: if (btrfs_is_fstree(generic_ref->ref_root))
945: seq = atomic64_read(&fs_info->tree_mod_seq);
946:
947: refcount_set(&ref->refs, 1);
948: ref->bytenr = generic_ref->bytenr;
949: ref->num_bytes = generic_ref->num_bytes;
950: ref->ref_mod = 1;
951: ref->action = action;
952: ref->seq = seq;
953: ref->type = btrfs_ref_type(generic_ref);
954: ref->ref_root = generic_ref->ref_root;
955: ref->parent = generic_ref->parent;
956: RB_CLEAR_NODE(&ref->ref_node);
957: INIT_LIST_HEAD(&ref->add_list);
958:
959: if (generic_ref->type == BTRFS_REF_DATA)
960: ref->data_ref = generic_ref->data_ref;
961: else
962: ref->tree_ref = generic_ref->tree_ref;
963: }
SourceFile:fs/btrfs/delayed-ref.c] |
| 18/2 | 2026/02/06 08:01 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 20400 output: 26 thoughts: 575 Thoughts: **My Analysis of a Potential Use-After-Free in Btrfs Delayed References** Okay, let's break this down. Line 947, `refcount_set(&ref->refs, 1);`, sets the initial refcount. So in `add_delayed_ref` the sequence is:

1. A `node` is allocated with its refcount initialized to 1.
2. `insert_delayed_ref` places the `node` into the head's ref tree; insertion transfers ownership of that single reference to the tree.
3. `spin_unlock(&delayed_refs->lock)` releases the lock protecting the delayed references.
4. A separate thread calls `__btrfs_run_delayed_refs`, which eventually calls `btrfs_put_delayed_ref(ref)`.
5. That put drops the `node`'s refcount to 0 and *frees* it.
6. Back on the original thread, `trace_add_delayed_tree_ref(trans->fs_info, node)` dereferences the freed node — and KASAN screams use-after-free.

The key question: if `node` is in the tree, shouldn't *something* hold a reference for the original thread? Not necessarily — inserting into the tree here *transfers* the only reference rather than taking an additional one. When `merged` is false, here's what `add_delayed_ref` does:

```c
1089: trace_add_delayed_tree_ref(trans->fs_info, node);
1090: if (merged)
1091: kmem_cache_free(btrfs_delayed_ref_node_cachep, node);
```

When `merged` is false the node is not freed locally, but it has already been given to the tree. If the tree owns that one reference, the original thread must either take its own reference or keep holding a lock that prevents the other thread from processing the tree until it is done with the node. `delayed_refs->lock` was dropped at line 1078, before the trace call, so this is a textbook use-after-free: an object published to a shared structure and then accessed by its creator without a reference or exclusive access. Let's verify the trace implementation — it's a macro. The KASAN report tells us exactly where the crash happens: `BUG: KASAN: slab-use-after-free in do_perf_trace_btrfs_delayed_tree_ref include/trace/events/btrfs.h:850 [inline]`. I'll check `include/trace/events/btrfs.h` now to confirm. |
| 19/2 | 2026/02/06 08:01 | tool | read-file |
Args:map[File:include/trace/events/btrfs.h] Results: map[Contents:/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM btrfs
#if !defined(_TRACE_BTRFS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_BTRFS_H
#include <linux/writeback.h>
#include <linux/tracepoint.h>
#include <trace/events/mmflags.h>
struct btrfs_root;
struct btrfs_fs_info;
struct btrfs_inode;
struct extent_map;
struct btrfs_file_extent_item;
struct btrfs_ordered_extent;
struct btrfs_delayed_ref_node;
struct btrfs_delayed_ref_head;
struct btrfs_block_group;
struct btrfs_free_cluster;
struct btrfs_chunk_map;
struct extent_buffer;
struct btrfs_work;
struct btrfs_workqueue;
struct btrfs_qgroup_extent_record;
struct btrfs_qgroup;
struct extent_io_tree;
struct prelim_ref;
struct btrfs_space_info;
struct btrfs_raid_bio;
struct raid56_bio_trace_info;
struct find_free_extent_ctl;
#define show_ref_type(type) \
__print_symbolic(type, \
{ BTRFS_TREE_BLOCK_REF_KEY, "TREE_BLOCK_REF" }, \
{ BTRFS_EXTENT_DATA_REF_KEY, "EXTENT_DATA_REF" }, \
{ BTRFS_SHARED_BLOCK_REF_KEY, "SHARED_BLOCK_REF" }, \
{ BTRFS_SHARED_DATA_REF_KEY, "SHARED_DATA_REF" })
#define __show_root_type(obj) \
__print_symbolic_u64(obj, \
{ BTRFS_ROOT_TREE_OBJECTID, "ROOT_TREE" }, \
{ BTRFS_EXTENT_TREE_OBJECTID, "EXTENT_TREE" }, \
{ BTRFS_CHUNK_TREE_OBJECTID, "CHUNK_TREE" }, \
{ BTRFS_DEV_TREE_OBJECTID, "DEV_TREE" }, \
{ BTRFS_FS_TREE_OBJECTID, "FS_TREE" }, \
{ BTRFS_ROOT_TREE_DIR_OBJECTID, "ROOT_TREE_DIR" }, \
{ BTRFS_CSUM_TREE_OBJECTID, "CSUM_TREE" }, \
{ BTRFS_TREE_LOG_OBJECTID, "TREE_LOG" }, \
{ BTRFS_QUOTA_TREE_OBJECTID, "QUOTA_TREE" }, \
{ BTRFS_TREE_RELOC_OBJECTID, "TREE_RELOC" }, \
{ BTRFS_UUID_TREE_OBJECTID, "UUID_TREE" }, \
{ BTRFS_FREE_SPACE_TREE_OBJECTID, "FREE_SPACE_TREE" }, \
{ BTRFS_BLOCK_GROUP_TREE_OBJECTID, "BLOCK_GROUP_TREE" },\
{ BTRFS_DATA_RELOC_TREE_OBJECTID, "DATA_RELOC_TREE" })
#define show_root_type(obj) \
obj, ((obj >= BTRFS_DATA_RELOC_TREE_OBJECTID) || \
(obj >= BTRFS_ROOT_TREE_OBJECTID && \
obj <= BTRFS_QUOTA_TREE_OBJECTID)) ? __show_root_type(obj) : "-"
#define FLUSH_ACTIONS \
EM( BTRFS_RESERVE_NO_FLUSH, "BTRFS_RESERVE_NO_FLUSH") \
EM( BTRFS_RESERVE_FLUSH_LIMIT, "BTRFS_RESERVE_FLUSH_LIMIT") \
EM( BTRFS_RESERVE_FLUSH_ALL, "BTRFS_RESERVE_FLUSH_ALL") \
EMe(BTRFS_RESERVE_FLUSH_ALL_STEAL, "BTRFS_RESERVE_FLUSH_ALL_STEAL")
#define FI_TYPES \
EM( BTRFS_FILE_EXTENT_INLINE, "INLINE") \
EM( BTRFS_FILE_EXTENT_REG, "REG") \
EMe(BTRFS_FILE_EXTENT_PREALLOC, "PREALLOC")
#define QGROUP_RSV_TYPES \
EM( BTRFS_QGROUP_RSV_DATA, "DATA") \
EM( BTRFS_QGROUP_RSV_META_PERTRANS, "META_PERTRANS") \
EMe(BTRFS_QGROUP_RSV_META_PREALLOC, "META_PREALLOC")
#define IO_TREE_OWNER \
EM( IO_TREE_FS_PINNED_EXTENTS, "PINNED_EXTENTS") \
EM( IO_TREE_FS_EXCLUDED_EXTENTS, "EXCLUDED_EXTENTS") \
EM( IO_TREE_BTREE_INODE_IO, "BTREE_INODE_IO") \
EM( IO_TREE_INODE_IO, "INODE_IO") \
EM( IO_TREE_RELOC_BLOCKS, "RELOC_BLOCKS") \
EM( IO_TREE_TRANS_DIRTY_PAGES, "TRANS_DIRTY_PAGES") \
EM( IO_TREE_ROOT_DIRTY_LOG_PAGES, "ROOT_DIRTY_LOG_PAGES") \
EM( IO_TREE_INODE_FILE_EXTENT, "INODE_FILE_EXTENT") \
EM( IO_TREE_LOG_CSUM_RANGE, "LOG_CSUM_RANGE") \
EMe(IO_TREE_SELFTEST, "SELFTEST")
#define FLUSH_STATES \
EM( FLUSH_DELAYED_ITEMS_NR, "FLUSH_DELAYED_ITEMS_NR") \
EM( FLUSH_DELAYED_ITEMS, "FLUSH_DELAYED_ITEMS") \
EM( FLUSH_DELALLOC, "FLUSH_DELALLOC") \
EM( FLUSH_DELALLOC_WAIT, "FLUSH_DELALLOC_WAIT") \
EM( FLUSH_DELALLOC_FULL, "FLUSH_DELALLOC_FULL") \
EM( FLUSH_DELAYED_REFS_NR, "FLUSH_DELAYED_REFS_NR") \
EM( FLUSH_DELAYED_REFS, "FLUSH_DELAYED_REFS") \
EM( ALLOC_CHUNK, "ALLOC_CHUNK") \
EM( ALLOC_CHUNK_FORCE, "ALLOC_CHUNK_FORCE") \
EM( RUN_DELAYED_IPUTS, "RUN_DELAYED_IPUTS") \
EM( COMMIT_TRANS, "COMMIT_TRANS") \
EMe(RESET_ZONES, "RESET_ZONES")
/*
* First define the enums in the above macros to be exported to userspace via
* TRACE_DEFINE_ENUM().
*/
#undef EM
#undef EMe
#define EM(a, b) TRACE_DEFINE_ENUM(a);
#define EMe(a, b) TRACE_DEFINE_ENUM(a);
FLUSH_ACTIONS
FI_TYPES
QGROUP_RSV_TYPES
IO_TREE_OWNER
FLUSH_STATES
/*
* Now redefine the EM and EMe macros to map the enums to the strings that will
* be printed in the output
*/
#undef EM
#undef EMe
#define EM(a, b) {a, b},
#define EMe(a, b) {a, b}
#define BTRFS_GROUP_FLAGS \
{ BTRFS_BLOCK_GROUP_DATA, "DATA"}, \
{ BTRFS_BLOCK_GROUP_SYSTEM, "SYSTEM"}, \
{ BTRFS_BLOCK_GROUP_METADATA, "METADATA"}, \
{ BTRFS_BLOCK_GROUP_RAID0, "RAID0"}, \
{ BTRFS_BLOCK_GROUP_RAID1, "RAID1"}, \
{ BTRFS_BLOCK_GROUP_DUP, "DUP"}, \
{ BTRFS_BLOCK_GROUP_RAID10, "RAID10"}, \
{ BTRFS_BLOCK_GROUP_RAID5, "RAID5"}, \
{ BTRFS_BLOCK_GROUP_RAID6, "RAID6"}
#define EXTENT_FLAGS \
{ EXTENT_DIRTY, "DIRTY"}, \
{ EXTENT_LOCKED, "LOCKED"}, \
{ EXTENT_DIRTY_LOG1, "DIRTY_LOG1"}, \
{ EXTENT_DIRTY_LOG2, "DIRTY_LOG2"}, \
{ EXTENT_DELALLOC, "DELALLOC"}, \
{ EXTENT_DEFRAG, "DEFRAG"}, \
{ EXTENT_BOUNDARY, "BOUNDARY"}, \
{ EXTENT_NODATASUM, "NODATASUM"}, \
{ EXTENT_CLEAR_META_RESV, "CLEAR_META_RESV"}, \
{ EXTENT_NEED_WAIT, "NEED_WAIT"}, \
{ EXTENT_NORESERVE, "NORESERVE"}, \
{ EXTENT_QGROUP_RESERVED, "QGROUP_RESERVED"}, \
{ EXTENT_CLEAR_DATA_RESV, "CLEAR_DATA_RESV"}, \
{ EXTENT_DELALLOC_NEW, "DELALLOC_NEW"}
#define BTRFS_FSID_SIZE 16
#define TP_STRUCT__entry_fsid __array(u8, fsid, BTRFS_FSID_SIZE)
#define TP_fast_assign_fsid(fs_info) \
({ \
if (fs_info) \
memcpy(__entry->fsid, fs_info->fs_devices->fsid, \
BTRFS_FSID_SIZE); \
else \
memset(__entry->fsid, 0, BTRFS_FSID_SIZE); \
})
#define TP_STRUCT__entry_btrfs(args...) \
TP_STRUCT__entry( \
TP_STRUCT__entry_fsid \
args)
#define TP_fast_assign_btrfs(fs_info, args...) \
TP_fast_assign( \
TP_fast_assign_fsid(fs_info); \
args)
#define TP_printk_btrfs(fmt, args...) \
TP_printk("%pU: " fmt, __entry->fsid, args)
TRACE_EVENT(btrfs_transaction_commit,
TP_PROTO(const struct btrfs_fs_info *fs_info),
TP_ARGS(fs_info),
TP_STRUCT__entry_btrfs(
__field( u64, generation )
__field( u64, root_objectid )
),
TP_fast_assign_btrfs(fs_info,
__entry->generation = fs_info->generation;
__entry->root_objectid = BTRFS_ROOT_TREE_OBJECTID;
),
TP_printk_btrfs("root=%llu(%s) gen=%llu",
show_root_type(__entry->root_objectid),
__entry->generation)
);
DECLARE_EVENT_CLASS(btrfs__inode,
TP_PROTO(const struct inode *inode),
TP_ARGS(inode),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( u64, blocks )
__field( u64, disk_i_size )
__field( u64, generation )
__field( u64, last_trans )
__field( u64, logged_trans )
__field( u64, root_objectid )
),
TP_fast_assign_btrfs(btrfs_sb(inode->i_sb),
__entry->ino = btrfs_ino(BTRFS_I(inode));
__entry->blocks = inode->i_blocks;
__entry->disk_i_size = BTRFS_I(inode)->disk_i_size;
__entry->generation = BTRFS_I(inode)->generation;
__entry->last_trans = BTRFS_I(inode)->last_trans;
__entry->logged_trans = BTRFS_I(inode)->logged_trans;
__entry->root_objectid = btrfs_root_id(BTRFS_I(inode)->root);
),
TP_printk_btrfs("root=%llu(%s) gen=%llu ino=%llu blocks=%llu "
"disk_i_size=%llu last_trans=%llu logged_trans=%llu",
show_root_type(__entry->root_objectid),
__entry->generation,
__entry->ino,
__entry->blocks,
__entry->disk_i_size,
__entry->last_trans,
__entry->logged_trans)
);
DEFINE_EVENT(btrfs__inode, btrfs_inode_new,
TP_PROTO(const struct inode *inode),
TP_ARGS(inode)
);
DEFINE_EVENT(btrfs__inode, btrfs_inode_request,
TP_PROTO(const struct inode *inode),
TP_ARGS(inode)
);
DEFINE_EVENT(btrfs__inode, btrfs_inode_evict,
TP_PROTO(const struct inode *inode),
TP_ARGS(inode)
);
#define __show_map_type(type) \
__print_symbolic_u64(type, \
{ EXTENT_MAP_LAST_BYTE, "LAST_BYTE" }, \
{ EXTENT_MAP_HOLE, "HOLE" }, \
{ EXTENT_MAP_INLINE, "INLINE" })
#define show_map_type(type) \
type, (type >= EXTENT_MAP_LAST_BYTE) ? "-" : __show_map_type(type)
#define show_map_flags(flag) \
__print_flags(flag, "|", \
{ EXTENT_FLAG_PINNED, "PINNED" },\
{ EXTENT_FLAG_COMPRESS_ZLIB, "COMPRESS_ZLIB" },\
{ EXTENT_FLAG_COMPRESS_LZO, "COMPRESS_LZO" },\
{ EXTENT_FLAG_COMPRESS_ZSTD, "COMPRESS_ZSTD" },\
{ EXTENT_FLAG_PREALLOC, "PREALLOC" },\
{ EXTENT_FLAG_LOGGING, "LOGGING" })
TRACE_EVENT_CONDITION(btrfs_get_extent,
TP_PROTO(const struct btrfs_root *root, const struct btrfs_inode *inode,
const struct extent_map *map),
TP_ARGS(root, inode, map),
TP_CONDITION(map),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
__field( u64, ino )
__field( u64, start )
__field( u64, len )
__field( u32, flags )
__field( int, refs )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = btrfs_root_id(root);
__entry->ino = btrfs_ino(inode);
__entry->start = map->start;
__entry->len = map->len;
__entry->flags = map->flags;
__entry->refs = refcount_read(&map->refs);
),
TP_printk_btrfs("root=%llu(%s) ino=%llu start=%llu len=%llu flags=%s refs=%u",
show_root_type(__entry->root_objectid),
__entry->ino,
__entry->start,
__entry->len,
show_map_flags(__entry->flags),
__entry->refs)
);
TRACE_EVENT(btrfs_handle_em_exist,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct extent_map *existing, const struct extent_map *map,
u64 start, u64 len),
TP_ARGS(fs_info, existing, map, start, len),
TP_STRUCT__entry_btrfs(
__field( u64, e_start )
__field( u64, e_len )
__field( u64, map_start )
__field( u64, map_len )
__field( u64, start )
__field( u64, len )
),
TP_fast_assign_btrfs(fs_info,
__entry->e_start = existing->start;
__entry->e_len = existing->len;
__entry->map_start = map->start;
__entry->map_len = map->len;
__entry->start = start;
__entry->len = len;
),
TP_printk_btrfs("start=%llu len=%llu "
"existing(start=%llu len=%llu) "
"em(start=%llu len=%llu)",
__entry->start,
__entry->len,
__entry->e_start,
__entry->e_len,
__entry->map_start,
__entry->map_len)
);
/* file extent item */
DECLARE_EVENT_CLASS(btrfs__file_extent_item_regular,
TP_PROTO(const struct btrfs_inode *bi, const struct extent_buffer *l,
const struct btrfs_file_extent_item *fi, u64 start),
TP_ARGS(bi, l, fi, start),
TP_STRUCT__entry_btrfs(
__field( u64, root_obj )
__field( u64, ino )
__field( loff_t, isize )
__field( u64, disk_isize )
__field( u64, num_bytes )
__field( u64, ram_bytes )
__field( u64, disk_bytenr )
__field( u64, disk_num_bytes )
__field( u64, extent_offset )
__field( u8, extent_type )
__field( u8, compression )
__field( u64, extent_start )
__field( u64, extent_end )
),
TP_fast_assign_btrfs(bi->root->fs_info,
__entry->root_obj = btrfs_root_id(bi->root);
__entry->ino = btrfs_ino(bi);
__entry->isize = bi->vfs_inode.i_size;
__entry->disk_isize = bi->disk_i_size;
__entry->num_bytes = btrfs_file_extent_num_bytes(l, fi);
__entry->ram_bytes = btrfs_file_extent_ram_bytes(l, fi);
__entry->disk_bytenr = btrfs_file_extent_disk_bytenr(l, fi);
__entry->disk_num_bytes = btrfs_file_extent_disk_num_bytes(l, fi);
__entry->extent_offset = btrfs_file_extent_offset(l, fi);
__entry->extent_type = btrfs_file_extent_type(l, fi);
__entry->compression = btrfs_file_extent_compression(l, fi);
__entry->extent_start = start;
__entry->extent_end = (start + __entry->num_bytes);
),
TP_printk_btrfs(
"root=%llu(%s) inode=%llu size=%llu disk_isize=%llu "
"file extent range=[%llu %llu] "
"(num_bytes=%llu ram_bytes=%llu disk_bytenr=%llu "
"disk_num_bytes=%llu extent_offset=%llu type=%s "
"compression=%u)",
show_root_type(__entry->root_obj), __entry->ino,
__entry->isize,
__entry->disk_isize, __entry->extent_start,
__entry->extent_end, __entry->num_bytes, __entry->ram_bytes,
__entry->disk_bytenr, __entry->disk_num_bytes,
__entry->extent_offset, __print_symbolic(__entry->extent_type, FI_TYPES),
__entry->compression)
);
DECLARE_EVENT_CLASS(
btrfs__file_extent_item_inline,
TP_PROTO(const struct btrfs_inode *bi, const struct extent_buffer *l,
const struct btrfs_file_extent_item *fi, int slot, u64 start),
TP_ARGS(bi, l, fi, slot, start),
TP_STRUCT__entry_btrfs(
__field( u64, root_obj )
__field( u64, ino )
__field( loff_t, isize )
__field( u64, disk_isize )
__field( u8, extent_type )
__field( u8, compression )
__field( u64, extent_start )
__field( u64, extent_end )
),
TP_fast_assign_btrfs(
bi->root->fs_info,
__entry->root_obj = btrfs_root_id(bi->root);
__entry->ino = btrfs_ino(bi);
__entry->isize = bi->vfs_inode.i_size;
__entry->disk_isize = bi->disk_i_size;
__entry->extent_type = btrfs_file_extent_type(l, fi);
__entry->compression = btrfs_file_extent_compression(l, fi);
__entry->extent_start = start;
__entry->extent_end = (start + btrfs_file_extent_ram_bytes(l, fi));
),
TP_printk_btrfs(
"root=%llu(%s) inode=%llu size=%llu disk_isize=%llu "
"file extent range=[%llu %llu] "
"extent_type=%s compression=%u",
show_root_type(__entry->root_obj), __entry->ino, __entry->isize,
__entry->disk_isize, __entry->extent_start,
__entry->extent_end, __print_symbolic(__entry->extent_type, FI_TYPES),
__entry->compression)
);
DEFINE_EVENT(
btrfs__file_extent_item_regular, btrfs_get_extent_show_fi_regular,
TP_PROTO(const struct btrfs_inode *bi, const struct extent_buffer *l,
const struct btrfs_file_extent_item *fi, u64 start),
TP_ARGS(bi, l, fi, start)
);
DEFINE_EVENT(
btrfs__file_extent_item_regular, btrfs_truncate_show_fi_regular,
TP_PROTO(const struct btrfs_inode *bi, const struct extent_buffer *l,
const struct btrfs_file_extent_item *fi, u64 start),
TP_ARGS(bi, l, fi, start)
);
DEFINE_EVENT(
btrfs__file_extent_item_inline, btrfs_get_extent_show_fi_inline,
TP_PROTO(const struct btrfs_inode *bi, const struct extent_buffer *l,
const struct btrfs_file_extent_item *fi, int slot, u64 start),
TP_ARGS(bi, l, fi, slot, start)
);
DEFINE_EVENT(
btrfs__file_extent_item_inline, btrfs_truncate_show_fi_inline,
TP_PROTO(const struct btrfs_inode *bi, const struct extent_buffer *l,
const struct btrfs_file_extent_item *fi, int slot, u64 start),
TP_ARGS(bi, l, fi, slot, start)
);
#define show_ordered_flags(flags) \
__print_flags(flags, "|", \
{ (1 << BTRFS_ORDERED_REGULAR), "REGULAR" }, \
{ (1 << BTRFS_ORDERED_NOCOW), "NOCOW" }, \
{ (1 << BTRFS_ORDERED_PREALLOC), "PREALLOC" }, \
{ (1 << BTRFS_ORDERED_COMPRESSED), "COMPRESSED" }, \
{ (1 << BTRFS_ORDERED_DIRECT), "DIRECT" }, \
{ (1 << BTRFS_ORDERED_IO_DONE), "IO_DONE" }, \
{ (1 << BTRFS_ORDERED_COMPLETE), "COMPLETE" }, \
{ (1 << BTRFS_ORDERED_IOERR), "IOERR" }, \
{ (1 << BTRFS_ORDERED_TRUNCATED), "TRUNCATED" })
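/*
 * show_ordered_flags() expands to a __print_flags() call that joins the names
 * of the set bits with the "|" delimiter. A standalone sketch of that joining
 * logic follows; bit positions are illustrative, and the real decoder also
 * dumps any unmatched leftover bits in hex, which this sketch omits.
 */

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct flag_name { unsigned long bit; const char *name; };

/* Append the name of each set, known bit to buf, separated by "|". */
static void show_flags(unsigned long flags, const struct flag_name *tbl,
		       size_t n, char *buf, size_t buflen)
{
	buf[0] = '\0';
	for (size_t i = 0; i < n; i++) {
		if (!(flags & tbl[i].bit))
			continue;
		if (buf[0])
			strncat(buf, "|", buflen - strlen(buf) - 1);
		strncat(buf, tbl[i].name, buflen - strlen(buf) - 1);
	}
}

/* Illustrative table; not the real BTRFS_ORDERED_* bit positions. */
static const struct flag_name demo_ordered_tbl[] = {
	{ 1UL << 0, "REGULAR" },
	{ 1UL << 1, "NOCOW" },
	{ 1UL << 2, "PREALLOC" },
};

static const char *demo_show_ordered(unsigned long flags)
{
	static char buf[64];

	show_flags(flags, demo_ordered_tbl, 3, buf, sizeof(buf));
	return buf;
}
```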
DECLARE_EVENT_CLASS(btrfs__ordered_extent,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( u64, file_offset )
__field( u64, start )
__field( u64, len )
__field( u64, disk_len )
__field( u64, bytes_left )
__field( unsigned long, flags )
__field( int, compress_type )
__field( int, refs )
__field( u64, root_objectid )
__field( u64, truncated_len )
),
TP_fast_assign_btrfs(inode->root->fs_info,
__entry->ino = btrfs_ino(inode);
__entry->file_offset = ordered->file_offset;
__entry->start = ordered->disk_bytenr;
__entry->len = ordered->num_bytes;
__entry->disk_len = ordered->disk_num_bytes;
__entry->bytes_left = ordered->bytes_left;
__entry->flags = ordered->flags;
__entry->compress_type = ordered->compress_type;
__entry->refs = refcount_read(&ordered->refs);
__entry->root_objectid = btrfs_root_id(inode->root);
__entry->truncated_len = ordered->truncated_len;
),
TP_printk_btrfs("root=%llu(%s) ino=%llu file_offset=%llu "
"start=%llu len=%llu disk_len=%llu "
"truncated_len=%llu "
"bytes_left=%llu flags=%s compress_type=%d "
"refs=%d",
show_root_type(__entry->root_objectid),
__entry->ino,
__entry->file_offset,
__entry->start,
__entry->len,
__entry->disk_len,
__entry->truncated_len,
__entry->bytes_left,
show_ordered_flags(__entry->flags),
__entry->compress_type, __entry->refs)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_add,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_remove,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_start,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_put,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_lookup,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_lookup_range,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_lookup_first_range,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_lookup_for_logging,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_lookup_first,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_split,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_dec_test_pending,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
DEFINE_EVENT(btrfs__ordered_extent, btrfs_ordered_extent_mark_finished,
TP_PROTO(const struct btrfs_inode *inode,
const struct btrfs_ordered_extent *ordered),
TP_ARGS(inode, ordered)
);
TRACE_EVENT(btrfs_finish_ordered_extent,
TP_PROTO(const struct btrfs_inode *inode, u64 start, u64 len,
bool uptodate),
TP_ARGS(inode, start, len, uptodate),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( u64, start )
__field( u64, len )
__field( bool, uptodate )
__field( u64, root_objectid )
),
TP_fast_assign_btrfs(inode->root->fs_info,
__entry->ino = btrfs_ino(inode);
__entry->start = start;
__entry->len = len;
__entry->uptodate = uptodate;
__entry->root_objectid = btrfs_root_id(inode->root);
),
TP_printk_btrfs("root=%llu(%s) ino=%llu start=%llu len=%llu uptodate=%d",
show_root_type(__entry->root_objectid),
__entry->ino, __entry->start,
__entry->len, !!__entry->uptodate)
);
DECLARE_EVENT_CLASS(btrfs__writepage,
TP_PROTO(const struct folio *folio, const struct inode *inode,
const struct writeback_control *wbc),
TP_ARGS(folio, inode, wbc),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( pgoff_t, index )
__field( long, nr_to_write )
__field( long, pages_skipped )
__field( loff_t, range_start )
__field( loff_t, range_end )
__field( char, for_kupdate )
__field( char, range_cyclic )
__field( unsigned long, writeback_index )
__field( u64, root_objectid )
),
TP_fast_assign_btrfs(btrfs_sb(inode->i_sb),
__entry->ino = btrfs_ino(BTRFS_I(inode));
__entry->index = folio->index;
__entry->nr_to_write = wbc->nr_to_write;
__entry->pages_skipped = wbc->pages_skipped;
__entry->range_start = wbc->range_start;
__entry->range_end = wbc->range_end;
__entry->for_kupdate = wbc->for_kupdate;
__entry->range_cyclic = wbc->range_cyclic;
__entry->writeback_index = inode->i_mapping->writeback_index;
__entry->root_objectid = btrfs_root_id(BTRFS_I(inode)->root);
),
TP_printk_btrfs("root=%llu(%s) ino=%llu page_index=%lu "
"nr_to_write=%ld pages_skipped=%ld range_start=%llu "
"range_end=%llu for_kupdate=%d "
"range_cyclic=%d writeback_index=%lu",
show_root_type(__entry->root_objectid),
__entry->ino, __entry->index,
__entry->nr_to_write, __entry->pages_skipped,
__entry->range_start, __entry->range_end,
__entry->for_kupdate, __entry->range_cyclic,
__entry->writeback_index)
);
DEFINE_EVENT(btrfs__writepage, extent_writepage,
TP_PROTO(const struct folio *folio, const struct inode *inode,
const struct writeback_control *wbc),
TP_ARGS(folio, inode, wbc)
);
TRACE_EVENT(btrfs_writepage_end_io_hook,
TP_PROTO(const struct btrfs_inode *inode, u64 start, u64 end,
int uptodate),
TP_ARGS(inode, start, end, uptodate),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( u64, start )
__field( u64, end )
__field( int, uptodate )
__field( u64, root_objectid )
),
TP_fast_assign_btrfs(inode->root->fs_info,
__entry->ino = btrfs_ino(inode);
__entry->start = start;
__entry->end = end;
__entry->uptodate = uptodate;
__entry->root_objectid = btrfs_root_id(inode->root);
),
TP_printk_btrfs("root=%llu(%s) ino=%llu start=%llu end=%llu uptodate=%d",
show_root_type(__entry->root_objectid),
__entry->ino, __entry->start,
__entry->end, __entry->uptodate)
);
TRACE_EVENT(btrfs_sync_file,
TP_PROTO(const struct file *file, int datasync),
TP_ARGS(file, datasync),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( u64, parent )
__field( int, datasync )
__field( u64, root_objectid )
),
TP_fast_assign(
const struct dentry *dentry = file->f_path.dentry;
const struct inode *inode = d_inode(dentry);
TP_fast_assign_fsid(btrfs_sb(file->f_path.dentry->d_sb));
__entry->ino = btrfs_ino(BTRFS_I(inode));
__entry->parent = btrfs_ino(BTRFS_I(d_inode(dentry->d_parent)));
__entry->datasync = datasync;
__entry->root_objectid = btrfs_root_id(BTRFS_I(inode)->root);
),
TP_printk_btrfs("root=%llu(%s) ino=%llu parent=%llu datasync=%d",
show_root_type(__entry->root_objectid),
__entry->ino,
__entry->parent,
__entry->datasync)
);
TRACE_EVENT(btrfs_sync_fs,
TP_PROTO(const struct btrfs_fs_info *fs_info, int wait),
TP_ARGS(fs_info, wait),
TP_STRUCT__entry_btrfs(
__field( int, wait )
),
TP_fast_assign_btrfs(fs_info,
__entry->wait = wait;
),
TP_printk_btrfs("wait=%d", __entry->wait)
);
TRACE_EVENT(btrfs_add_block_group,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_block_group *block_group, int create),
TP_ARGS(fs_info, block_group, create),
TP_STRUCT__entry_btrfs(
__field( u64, offset )
__field( u64, size )
__field( u64, flags )
__field( u64, bytes_used )
__field( u64, bytes_super )
__field( int, create )
),
TP_fast_assign_btrfs(fs_info,
__entry->offset = block_group->start;
__entry->size = block_group->length;
__entry->flags = block_group->flags;
__entry->bytes_used = block_group->used;
__entry->bytes_super = block_group->bytes_super;
__entry->create = create;
),
TP_printk_btrfs("block_group offset=%llu size=%llu "
"flags=%llu(%s) bytes_used=%llu bytes_super=%llu "
"create=%d",
__entry->offset,
__entry->size,
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS),
__entry->bytes_used,
__entry->bytes_super, __entry->create)
);
#define show_ref_action(action) \
__print_symbolic(action, \
{ BTRFS_ADD_DELAYED_REF, "ADD_DELAYED_REF" }, \
{ BTRFS_DROP_DELAYED_REF, "DROP_DELAYED_REF" }, \
{ BTRFS_ADD_DELAYED_EXTENT, "ADD_DELAYED_EXTENT" }, \
{ BTRFS_UPDATE_DELAYED_HEAD, "UPDATE_DELAYED_HEAD" })
DECLARE_EVENT_CLASS(btrfs_delayed_tree_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_node *ref),
TP_ARGS(fs_info, ref),
TP_STRUCT__entry_btrfs(
__field( u64, bytenr )
__field( u64, num_bytes )
__field( int, action )
__field( u64, parent )
__field( u64, ref_root )
__field( int, level )
__field( int, type )
__field( u64, seq )
),
TP_fast_assign_btrfs(fs_info,
__entry->bytenr = ref->bytenr;
__entry->num_bytes = ref->num_bytes;
__entry->action = ref->action;
__entry->parent = ref->parent;
__entry->ref_root = ref->ref_root;
__entry->level = ref->tree_ref.level;
__entry->type = ref->type;
__entry->seq = ref->seq;
),
TP_printk_btrfs("bytenr=%llu num_bytes=%llu action=%s "
"parent=%llu(%s) ref_root=%llu(%s) level=%d "
"type=%s seq=%llu",
__entry->bytenr,
__entry->num_bytes,
show_ref_action(__entry->action),
show_root_type(__entry->parent),
show_root_type(__entry->ref_root),
__entry->level, show_ref_type(__entry->type),
__entry->seq)
);
DEFINE_EVENT(btrfs_delayed_tree_ref, add_delayed_tree_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_node *ref),
TP_ARGS(fs_info, ref)
);
DEFINE_EVENT(btrfs_delayed_tree_ref, run_delayed_tree_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_node *ref),
TP_ARGS(fs_info, ref)
);
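/*
 * Callers of trace_add_delayed_tree_ref() must ensure the ref node is still
 * alive when the tracepoint dereferences it: once the node is published into
 * the delayed refs tree and delayed_refs->lock is dropped, a worker running
 * __btrfs_run_delayed_refs() can process and free it. One conventional fix is
 * to snapshot the fields (or fire the tracepoint) before ownership can be
 * lost. A hedged single-threaded sketch of that pattern, with all names and
 * fields illustrative rather than the real btrfs structures:
 */

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct demo_ref_node {
	uint64_t bytenr;
	uint64_t num_bytes;
};

struct demo_snapshot {
	uint64_t bytenr;
	uint64_t num_bytes;
};

/* Copy the fields the tracepoint needs while the node is still guaranteed
 * alive, i.e. before it is published and the lock is released. */
static struct demo_snapshot publish_and_snapshot(struct demo_ref_node *node)
{
	struct demo_snapshot snap = {
		.bytenr = node->bytenr,
		.num_bytes = node->num_bytes,
	};

	/* ...insert into the tree and drop the lock: from here on a worker
	 * may free 'node', but 'snap' remains valid for the tracepoint. */
	free(node);	/* simulate the racing worker freeing the node */
	return snap;
}
```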
DECLARE_EVENT_CLASS(btrfs_delayed_data_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_node *ref),
TP_ARGS(fs_info, ref),
TP_STRUCT__entry_btrfs(
__field( u64, bytenr )
__field( u64, num_bytes )
__field( int, action )
__field( u64, parent )
__field( u64, ref_root )
__field( u64, owner )
__field( u64, offset )
__field( int, type )
__field( u64, seq )
),
TP_fast_assign_btrfs(fs_info,
__entry->bytenr = ref->bytenr;
__entry->num_bytes = ref->num_bytes;
__entry->action = ref->action;
__entry->parent = ref->parent;
__entry->ref_root = ref->ref_root;
__entry->owner = ref->data_ref.objectid;
__entry->offset = ref->data_ref.offset;
__entry->type = ref->type;
__entry->seq = ref->seq;
),
TP_printk_btrfs("bytenr=%llu num_bytes=%llu action=%s "
"parent=%llu(%s) ref_root=%llu(%s) owner=%llu "
"offset=%llu type=%s seq=%llu",
__entry->bytenr,
__entry->num_bytes,
show_ref_action(__entry->action),
show_root_type(__entry->parent),
show_root_type(__entry->ref_root),
__entry->owner,
__entry->offset,
show_ref_type(__entry->type),
__entry->seq)
);
DEFINE_EVENT(btrfs_delayed_data_ref, add_delayed_data_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_node *ref),
TP_ARGS(fs_info, ref)
);
DEFINE_EVENT(btrfs_delayed_data_ref, run_delayed_data_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_node *ref),
TP_ARGS(fs_info, ref)
);
DECLARE_EVENT_CLASS(btrfs_delayed_ref_head,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_head *head_ref,
int action),
TP_ARGS(fs_info, head_ref, action),
TP_STRUCT__entry_btrfs(
__field( u64, bytenr )
__field( u64, num_bytes )
__field( int, action )
__field( int, is_data )
),
TP_fast_assign_btrfs(fs_info,
__entry->bytenr = head_ref->bytenr;
__entry->num_bytes = head_ref->num_bytes;
__entry->action = action;
__entry->is_data = head_ref->is_data;
),
TP_printk_btrfs("bytenr=%llu num_bytes=%llu action=%s is_data=%d",
__entry->bytenr,
__entry->num_bytes,
show_ref_action(__entry->action),
__entry->is_data)
);
DEFINE_EVENT(btrfs_delayed_ref_head, add_delayed_ref_head,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_head *head_ref,
int action),
TP_ARGS(fs_info, head_ref, action)
);
DEFINE_EVENT(btrfs_delayed_ref_head, run_delayed_ref_head,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_delayed_ref_head *head_ref,
int action),
TP_ARGS(fs_info, head_ref, action)
);
#define show_chunk_type(type) \
__print_flags(type, "|", \
{ BTRFS_BLOCK_GROUP_DATA, "DATA" }, \
{ BTRFS_BLOCK_GROUP_SYSTEM, "SYSTEM"}, \
{ BTRFS_BLOCK_GROUP_METADATA, "METADATA"}, \
{ BTRFS_BLOCK_GROUP_RAID0, "RAID0" }, \
{ BTRFS_BLOCK_GROUP_RAID1, "RAID1" }, \
{ BTRFS_BLOCK_GROUP_DUP, "DUP" }, \
{ BTRFS_BLOCK_GROUP_RAID10, "RAID10"}, \
{ BTRFS_BLOCK_GROUP_RAID5, "RAID5" }, \
{ BTRFS_BLOCK_GROUP_RAID6, "RAID6" })
DECLARE_EVENT_CLASS(btrfs__chunk,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_chunk_map *map, u64 offset, u64 size),
TP_ARGS(fs_info, map, offset, size),
TP_STRUCT__entry_btrfs(
__field( int, num_stripes )
__field( u64, type )
__field( int, sub_stripes )
__field( u64, offset )
__field( u64, size )
__field( u64, root_objectid )
),
TP_fast_assign_btrfs(fs_info,
__entry->num_stripes = map->num_stripes;
__entry->type = map->type;
__entry->sub_stripes = map->sub_stripes;
__entry->offset = offset;
__entry->size = size;
__entry->root_objectid = btrfs_root_id(fs_info->chunk_root);
),
TP_printk_btrfs("root=%llu(%s) offset=%llu size=%llu "
"num_stripes=%d sub_stripes=%d type=%s",
show_root_type(__entry->root_objectid),
__entry->offset,
__entry->size,
__entry->num_stripes, __entry->sub_stripes,
show_chunk_type(__entry->type))
);
DEFINE_EVENT(btrfs__chunk, btrfs_chunk_alloc,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_chunk_map *map, u64 offset, u64 size),
TP_ARGS(fs_info, map, offset, size)
);
DEFINE_EVENT(btrfs__chunk, btrfs_chunk_free,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_chunk_map *map, u64 offset, u64 size),
TP_ARGS(fs_info, map, offset, size)
);
TRACE_EVENT(btrfs_cow_block,
TP_PROTO(const struct btrfs_root *root, const struct extent_buffer *buf,
const struct extent_buffer *cow),
TP_ARGS(root, buf, cow),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
__field( u64, buf_start )
__field( int, refs )
__field( u64, cow_start )
__field( int, buf_level )
__field( int, cow_level )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = btrfs_root_id(root);
__entry->buf_start = buf->start;
__entry->refs = refcount_read(&buf->refs);
__entry->cow_start = cow->start;
__entry->buf_level = btrfs_header_level(buf);
__entry->cow_level = btrfs_header_level(cow);
),
TP_printk_btrfs("root=%llu(%s) refs=%d orig_buf=%llu "
"(orig_level=%d) cow_buf=%llu (cow_level=%d)",
show_root_type(__entry->root_objectid),
__entry->refs,
__entry->buf_start,
__entry->buf_level,
__entry->cow_start,
__entry->cow_level)
);
TRACE_EVENT(btrfs_space_reservation,
TP_PROTO(const struct btrfs_fs_info *fs_info, const char *type, u64 val,
u64 bytes, int reserve),
TP_ARGS(fs_info, type, val, bytes, reserve),
TP_STRUCT__entry_btrfs(
__string( type, type )
__field( u64, val )
__field( u64, bytes )
__field( int, reserve )
),
TP_fast_assign_btrfs(fs_info,
__assign_str(type);
__entry->val = val;
__entry->bytes = bytes;
__entry->reserve = reserve;
),
TP_printk_btrfs("%s: %llu %s %llu", __get_str(type), __entry->val,
__entry->reserve ? "reserve" : "release",
__entry->bytes)
);
TRACE_EVENT(btrfs_trigger_flush,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 flags, u64 bytes,
int flush, const char *reason),
TP_ARGS(fs_info, flags, bytes, flush, reason),
TP_STRUCT__entry_btrfs(
__field( u64, flags )
__field( u64, bytes )
__field( int, flush )
__string( reason, reason )
),
TP_fast_assign_btrfs(fs_info,
__entry->flags = flags;
__entry->bytes = bytes;
__entry->flush = flush;
__assign_str(reason);
),
TP_printk_btrfs("%s: flush=%d(%s) flags=%llu(%s) bytes=%llu",
__get_str(reason), __entry->flush,
__print_symbolic(__entry->flush, FLUSH_ACTIONS),
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS),
__entry->bytes)
);
TRACE_EVENT(btrfs_flush_space,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 flags, u64 num_bytes,
int state, int ret, bool for_preempt),
TP_ARGS(fs_info, flags, num_bytes, state, ret, for_preempt),
TP_STRUCT__entry_btrfs(
__field( u64, flags )
__field( u64, num_bytes )
__field( int, state )
__field( int, ret )
__field( bool, for_preempt )
),
TP_fast_assign_btrfs(fs_info,
__entry->flags = flags;
__entry->num_bytes = num_bytes;
__entry->state = state;
__entry->ret = ret;
__entry->for_preempt = for_preempt;
),
TP_printk_btrfs("state=%d(%s) flags=%llu(%s) num_bytes=%llu ret=%d for_preempt=%d",
__entry->state,
__print_symbolic(__entry->state, FLUSH_STATES),
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS),
__entry->num_bytes, __entry->ret, __entry->for_preempt)
);
DECLARE_EVENT_CLASS(btrfs__reserved_extent,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 start, u64 len),
TP_ARGS(fs_info, start, len),
TP_STRUCT__entry_btrfs(
__field( u64, start )
__field( u64, len )
),
TP_fast_assign_btrfs(fs_info,
__entry->start = start;
__entry->len = len;
),
TP_printk_btrfs("root=%llu(%s) start=%llu len=%llu",
show_root_type(BTRFS_EXTENT_TREE_OBJECTID),
__entry->start,
__entry->len)
);
DEFINE_EVENT(btrfs__reserved_extent, btrfs_reserved_extent_alloc,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 start, u64 len),
TP_ARGS(fs_info, start, len)
);
DEFINE_EVENT(btrfs__reserved_extent, btrfs_reserved_extent_free,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 start, u64 len),
TP_ARGS(fs_info, start, len)
);
TRACE_EVENT(btrfs_find_free_extent,
TP_PROTO(const struct btrfs_root *root,
const struct find_free_extent_ctl *ffe_ctl),
TP_ARGS(root, ffe_ctl),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
__field( u64, num_bytes )
__field( u64, empty_size )
__field( u64, flags )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = btrfs_root_id(root);
__entry->num_bytes = ffe_ctl->num_bytes;
__entry->empty_size = ffe_ctl->empty_size;
__entry->flags = ffe_ctl->flags;
),
TP_printk_btrfs("root=%llu(%s) len=%llu empty_size=%llu flags=%llu(%s)",
show_root_type(__entry->root_objectid),
__entry->num_bytes, __entry->empty_size, __entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS))
);
TRACE_EVENT(btrfs_find_free_extent_search_loop,
TP_PROTO(const struct btrfs_root *root,
const struct find_free_extent_ctl *ffe_ctl),
TP_ARGS(root, ffe_ctl),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
__field( u64, num_bytes )
__field( u64, empty_size )
__field( u64, flags )
__field( u64, loop )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = btrfs_root_id(root);
__entry->num_bytes = ffe_ctl->num_bytes;
__entry->empty_size = ffe_ctl->empty_size;
__entry->flags = ffe_ctl->flags;
__entry->loop = ffe_ctl->loop;
),
TP_printk_btrfs("root=%llu(%s) len=%llu empty_size=%llu flags=%llu(%s) loop=%llu",
show_root_type(__entry->root_objectid),
__entry->num_bytes, __entry->empty_size, __entry->flags,
__print_flags((unsigned long)__entry->flags, "|", BTRFS_GROUP_FLAGS),
__entry->loop)
);
TRACE_EVENT(btrfs_find_free_extent_have_block_group,
TP_PROTO(const struct btrfs_root *root,
const struct find_free_extent_ctl *ffe_ctl,
const struct btrfs_block_group *block_group),
TP_ARGS(root, ffe_ctl, block_group),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
__field( u64, num_bytes )
__field( u64, empty_size )
__field( u64, flags )
__field( u64, loop )
__field( bool, hinted )
__field( u64, bg_start )
__field( u64, bg_flags )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = btrfs_root_id(root);
__entry->num_bytes = ffe_ctl->num_bytes;
__entry->empty_size = ffe_ctl->empty_size;
__entry->flags = ffe_ctl->flags;
__entry->loop = ffe_ctl->loop;
__entry->hinted = ffe_ctl->hinted;
__entry->bg_start = block_group->start;
__entry->bg_flags = block_group->flags;
),
TP_printk_btrfs(
"root=%llu(%s) len=%llu empty_size=%llu flags=%llu(%s) loop=%llu hinted=%d block_group=%llu bg_flags=%llu(%s)",
show_root_type(__entry->root_objectid),
__entry->num_bytes, __entry->empty_size, __entry->flags,
__print_flags((unsigned long)__entry->flags, "|", BTRFS_GROUP_FLAGS),
__entry->loop, __entry->hinted,
__entry->bg_start, __entry->bg_flags,
__print_flags((unsigned long)__entry->bg_flags, "|",
BTRFS_GROUP_FLAGS))
);
DECLARE_EVENT_CLASS(btrfs__reserve_extent,
TP_PROTO(const struct btrfs_block_group *block_group,
const struct find_free_extent_ctl *ffe_ctl),
TP_ARGS(block_group, ffe_ctl),
TP_STRUCT__entry_btrfs(
__field( u64, bg_objectid )
__field( u64, flags )
__field( int, bg_size_class )
__field( u64, start )
__field( u64, len )
__field( u64, loop )
__field( bool, hinted )
__field( int, size_class )
),
TP_fast_assign_btrfs(block_group->fs_info,
__entry->bg_objectid = block_group->start;
__entry->flags = block_group->flags;
__entry->bg_size_class = block_group->size_class;
__entry->start = ffe_ctl->search_start;
__entry->len = ffe_ctl->num_bytes;
__entry->loop = ffe_ctl->loop;
__entry->hinted = ffe_ctl->hinted;
__entry->size_class = ffe_ctl->size_class;
),
TP_printk_btrfs(
"root=%llu(%s) block_group=%llu flags=%llu(%s) bg_size_class=%d start=%llu len=%llu loop=%llu hinted=%d size_class=%d",
show_root_type(BTRFS_EXTENT_TREE_OBJECTID),
__entry->bg_objectid,
__entry->flags, __print_flags((unsigned long)__entry->flags,
"|", BTRFS_GROUP_FLAGS),
__entry->bg_size_class, __entry->start, __entry->len,
__entry->loop, __entry->hinted, __entry->size_class)
);
DEFINE_EVENT(btrfs__reserve_extent, btrfs_reserve_extent,
TP_PROTO(const struct btrfs_block_group *block_group,
const struct find_free_extent_ctl *ffe_ctl),
TP_ARGS(block_group, ffe_ctl)
);
DEFINE_EVENT(btrfs__reserve_extent, btrfs_reserve_extent_cluster,
TP_PROTO(const struct btrfs_block_group *block_group,
const struct find_free_extent_ctl *ffe_ctl),
TP_ARGS(block_group, ffe_ctl)
);
TRACE_EVENT(btrfs_find_cluster,
TP_PROTO(const struct btrfs_block_group *block_group, u64 start,
u64 bytes, u64 empty_size, u64 min_bytes),
TP_ARGS(block_group, start, bytes, empty_size, min_bytes),
TP_STRUCT__entry_btrfs(
__field( u64, bg_objectid )
__field( u64, flags )
__field( u64, start )
__field( u64, bytes )
__field( u64, empty_size )
__field( u64, min_bytes )
),
TP_fast_assign_btrfs(block_group->fs_info,
__entry->bg_objectid = block_group->start;
__entry->flags = block_group->flags;
__entry->start = start;
__entry->bytes = bytes;
__entry->empty_size = empty_size;
__entry->min_bytes = min_bytes;
),
TP_printk_btrfs("block_group=%llu flags=%llu(%s) start=%llu len=%llu "
"empty_size=%llu min_bytes=%llu", __entry->bg_objectid,
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS), __entry->start,
__entry->bytes, __entry->empty_size, __entry->min_bytes)
);
TRACE_EVENT(btrfs_failed_cluster_setup,
TP_PROTO(const struct btrfs_block_group *block_group),
TP_ARGS(block_group),
TP_STRUCT__entry_btrfs(
__field( u64, bg_objectid )
),
TP_fast_assign_btrfs(block_group->fs_info,
__entry->bg_objectid = block_group->start;
),
TP_printk_btrfs("block_group=%llu", __entry->bg_objectid)
);
TRACE_EVENT(btrfs_setup_cluster,
TP_PROTO(const struct btrfs_block_group *block_group,
const struct btrfs_free_cluster *cluster,
u64 size, int bitmap),
TP_ARGS(block_group, cluster, size, bitmap),
TP_STRUCT__entry_btrfs(
__field( u64, bg_objectid )
__field( u64, flags )
__field( u64, start )
__field( u64, max_size )
__field( u64, size )
__field( int, bitmap )
),
TP_fast_assign_btrfs(block_group->fs_info,
__entry->bg_objectid = block_group->start;
__entry->flags = block_group->flags;
__entry->start = cluster->window_start;
__entry->max_size = cluster->max_size;
__entry->size = size;
__entry->bitmap = bitmap;
),
TP_printk_btrfs("block_group=%llu flags=%llu(%s) window_start=%llu "
"size=%llu max_size=%llu bitmap=%d",
__entry->bg_objectid,
__entry->flags,
__print_flags((unsigned long)__entry->flags, "|",
BTRFS_GROUP_FLAGS), __entry->start,
__entry->size, __entry->max_size, __entry->bitmap)
);
struct extent_state;
TRACE_EVENT(btrfs_alloc_extent_state,
TP_PROTO(const struct extent_state *state,
gfp_t mask, unsigned long IP),
TP_ARGS(state, mask, IP),
TP_STRUCT__entry(
__field(const struct extent_state *, state)
__field(unsigned long, mask)
__field(const void*, ip)
),
TP_fast_assign(
__entry->state = state,
__entry->mask = (__force unsigned long)mask,
__entry->ip = (const void *)IP
),
TP_printk("state=%p mask=%s caller=%pS", __entry->state,
show_gfp_flags(__entry->mask), __entry->ip)
);
TRACE_EVENT(btrfs_free_extent_state,
TP_PROTO(const struct extent_state *state, unsigned long IP),
TP_ARGS(state, IP),
TP_STRUCT__entry(
__field(const struct extent_state *, state)
__field(const void*, ip)
),
TP_fast_assign(
__entry->state = state,
__entry->ip = (const void *)IP
),
TP_printk("state=%p caller=%pS", __entry->state, __entry->ip)
);
DECLARE_EVENT_CLASS(btrfs__work,
TP_PROTO(const struct btrfs_work *work),
TP_ARGS(work),
TP_STRUCT__entry_btrfs(
__field( const void *, work )
__field( const void *, wq )
__field( const void *, func )
__field( const void *, ordered_func )
__field( const void *, normal_work )
),
TP_fast_assign_btrfs(btrfs_work_owner(work),
__entry->work = work;
__entry->wq = work->wq;
__entry->func = work->func;
__entry->ordered_func = work->ordered_func;
__entry->normal_work = &work->normal_work;
),
TP_printk_btrfs("work=%p (normal_work=%p) wq=%p func=%ps ordered_func=%p",
__entry->work, __entry->normal_work, __entry->wq,
__entry->func, __entry->ordered_func)
);
/*
* For situations when the work is freed, we pass fs_info and a tag that matches
* the address of the work structure so it can be paired with the scheduling
* event. DO NOT add anything here that dereferences wtag.
*/
DECLARE_EVENT_CLASS(btrfs__work__done,
TP_PROTO(const struct btrfs_fs_info *fs_info, const void *wtag),
TP_ARGS(fs_info, wtag),
TP_STRUCT__entry_btrfs(
__field( const void *, wtag )
),
TP_fast_assign_btrfs(fs_info,
__entry->wtag = wtag;
),
TP_printk_btrfs("work->%p", __entry->wtag)
);
DEFINE_EVENT(btrfs__work, btrfs_work_queued,
TP_PROTO(const struct btrfs_work *work),
TP_ARGS(work)
);
DEFINE_EVENT(btrfs__work, btrfs_work_sched,
TP_PROTO(const struct btrfs_work *work),
TP_ARGS(work)
);
DEFINE_EVENT(btrfs__work__done, btrfs_all_work_done,
TP_PROTO(const struct btrfs_fs_info *fs_info, const void *wtag),
TP_ARGS(fs_info, wtag)
);
DEFINE_EVENT(btrfs__work, btrfs_ordered_sched,
TP_PROTO(const struct btrfs_work *work),
TP_ARGS(work)
);
DECLARE_EVENT_CLASS(btrfs_workqueue,
TP_PROTO(const struct btrfs_workqueue *wq, const char *name),
TP_ARGS(wq, name),
TP_STRUCT__entry_btrfs(
__field( const void *, wq )
__string( name, name )
),
TP_fast_assign_btrfs(btrfs_workqueue_owner(wq),
__entry->wq = wq;
__assign_str(name);
),
TP_printk_btrfs("name=%s wq=%p", __get_str(name),
__entry->wq)
);
DEFINE_EVENT(btrfs_workqueue, btrfs_workqueue_alloc,
TP_PROTO(const struct btrfs_workqueue *wq, const char *name),
TP_ARGS(wq, name)
);
DECLARE_EVENT_CLASS(btrfs_workqueue_done,
TP_PROTO(const struct btrfs_workqueue *wq),
TP_ARGS(wq),
TP_STRUCT__entry_btrfs(
__field( const void *, wq )
),
TP_fast_assign_btrfs(btrfs_workqueue_owner(wq),
__entry->wq = wq;
),
TP_printk_btrfs("wq=%p", __entry->wq)
);
DEFINE_EVENT(btrfs_workqueue_done, btrfs_workqueue_destroy,
TP_PROTO(const struct btrfs_workqueue *wq),
TP_ARGS(wq)
);
#define BTRFS_QGROUP_OPERATIONS \
{ QGROUP_RESERVE, "reserve" }, \
{ QGROUP_RELEASE, "release" }, \
{ QGROUP_FREE, "free" }
DECLARE_EVENT_CLASS(btrfs__qgroup_rsv_data,
TP_PROTO(const struct inode *inode, u64 start, u64 len,
u64 reserved, int op),
TP_ARGS(inode, start, len, reserved, op),
TP_STRUCT__entry_btrfs(
__field( u64, rootid )
__field( u64, ino )
__field( u64, start )
__field( u64, len )
__field( u64, reserved )
__field( int, op )
),
TP_fast_assign_btrfs(btrfs_sb(inode->i_sb),
__entry->rootid = btrfs_root_id(BTRFS_I(inode)->root);
__entry->ino = btrfs_ino(BTRFS_I(inode));
__entry->start = start;
__entry->len = len;
__entry->reserved = reserved;
__entry->op = op;
),
TP_printk_btrfs("root=%llu ino=%llu start=%llu len=%llu reserved=%llu op=%s",
__entry->rootid, __entry->ino, __entry->start, __entry->len,
__entry->reserved,
__print_flags((unsigned long)__entry->op, "",
BTRFS_QGROUP_OPERATIONS)
)
);
DEFINE_EVENT(btrfs__qgroup_rsv_data, btrfs_qgroup_reserve_data,
TP_PROTO(const struct inode *inode, u64 start, u64 len,
u64 reserved, int op),
TP_ARGS(inode, start, len, reserved, op)
);
DEFINE_EVENT(btrfs__qgroup_rsv_data, btrfs_qgroup_release_data,
TP_PROTO(const struct inode *inode, u64 start, u64 len,
u64 reserved, int op),
TP_ARGS(inode, start, len, reserved, op)
);
DECLARE_EVENT_CLASS(btrfs_qgroup_extent,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_qgroup_extent_record *rec,
u64 bytenr),
TP_ARGS(fs_info, rec, bytenr),
TP_STRUCT__entry_btrfs(
__field( u64, bytenr )
__field( u64, num_bytes )
),
TP_fast_assign_btrfs(fs_info,
__entry->bytenr = bytenr;
__entry->num_bytes = rec->num_bytes;
),
TP_printk_btrfs("bytenr=%llu num_bytes=%llu",
__entry->bytenr, __entry->num_bytes)
);
DEFINE_EVENT(btrfs_qgroup_extent, btrfs_qgroup_account_extents,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_qgroup_extent_record *rec,
u64 bytenr),
TP_ARGS(fs_info, rec, bytenr)
);
DEFINE_EVENT(btrfs_qgroup_extent, btrfs_qgroup_trace_extent,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_qgroup_extent_record *rec,
u64 bytenr),
TP_ARGS(fs_info, rec, bytenr)
);
TRACE_EVENT(btrfs_qgroup_num_dirty_extents,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 transid,
u64 num_dirty_extents),
TP_ARGS(fs_info, transid, num_dirty_extents),
TP_STRUCT__entry_btrfs(
__field( u64, transid )
__field( u64, num_dirty_extents )
),
TP_fast_assign_btrfs(fs_info,
__entry->transid = transid;
__entry->num_dirty_extents = num_dirty_extents;
),
TP_printk_btrfs("transid=%llu num_dirty_extents=%llu",
__entry->transid, __entry->num_dirty_extents)
);
TRACE_EVENT(btrfs_qgroup_account_extent,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 transid, u64 bytenr,
u64 num_bytes, u64 nr_old_roots, u64 nr_new_roots),
TP_ARGS(fs_info, transid, bytenr, num_bytes, nr_old_roots,
nr_new_roots),
TP_STRUCT__entry_btrfs(
__field( u64, transid )
__field( u64, bytenr )
__field( u64, num_bytes )
__field( u64, nr_old_roots )
__field( u64, nr_new_roots )
),
TP_fast_assign_btrfs(fs_info,
__entry->transid = transid;
__entry->bytenr = bytenr;
__entry->num_bytes = num_bytes;
__entry->nr_old_roots = nr_old_roots;
__entry->nr_new_roots = nr_new_roots;
),
TP_printk_btrfs(
"transid=%llu bytenr=%llu num_bytes=%llu nr_old_roots=%llu nr_new_roots=%llu",
__entry->transid,
__entry->bytenr,
__entry->num_bytes,
__entry->nr_old_roots,
__entry->nr_new_roots)
);
TRACE_EVENT(btrfs_qgroup_update_counters,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_qgroup *qgroup,
u64 cur_old_count, u64 cur_new_count),
TP_ARGS(fs_info, qgroup, cur_old_count, cur_new_count),
TP_STRUCT__entry_btrfs(
__field( u64, qgid )
__field( u64, old_rfer )
__field( u64, old_excl )
__field( u64, cur_old_count )
__field( u64, cur_new_count )
),
TP_fast_assign_btrfs(fs_info,
__entry->qgid = qgroup->qgroupid;
__entry->old_rfer = qgroup->rfer;
__entry->old_excl = qgroup->excl;
__entry->cur_old_count = cur_old_count;
__entry->cur_new_count = cur_new_count;
),
TP_printk_btrfs("qgid=%llu old_rfer=%llu old_excl=%llu cur_old_count=%llu cur_new_count=%llu",
__entry->qgid, __entry->old_rfer, __entry->old_excl,
__entry->cur_old_count, __entry->cur_new_count)
);
TRACE_EVENT(btrfs_qgroup_update_reserve,
TP_PROTO(const struct btrfs_fs_info *fs_info, const struct btrfs_qgroup *qgroup,
s64 diff, int type),
TP_ARGS(fs_info, qgroup, diff, type),
TP_STRUCT__entry_btrfs(
__field( u64, qgid )
__field( u64, cur_reserved )
__field( s64, diff )
__field( int, type )
),
TP_fast_assign_btrfs(fs_info,
__entry->qgid = qgroup->qgroupid;
__entry->cur_reserved = qgroup->rsv.values[type];
__entry->diff = diff;
__entry->type = type;
),
TP_printk_btrfs("qgid=%llu type=%s cur_reserved=%llu diff=%lld",
__entry->qgid, __print_symbolic(__entry->type, QGROUP_RSV_TYPES),
__entry->cur_reserved, __entry->diff)
);
TRACE_EVENT(btrfs_qgroup_meta_reserve,
TP_PROTO(const struct btrfs_root *root, s64 diff, int type),
TP_ARGS(root, diff, type),
TP_STRUCT__entry_btrfs(
__field( u64, refroot )
__field( s64, diff )
__field( int, type )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->refroot = btrfs_root_id(root);
__entry->diff = diff;
__entry->type = type;
),
TP_printk_btrfs("refroot=%llu(%s) type=%s diff=%lld",
show_root_type(__entry->refroot),
__print_symbolic(__entry->type, QGROUP_RSV_TYPES), __entry->diff)
);
TRACE_EVENT(btrfs_qgroup_meta_convert,
TP_PROTO(const struct btrfs_root *root, s64 diff),
TP_ARGS(root, diff),
TP_STRUCT__entry_btrfs(
__field( u64, refroot )
__field( s64, diff )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->refroot = btrfs_root_id(root);
__entry->diff = diff;
),
TP_printk_btrfs("refroot=%llu(%s) type=%s->%s diff=%lld",
show_root_type(__entry->refroot),
__print_symbolic(BTRFS_QGROUP_RSV_META_PREALLOC, QGROUP_RSV_TYPES),
__print_symbolic(BTRFS_QGROUP_RSV_META_PERTRANS, QGROUP_RSV_TYPES),
__entry->diff)
);
TRACE_EVENT(btrfs_qgroup_meta_free_all_pertrans,
TP_PROTO(struct btrfs_root *root),
TP_ARGS(root),
TP_STRUCT__entry_btrfs(
__field( u64, refroot )
__field( s64, diff )
__field( int, type )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->refroot = btrfs_root_id(root);
spin_lock(&root->qgroup_meta_rsv_lock);
__entry->diff = -(s64)root->qgroup_meta_rsv_pertrans;
spin_unlock(&root->qgroup_meta_rsv_lock);
__entry->type = BTRFS_QGROUP_RSV_META_PERTRANS;
),
TP_printk_btrfs("refroot=%llu(%s) type=%s diff=%lld",
show_root_type(__entry->refroot),
__print_symbolic(__entry->type, QGROUP_RSV_TYPES), __entry->diff)
);
DECLARE_EVENT_CLASS(btrfs__prelim_ref,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct prelim_ref *oldref,
const struct prelim_ref *newref, u64 tree_size),
TP_ARGS(fs_info, oldref, newref, tree_size),
TP_STRUCT__entry_btrfs(
__field( u64, root_id )
__field( u64, objectid )
__field( u8, type )
__field( u64, offset )
__field( int, level )
__field( int, old_count )
__field( u64, parent )
__field( u64, bytenr )
__field( int, mod_count )
__field( u64, tree_size )
),
TP_fast_assign_btrfs(fs_info,
__entry->root_id = oldref->root_id;
__entry->objectid = oldref->key_for_search.objectid;
__entry->type = oldref->key_for_search.type;
__entry->offset = oldref->key_for_search.offset;
__entry->level = oldref->level;
__entry->old_count = oldref->count;
__entry->parent = oldref->parent;
__entry->bytenr = oldref->wanted_disk_byte;
__entry->mod_count = newref ? newref->count : 0;
__entry->tree_size = tree_size;
),
TP_printk_btrfs("root_id=%llu key=[%llu,%u,%llu] level=%d count=[%d+%d=%d] parent=%llu wanted_disk_byte=%llu nodes=%llu",
__entry->root_id,
__entry->objectid, __entry->type,
__entry->offset, __entry->level,
__entry->old_count, __entry->mod_count,
__entry->old_count + __entry->mod_count,
__entry->parent,
__entry->bytenr,
__entry->tree_size)
);
DEFINE_EVENT(btrfs__prelim_ref, btrfs_prelim_ref_merge,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct prelim_ref *oldref,
const struct prelim_ref *newref, u64 tree_size),
TP_ARGS(fs_info, oldref, newref, tree_size)
);
DEFINE_EVENT(btrfs__prelim_ref, btrfs_prelim_ref_insert,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct prelim_ref *oldref,
const struct prelim_ref *newref, u64 tree_size),
TP_ARGS(fs_info, oldref, newref, tree_size)
);
TRACE_EVENT(btrfs_inode_mod_outstanding_extents,
TP_PROTO(const struct btrfs_root *root, u64 ino, int mod, unsigned outstanding),
TP_ARGS(root, ino, mod, outstanding),
TP_STRUCT__entry_btrfs(
__field( u64, root_objectid )
__field( u64, ino )
__field( int, mod )
__field( unsigned, outstanding )
),
TP_fast_assign_btrfs(root->fs_info,
__entry->root_objectid = btrfs_root_id(root);
__entry->ino = ino;
__entry->mod = mod;
__entry->outstanding = outstanding;
),
TP_printk_btrfs("root=%llu(%s) ino=%llu mod=%d outstanding=%u",
show_root_type(__entry->root_objectid),
__entry->ino, __entry->mod, __entry->outstanding)
);
DECLARE_EVENT_CLASS(btrfs__block_group,
TP_PROTO(const struct btrfs_block_group *bg_cache),
TP_ARGS(bg_cache),
TP_STRUCT__entry_btrfs(
__field( u64, bytenr )
__field( u64, len )
__field( u64, used )
__field( u64, flags )
),
TP_fast_assign_btrfs(bg_cache->fs_info,
__entry->bytenr = bg_cache->start,
__entry->len = bg_cache->length,
__entry->used = bg_cache->used;
__entry->flags = bg_cache->flags;
),
TP_printk_btrfs("bg bytenr=%llu len=%llu used=%llu flags=%llu(%s)",
__entry->bytenr, __entry->len, __entry->used, __entry->flags,
__print_flags(__entry->flags, "|", BTRFS_GROUP_FLAGS))
);
DEFINE_EVENT(btrfs__block_group, btrfs_remove_block_group,
TP_PROTO(const struct btrfs_block_group *bg_cache),
TP_ARGS(bg_cache)
);
DEFINE_EVENT(btrfs__block_group, btrfs_add_unused_block_group,
TP_PROTO(const struct btrfs_block_group *bg_cache),
TP_ARGS(bg_cache)
);
DEFINE_EVENT(btrfs__block_group, btrfs_add_reclaim_block_group,
TP_PROTO(const struct btrfs_block_group *bg_cache),
TP_ARGS(bg_cache)
);
DEFINE_EVENT(btrfs__block_group, btrfs_reclaim_block_group,
TP_PROTO(const struct btrfs_block_group *bg_cache),
TP_ARGS(bg_cache)
);
DEFINE_EVENT(btrfs__block_group, btrfs_skip_unused_block_group,
TP_PROTO(const struct btrfs_block_group *bg_cache),
TP_ARGS(bg_cache)
);
TRACE_EVENT(btrfs_set_extent_bit,
TP_PROTO(const struct extent_io_tree *tree,
u64 start, u64 len, unsigned set_bits),
TP_ARGS(tree, start, len, set_bits),
TP_STRUCT__entry_btrfs(
__field( unsigned, owner )
__field( u64, ino )
__field( u64, rootid )
__field( u64, start )
__field( u64, len )
__field( unsigned, set_bits)
),
TP_fast_assign_btrfs(btrfs_extent_io_tree_to_fs_info(tree),
const struct btrfs_inode *inode = btrfs_extent_io_tree_to_inode(tree);
__entry->owner = tree->owner;
__entry->ino = inode ? btrfs_ino(inode) : 0;
__entry->rootid = inode ? btrfs_root_id(inode->root) : 0;
__entry->start = start;
__entry->len = len;
__entry->set_bits = set_bits;
),
TP_printk_btrfs(
"io_tree=%s ino=%llu root=%llu start=%llu len=%llu set_bits=%s",
__print_symbolic(__entry->owner, IO_TREE_OWNER), __entry->ino,
__entry->rootid, __entry->start, __entry->len,
__print_flags(__entry->set_bits, "|", EXTENT_FLAGS))
);
TRACE_EVENT(btrfs_clear_extent_bit,
TP_PROTO(const struct extent_io_tree *tree,
u64 start, u64 len, unsigned clear_bits),
TP_ARGS(tree, start, len, clear_bits),
TP_STRUCT__entry_btrfs(
__field( unsigned, owner )
__field( u64, ino )
__field( u64, rootid )
__field( u64, start )
__field( u64, len )
__field( unsigned, clear_bits)
),
TP_fast_assign_btrfs(btrfs_extent_io_tree_to_fs_info(tree),
const struct btrfs_inode *inode = btrfs_extent_io_tree_to_inode(tree);
__entry->owner = tree->owner;
__entry->ino = inode ? btrfs_ino(inode) : 0;
__entry->rootid = inode ? btrfs_root_id(inode->root) : 0;
__entry->start = start;
__entry->len = len;
__entry->clear_bits = clear_bits;
),
TP_printk_btrfs(
"io_tree=%s ino=%llu root=%llu start=%llu len=%llu clear_bits=%s",
__print_symbolic(__entry->owner, IO_TREE_OWNER), __entry->ino,
__entry->rootid, __entry->start, __entry->len,
__print_flags(__entry->clear_bits, "|", EXTENT_FLAGS))
);
TRACE_EVENT(btrfs_convert_extent_bit,
TP_PROTO(const struct extent_io_tree *tree,
u64 start, u64 len, unsigned set_bits, unsigned clear_bits),
TP_ARGS(tree, start, len, set_bits, clear_bits),
TP_STRUCT__entry_btrfs(
__field( unsigned, owner )
__field( u64, ino )
__field( u64, rootid )
__field( u64, start )
__field( u64, len )
__field( unsigned, set_bits)
__field( unsigned, clear_bits)
),
TP_fast_assign_btrfs(btrfs_extent_io_tree_to_fs_info(tree),
const struct btrfs_inode *inode = btrfs_extent_io_tree_to_inode(tree);
__entry->owner = tree->owner;
__entry->ino = inode ? btrfs_ino(inode) : 0;
__entry->rootid = inode ? btrfs_root_id(inode->root) : 0;
__entry->start = start;
__entry->len = len;
__entry->set_bits = set_bits;
__entry->clear_bits = clear_bits;
),
TP_printk_btrfs(
"io_tree=%s ino=%llu root=%llu start=%llu len=%llu set_bits=%s clear_bits=%s",
__print_symbolic(__entry->owner, IO_TREE_OWNER), __entry->ino,
__entry->rootid, __entry->start, __entry->len,
__print_flags(__entry->set_bits , "|", EXTENT_FLAGS),
__print_flags(__entry->clear_bits, "|", EXTENT_FLAGS))
);
DECLARE_EVENT_CLASS(btrfs_dump_space_info,
TP_PROTO(struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo),
TP_ARGS(fs_info, sinfo),
TP_STRUCT__entry_btrfs(
__field( u64, flags )
__field( u64, total_bytes )
__field( u64, bytes_used )
__field( u64, bytes_pinned )
__field( u64, bytes_reserved )
__field( u64, bytes_may_use )
__field( u64, bytes_readonly )
__field( u64, reclaim_size )
__field( int, clamp )
__field( u64, global_reserved )
__field( u64, trans_reserved )
__field( u64, delayed_refs_reserved )
__field( u64, delayed_reserved )
__field( u64, free_chunk_space )
__field( u64, delalloc_bytes )
__field( u64, ordered_bytes )
),
TP_fast_assign_btrfs(fs_info,
__entry->flags = sinfo->flags;
__entry->total_bytes = sinfo->total_bytes;
__entry->bytes_used = sinfo->bytes_used;
__entry->bytes_pinned = sinfo->bytes_pinned;
__entry->bytes_reserved = sinfo->bytes_reserved;
__entry->bytes_may_use = sinfo->bytes_may_use;
__entry->bytes_readonly = sinfo->bytes_readonly;
__entry->reclaim_size = sinfo->reclaim_size;
__entry->clamp = sinfo->clamp;
__entry->global_reserved = fs_info->global_block_rsv.reserved;
__entry->trans_reserved = fs_info->trans_block_rsv.reserved;
__entry->delayed_refs_reserved = fs_info->delayed_refs_rsv.reserved;
__entry->delayed_reserved = fs_info->delayed_block_rsv.reserved;
__entry->free_chunk_space = atomic64_read(&fs_info->free_chunk_space);
__entry->delalloc_bytes = percpu_counter_sum_positive(&fs_info->delalloc_bytes);
__entry->ordered_bytes = percpu_counter_sum_positive(&fs_info->ordered_bytes);
),
TP_printk_btrfs("flags=%s total_bytes=%llu bytes_used=%llu "
"bytes_pinned=%llu bytes_reserved=%llu "
"bytes_may_use=%llu bytes_readonly=%llu "
"reclaim_size=%llu clamp=%d global_reserved=%llu "
"trans_reserved=%llu delayed_refs_reserved=%llu "
"delayed_reserved=%llu chunk_free_space=%llu "
"delalloc_bytes=%llu ordered_bytes=%llu",
__print_flags(__entry->flags, "|", BTRFS_GROUP_FLAGS),
__entry->total_bytes, __entry->bytes_used,
__entry->bytes_pinned, __entry->bytes_reserved,
__entry->bytes_may_use, __entry->bytes_readonly,
__entry->reclaim_size, __entry->clamp,
__entry->global_reserved, __entry->trans_reserved,
__entry->delayed_refs_reserved,
__entry->delayed_reserved, __entry->free_chunk_space,
__entry->delalloc_bytes, __entry->ordered_bytes)
);
DEFINE_EVENT(btrfs_dump_space_info, btrfs_done_preemptive_reclaim,
TP_PROTO(struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo),
TP_ARGS(fs_info, sinfo)
);
DEFINE_EVENT(btrfs_dump_space_info, btrfs_fail_all_tickets,
TP_PROTO(struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo),
TP_ARGS(fs_info, sinfo)
);
TRACE_EVENT(btrfs_reserve_ticket,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 flags, u64 bytes,
u64 start_ns, int flush, int error),
TP_ARGS(fs_info, flags, bytes, start_ns, flush, error),
TP_STRUCT__entry_btrfs(
__field( u64, flags )
__field( u64, bytes )
__field( u64, start_ns )
__field( int, flush )
__field( int, error )
),
TP_fast_assign_btrfs(fs_info,
__entry->flags = flags;
__entry->bytes = bytes;
__entry->start_ns = start_ns;
__entry->flush = flush;
__entry->error = error;
),
TP_printk_btrfs("flags=%s bytes=%llu start_ns=%llu flush=%s error=%d",
__print_flags(__entry->flags, "|", BTRFS_GROUP_FLAGS),
__entry->bytes, __entry->start_ns,
__print_symbolic(__entry->flush, FLUSH_ACTIONS),
__entry->error)
);
DECLARE_EVENT_CLASS(btrfs_sleep_tree_lock,
TP_PROTO(const struct extent_buffer *eb, u64 start_ns),
TP_ARGS(eb, start_ns),
TP_STRUCT__entry_btrfs(
__field( u64, block )
__field( u64, generation )
__field( u64, start_ns )
__field( u64, end_ns )
__field( u64, diff_ns )
__field( u64, owner )
__field( int, is_log_tree )
),
TP_fast_assign_btrfs(eb->fs_info,
__entry->block = eb->start;
__entry->generation = btrfs_header_generation(eb);
__entry->start_ns = start_ns;
__entry->end_ns = ktime_get_ns();
__entry->diff_ns = __entry->end_ns - start_ns;
__entry->owner = btrfs_header_owner(eb);
__entry->is_log_tree = (eb->log_index >= 0);
),
TP_printk_btrfs(
"block=%llu generation=%llu start_ns=%llu end_ns=%llu diff_ns=%llu owner=%llu is_log_tree=%d",
__entry->block, __entry->generation,
__entry->start_ns, __entry->end_ns, __entry->diff_ns,
__entry->owner, __entry->is_log_tree)
);
DEFINE_EVENT(btrfs_sleep_tree_lock, btrfs_tree_read_lock,
TP_PROTO(const struct extent_buffer *eb, u64 start_ns),
TP_ARGS(eb, start_ns)
);
DEFINE_EVENT(btrfs_sleep_tree_lock, btrfs_tree_lock,
TP_PROTO(const struct extent_buffer *eb, u64 start_ns),
TP_ARGS(eb, start_ns)
);
DECLARE_EVENT_CLASS(btrfs_locking_events,
TP_PROTO(const struct extent_buffer *eb),
TP_ARGS(eb),
TP_STRUCT__entry_btrfs(
__field( u64, block )
__field( u64, generation )
__field( u64, owner )
__field( int, is_log_tree )
),
TP_fast_assign_btrfs(eb->fs_info,
__entry->block = eb->start;
__entry->generation = btrfs_header_generation(eb);
__entry->owner = btrfs_header_owner(eb);
__entry->is_log_tree = (eb->log_index >= 0);
),
TP_printk_btrfs("block=%llu generation=%llu owner=%llu is_log_tree=%d",
__entry->block, __entry->generation,
__entry->owner, __entry->is_log_tree)
);
#define DEFINE_BTRFS_LOCK_EVENT(name) \
DEFINE_EVENT(btrfs_locking_events, name, \
TP_PROTO(const struct extent_buffer *eb), \
\
TP_ARGS(eb) \
)
DEFINE_BTRFS_LOCK_EVENT(btrfs_tree_unlock);
DEFINE_BTRFS_LOCK_EVENT(btrfs_tree_read_unlock);
DEFINE_BTRFS_LOCK_EVENT(btrfs_try_tree_read_lock);
DECLARE_EVENT_CLASS(btrfs__space_info_update,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo, u64 old, s64 diff),
TP_ARGS(fs_info, sinfo, old, diff),
TP_STRUCT__entry_btrfs(
__field( u64, type )
__field( u64, old )
__field( s64, diff )
),
TP_fast_assign_btrfs(fs_info,
__entry->type = sinfo->flags;
__entry->old = old;
__entry->diff = diff;
),
TP_printk_btrfs("type=%s old=%llu diff=%lld",
__print_flags(__entry->type, "|", BTRFS_GROUP_FLAGS),
__entry->old, __entry->diff)
);
DEFINE_EVENT(btrfs__space_info_update, update_bytes_may_use,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo, u64 old, s64 diff),
TP_ARGS(fs_info, sinfo, old, diff)
);
DEFINE_EVENT(btrfs__space_info_update, update_bytes_pinned,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo, u64 old, s64 diff),
TP_ARGS(fs_info, sinfo, old, diff)
);
DEFINE_EVENT(btrfs__space_info_update, update_bytes_zone_unusable,
TP_PROTO(const struct btrfs_fs_info *fs_info,
const struct btrfs_space_info *sinfo, u64 old, s64 diff),
TP_ARGS(fs_info, sinfo, old, diff)
);
DECLARE_EVENT_CLASS(btrfs_raid56_bio,
TP_PROTO(const struct btrfs_raid_bio *rbio,
const struct bio *bio,
const struct raid56_bio_trace_info *trace_info),
TP_ARGS(rbio, bio, trace_info),
TP_STRUCT__entry_btrfs(
__field( u64, full_stripe )
__field( u64, physical )
__field( u64, devid )
__field( u32, offset )
__field( u32, len )
__field( u8, opf )
__field( u8, total_stripes )
__field( u8, real_stripes )
__field( u8, nr_data )
__field( u8, stripe_nr )
),
TP_fast_assign_btrfs(rbio->bioc->fs_info,
__entry->full_stripe = rbio->bioc->full_stripe_logical;
__entry->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
__entry->len = bio->bi_iter.bi_size;
__entry->opf = bio_op(bio);
__entry->devid = trace_info->devid;
__entry->offset = trace_info->offset;
__entry->stripe_nr = trace_info->stripe_nr;
__entry->total_stripes = rbio->bioc->num_stripes;
__entry->real_stripes = rbio->real_stripes;
__entry->nr_data = rbio->nr_data;
),
/*
* For type output, we need to output things like "DATA1"
* (the first data stripe), "DATA2" (the second data stripe),
* "PQ1" (P stripe),"PQ2" (Q stripe), "REPLACE0" (replace target device).
*/
TP_printk_btrfs(
"full_stripe=%llu devid=%lld type=%s%d offset=%d opf=0x%x physical=%llu len=%u",
__entry->full_stripe, __entry->devid,
(__entry->stripe_nr < __entry->nr_data) ? "DATA" :
((__entry->stripe_nr < __entry->real_stripes) ? "PQ" :
"REPLACE"),
(__entry->stripe_nr < __entry->nr_data) ?
(__entry->stripe_nr + 1) :
((__entry->stripe_nr < __entry->real_stripes) ?
(__entry->stripe_nr - __entry->nr_data + 1) : 0),
__entry->offset, __entry->opf, __entry->physical, __entry->len)
);
DEFINE_EVENT(btrfs_raid56_bio, raid56_read,
TP_PROTO(const struct btrfs_raid_bio *rbio,
const struct bio *bio,
const struct raid56_bio_trace_info *trace_info),
TP_ARGS(rbio, bio, trace_info)
);
DEFINE_EVENT(btrfs_raid56_bio, raid56_write,
TP_PROTO(const struct btrfs_raid_bio *rbio,
const struct bio *bio,
const struct raid56_bio_trace_info *trace_info),
TP_ARGS(rbio, bio, trace_info)
);
TRACE_EVENT(btrfs_insert_one_raid_extent,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 logical, u64 length,
int num_stripes),
TP_ARGS(fs_info, logical, length, num_stripes),
TP_STRUCT__entry_btrfs(
__field( u64, logical )
__field( u64, length )
__field( int, num_stripes )
),
TP_fast_assign_btrfs(fs_info,
__entry->logical = logical;
__entry->length = length;
__entry->num_stripes = num_stripes;
),
TP_printk_btrfs("logical=%llu length=%llu num_stripes=%d",
__entry->logical, __entry->length,
__entry->num_stripes)
);
TRACE_EVENT(btrfs_raid_extent_delete,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 start, u64 end,
u64 found_start, u64 found_end),
TP_ARGS(fs_info, start, end, found_start, found_end),
TP_STRUCT__entry_btrfs(
__field( u64, start )
__field( u64, end )
__field( u64, found_start )
__field( u64, found_end )
),
TP_fast_assign_btrfs(fs_info,
__entry->start = start;
__entry->end = end;
__entry->found_start = found_start;
__entry->found_end = found_end;
),
TP_printk_btrfs("start=%llu end=%llu found_start=%llu found_end=%llu",
__entry->start, __entry->end, __entry->found_start,
__entry->found_end)
);
TRACE_EVENT(btrfs_get_raid_extent_offset,
TP_PROTO(const struct btrfs_fs_info *fs_info, u64 logical, u64 length,
u64 physical, u64 devid),
TP_ARGS(fs_info, logical, length, physical, devid),
TP_STRUCT__entry_btrfs(
__field( u64, logical )
__field( u64, length )
__field( u64, physical )
__field( u64, devid )
),
TP_fast_assign_btrfs(fs_info,
__entry->logical = logical;
__entry->length = length;
__entry->physical = physical;
__entry->devid = devid;
),
TP_printk_btrfs("logical=%llu length=%llu physical=%llu devid=%llu",
__entry->logical, __entry->length, __entry->physical,
__entry->devid)
);
TRACE_EVENT(btrfs_extent_map_shrinker_count,
TP_PROTO(const struct btrfs_fs_info *fs_info, long nr),
TP_ARGS(fs_info, nr),
TP_STRUCT__entry_btrfs(
__field( long, nr )
),
TP_fast_assign_btrfs(fs_info,
__entry->nr = nr;
),
TP_printk_btrfs("nr=%ld", __entry->nr)
);
TRACE_EVENT(btrfs_extent_map_shrinker_scan_enter,
TP_PROTO(const struct btrfs_fs_info *fs_info, long nr),
TP_ARGS(fs_info, nr),
TP_STRUCT__entry_btrfs(
__field( long, nr_to_scan )
__field( long, nr )
__field( u64, last_root_id )
__field( u64, last_ino )
),
TP_fast_assign_btrfs(fs_info,
__entry->nr_to_scan = \
atomic64_read(&fs_info->em_shrinker_nr_to_scan);
__entry->nr = nr;
__entry->last_root_id = fs_info->em_shrinker_last_root;
__entry->last_ino = fs_info->em_shrinker_last_ino;
),
TP_printk_btrfs("nr_to_scan=%ld nr=%ld last_root=%llu(%s) last_ino=%llu",
__entry->nr_to_scan, __entry->nr,
show_root_type(__entry->last_root_id), __entry->last_ino)
);
TRACE_EVENT(btrfs_extent_map_shrinker_scan_exit,
TP_PROTO(const struct btrfs_fs_info *fs_info, long nr_dropped, long nr),
TP_ARGS(fs_info, nr_dropped, nr),
TP_STRUCT__entry_btrfs(
__field( long, nr_dropped )
__field( long, nr )
__field( u64, last_root_id )
__field( u64, last_ino )
),
TP_fast_assign_btrfs(fs_info,
__entry->nr_dropped = nr_dropped;
__entry->nr = nr;
__entry->last_root_id = fs_info->em_shrinker_last_root;
__entry->last_ino = fs_info->em_shrinker_last_ino;
),
TP_printk_btrfs("nr_dropped=%ld nr=%ld last_root=%llu(%s) last_ino=%llu",
__entry->nr_dropped, __entry->nr,
show_root_type(__entry->last_root_id), __entry->last_ino)
);
TRACE_EVENT(btrfs_extent_map_shrinker_remove_em,
TP_PROTO(const struct btrfs_inode *inode, const struct extent_map *em),
TP_ARGS(inode, em),
TP_STRUCT__entry_btrfs(
__field( u64, ino )
__field( u64, root_id )
__field( u64, start )
__field( u64, len )
__field( u32, flags )
),
TP_fast_assign_btrfs(inode->root->fs_info,
__entry->ino = btrfs_ino(inode);
__entry->root_id = btrfs_root_id(inode->root);
__entry->start = em->start;
__entry->len = em->len;
__entry->flags = em->flags;
),
TP_printk_btrfs("ino=%llu root=%llu(%s) start=%llu len=%llu flags=%s",
__entry->ino, show_root_type(__entry->root_id),
__entry->start, __entry->len,
show_map_flags(__entry->flags))
);
#endif /* _TRACE_BTRFS_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
] |
| 20/2 | 2026/02/06 08:01 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 52457 output: 44 thoughts: 836 Thoughts: **UAF in btrfs: A Clear and Actionable Bug**

Okay, so let me break this down. We've got a potential use-after-free (UAF) situation in the `btrfs` code, specifically related to delayed tree references. The tracepoint `add_delayed_tree_ref` is the key. I see it's defined to log information about a `btrfs_delayed_ref_node`. The macro `TP_fast_assign_btrfs` within the tracepoint definition, importantly, dereferences the `ref` argument, which, in this context, is the `node`.

Now, looking at the code, in `add_delayed_ref`, this `node` is inserted into a tree, and then the protecting lock (`delayed_refs->lock`) is dropped *before* the tracepoint is called. Precisely: the node (`ref`) passed to the tracepoint `trace_add_delayed_tree_ref` is accessed after its insertion into the tree and after the associated lock is released.

The logic flow is straightforward: if `insert_delayed_ref` returns `false`, `node` is inserted into the delayed ref tree, and we immediately drop the lock. Then the tracepoint gets hit. Another thread, via the worker function `__btrfs_run_delayed_refs`, could be operating on this tree *concurrently* after the lock is dropped. The worker function grabs the `head_ref`, and `btrfs_select_delayed_ref` selects the `node`. Ultimately, `run_one_delayed_ref` will get called, followed by `btrfs_put_delayed_ref(ref)`. Since the initial refcount of the node was one, `btrfs_put_delayed_ref` will free it.

This sequence screams UAF: `node` is accessed in the tracepoint (`trace_add_delayed_tree_ref`) after it has been freed by the other thread. So, is it actionable? Absolutely. Is it consistent? Yes, the process clearly shows the allocation, the lock protection that is later released, and then the free. I can see the access stack: it starts with `add_delayed_ref`, which is where the node is allocated, and proceeds to `trace_add_delayed_tree_ref` when it's accessed. The allocation stack is the same, starting at `add_delayed_ref`. The free stack is: `__btrfs_run_delayed_refs` -> `btrfs_put_delayed_ref`. All of these relate to the `btrfs_delayed_ref_node` structure.

Wait a second, let me just confirm that the report is correct about `node` being a `btrfs_delayed_ref_node`. The report says the buggy address belongs to a `btrfs_delayed_ref_node` of size 112 bytes, and the access is 40 bytes within the freed region. I think the bug is real, but I'll need to check the exact memory layout of `btrfs_delayed_ref_node` to be sure. |
| 21/2 | 2026/02/06 08:01 | tool | codesearch-struct-layout |
Args:map[ContextFile:fs/btrfs/delayed-ref.c Name:btrfs_delayed_ref_node] Results: map[Fields:[map[Name:ref_node OffsetBits:0 SizeBits:192] map[Name:add_list OffsetBits:192 SizeBits:128] map[Name:bytenr OffsetBits:320 SizeBits:64] map[Name:num_bytes OffsetBits:384 SizeBits:64] map[Name:seq OffsetBits:448 SizeBits:64] map[Name:ref_root OffsetBits:512 SizeBits:64] map[Name:parent OffsetBits:576 SizeBits:64] map[Name:refs OffsetBits:640 SizeBits:32] map[Name:ref_mod OffsetBits:672 SizeBits:32] map[Name:action OffsetBits:704 SizeBits:8] map[Name:type OffsetBits:712 SizeBits:8] map[Name: OffsetBits:768 SizeBits:128]]] |
| 22/2 | 2026/02/06 08:01 | llm | expert |
0mModel:gemini-3-flash-preview Tokens: input: 54264 output: 47 thoughts: 1373 Thoughts: **Analyzing a Btrfs Delayed Ref Tracepoint Use-After-Free** Okay, let's break this down. It looks like we've got a classic use-after-free issue with a Btrfs tracepoint. Specifically, the offset of 40 bytes (320 bits) in the tracepoint corresponds to `bytenr`, and the tracepoint itself, `TP_fast_assign_btrfs`, aligns perfectly with the relevant field assignment. So far, so good. The initial thought was to simply move the tracepoint. The obvious spots are before dropping the lock, before inserting the node, or before incrementing the refcount. However, after further consideration, placing it before `insert_delayed_ref` seems safest, or potentially even inside the lock. But here's the kicker: `insert_delayed_ref` might merge the reference. If it merges, the `node` isn't actually inserted, and the `merged` flag becomes true. In this scenario, the current thread still owns the `node`, which means that it will be freed at line 1091. If `merged` is false, then the tree owns the node. This presents a subtle but crucial timing issue for tracing. Initially, I had thought moving the tracepoint before `insert_delayed_ref` would work, but the tracepoint might be designed to capture the *post*-insertion state. On second thought, `add_delayed_tree_ref` tracepoint seems designed to record the intent. Looking more closely at where `trace_add_delayed_tree_ref` is called, it happens at line 1089 within `add_delayed_ref`. This makes sense. Now, let's consider the call stack. `add_delayed_ref` is called by `btrfs_free_tree_block` and `btrfs_alloc_tree_block`. According to the allocation stack report, the culprit is `btrfs_free_tree_block`. Line 1017 in `delayed-ref.c` shows the allocation of `node` using `kmem_cache_alloc`. The free stack reveals that `btrfs_run_delayed_refs_for_head` eventually calls `btrfs_put_delayed_ref(ref)`, triggering the free. 
This is triggered by a flush caused by metadata pressure. The critical access stack confirms the use-after-free. The KASAN report, along with the inline tracepoint calls, clearly points to the issue being at line 1089 in `add_delayed_ref`. This is where `trace_add_delayed_tree_ref` is called, after `node` has been inserted and `delayed_refs->lock` has been dropped. It's a textbook race condition. All the evidence aligns perfectly. And, based on the `flush_space` call in the free stack, this is happening in a worker thread. This makes it more likely to occur under metadata pressure. The report is definitely actionable; the tracepoint fires too late, after the lock protecting the delayed ref has been released, so a worker thread can process and free the node before the trace call runs. Moving the tracepoint before the unlock, so it executes under the lock, should fix this. Just to be safe, I should look for other similar tracepoints in this area to confirm if there are any others. Looking at line 1087, it appears there's `trace_add_delayed_data_ref` also called from `add_delayed_ref`, likely suffering the same problem. And, let's not forget `add_delayed_ref_head`. ```c 1059: new_head_ref = add_delayed_ref_head(trans, head_ref, record, 1060: action, &qrecord_inserted); ``` I'll have to investigate that one as well to see if it suffers from a similar issue. |
| 23/2 | 2026/02/06 08:01 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/delayed-ref.c IncludeLines:true Name:add_delayed_ref_head] Results: map[SourceCode: 800: /*
801: * Helper function to actually insert a head node into the xarray. This does all
802: * the dirty work in terms of maintaining the correct overall modification
803: * count.
804: *
805: * The caller is responsible for calling kfree() on @qrecord. More specifically,
806: * if this function reports that it did not insert it as noted in
807: * @qrecord_inserted_ret, then it's safe to call kfree() on it.
808: *
809: * Returns an error pointer in case of an error.
810: */
811: static noinline struct btrfs_delayed_ref_head *
812: add_delayed_ref_head(struct btrfs_trans_handle *trans,
813: struct btrfs_delayed_ref_head *head_ref,
814: struct btrfs_qgroup_extent_record *qrecord,
815: int action, bool *qrecord_inserted_ret)
816: {
817: struct btrfs_fs_info *fs_info = trans->fs_info;
818: struct btrfs_delayed_ref_head *existing;
819: struct btrfs_delayed_ref_root *delayed_refs;
820: const unsigned long index = (head_ref->bytenr >> fs_info->sectorsize_bits);
821:
822: /*
823: * If 'qrecord_inserted_ret' is provided, then the first thing we need
824: * to do is to initialize it to false just in case we have an exit
825: * before trying to insert the record.
826: */
827: if (qrecord_inserted_ret)
828: *qrecord_inserted_ret = false;
829:
830: delayed_refs = &trans->transaction->delayed_refs;
831: lockdep_assert_held(&delayed_refs->lock);
832:
833: #if BITS_PER_LONG == 32
834: if (head_ref->bytenr >= MAX_LFS_FILESIZE) {
835: if (qrecord)
836: xa_release(&delayed_refs->dirty_extents, index);
837: btrfs_err_rl(fs_info,
838: "delayed ref head %llu is beyond 32bit page cache and xarray index limit",
839: head_ref->bytenr);
840: btrfs_err_32bit_limit(fs_info);
841: return ERR_PTR(-EOVERFLOW);
842: }
843: #endif
844:
845: /* Record qgroup extent info if provided */
846: if (qrecord) {
847: /*
848: * Setting 'qrecord' but not 'qrecord_inserted_ret' will likely
849: * result in a memory leakage.
850: */
851: ASSERT(qrecord_inserted_ret != NULL);
852:
853: int ret;
854:
855: ret = btrfs_qgroup_trace_extent_nolock(fs_info, delayed_refs, qrecord,
856: head_ref->bytenr);
857: if (ret) {
858: /* Clean up if insertion fails or item exists. */
859: xa_release(&delayed_refs->dirty_extents, index);
860: if (ret < 0)
861: return ERR_PTR(ret);
862: } else if (qrecord_inserted_ret) {
863: *qrecord_inserted_ret = true;
864: }
865: }
866:
867: trace_add_delayed_ref_head(fs_info, head_ref, action);
868:
869: existing = xa_load(&delayed_refs->head_refs, index);
870: if (existing) {
871: update_existing_head_ref(trans, existing, head_ref);
872: /*
873: * we've updated the existing ref, free the newly
874: * allocated ref
875: */
876: kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref);
877: head_ref = existing;
878: } else {
879: existing = xa_store(&delayed_refs->head_refs, index, head_ref, GFP_ATOMIC);
880: if (xa_is_err(existing)) {
881: /* Memory was preallocated by the caller. */
882: ASSERT(xa_err(existing) != -ENOMEM);
883: return ERR_PTR(xa_err(existing));
884: } else if (WARN_ON(existing)) {
885: /*
886: * Shouldn't happen we just did a lookup before under
887: * delayed_refs->lock.
888: */
889: return ERR_PTR(-EEXIST);
890: }
891: head_ref->tracked = true;
892: /*
893: * We reserve the amount of bytes needed to delete csums when
894: * adding the ref head and not when adding individual drop refs
895: * since the csum items are deleted only after running the last
896: * delayed drop ref (the data extent's ref count drops to 0).
897: */
898: if (head_ref->is_data && head_ref->ref_mod < 0) {
899: delayed_refs->pending_csums += head_ref->num_bytes;
900: trans->delayed_ref_csum_deletions +=
901: btrfs_csum_bytes_to_leaves(fs_info, head_ref->num_bytes);
902: }
903: delayed_refs->num_heads++;
904: delayed_refs->num_heads_ready++;
905: }
906:
907: return head_ref;
908: }
SourceFile:fs/btrfs/delayed-ref.c] |
| 24/2 | 2026/02/06 08:01 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 58346 output: 43 thoughts: 532 Thoughts: **My Analysis of the Delayed Ref Bug** Okay, here's what I've got. In `add_delayed_ref_head`, the `trace_add_delayed_ref_head` tracepoint call on line 867 *precedes* the potential freeing on line 876 or storage on line 879. And critically, that call occurs while the `delayed_refs->lock` is held. So, I'm fairly confident that's safe. However, `add_delayed_ref` is where things get dicey. I noticed something immediately. The tracepoint `trace_add_delayed_tree_ref` is called *after* `spin_unlock(&delayed_refs->lock)` on line 1078. Then, on line 1089, the tracing call occurs. Bingo. That's the bug right there. Another thread could come in, process delayed refs, and free `node` *before* the tracepoint is called. It's a classic use-after-free scenario. This is actionable. I've got enough data. The stack traces and code analysis confirm a race condition, consistent with the observed behavior. Now, let's just make sure about the `btrfs_delayed_ref_node` refcounting. I reviewed the code. It starts at 1, incremented by the common initialization. `btrfs_put_delayed_ref` decrements and frees if it's zero. Now, if the `merged` flag is true, we know that it's freed at line 1091 in `add_delayed_ref`. But what if `merged` is false? Then, it's added to the tree. The tree *should* hold a reference. But does `tree_insert` actually increment that refcount? I'll need to double check that specific function. I don't think it does. That could mean the thread in `add_delayed_ref` does NOT have a reference after insertion, meaning it can be freed by another thread immediately. I'll need to confirm what `tree_insert` does. |
| 25/2 | 2026/02/06 08:02 | tool | codesearch-definition-source |
Args:map[ContextFile:fs/btrfs/delayed-ref.c IncludeLines:true Name:tree_insert] Results: map[SourceCode: 330: static struct btrfs_delayed_ref_node* tree_insert(struct rb_root_cached *root,
331: struct btrfs_delayed_ref_node *ins)
332: {
333: struct rb_node *node = &ins->ref_node;
334: struct rb_node *exist = rb_find_add_cached(node, root, cmp_refs_node);
335:
336: return rb_entry_safe(exist, struct btrfs_delayed_ref_node, ref_node);
337: }
SourceFile:fs/btrfs/delayed-ref.c] |
| 26/2 | 2026/02/06 08:02 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 59572 output: 19 thoughts: 562 Thoughts: **Analyzing the Btrfs Delayed Reference Issue** Okay, so I've been thinking through this delayed reference issue in Btrfs. My initial thought was that `tree_insert` wouldn't touch the reference count, leaving it at 1. The RB tree essentially "owns" that single reference at this point. That makes sense. The problem, as I see it, is that if the original thread still needs to use the data, it must either maintain the lock or increment the reference count *before* the insertion. For this tracepoint, though, the simplest solution is to just move it *inside* the lock's scope. Wait, looking closer, `add_delayed_ref` *already* holds the lock when it calls `insert_delayed_ref`. Good. ```c 1053: spin_lock(&delayed_refs->lock); ... 1077: merged = insert_delayed_ref(trans, head_ref, node); 1078: spin_unlock(&delayed_refs->lock); ``` So, relocating the tracepoint *before* line 1078 should be safe. I need to be certain about the locking though. Let's check. Ah, right. `insert_delayed_ref` also locks `href->lock`: ```c 590: spin_lock(&href->lock); 591: exist = tree_insert(&href->ref_tree, ref); ... 626: spin_unlock(&href->lock); ``` But `add_delayed_ref` is holding `delayed_refs->lock`, which is essentially the *global* lock for the delayed refs structure. If I'm remembering correctly, `__btrfs_run_delayed_refs` also grabs `delayed_refs->lock` to select a head: ```c 2023: locked_ref = btrfs_select_ref_head(fs_info, delayed_refs); ``` And `btrfs_select_ref_head` takes `delayed_refs->lock`. Okay, the key here is that as long as `delayed_refs->lock` is held, nobody can grab the `head_ref`, and therefore nobody can grab the `node` itself. Yes, the report is absolutely actionable, and the root cause of the bug is clear. The balance operation using `-d -m` is the trigger, which involves extensive relocation and heavy use of delayed references.
The stack traces of `relocate_tree_blocks` and `relocate_block_group` confirm this, matching the log entries. Yep, I'm confident in this assessment. |
| 27/2 | 2026/02/06 08:02 | tool | set-results |
0mArgs:map[Actionable:true Confident:true] Results: map[Actionable:true Confident:true] |
| 28/2 | 2026/02/06 08:02 | llm | expert |
Model:gemini-3-flash-preview Tokens: input: 60760 output: 531 thoughts: 13 Thoughts: Okay, here's the summary, framed as your thoughts after analyzing the text, targeted at an expert: **Finalizing the Response** Alright, the deep dive is done. I've processed the entire analysis. Now, I'm at the point where I synthesize everything and formulate the final response. There's no more grinding through the data, no more refinement of the logic. It's time to package the findings into a clear and concise presentation. Time to deliver the goods. |