
Conversation

@scottopell
Contributor

  • feat(observer): Expose observer module as public API
  • Add documentation to public observer APIs
  • docs: Add missing doc comments for public modules
  • feat(capture): add file rotation support for Parquet format

scottopell and others added 9 commits December 19, 2025 10:08
Make the observer module and its subcomponents public to allow
external crates to reuse the procfs and cgroup v2 parsing logic.

Exposed APIs:
- observer::linux::Sampler - high-level sampler for a process tree
- observer::linux::procfs::{memory, stat, uptime} - procfs parsers
- observer::linux::cgroup::v2::{poll, get_path, cpu, memory} - cgroup v2 parsers

This enables fine-grained-monitor and similar tools to leverage
lading's battle-tested observer implementation without vendoring.
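For illustration, an external consumer might look roughly like this; the module path matches the list above, but the function signature is an assumption, not the confirmed API:

```rust
// Hypothetical external consumer of the newly public cgroup v2 parsers.
// `get_path` appears in the exposed-API list; its signature here (taking a
// pid, returning a path) is an assumption for the sketch.
use lading::observer::linux::cgroup;

fn print_cgroup(pid: i32) -> Result<(), Box<dyn std::error::Error>> {
    let path = cgroup::v2::get_path(pid)?; // assumed signature
    println!("cgroup v2 path for pid {pid}: {path:?}");
    Ok(())
}
```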

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Required for #![deny(missing_docs)] when using lading as a library dependency.
Add rotation API to CaptureManager that allows rotating to new output
files without stopping the capture. This enables long-running capture
sessions to produce multiple readable Parquet files with valid footers.

Changes:
- Add RotationRequest/RotationSender types for async rotation requests
- Add start_with_rotation() that spawns event loop and returns sender
- Add replace_format() to StateMachine for IO-agnostic format swapping
- Add rotate() trait method stub to OutputFormat (returns error by default)
- Add rotate_to() inherent method on parquet Format<BufWriter<File>>

The rotation flow:
1. Caller sends RotationRequest with new file path via RotationSender
2. CaptureManager creates new file and format
3. StateMachine.replace_format() flushes and swaps formats
4. Old format is closed (writing Parquet footer)
5. Response sent back to caller
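A caller-side sketch of this flow (the exact RotationRequest fields and channel types are assumptions for illustration):

```rust
// Hypothetical caller side of the rotation flow; field names and channel
// types are illustrative, not the confirmed lading-capture API.
use std::path::PathBuf;
use tokio::sync::{mpsc, oneshot};

struct RotationRequest {
    new_path: PathBuf,
    respond_to: oneshot::Sender<Result<(), String>>,
}

async fn rotate(sender: &mpsc::Sender<RotationRequest>, path: PathBuf) -> Result<(), String> {
    let (tx, rx) = oneshot::channel();
    // Step 1: send the request with the new file path.
    sender
        .send(RotationRequest { new_path: path, respond_to: tx })
        .await
        .map_err(|e| e.to_string())?;
    // Steps 2-5 happen inside the CaptureManager; the response arrives only
    // after the old format is closed with a valid Parquet footer.
    rx.await.map_err(|e| e.to_string())?
}
```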

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The CPU Sampler was initializing prev stats to zeros, causing the first
delta calculation to be (cumulative_since_container_start - 0), which
produces an enormous spike in total_cpu_usage_millicores.

Fix by making prev an Option<Stats>. On first poll, we record baseline
stats but skip metric emission. Subsequent polls compute proper deltas.
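A minimal sketch of the pattern (the Stats shape and delta math are stand-ins):

```rust
// Sketch of the first-poll baseline fix described above. `Stats` is a
// stand-in for the real cgroup v2 CPU stats.
#[derive(Clone, Copy)]
struct Stats {
    cumulative_usage_usec: u64,
}

struct Sampler {
    prev: Option<Stats>,
}

impl Sampler {
    /// Returns the usage delta since the previous poll, or None on the
    /// first poll, where emitting would yield (cumulative_since_start - 0).
    fn poll(&mut self, current: Stats) -> Option<u64> {
        let delta = self
            .prev
            // Counters are monotonic; saturate defensively anyway.
            .map(|prev| current.cumulative_usage_usec.saturating_sub(prev.cumulative_usage_usec));
        self.prev = Some(current);
        delta
    }
}
```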

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Same fix as cgroup/v2/cpu.rs: make prev an Option<Stats> and skip
metric emission on first poll to avoid computing delta from
cumulative-since-process-start minus zero.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace MapArray-based label storage with flat l_<key> columns in
Parquet output. This enables predicate pushdown for filtering by
container_id and other labels, avoiding full file scans.

Key changes:
- Dynamic schema generation based on discovered label keys
- Dictionary encoding for low-cardinality label columns
- Lazy ArrowWriter initialization (schema determined at first flush)
- Updated validation and round-trip tests for new schema
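For reference, the flat-column scheme might be built roughly like this with arrow-rs (the label keys and non-label fields here are examples; the real field set is generated from discovered keys):

```rust
// Illustrative dynamic schema with one dictionary-encoded l_<key> column
// per discovered label key. Field names besides the l_ prefix are examples.
use arrow::datatypes::{DataType, Field, Schema};

fn label_schema(label_keys: &[&str]) -> Schema {
    let mut fields = vec![
        Field::new("metric_name", DataType::Utf8, false),
        Field::new("value", DataType::Float64, false),
    ];
    for key in label_keys {
        // Dictionary encoding keeps low-cardinality label columns compact,
        // and a plain column (vs. a MapArray) lets readers push predicates
        // like `l_container_id = ...` down to Parquet row-group pruning.
        fields.push(Field::new(
            format!("l_{key}"),
            DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8)),
            true,
        ));
    }
    Schema::new(fields)
}
```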

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add BloomFilterConfig and BloomFilterColumn types to configure bloom
filters on label columns. Bloom filters enable efficient query-time
filtering by allowing readers to skip row groups that definitely don't
contain a target value.

New APIs:
- Format::with_bloom_filter() - create writer with bloom filter config
- format.bloom_filter_config() - getter for rotation
- CaptureManager::new_parquet_with_bloom_filter()
- CaptureManager::new_multi_with_bloom_filter()

Backwards compatible - existing Format::new() and new_parquet() still
work unchanged using BloomFilterConfig::default().
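Underneath, this plausibly maps onto the parquet crate's per-column bloom filter properties; a sketch (the column name is an example, and the mapping from BloomFilterConfig is an assumption):

```rust
// Sketch of per-column bloom filter configuration via the parquet crate's
// WriterProperties; how BloomFilterConfig maps to this is an assumption.
use parquet::file::properties::WriterProperties;
use parquet::schema::types::ColumnPath;

fn writer_props() -> WriterProperties {
    WriterProperties::builder()
        // Enable a bloom filter on the label column queries filter by, so
        // readers can skip row groups that definitely lack a given value.
        .set_column_bloom_filter_enabled(ColumnPath::from("l_container_id"), true)
        // Target false-positive probability for the filter.
        .set_column_bloom_filter_fpp(ColumnPath::from("l_container_id"), 0.01)
        .build()
}
```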

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The start_with_rotation method now returns (RotationSender, JoinHandle<()>)
instead of just RotationSender. This allows callers to await the JoinHandle
to ensure the CaptureManager has fully drained all buffered metrics and
closed the output file before the process exits.

This is important for short-lived workloads where the 60-tick accumulator
window may contain data that would otherwise be lost if the spawned task
is aborted during runtime shutdown.
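A sketch of the intended shutdown sequence (whether closing the rotation channel is the shutdown signal is an assumption; what the commit guarantees is that awaiting the handle ensures the drain):

```rust
use tokio::task::JoinHandle;

// Stand-in for the sender half returned by start_with_rotation(); only its
// Drop matters in this sketch.
struct RotationSender;

async fn shutdown(rotation_tx: RotationSender, capture_task: JoinHandle<()>) {
    // Assumed: dropping the sender closes the channel and lets the capture
    // event loop finish.
    drop(rotation_tx);
    // Confirmed by the description: awaiting the JoinHandle ensures buffered
    // metrics are drained and the Parquet footer is written before exit.
    let _ = capture_task.await;
}
```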

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@scottopell
Contributor Author

Sub-second metric resolution limitation discovered

While working on fine-grained-monitor, I discovered that lading-capture has a hardcoded 1-second tick duration that prevents sub-second metric resolution:

Root cause:

  • lading_capture/src/manager/state_machine.rs:31: pub(crate) const TICK_DURATION_MS: u128 = 1_000;
  • lading_capture/src/manager.rs:36: const TICK_DURATION_MS: u128 = 1_000;

Impact:

  • All metric samples within the same second are bucketed into one tick
  • Timestamps are calculated as: start_ms + (tick * 1000)
  • For gauges, only the last sample per second is preserved
  • Sub-second sampling (e.g., 250ms/4Hz) collects data but loses timestamp granularity

Use case:
For fine-grained-monitor, we wanted 4Hz sampling to detect short-lived containers (~500ms lifetime, e.g., OOM-killed pods). Per Nyquist-Shannon, 4Hz sampling should capture these events, but the 1-second tick bucketing defeats this purpose.
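A tiny model of the bucketing (the constant is quoted from the source locations above; everything else is a stand-in):

```rust
// Demonstrates why 4 Hz samples collapse: every sample in the same second
// maps to the same tick, and gauges keep only the last value per tick.
const TICK_DURATION_MS: u128 = 1_000; // as hardcoded in lading-capture

fn tick_for(sample_ms: u128, start_ms: u128) -> u128 {
    (sample_ms - start_ms) / TICK_DURATION_MS
}

fn main() {
    let start_ms = 0u128;
    // Four samples at 250 ms spacing (4 Hz) all land in tick 0, so their
    // emitted timestamp is identically start_ms + tick * 1000.
    for sample_ms in [0u128, 250, 500, 750] {
        assert_eq!(tick_for(sample_ms, start_ms), 0);
    }
    assert_eq!(tick_for(1_000, start_ms), 1);
}
```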

Potential fix:
Make TICK_DURATION_MS configurable (option 1 is sketched after the list), either via:

  1. A constructor parameter to CaptureManager
  2. A generic const parameter on StateMachine/Accumulator
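Option 1 might look roughly like this (names and shape are hypothetical):

```rust
// Hypothetical constructor-parameter version of option (1); the real
// CaptureManager has more state than shown here.
use std::time::Duration;

struct CaptureManager {
    tick_duration: Duration,
}

impl CaptureManager {
    fn with_tick_duration(tick_duration: Duration) -> Self {
        // Replaces the hardcoded TICK_DURATION_MS; timestamps would become
        // start_ms + tick * tick_duration.as_millis().
        Self { tick_duration }
    }
}
```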

This is not urgent for the current PR, but I'm documenting it here for future reference. I have added a validation to fine-grained-monitor that exits with a fatal error if --interval-ms is set to anything other than 1000ms, with a descriptive error message pointing to this limitation.

scottopell added a commit to DataDog/datadog-agent that referenced this pull request Jan 8, 2026
Add validation that crashes with a descriptive error message if interval_ms
is set to anything other than 1000ms. This prevents misconfiguration, since
lading-capture has a hardcoded 1-second tick duration (TICK_DURATION_MS).

Sub-second sampling would collect data but timestamps would be bucketed to
1-second resolution, losing the intended granularity. For gauges (most
cgroup metrics), only the last sample per second would be preserved.

The error message explains:
- The exact source locations in lading-capture
- Technical details of the bucketing behavior
- That this is NOT insurmountable - just needs implementation

Also updates the --interval-ms help text to document this limitation.
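The validation amounts to something like this (the function name and error wording are illustrative):

```rust
// Sketch of the interval guard described in this commit; the exact error
// text and function name are illustrative.
fn validate_interval_ms(interval_ms: u64) -> Result<(), String> {
    if interval_ms != 1_000 {
        return Err(format!(
            "--interval-ms must be 1000 (got {interval_ms}): lading-capture \
             hardcodes TICK_DURATION_MS = 1000, so sub-second samples are \
             bucketed to 1-second ticks and gauges keep only the last value \
             per second. See DataDog/lading#1662."
        ));
    }
    Ok(())
}
```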

See: DataDog/lading#1662 (comment)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
scottopell added a commit to DataDog/datadog-agent that referenced this pull request Jan 12, 2026