DataFusion Tracing is an extension for Apache DataFusion that helps you monitor and debug queries. It uses tracing and OpenTelemetry to gather DataFusion metrics, trace execution steps, and preview partial query results.
Note: This is not an official Apache Software Foundation release.
When you run queries with DataFusion Tracing enabled, it automatically:

- adds tracing spans around execution steps,
- records all native DataFusion metrics, such as execution time and output row count,
- lets you preview partial results for easier debugging, and
- integrates with OpenTelemetry for distributed tracing.

This makes it simpler to understand and improve query performance.
Include DataFusion Tracing in your project's `Cargo.toml`:

```toml
[dependencies]
datafusion = "51.0.0"
datafusion-tracing = "51.0.0"
```

Then use it in your code:

```rust
use datafusion::{
    arrow::{array::RecordBatch, util::pretty::pretty_format_batches},
    error::Result,
    execution::SessionStateBuilder,
    prelude::*,
};
use datafusion_tracing::{
    instrument_with_info_spans, pretty_format_compact_batch, InstrumentationOptions,
};
use std::sync::Arc;
use tracing::field;

#[tokio::main]
async fn main() -> Result<()> {
    // Initialize tracing subscriber as usual
    // (see examples/otlp.rs for a complete example).

    // Set up tracing options (you can customize these).
    let options = InstrumentationOptions::builder()
        .record_metrics(true)
        .preview_limit(5)
        .preview_fn(Arc::new(|batch: &RecordBatch| {
            pretty_format_compact_batch(batch, 64, 3, 10).map(|fmt| fmt.to_string())
        }))
        .add_custom_field("env", "production")
        .add_custom_field("region", "us-west")
        .build();

    let instrument_rule = instrument_with_info_spans!(
        options: options,
        env = field::Empty,
        region = field::Empty,
    );

    let session_state = SessionStateBuilder::new()
        .with_default_features()
        .with_physical_optimizer_rule(instrument_rule)
        .build();

    let ctx = SessionContext::new_with_state(session_state);

    let results = ctx.sql("SELECT 1").await?.collect().await?;
    println!(
        "Query Results:\n{}",
        pretty_format_batches(results.as_slice())?
    );

    Ok(())
}
```

A more complete example can be found in the examples directory.
Before diving into DataFusion Tracing, you'll need to set up an OpenTelemetry collector to receive and process the tracing data. There are several options available:
For local development and testing, Jaeger is a great choice. It's an open-source distributed tracing system that's easy to set up. You can run it with Docker using:
```shell
docker run --rm --name jaeger \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 5778:5778 \
  -p 9411:9411 \
  jaegertracing/jaeger:2.7.0
```

Once running, you can access the Jaeger UI at http://localhost:16686. For more details, check out their getting started guide.
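Your application's exporter then needs to point at Jaeger's OTLP port (4317 for gRPC, 4318 for HTTP, as mapped above). One way is the standard OpenTelemetry environment variables; whether they are picked up depends on how your exporter is configured, and the service name below is just an illustrative placeholder:

```shell
# Point the OTLP exporter at Jaeger's OTLP/gRPC port mapped above.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# Name the service so traces are easy to find in the Jaeger UI.
export OTEL_SERVICE_NAME="my-datafusion-app"
```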
For a cloud-native approach, Datadog offers a hosted solution for OpenTelemetry data. You can send traces directly to their platform by configuring your Datadog API key and endpoint; their OpenTelemetry integration guide has all the details.
Of course, you can use any OpenTelemetry-compatible collector. The official OpenTelemetry Collector is a good starting point if you want to build a custom setup.
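As a rough illustration, a minimal Collector configuration that receives OTLP traces and logs them might look like the sketch below (exporter names vary by Collector version; recent versions use `debug` where older ones used `logging`):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

In a real deployment you would replace the `debug` exporter with one targeting your tracing backend.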
If you're using custom physical optimizer rules alongside the instrumentation rule, always register the instrumentation rule last in your physical optimizer chain so that:
- You capture the final optimized plan, not an intermediate one.
- Instrumentation stays purely observational—other optimizer rules never have to deal with instrumented nodes.
To keep the instrumentation rule last in the chain, either chain calls:

```rust
builder
    .with_physical_optimizer_rule(rule_a)
    .with_physical_optimizer_rule(rule_b)
    .with_physical_optimizer_rule(instrument_rule)
```

Or pass a vector:

```rust
builder.with_physical_optimizer_rules(vec![..., instrument_rule])
```

Instrumentation is designed to be mostly invisible: with the rule registered last, other optimizer rules typically never see InstrumentedExec at all. The wrapper itself is intentionally private so downstream code cannot depend on its internals; the supported surface is the optimizer rule and the standard ExecutionPlan trait.
The repository is organized as follows:
- `datafusion-tracing/`: Core tracing functionality for DataFusion
- `instrumented-object-store/`: Object store instrumentation
- `integration-utils/`: Integration utilities and helpers for examples and tests (not for production use)
- `examples/`: Example applications demonstrating library usage
- `tests/`: Integration tests
- `docs/`: Documentation, including logos and screenshots
Use these commands to build and test:
```shell
cargo build --workspace
cargo test --workspace
```

Integration tests and examples expect TPCH tables in Parquet format to be present in `integration-utils/data` (not checked in). Generate them locally with:

```shell
cargo install tpchgen-cli
./dev/generate_tpch_parquet.sh
```

This produces all TPCH tables at scale factor 0.1 as single Parquet files in `integration-utils/data`. CI installs tpchgen-cli and runs the same script automatically before tests. If a required file is missing, the helper library will return a clear error instructing you to run the script.
Contributions are welcome. Make sure your code passes all tests, follows the existing formatting and coding style, and includes tests and documentation. See CONTRIBUTING.md for detailed guidelines.
Licensed under the Apache License, Version 2.0. See LICENSE.
This project includes software developed at Datadog (info@datadoghq.com).


