A comprehensive Python library for tracing, profiling, and visualizing function call flows, with interactive call graphs, flamegraphs, and OpenTelemetry export. Perfect for understanding code flow, debugging performance bottlenecks, and optimizing code with production-ready observability.
Export your traces to any OpenTelemetry-compatible backend with production-ready features:
```python
# Basic OTel export
from callflow_tracer.opentelemetry_exporter import export_callgraph_to_otel
with trace_scope() as graph:
your_code()
result = export_callgraph_to_otel(
graph,
service_name="my-service",
sampling_rate=0.5,
environment="production"
)
print(f"Exported {result['span_count']} spans")
# CLI usage
callflow-tracer otel trace.json --service-name my-service --sampling-rate 0.5- Exemplars: Link custom metrics to trace spans for correlation
- Sampling: Configurable sampling rates (0.0-1.0) to reduce overhead
- Resource Attributes: Attach metadata (version, environment, host)
- Config Files: YAML/JSON configuration with auto-detection
- Environment Variables: CALLFLOW_OTEL_* overrides for deployment (see the sketch after this list)
- Multiple Exporters: Console, OTLP/gRPC, OTLP/HTTP, Jaeger
- Semantic Conventions: OpenTelemetry standard attributes
- Batch Processing: Configurable processor settings
- CLI Integration: Dedicated `otel` subcommand with advanced options
- VS Code Integration: Advanced export with interactive prompts
- Python API: Direct function calls for programmatic use
- Comprehensive Tests: 40+ unit tests + integration tests
- Full Documentation: 1,500+ lines of guides and examples
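For the environment-variable overrides mentioned above, deployment configuration might look like the following. This is a minimal sketch: the `CALLFLOW_OTEL_*` prefix comes from the list above, but the exact variable names shown here are assumptions for illustration.

```bash
# Hypothetical variable names following the CALLFLOW_OTEL_* prefix
export CALLFLOW_OTEL_SERVICE_NAME=my-service
export CALLFLOW_OTEL_ENVIRONMENT=production
export CALLFLOW_OTEL_SAMPLING_RATE=0.5

callflow-tracer otel trace.json
```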
Multi-dimensional SLAs with rolling windows and dynamic thresholds:
```python
from callflow_tracer.custom_metrics import (
SLO, SLI, ErrorBudgetTracker, ExperimentAnalyzer, track_metric
)
# Availability SLO (>= 99% success in last hour)
slo = SLO(
name="checkout-availability",
objective=0.99,
time_window=3600,
sli_type="availability",
metric_name="checkout_success", # 1=success, 0=failure
)
print(slo.compute(tags={"service": "api"}))
# Error budget
budget = ErrorBudgetTracker(slo).compute_budget(tags={"service": "api"})
print(budget)
# Canary comparison (baseline vs canary)
report = ExperimentAnalyzer.canary(
metric_name="latency_ms",
baseline_value="baseline",
canary_value="canary",
group_tag_key="deployment",
time_window=1800,
)
print(report)
```

- Multi-dimensional SLAs: Multiple conditions per metric with operators (gt/lt/eq/gte/lte)
- Rolling Time Windows: Compliance over configurable windows (e.g., 1m, 5m, 1h)
- Dynamic Thresholds: Auto-adjust using IQR-based statistics (stdlib-only; see the sketch after this list)
- SLI/SLO Framework: Availability, error-rate, latency percentile targets
- Error Budgets: Compute allowed error, consumed/remaining budget, burn rate
- Canary & A/B Analysis: Compare baseline vs canary, or A vs B variants via tags with p95 and deltas
Analyze code quality metrics with complexity analysis and technical debt scoring:
```bash
# Analyze code quality
callflow-tracer quality . -o quality_report.html
# Track trends over time
callflow-tracer quality . --track-trends --format json
```

```python
from callflow_tracer.code_quality import analyze_codebase
results = analyze_codebase("./src")
print(f"Average Complexity: {results['summary']['average_complexity']:.2f}")
print(f"Critical Issues: {results['summary']['critical_issues']}")- Complexity Metrics: Cyclomatic and cognitive complexity calculation
- Maintainability Index: 0-100 scale with detailed metrics
- Technical Debt Scoring: Identify and quantify technical debt
- Quality Trends: Track code quality over time
- Halstead Metrics: Volume, difficulty, effort analysis
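As a quick refresher on what the complexity numbers mean, cyclomatic complexity counts independent paths through a function: start at 1 and add 1 for each branching construct. A hand-worked example using the standard counting rule (not library output):

```python
def expedite(order): ...
def pad(item): ...

def ship(order):                  # base complexity: 1
    if order["rush"]:             # +1
        expedite(order)
    for item in order["items"]:   # +1
        if item["fragile"]:       # +1
            pad(item)
    return order                  # cyclomatic complexity = 1 + 3 = 4
```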
Predict future performance issues and capacity planning:
```bash
# Predict performance issues
callflow-tracer predict history.json -o predictions.html
```

```python
from callflow_tracer.predictive_analysis import PerformancePredictor
predictor = PerformancePredictor("history.json")
predictions = predictor.predict_performance_issues(current_trace)
for pred in predictions:
if pred.risk_level == "Critical":
print(f"CRITICAL: {pred.function_name}")
print(f" Predicted time: {pred.predicted_time:.4f}s")- Performance Prediction: Predict future performance degradation
- Capacity Planning: Forecast when limits will be reached (see the sketch after this list)
- Scalability Analysis: Assess code scalability characteristics
- Resource Forecasting: Predict resource usage trends
- Risk Assessment: Multi-factor risk evaluation
- Confidence Scoring: Data-driven confidence levels
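For intuition about capacity forecasting, here is a minimal sketch of the general idea using stdlib linear regression (Python 3.10+). This is not the library's algorithm; the sample numbers and the 300 ms ceiling are made up for illustration.

```python
from statistics import linear_regression

# Hypothetical daily p95 latencies (ms) pulled from past traces
days = [1, 2, 3, 4, 5]
p95_ms = [180.0, 195.0, 210.0, 226.0, 240.0]

slope, intercept = linear_regression(days, p95_ms)
ceiling_ms = 300.0  # assumed SLA limit
days_until_breach = (ceiling_ms - intercept) / slope
print(f"Projected to cross {ceiling_ms:.0f} ms around day {days_until_breach:.1f}")
```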
Identify high-risk files using git history and quality correlation:
```bash
# Analyze code churn
callflow-tracer churn . --days 90 -o churn_report.html
```

```python
from callflow_tracer.code_churn import generate_churn_report
report = generate_churn_report(".", days=90)
print(f"High risk files: {report['summary']['high_risk_files']}")
for hotspot in report['hotspots'][:5]:
print(f"{hotspot['file_path']}: {hotspot['hotspot_score']:.1f}")- Git History Analysis: Analyze commits and changes
- Hotspot Identification: Find high-risk files
- Churn Correlation: Correlate with quality metrics
- Bug Prediction: Estimate bug correlation
- Risk Assessment: Comprehensive risk evaluation
- Actionable Recommendations: Specific improvement suggestions
Ready-to-use integrations for popular Python frameworks:
- Flask Integration: Automatic request tracing
- FastAPI Integration: Async endpoint tracing
- Django Integration: View and middleware tracing
- SQLAlchemy Integration: Database query monitoring
- psycopg2 Integration: PostgreSQL query tracing
- Code Snippet Insertion: Ready-to-use integration code
Complete terminal interface for all features - no Python code needed:
```bash
# Analyze code quality
callflow-tracer quality . -o quality_report.html
# Predict performance issues
callflow-tracer predict history.json -o predictions.html
# Analyze code churn
callflow-tracer churn . --days 90 -o churn_report.html
# Trace function calls
callflow-tracer trace script.py -o trace.html
# Generate flamegraph
callflow-tracer flamegraph script.py -o flamegraph.html
# Export to OpenTelemetry
callflow-tracer otel trace.json --service-name my-service
```

- 11 CLI Commands: Complete CLI for all features
- No Python Code Needed: Run analysis from terminal
- HTML/JSON Output: Multiple export formats
- Progress Notifications: Real-time feedback
- Batch Processing: Analyze entire projects
- Statistics Dashboard: Total time, calls, depth, slowest function
- 5 Color Schemes: Choose the best view for your analysis
- Real-time Search: Find functions instantly
- SVG Export: High-quality graphics for reports
- Performance Colors: Green=fast, Red=slow (perfect for optimization!)
- Responsive Design: Works on all screen sizes
- CPU Profiling: cProfile integration with detailed statistics
- Memory Tracking: Current and peak memory usage
- I/O Wait Time: Measure time spent waiting
- Health Indicators: Visual performance status
- Bottleneck Detection: Automatically identifies slow functions
- Interactive Network: Zoom, pan, explore call relationships
- Multiple Layouts: Hierarchical, Force-Directed, Circular, Timeline
- Module Filtering: Focus on specific parts of your code
- Rich Tooltips: Detailed metrics on hover
- Color Coding: Performance-based coloring
- Statistics Panel: See total functions, calls, execution time, and bottlenecks at a glance
- Search Functionality: Find specific functions quickly in large graphs
- SVG Export: Export high-quality vector graphics for presentations
- Modern UI: Responsive design with gradients and smooth animations
- Fixed CPU Profiling: Working cProfile integration with actual execution times
- Working Module Filter: Filter by Python module with smooth animations
- All Layouts Working: Hierarchical, Force-Directed, Circular, Timeline
- JSON Export: Fixed export functionality with proper metadata
- Jupyter Integration: Magic commands and inline visualizations
- Simple API: Decorator or context manager - your choice
- Interactive Visualizations: Beautiful HTML graphs with zoom, pan, and filtering
- Async/Await Support: Full support for modern async Python code
- Comparison Mode: Side-by-side before/after optimization analysis
- Memory Leak Detection: Track allocations, find leaks, visualize growth
- Performance Profiling: CPU time, memory usage, I/O wait tracking
- Flamegraph Support: Identify bottlenecks with flame graphs
- Call Graph Analysis: Understand function relationships
- Jupyter Integration: Works seamlessly in notebooks
- Multiple Export Formats: HTML, JSON, SVG
- Zero Config: Works out of the box
- Production Ready: Full OTel compliance
- Exemplars: Link metrics to spans
- Sampling: Reduce overhead in production
- Config Management: YAML/JSON + environment variables
- Multiple Exporters: Console, OTLP, Jaeger
- CLI Integration: `callflow-tracer otel` command
- VS Code Integration: Export from editor
- Complexity Metrics: Cyclomatic and cognitive complexity
- Maintainability Index: 0-100 scale with detailed analysis
- Technical Debt Scoring: Identify and quantify debt
- Quality Trends: Track metrics over time
- Halstead Metrics: Volume, difficulty, effort analysis
- Performance Prediction: Predict future degradation
- Capacity Planning: Forecast limit breaches
- Scalability Analysis: Assess scalability characteristics
- Resource Forecasting: Predict resource usage
- Risk Assessment: Multi-factor evaluation
- Git History Analysis: Analyze commits and changes
- Hotspot Identification: Find high-risk files
- Quality Correlation: Correlate with quality metrics
- Bug Prediction: Estimate bug correlation
- Actionable Recommendations: Specific improvements
- 11 CLI Commands: Complete terminal interface (including `otel`)
- No Code Required: Run analysis from command line
- Batch Processing: Analyze entire projects
- Multiple Formats: HTML and JSON output
- Statistics Dashboard: Total time, calls, depth, slowest function
- 5 Color Schemes: Choose the best view for your analysis
- Real-time Search: Find functions instantly
- SVG Export: High-quality graphics for reports
- Performance Colors: Green=fast, Red=slow (perfect for optimization!)
- Responsive Design: Works on all screen sizes
- CPU Profiling: cProfile integration with detailed statistics
- Memory Tracking: Current and peak memory usage
- I/O Wait Time: Measure time spent waiting
- Health Indicators: Visual performance status
- Bottleneck Detection: Automatically identifies slow functions
- Interactive Network: Zoom, pan, explore call relationships
- Multiple Layouts: Hierarchical, Force-Directed, Circular, Timeline
- Module Filtering: Focus on specific parts of your code
- Rich Tooltips: Detailed metrics on hover
- Color Coding: Performance-based coloring
- Statistics Panel: See total functions, calls, execution time, and bottlenecks at a glance
- Search Functionality: Find specific functions quickly in large graphs
- SVG Export: Export high-quality vector graphics for presentations
- Modern UI: Responsive design with gradients and smooth animations
- Fixed CPU Profiling: Working cProfile integration with actual execution times
- Working Module Filter: Filter by Python module with smooth animations
- All Layouts Working: Hierarchical, Force-Directed, Circular, Timeline
- JSON Export: Fixed export functionality with proper metadata
- Jupyter Integration: Magic commands and inline visualizations
```bash
# Basic installation
pip install callflow-tracer
# With OpenTelemetry support
pip install callflow-tracer[otel]
# With all optional dependencies
pip install callflow-tracer[all]
```

```bash
git clone https://github.com/rajveer43/callflow-tracer.git
cd callflow-tracer
pip install -e .
# With OpenTelemetry support
pip install -e ".[otel]"pip install -e .[dev]The OpenTelemetry export functionality requires additional packages. Install with:
```bash
pip install callflow-tracer[otel]
```

This includes:
- `opentelemetry-api>=1.20.0` - Core OpenTelemetry API
- `opentelemetry-sdk>=1.20.0` - OpenTelemetry SDK
- `opentelemetry-exporter-otlp>=1.20.0` - OTLP exporter
- `opentelemetry-exporter-jaeger>=1.20.0` - Jaeger exporter
- `opentelemetry-exporter-prometheus>=1.20.0` - Prometheus exporter
- `protobuf>=3.20.0` - Protocol buffers for OTLP
- `grpcio>=1.50.0` - gRPC transport
Note: OpenTelemetry support is optional. The core library works without these dependencies.
```python
from callflow_tracer import trace_scope, export_html
def calculate_fibonacci(n):
if n <= 1:
return n
return calculate_fibonacci(n-1) + calculate_fibonacci(n-2)
# Trace execution
with trace_scope() as graph:
result = calculate_fibonacci(10)
print(f"Result: {result}")
# Export to interactive HTML
export_html(graph, "fibonacci.html", title="Fibonacci Call Graph")Open fibonacci.html in your browser to see the interactive visualization!
Export your traces to any OpenTelemetry-compatible backend with production-ready features:
```bash
# Generate config file
callflow-tracer otel --init-config
# Export trace to OTel
callflow-tracer otel trace.json --service-name my-service
# Advanced export
callflow-tracer otel trace.json \
--service-name my-service \
--environment production \
--sampling-rate 0.5 \
--include-metrics
```

`.callflow_otel.yaml` (auto-generated):

```yaml
service_name: my-service
environment: production
sampling_rate: 1.0
exporter:
  type: otlp_grpc
  endpoint: http://localhost:4317
resource_attributes:
  service.version: "1.0.0"
```

```python
from callflow_tracer.opentelemetry_exporter import export_callgraph_to_otel
# Basic export
result = export_callgraph_to_otel(graph, service_name="my-service")
# Advanced export with exemplars
result = export_callgraph_to_otel(
graph,
service_name="my-service",
sampling_rate=0.5,
environment="production",
resource_attributes={"service.version": "1.0.0"}
)
# With metrics bridging
from callflow_tracer.opentelemetry_exporter import export_callgraph_with_metrics
result = export_callgraph_with_metrics(graph, metrics, service_name="my-service")
```

What You Get:
- Production Ready: Full OTel compliance with semantic conventions
- Exemplars: Link custom metrics to trace spans for correlation
- Sampling: Configurable sampling rates (0.0-1.0) to reduce overhead
- Resource Attributes: Attach metadata (version, environment, host)
- Config Management: YAML/JSON files with environment variable overrides
- Multiple Exporters: Console, OTLP/gRPC, OTLP/HTTP, Jaeger
- Batch Processing: Configurable processor settings for efficiency
- Error Handling: Graceful degradation if OTel not installed
Analyze code quality metrics with a single command:
```bash
# Analyze code quality
callflow-tracer quality . -o quality_report.html
# Track trends over time
callflow-tracer quality . --track-trends --format json
```

What You Get:
- Complexity Metrics: Cyclomatic and cognitive complexity
- Maintainability Index: 0-100 scale
- Technical Debt: Quantified debt scoring
- Halstead Metrics: Volume, difficulty, effort
- Trend Analysis: Track metrics over time
Python API:
```python
from callflow_tracer.code_quality import analyze_codebase
results = analyze_codebase("./src")
print(f"Average Complexity: {results['summary']['average_complexity']:.2f}")
print(f"Critical Issues: {results['summary']['critical_issues']}")Predict future performance issues:
```bash
# Predict performance issues
callflow-tracer predict history.json -o predictions.html
```

What You Get:
- Performance Prediction: Predict degradation
- Capacity Planning: Forecast limit breaches
- Scalability Analysis: Assess scalability
- Risk Assessment: Multi-factor evaluation
- Confidence Scoring: Data-driven confidence
Python API:
```python
from callflow_tracer.predictive_analysis import PerformancePredictor
predictor = PerformancePredictor("history.json")
predictions = predictor.predict_performance_issues(current_trace)
for pred in predictions:
if pred.risk_level == "Critical":
print(f"CRITICAL: {pred.function_name}")
print(f" Predicted time: {pred.predicted_time:.4f}s")Identify high-risk files using git history:
```bash
# Analyze code churn
callflow-tracer churn . --days 90 -o churn_report.html
```

What You Get:
- Hotspot Identification: Find high-risk files
- Churn Metrics: Commits, changes, authors
- Quality Correlation: Correlate with quality
- Bug Prediction: Estimate bug correlation
- Recommendations: Actionable improvements
Python API:
```python
from callflow_tracer.code_churn import generate_churn_report
report = generate_churn_report(".", days=90)
print(f"High risk files: {report['summary']['high_risk_files']}")
for hotspot in report['hotspots'][:5]:
print(f"{hotspot['file_path']}: {hotspot['hotspot_score']:.1f}")from callflow_tracer import trace_scope
from callflow_tracer.flamegraph import generate_flamegraph
import time
def slow_function():
time.sleep(0.1) # Bottleneck!
return sum(range(10000))
def fast_function():
return sum(range(100))
def main():
return slow_function() + fast_function()
# Trace execution
with trace_scope() as graph:
result = main()
# Generate flamegraph with performance colors
generate_flamegraph(
graph,
"flamegraph.html",
color_scheme="performance", # Green=fast, Red=slow
show_stats=True, # Show statistics
search_enabled=True # Enable search
)
```

Open flamegraph.html and look for wide RED bars - those are your bottlenecks!
CallFlow Tracer now fully supports async/await patterns:
```python
import asyncio
from callflow_tracer.async_tracer import trace_async, trace_scope_async, gather_traced
@trace_async
async def fetch_data(item_id: int):
"""Async function with tracing."""
await asyncio.sleep(0.1)
return f"Data {item_id}"
@trace_async
async def process_data(item_id: int):
"""Process data asynchronously."""
data = await fetch_data(item_id)
await asyncio.sleep(0.05)
return data.upper()
async def main():
# Trace async code
async with trace_scope_async("async_trace.html") as graph:
# Concurrent execution
tasks = [process_data(i) for i in range(10)]
results = await gather_traced(*tasks)
print(f"Processed {len(results)} items concurrently")
# Get async statistics
from callflow_tracer.async_tracer import get_async_stats
stats = get_async_stats(graph)
print(f"Max concurrent tasks: {stats['max_concurrent_tasks']}")
print(f"Efficiency: {stats['efficiency']:.2f}%")
# Run it
asyncio.run(main())
```

Async Features:
- Concurrent Execution Tracking: See which tasks run in parallel
- Await Time Analysis: Separate active time from wait time
- Concurrency Metrics: Max concurrent tasks, timeline events
- `gather_traced()`: Drop-in replacement for `asyncio.gather` with tracing
Compare two versions of your code side-by-side:
```python
from callflow_tracer import trace_scope
from callflow_tracer.comparison import export_comparison_html
# Before optimization
def fibonacci_slow(n):
if n <= 1:
return n
return fibonacci_slow(n-1) + fibonacci_slow(n-2)
# After optimization (memoization)
_cache = {}
def fibonacci_fast(n):
if n in _cache:
return _cache[n]
if n <= 1:
return n
result = fibonacci_fast(n-1) + fibonacci_fast(n-2)
_cache[n] = result
return result
# Trace both versions
with trace_scope() as graph_before:
result = fibonacci_slow(20)
with trace_scope() as graph_after:
result = fibonacci_fast(20)
# Generate comparison report
export_comparison_html(
graph_before, graph_after,
"optimization_comparison.html",
label1="Before (Naive)",
label2="After (Memoized)",
title="Fibonacci Optimization"
)
```

Open optimization_comparison.html to see:
- Side-by-Side Graphs: Visual comparison of call patterns
- Performance Metrics: Time saved, percentage improvement
- Improvements: Functions that got faster (green highlighting)
- Regressions: Functions that got slower (red highlighting)
- Detailed Table: Function-by-function comparison
- Summary Stats: Added/removed/modified functions
Combine tracing and profiling for comprehensive analysis:
```python
from callflow_tracer import trace_scope, profile_section, export_html
from callflow_tracer.flamegraph import generate_flamegraph
def application():
# Your application code
process_data()
analyze_results()
# Trace and profile together
with profile_section("Application") as perf_stats:
with trace_scope() as graph:
application()
# Export call graph with profiling data
export_html(
graph,
"callgraph.html",
title="Application Analysis",
profiling_stats=perf_stats.to_dict()
)
# Export flamegraph
generate_flamegraph(
graph,
"flamegraph.html",
title="Performance Flamegraph",
color_scheme="performance",
show_stats=True
)
```

You get:
- callgraph.html: Interactive network showing function relationships + CPU profile
- flamegraph.html: Stacked bars showing time distribution + statistics
```python
from fastapi import FastAPI, HTTPException, status
from pydantic import BaseModel, Field
from contextlib import asynccontextmanager
from callflow_tracer import trace_scope
from callflow_tracer.integrations.fastapi_integration import setup_fastapi_tracing
# Define Pydantic models
class Item(BaseModel):
name: str = Field(..., min_length=3, max_length=50)
price: float = Field(..., gt=0)
in_stock: bool = True
class ItemResponse(Item):
id: int
created_at: str
# Setup tracing with lifespan
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
global _cft_scope
_cft_scope = trace_scope("fastapi_trace.html")
_cft_scope.__enter__()
yield
# Shutdown
_cft_scope.__exit__(None, None, None)
# Create FastAPI app
app = FastAPI(
title="My API",
lifespan=lifespan
)
# Setup automatic tracing
setup_fastapi_tracing(app)
# Add CORS middleware
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
# Define endpoints
@app.get("/items/{item_id}", response_model=ItemResponse)
async def get_item(item_id: int):
if item_id not in database:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail=f"Item {item_id} not found"
)
return {"id": item_id, **database[item_id]}
@app.post("/items", response_model=ItemResponse, status_code=status.HTTP_201_CREATED)
async def create_item(item: Item):
new_id = max(database.keys(), default=0) + 1
database[new_id] = item.dict()
return {"id": new_id, **database[new_id]}Run it:
```bash
uvicorn app:app --reload
# Visit http://localhost:8000/docs for interactive API docs
# Trace saved to fastapi_trace.html
```

```python
from flask import Flask, jsonify, request
from callflow_tracer import trace_scope
from callflow_tracer.integrations.flask_integration import setup_flask_tracing
app = Flask(__name__)
# Setup automatic tracing
setup_flask_tracing(app)
# Initialize trace scope
trace_context = trace_scope("flask_trace.html")
trace_context.__enter__()
@app.route('/api/users/<int:user_id>')
def get_user(user_id):
user = database.get(user_id)
if not user:
return jsonify({"error": "User not found"}), 404
return jsonify(user)
@app.route('/api/users', methods=['POST'])
def create_user():
data = request.get_json()
user_id = len(database) + 1
database[user_id] = data
return jsonify({"id": user_id, **data}), 201
if __name__ == '__main__':
try:
app.run(debug=True)
finally:
trace_context.__exit__(None, None, None)
```

```python
# settings.py
MIDDLEWARE = [
'callflow_tracer.integrations.django_integration.CallFlowTracerMiddleware',
# ... other middleware
]
# views.py
from django.http import JsonResponse
from callflow_tracer.integrations.django_integration import trace_view
@trace_view
def user_list(request):
users = User.objects.all()
return JsonResponse({
'users': list(users.values())
})
@trace_view
def user_detail(request, user_id):
try:
user = User.objects.get(id=user_id)
return JsonResponse(user.to_dict())
except User.DoesNotExist:
return JsonResponse({'error': 'User not found'}, status=404)
```

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from callflow_tracer import trace_scope
from callflow_tracer.integrations.sqlalchemy_integration import setup_sqlalchemy_tracing
# Create engine
engine = create_engine('sqlite:///example.db')
Base = declarative_base()
# Define model
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
email = Column(String)
# Setup tracing
setup_sqlalchemy_tracing(engine)
# Use with trace scope
with trace_scope("sqlalchemy_trace.html"):
Session = sessionmaker(bind=engine)
session = Session()
# Queries will be traced
users = session.query(User).filter(User.name.like('%John%')).all()
# Inserts will be traced
new_user = User(name="John Doe", email="john@example.com")
session.add(new_user)
session.commit()
```

```python
import psycopg2
from callflow_tracer import trace_scope
from callflow_tracer.integrations.psycopg2_integration import setup_psycopg2_tracing
# Connect to PostgreSQL
conn = psycopg2.connect(
dbname="mydb",
user="user",
password="password",
host="localhost"
)
# Setup tracing
setup_psycopg2_tracing(conn)
# Use with trace scope
with trace_scope("postgres_trace.html"):
cursor = conn.cursor()
# Queries will be traced with execution time
cursor.execute("SELECT * FROM users WHERE age > %s", (18,))
users = cursor.fetchall()
cursor.execute("""
INSERT INTO users (name, email, age)
VALUES (%s, %s, %s)
""", ("Jane Doe", "jane@example.com", 25))
conn.commit()
cursor.close()
```

- Open VS Code
- Press `Ctrl+Shift+X` (or `Cmd+Shift+X` on Mac)
- Search for "CallFlow Tracer"
- Click Install
- Open any Python file
- Right-click in the editor
- Select "CallFlow: Trace Current File"
- View the interactive visualization in the side panel
- One-Click Tracing: Trace entire files or selected functions
- Interactive Graphs: Zoom, pan, and explore call relationships
- 3D Visualization: View call graphs in 3D space
- Multiple Layouts: Switch between hierarchical, force-directed, circular, and timeline
- Export Options: Save as PNG or JSON
- Performance Profiling: Built-in CPU profiling
- Module Filtering: Filter by Python modules
- `CallFlow: Trace Current File` - Trace the entire file
- `CallFlow: Trace Selected Function` - Trace only selected function
- `CallFlow: Show Visualization` - Open visualization panel
- `CallFlow: Show 3D Visualization` - View in 3D
- `CallFlow: Export as PNG` - Export as image
- `CallFlow: Export as JSON` - Export trace data
```json
{
"callflowTracer.pythonPath": "python3",
"callflowTracer.defaultLayout": "force",
"callflowTracer.autoTrace": false,
"callflowTracer.enableProfiling": true
}
```

Track business logic metrics, monitor SLA compliance, and export performance data:
```python
from callflow_tracer import custom_metric, track_metric, MetricsCollector
# Automatic metric tracking with decorator
@custom_metric("order_processing_time", sla_threshold=1.0)
def process_order(order_id, amount):
# Your business logic here
return {"status": "completed", "amount": amount}
# Manual metric tracking
def calculate_total(items):
total = sum(item['price'] * item['quantity'] for item in items)
track_metric("order_total", total, tags={"currency": "USD"})
return total
# Run your code
for i in range(10):
process_order(i, 99.99)
# Export metrics
MetricsCollector.export_metrics("metrics.json")
```

```python
from callflow_tracer import SLAMonitor
sla_monitor = SLAMonitor()
# Set SLA thresholds
sla_monitor.set_threshold("api_response_time", 0.5) # 500ms
sla_monitor.set_threshold("database_query_time", 1.0) # 1 second
# Get compliance report
report = sla_monitor.get_compliance_report()
for metric_name, compliance in report.items():
print(f"{metric_name}: {compliance['compliance_rate']}% compliant")
# Export report
sla_monitor.export_report("sla_report.json")
```

```python
from callflow_tracer import get_business_tracker
tracker = get_business_tracker()
# Track counters
tracker.increment_counter("orders_processed")
tracker.increment_counter("orders_failed")
# Track gauges
tracker.set_gauge("current_queue_size", 42)
tracker.set_gauge("success_rate", 98.5)
# Export metrics
tracker.export_metrics("business_metrics.json")
```

What You Get:
- 📈 Automatic Tracking: @custom_metric decorator tracks execution time
- 🎯 SLA Monitoring: Monitor compliance with service level agreements
- 📊 Business Metrics: Track counters and gauges for business logic
- 🏷️ Tag-Based Filtering: Organize metrics with tags
- 📁 Multiple Export Formats: JSON and CSV export
- 📋 Compliance Reports: Detailed SLA violation reports
- 🔍 Statistical Analysis: Mean, median, min, max, stddev calculations
```python
from callflow_tracer.custom_metrics import SLO
# Latency objective: 95th percentile <= 300ms over 5 minutes
latency_slo = SLO(
name="checkout-latency-p95<=300ms",
objective=1.0, # 1.0 means target met
time_window=300,
sli_type="latency",
metric_name="latency_ms",
params={"threshold": 300.0, "percentile": 95.0},
)
print(latency_slo.compute(tags={"service": "api"}))
```

```python
from callflow_tracer.custom_metrics import ErrorBudgetTracker
availability_slo = SLO(
name="availability>=99.9%",
objective=0.999,
time_window=86400, # 1 day
sli_type="availability",
metric_name="request_success",
params={"success_value": 1.0},
)
eb = ErrorBudgetTracker(availability_slo).compute_budget(tags={"region": "us-east-1"})
print(eb)
```
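For intuition, the budget math is straightforward. With a 99.9% availability objective, the error budget is 0.1% of requests in the window: if the last day saw 50,000 requests, up to 50 may fail; 20 observed failures means 40% of the budget is consumed and 60% remains, and a burn rate above 1.0 means the budget is being spent faster than the window elapses. (Illustrative numbers, not library output.)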
```python
from callflow_tracer.custom_metrics import ExperimentAnalyzer, track_metric
# While generating metrics, tag them with deployment/variant
track_metric("latency_ms", 240, tags={"deployment": "baseline"})
track_metric("latency_ms", 260, tags={"deployment": "canary"})
canary = ExperimentAnalyzer.canary(
metric_name="latency_ms",
baseline_value="baseline",
canary_value="canary",
group_tag_key="deployment",
time_window=3600,
)
print(canary)
ab = ExperimentAnalyzer.ab_test(
metric_name="conversion_flag", # 1.0=converted, 0.0=not
variant_a="A",
variant_b="B",
group_tag_key="variant",
time_window=7200,
)
print(ab)
```

```python
from callflow_tracer.custom_metrics import SLAMonitor
monitor = SLAMonitor()
# Multiple conditions per metric with rolling windows and dynamic thresholds
monitor.set_threshold("latency_ms", 300, operator="lte", time_window=300, dynamic=True)
monitor.set_threshold("latency_ms", 500, operator="lte", time_window=60, dynamic=False)
# Feed data
monitor.record_metric("latency_ms", 350)
monitor.record_metric("latency_ms", 240)
print(monitor.get_compliance_report(time_window=3600))
```

```python
# In Jupyter notebook
from callflow_tracer import trace_scope, profile_section
from callflow_tracer.jupyter import display_callgraph
def my_function():
return sum(range(1000))
# Trace and display inline
with trace_scope() as graph:
result = my_function()
# Display interactive graph in notebook
display_callgraph(graph.to_dict(), height="600px")
# Or use magic commands
%%callflow_cell_trace
def fibonacci(n):
if n <= 1:
return n
return fibonacci(n-1) + fibonacci(n-2)
result = fibonacci(10)
```

```python
from callflow_tracer import profile_function, profile_section, get_memory_usage
import time
import random
@profile_function
def process_data(data_size: int) -> float:
"""Process data with CPU and memory profiling."""
# Allocate memory
data = [random.random() for _ in range(data_size)]
# CPU-intensive work
total = sum(data) / len(data) if data else 0
# Simulate I/O
time.sleep(0.1)
return total
def analyze_performance():
"""Example using profile_section context manager."""
with profile_section("Data Processing"):
# Process different data sizes
for size in [1000, 10000, 100000]:
with profile_section(f"Processing {size} elements"):
result = process_data(size)
print(f"Result: {result:.4f}")
# Get memory usage
mem_usage = get_memory_usage()
print(f"Memory usage: {mem_usage:.2f} MB")
if __name__ == "__main__":
analyze_performance()
# Export the profile data to HTML
from callflow_tracer import export_html
export_html("performance_profile.html")After running the above code, you can view the performance data in an interactive HTML report that includes:
- Call hierarchy with timing information
- Memory usage over time
- Hotspots and bottlenecks
- Function execution statistics
```python
from callflow_tracer import trace, trace_scope
@trace
def calculate_fibonacci(n):
if n <= 1:
return n
return calculate_fibonacci(n-1) + calculate_fibonacci(n-2)
@trace
def main():
result = calculate_fibonacci(10)
print(f"Fibonacci(10) = {result}")
# Trace everything and export to HTML
with trace_scope("fibonacci_trace.html"):
main()
```

```python
from callflow_tracer import trace_scope
def process_data():
data = load_data()
cleaned = clean_data(data)
result = analyze_data(cleaned)
return result
def load_data():
return [1, 2, 3, 4, 5]
def clean_data(data):
return [x * 2 for x in data if x > 2]
def analyze_data(data):
return sum(data) / len(data)
# Trace the entire process
with trace_scope("data_processing.html"):
result = process_data()
print(f"Analysis result: {result}")After running your traced code, you'll get an interactive HTML file showing:
- Function Nodes: Each function as a colored node (color indicates performance)
- Call Relationships: Arrows showing which functions call which others
- Performance Metrics: Hover over nodes to see call counts and timing
- Interactive Controls: Filter by module, toggle physics, change layout
- Statistics: Total functions, call relationships, and execution time
```python
from callflow_tracer import trace_scope, export_json, export_html
with trace_scope() as graph:
# Your code here
my_application()
# Export to different formats
export_json(graph, "trace.json")
export_html(graph, "trace.html", title="My App Call Flow")from callflow_tracer import trace
# Only trace specific functions
@trace
def critical_function():
# This will be traced
pass
def regular_function():
# This won't be traced
pass
# Use context manager for broader tracing
with trace_scope("selective_trace.html"):
critical_function() # Traced
regular_function()  # Not traced
```

```python
from callflow_tracer import trace_scope, get_current_graph
with trace_scope("performance_analysis.html"):
# Your performance-critical code
optimize_algorithm()
# Get the graph for programmatic analysis
graph = get_current_graph()
for node in graph.nodes.values():
if node.avg_time > 0.1: # Functions taking > 100ms
print(f"Slow function: {node.full_name} ({node.avg_time:.3f}s avg)")from callflow_tracer import export_html
# Customize the HTML output
export_html(
graph,
"custom_trace.html",
title="My Custom Title",
include_vis_js=True # Include vis.js from CDN (requires internet)
)
```

The library automatically truncates function arguments to 100 characters for privacy. For production use, you can modify the `CallNode.add_call()` method to further anonymize or exclude sensitive data.
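For example, a custom scrubber might look like the following. This is a hypothetical helper, not part of the library's API; it only illustrates the kind of masking you could apply before argument strings are stored.

```python
import re

def scrub_args(arg_repr: str, max_len: int = 100) -> str:
    """Hypothetical helper: mask email-like substrings, then truncate."""
    masked = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "<email>", arg_repr)
    return masked[:max_len]

print(scrub_args("send_invoice(to='jane@example.com', amount=99.99)"))
# send_invoice(to='<email>', amount=99.99)
```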
```
callflow-tracer/
├── callflow_tracer/
│ ├── __init__.py # Main API
│ ├── tracer.py # Core tracing logic
│ ├── exporter.py # HTML/JSON export
│ ├── profiling.py # Performance profiling
│ ├── flamegraph.py # Flamegraph generation
│ ├── flamegraph_enhanced.py # Enhanced flamegraph UI
│ └── jupyter.py # Jupyter integration
├── examples/
│ ├── flamegraph_example.py # 7 flamegraph examples
│ ├── flamegraph_enhanced_demo.py # Enhanced features demo
│ ├── jupyter_example.ipynb # Jupyter notebook examples
│ ├── jupyter_standalone_demo.py # Standalone Jupyter demo
│ ├── FLAMEGRAPH_README.md # Flamegraph guide
│ └── JUPYTER_README.md # Jupyter guide
├── tests/
│ ├── test_flamegraph.py # Flamegraph tests (10 tests)
│ ├── test_flamegraph_enhanced.py # Enhanced features tests (10 tests)
│ ├── test_jupyter_integration.py # Jupyter tests (7 tests)
│ └── test_cprofile_fix.py # CPU profiling tests
├── docs/
│ ├── API_DOCUMENTATION.md # Complete API reference
│ ├── FEATURES_COMPLETE.md # All features documented
│ ├── INSTALLATION_GUIDE.md # Installation guide
│ └── USER_GUIDE.md # User guide
├── CHANGELOG.md # Version history
├── TESTING_GUIDE.md # Testing guide
├── QUICK_TEST.md # Quick test reference
├── ENHANCED_FEATURES.md # Enhanced features guide
├── pyproject.toml # Package configuration
├── README.md # This file
└── LICENSE                    # MIT License
```
- Interactive Network: Zoom, pan, and explore your call graph
- 4 Layout Options:
  - Hierarchical (top-down tree)
  - Force-Directed (physics-based)
  - Circular (equal spacing)
  - Timeline (sorted by execution time)
- Module Filtering: Filter by Python module (FIXED!)
- Color Coding:
  - 🔴 Red: Slow functions (>100ms)
  - 🟢 Teal: Medium functions (10-100ms)
  - 🔵 Blue: Fast functions (<10ms)
- Export Options: PNG images and JSON data
- Rich Tooltips: Detailed performance metrics
- Stacked Bar Chart: Width = time, Height = depth
- Statistics Panel: Key metrics at a glance
- 5 Color Schemes: Default, Hot, Cool, Rainbow, Performance
- Search Functionality: Find functions quickly
- SVG Export: High-quality vector graphics
- Interactive Zoom: Click to zoom, hover for details
- Optimization Tips: Built-in guidance
- Execution Time: Actual CPU time (FIXED!)
- Function Calls: Accurate call counts
- Hot Spots: Automatically identified
- Detailed Output: Complete cProfile data
- Health Indicators: Visual status
- Collapsible UI: Modern, clean interface
- Performance Impact: Tracing adds overhead. Use selectively for production code
- Thread Safety: The tracer is thread-safe and can handle concurrent code
- Memory Usage: Large applications may generate substantial trace data
- Privacy: Function arguments are truncated by default for security
- OTEL_QUICK_REFERENCE.md - One-page OpenTelemetry cheat sheet
- docs/OTEL_ADVANCED_GUIDE.md - Comprehensive OpenTelemetry guide
- OTEL_TESTING_GUIDE.md - Testing workflow and CI/CD
- OTEL_IMPLEMENTATION_SUMMARY.md - Feature overview
- OTEL_INDEX.md - Master index & navigation
- examples/README_OTEL.md - OpenTelemetry examples
- CUSTOM_METRICS_GUIDE.md - Custom metrics tracking guide (NEW!)
- NEW_FEATURES_INDEX.md - Complete v0.3.0 feature index
- CLI_GUIDE.md - Command-line interface reference
- CODE_QUALITY_GUIDE.md - Code quality analysis guide
- PREDICTIVE_ANALYSIS_GUIDE.md - Predictive analytics guide
- CODE_CHURN_GUIDE.md - Code churn analysis guide
- INTEGRATIONS_GUIDE.md - Framework integrations guide
- v0_3_0_RELEASE_NOTES.md - Release notes
- FEATURE_MAPPING.md - Feature mapping and cross-reference
- Quick Test Guide - Fast testing reference
- Testing Guide - Comprehensive testing
- Enhanced Features - New features guide
- Changelog - Version history
- API Documentation - Complete API reference
- Features Documentation - All features explained
- Installation Guide - Setup and configuration
- Flamegraph Guide - Flamegraph documentation
- Jupyter Guide - Jupyter integration guide
- `examples/flamegraph_example.py` - 7 flamegraph examples
- `examples/flamegraph_enhanced_demo.py` - Enhanced features demo (12 examples)
- `examples/jupyter_example.ipynb` - Interactive Jupyter notebook
- `examples/jupyter_standalone_demo.py` - Standalone demos
- `examples/example_otel_export.py` - OpenTelemetry export examples (NEW!)
- `tests/test_flamegraph.py` - 10 flamegraph tests
- `tests/test_flamegraph_enhanced.py` - 10 enhanced feature tests
- `tests/test_jupyter_integration.py` - 7 Jupyter tests
- `tests/test_cprofile_fix.py` - CPU profiling tests
- `tests/test_otel_export.py` - 40+ OpenTelemetry tests (NEW!)
- `test_otel_integration.py` - OpenTelemetry integration tests (NEW!)
```bash
# Test flamegraph functionality
python tests/test_flamegraph.py
python tests/test_flamegraph_enhanced.py
# Test Jupyter integration
python tests/test_jupyter_integration.py
# Test CPU profiling fix
python tests/test_cprofile_fix.py
# Test OpenTelemetry export (NEW in v0.3.2)
pytest tests/test_otel_export.py -v
python test_otel_integration.py
```

```bash
# Flamegraph examples (generates 7 HTML files)
python examples/flamegraph_example.py
# Enhanced flamegraph demo (generates 12 HTML files)
python examples/flamegraph_enhanced_demo.py
# Jupyter standalone demo (generates 5 HTML files)
python examples/jupyter_standalone_demo.py
# OpenTelemetry export examples (NEW in v0.3.2)
python examples/example_otel_export.py
```

All tests should pass with:
```
============================================================
RESULTS: X passed, 0 failed
============================================================
✓ ALL TESTS PASSED!
```
```python
generate_flamegraph(graph, "bottlenecks.html", color_scheme="performance")
# Wide RED bars = bottlenecks!
```

```python
export_html(graph, "flow.html", layout="hierarchical")
# See top-down execution flow
```

```python
# Before
with trace_scope() as before:
    unoptimized_code()

# After
with trace_scope() as after:
    optimized_code()

# Compare flamegraphs side by side
```

```python
# In notebook
with trace_scope() as graph:
    ml_pipeline()
display_callgraph(graph.to_dict())
```

- Performance Impact: Tracing adds ~10-30% overhead. Use selectively for production code (a quick way to measure this on your own workload follows this list)
- Thread Safety: The tracer is thread-safe and can handle concurrent code
- Memory Usage: Large applications may generate substantial trace data
- Privacy: Function arguments are truncated by default for security
- Browser: Requires modern browser with JavaScript for visualizations
- Internet: CDN resources require internet connection (or use offline mode)
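To gauge the overhead on your own workload, a minimal timing sketch like this works; the ~10-30% figure above depends heavily on how call-heavy the code is.

```python
import time
from callflow_tracer import trace_scope

def workload():
    return sum(i * i for i in range(200_000))

t0 = time.perf_counter()
workload()
plain = time.perf_counter() - t0

t1 = time.perf_counter()
with trace_scope() as graph:
    workload()
traced = time.perf_counter() - t1

print(f"Tracing overhead: {(traced / plain - 1) * 100:.0f}%")
```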
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
For major changes, please open an issue first to discuss.
See CONTRIBUTING.md for detailed guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
- NetworkX: Graph operations
- vis.js: Interactive call graph visualizations
- D3.js: Flamegraph rendering
- cProfile: CPU profiling
- tracemalloc: Memory tracking
- Inspired by the need for better code understanding and debugging tools
- Built for developers who want to optimize their Python applications
- Community-driven improvements and feedback
- 📧 Email: rathodrajveer1311@gmail.com
- 🐛 Issues: GitHub Issues
- 📖 Documentation: GitHub Wiki
- 💬 Discussions: GitHub Discussions
If you find CallFlow Tracer useful, please star the repository on GitHub! ⭐
Happy Tracing! 🎉
CallFlow Tracer - Making Python performance analysis beautiful and intuitive
```python
from callflow_tracer import trace_scope
from callflow_tracer.flamegraph import generate_flamegraph
with trace_scope() as graph:
your_amazing_code()
generate_flamegraph(graph, "amazing.html", color_scheme="performance")
# Find your bottlenecks in seconds! 🔥
```