A comprehensive Python library for tracing, profiling, and visualizing function call flows with interactive call graphs and flamegraphs. Ideal for understanding code flow, debugging performance bottlenecks, and optimizing code.
- Complexity Metrics: Cyclomatic and cognitive complexity calculation
- Maintainability Index: 0-100 scale with detailed metrics
- Halstead Metrics: Volume, difficulty, and effort analysis
- Technical Debt Scoring: Identify and quantify technical debt
- Quality Trends: Track code quality over time
- HTML/JSON Reports: Beautiful interactive reports
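The Halstead metrics above follow the standard textbook definitions. As a quick illustration of what the reported numbers mean (this is the classic formula, not the library's internal implementation):

```python
import math

# Textbook Halstead formulas (standard definitions; not the library's internals).
def halstead_metrics(operators, operands):
    """Volume, difficulty, and effort from operator/operand token lists."""
    n1, n2 = len(set(operators)), len(set(operands))  # distinct counts
    N1, N2 = len(operators), len(operands)            # total occurrences
    volume = (N1 + N2) * math.log2(n1 + n2)
    difficulty = (n1 / 2) * (N2 / n2)
    return {"volume": volume, "difficulty": difficulty,
            "effort": difficulty * volume}

# Tokens of `total = a + b`: operators {=, +}, operands {total, a, b}
m = halstead_metrics(["=", "+"], ["total", "a", "b"])
```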
- Performance Prediction: Predict future performance degradation
- Capacity Planning: Forecast when limits will be reached
- Scalability Analysis: Assess code scalability characteristics
- Resource Forecasting: Predict resource usage trends
- Risk Assessment: Multi-factor risk evaluation
- Confidence Scoring: Data-driven confidence levels
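Capacity planning of this kind can be sketched with a simple linear trend: extrapolate historical timings and count the releases until a latency budget is breached. This is an illustrative model only; `forecast_breach` is a hypothetical helper, not part of the library:

```python
# Hedged sketch of capacity planning: fit a linear trend to historical
# per-release execution times and count releases until a latency budget
# is breached. Illustrative only -- not callflow-tracer's actual model.
def forecast_breach(times, budget):
    """Return releases from now until `budget` is exceeded (None if no trend)."""
    n = len(times)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(times) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, times)) / denom
    if slope <= 0:
        return None  # flat or improving: no predicted breach
    intercept = y_mean - slope * x_mean
    k = 0
    while intercept + slope * k <= budget:
        k += 1
    return k - n  # negative means the budget is already exceeded

# Times grew ~0.02s per release; a 0.25s budget is breached 4 releases out.
releases_left = forecast_breach([0.10, 0.12, 0.14, 0.16], budget=0.25)
```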
- Git History Analysis: Analyze commits and changes
- Hotspot Identification: Find high-risk files
- Churn Correlation: Correlate with quality metrics
- Bug Prediction: Estimate bug correlation
- Risk Assessment: Comprehensive risk evaluation
- Actionable Recommendations: Specific improvement suggestions
- Flask Integration: Automatic request tracing
- FastAPI Integration: Async endpoint tracing
- Django Integration: View and middleware tracing
- SQLAlchemy Integration: Database query monitoring
- psycopg2 Integration: PostgreSQL query tracing
- Code Snippet Insertion: Ready-to-use integration code
- 10 CLI Commands: Complete CLI for all features
- No Python Code Needed: Run analysis from terminal
- HTML/JSON Output: Multiple export formats
- Progress Notifications: Real-time feedback
- Batch Processing: Analyze entire projects
- Quality Analysis Command: Analyze code quality from editor
- Performance Prediction: Predict issues from current file
- Churn Analysis: Analyze code changes with git history
- Framework Integration: Insert integration code snippets
- CLI Help: Interactive help for all commands
- Modern FastAPI Example: Production-ready patterns with Pydantic validation
- Lifespan Management: Proper async startup/shutdown handling
- Error Handling: Comprehensive HTTP exception handlers
- CORS Support: Pre-configured middleware for cross-origin requests
- Multiple Endpoints: CRUD operations, search, and calculator examples
- Request/Response Logging: Automatic logging middleware
- OpenAPI Docs: Enhanced interactive API documentation
- FastAPI: Full async/await support with modern patterns
- Flask: Automatic request tracing middleware
- Django: View decorators and middleware integration
- SQLAlchemy: Database query performance monitoring
- Psycopg2: PostgreSQL query tracing
- Interactive Visualization: View call graphs directly in VS Code
- Real-time Tracing: Trace files with a single click
- 3D Visualization: Explore call graphs in 3D space
- Multiple Layouts: Hierarchical, force-directed, circular, timeline
- Export Options: PNG and JSON export from the editor
- Performance Profiling: Built-in CPU profiling integration
- @trace_async Decorator: Trace async functions with full async/await support
- Async Context Manager: `trace_scope_async()` for tracing async code blocks
- Concurrent Execution Tracking: Visualize concurrent task execution patterns
- Async Statistics: Track await time, active time, and concurrency levels
- gather_traced(): Traced version of asyncio.gather for concurrent operations
- Side-by-Side Comparison: Compare two call graphs in split-screen HTML
- Before/After Analysis: Perfect for optimization validation
- Diff Highlighting: Automatic detection of improvements and regressions
- Performance Metrics: Time saved, functions added/removed/modified
- Visual Indicators: Color-coded improvements (green) and regressions (red)
- Object Allocation Tracking: Track every object allocation and deallocation
- Reference Counting: Monitor reference counts and detect unreleased objects
- Memory Growth Patterns: Identify continuous memory growth
- Leak Visualization: Beautiful HTML reports with charts and metrics
- Reference Cycle Detection: Find and visualize circular references
- Top Memory Consumers: Identify which code uses the most memory
- Statistics Panel: See total functions, calls, execution time, and bottlenecks at a glance
- 5 Color Schemes: Default, Hot, Cool, Rainbow, and Performance (Green=Fast, Red=Slow!)
- Search Functionality: Find specific functions quickly in large graphs
- SVG Export: Export high-quality vector graphics for presentations
- Modern UI: Responsive design with gradients and smooth animations
- Working cProfile Integration: CPU profile now shows actual execution times (not 0.000s!)
- Accurate Call Counts: Real function call statistics
- Hot Spot Identification: Automatically identifies performance bottlenecks
- Complete Profile Data: Full cProfile output with all metrics
- Working Module Filter: Filter by Python module with smooth animations (FIXED!)
- All Layouts Working: Hierarchical, Force-Directed, Circular, Timeline (FIXED!)
- JSON Export: Fixed export functionality with proper metadata (FIXED!)
- Modern CPU Profile UI: Collapsible section with beautiful design
- Magic Commands: `%%callflow_cell_trace` for quick tracing
- Inline Visualizations: Display interactive graphs directly in notebooks
- Full Feature Support: All features work seamlessly in Jupyter
- ✅ Fixed tracer stability (programs now run to completion)
- ✅ Fixed CPU profiling (shows actual times)
- ✅ Fixed module filtering (now functional)
- ✅ Fixed circular/timeline layouts (proper positioning)
- ✅ Fixed JSON export (no more errors)
- ✅ Simple API: Decorator or context manager - your choice
- ✅ Interactive Visualizations: Beautiful HTML graphs with zoom, pan, and filtering
- ✅ Async/Await Support: Full support for modern async Python code
- ✅ Comparison Mode: Side-by-side before/after optimization analysis
- ✅ Memory Leak Detection: Track allocations, find leaks, visualize growth
- ✅ Performance Profiling: CPU time, memory usage, I/O wait tracking
- ✅ Flamegraph Support: Identify bottlenecks with flame graphs
- ✅ Call Graph Analysis: Understand function relationships
- ✅ Jupyter Integration: Works seamlessly in notebooks
- ✅ Multiple Export Formats: HTML, JSON, SVG
- ✅ Zero Config: Works out of the box
- ✅ Complexity Metrics: Cyclomatic and cognitive complexity
- ✅ Maintainability Index: 0-100 scale with detailed analysis
- ✅ Technical Debt Scoring: Identify and quantify debt
- ✅ Quality Trends: Track metrics over time
- ✅ Halstead Metrics: Volume, difficulty, effort analysis
- ✅ Performance Prediction: Predict future degradation
- ✅ Capacity Planning: Forecast limit breaches
- ✅ Scalability Analysis: Assess scalability characteristics
- ✅ Resource Forecasting: Predict resource usage
- ✅ Risk Assessment: Multi-factor evaluation
- ✅ Git History Analysis: Analyze commits and changes
- ✅ Hotspot Identification: Find high-risk files
- ✅ Quality Correlation: Correlate with quality metrics
- ✅ Bug Prediction: Estimate bug correlation
- ✅ Actionable Recommendations: Specific improvements
- ✅ 10 CLI Commands: Complete terminal interface
- ✅ No Code Required: Run analysis from command line
- ✅ Batch Processing: Analyze entire projects
- ✅ Multiple Formats: HTML and JSON output
- 📊 Statistics Dashboard: Total time, calls, depth, slowest function
- 🎨 5 Color Schemes: Choose the best view for your analysis
- 🔍 Real-time Search: Find functions instantly
- 💾 SVG Export: High-quality graphics for reports
- ⚡ Performance Colors: Green=fast, Red=slow (perfect for optimization!)
- 📱 Responsive Design: Works on all screen sizes
- 🔥 CPU Profiling: cProfile integration with detailed statistics
- 💾 Memory Tracking: Current and peak memory usage
- ⏱️ I/O Wait Time: Measure time spent waiting
- 📊 Health Indicators: Visual performance status
- 🎯 Bottleneck Detection: Automatically identifies slow functions
- 🌐 Interactive Network: Zoom, pan, explore call relationships
- 🎨 Multiple Layouts: Hierarchical, Force-Directed, Circular, Timeline
- 🔍 Module Filtering: Focus on specific parts of your code
- 📊 Rich Tooltips: Detailed metrics on hover
- 🎯 Color Coding: Performance-based coloring
```bash
# Analyze code quality
callflow-tracer quality . -o quality_report.html

# Predict performance issues
callflow-tracer predict history.json -o predictions.html

# Analyze code churn
callflow-tracer churn . --days 90 -o churn_report.html

# Trace function calls
callflow-tracer trace script.py -o trace.html

# Generate flamegraph
callflow-tracer flamegraph script.py -o flamegraph.html
```

All commands generate beautiful HTML reports! 📊
```bash
pip install callflow-tracer
```

Or install from source:

```bash
git clone https://github.com/rajveer43/callflow-tracer.git
cd callflow-tracer
pip install -e .          # standard install
pip install -e .[dev]     # with development dependencies
```

To install the VS Code extension:

- Open VS Code
- Go to Extensions (Ctrl+Shift+X)
- Search for "CallFlow Tracer"
- Click Install
- Right-click any Python file → "CallFlow: Trace Current File"
```python
from callflow_tracer import trace_scope, export_html

def calculate_fibonacci(n):
    if n <= 1:
        return n
    return calculate_fibonacci(n-1) + calculate_fibonacci(n-2)

# Trace execution
with trace_scope() as graph:
    result = calculate_fibonacci(10)
    print(f"Result: {result}")

# Export to interactive HTML
export_html(graph, "fibonacci.html", title="Fibonacci Call Graph")
```

Open fibonacci.html in your browser to see the interactive visualization!
Analyze code quality metrics with a single command:
```bash
# Analyze code quality
callflow-tracer quality . -o quality_report.html

# Track trends over time
callflow-tracer quality . --track-trends --format json
```

What You Get:
- 📈 Complexity Metrics: Cyclomatic and cognitive complexity
- 📊 Maintainability Index: 0-100 scale
- 💾 Technical Debt: Quantified debt scoring
- 🎯 Halstead Metrics: Volume, difficulty, effort
- 📋 Trend Analysis: Track metrics over time
Python API:
```python
from callflow_tracer.code_quality import analyze_codebase

results = analyze_codebase("./src")
print(f"Average Complexity: {results['summary']['average_complexity']:.2f}")
print(f"Critical Issues: {results['summary']['critical_issues']}")
```

Predict future performance issues:
```bash
# Predict performance issues
callflow-tracer predict history.json -o predictions.html
```

What You Get:
- 🎯 Performance Prediction: Predict degradation
- 📈 Capacity Planning: Forecast limit breaches
- 🔍 Scalability Analysis: Assess scalability
- 💡 Risk Assessment: Multi-factor evaluation
- 📊 Confidence Scoring: Data-driven confidence
Python API:
```python
from callflow_tracer.predictive_analysis import PerformancePredictor

predictor = PerformancePredictor("history.json")
predictions = predictor.predict_performance_issues(current_trace)

for pred in predictions:
    if pred.risk_level == "Critical":
        print(f"CRITICAL: {pred.function_name}")
        print(f"  Predicted time: {pred.predicted_time:.4f}s")
```

Identify high-risk files using git history:
```bash
# Analyze code churn
callflow-tracer churn . --days 90 -o churn_report.html
```

What You Get:
- 🔥 Hotspot Identification: Find high-risk files
- 📊 Churn Metrics: Commits, changes, authors
- 🔗 Quality Correlation: Correlate with quality
- 🐛 Bug Prediction: Estimate bug correlation
- 💡 Recommendations: Actionable improvements
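A common heuristic behind hotspot numbers like these is churn weighted by complexity. The sketch below is illustrative only; `hotspot_score` and its weights are assumptions, not necessarily the library's exact scoring formula:

```python
# Hedged sketch of a common hotspot heuristic: churn weighted by complexity.
# The weights and formula are assumptions, not callflow-tracer's exact scoring.
def hotspot_score(commits, lines_changed, complexity, authors):
    churn = commits * 0.4 + (lines_changed / 100) * 0.4 + authors * 0.2
    return churn * complexity

# Hypothetical files: a heavily-edited complex module vs. a stable utility.
files = {
    "core/tracer.py":   hotspot_score(42, 3800, 18, 5),
    "utils/strings.py": hotspot_score(3, 120, 2, 1),
}
worst = max(files, key=files.get)  # the heavily-churned complex file wins
```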
Python API:
```python
from callflow_tracer.code_churn import generate_churn_report

report = generate_churn_report(".", days=90)
print(f"High risk files: {report['summary']['high_risk_files']}")

for hotspot in report['hotspots'][:5]:
    print(f"{hotspot['file_path']}: {hotspot['hotspot_score']:.1f}")
```

Find the bottleneck with a flamegraph:

```python
from callflow_tracer import trace_scope
from callflow_tracer.flamegraph import generate_flamegraph
import time

def slow_function():
    time.sleep(0.1)  # Bottleneck!
    return sum(range(10000))

def fast_function():
    return sum(range(100))

def main():
    return slow_function() + fast_function()

# Trace execution
with trace_scope() as graph:
    result = main()

# Generate flamegraph with performance colors
generate_flamegraph(
    graph,
    "flamegraph.html",
    color_scheme="performance",  # Green=fast, Red=slow
    show_stats=True,             # Show statistics
    search_enabled=True          # Enable search
)
```

Open flamegraph.html and look for wide RED bars - those are your bottlenecks! 🎯
CallFlow Tracer now fully supports async/await patterns:
```python
import asyncio
from callflow_tracer.async_tracer import trace_async, trace_scope_async, gather_traced

@trace_async
async def fetch_data(item_id: int):
    """Async function with tracing."""
    await asyncio.sleep(0.1)
    return f"Data {item_id}"

@trace_async
async def process_data(item_id: int):
    """Process data asynchronously."""
    data = await fetch_data(item_id)
    await asyncio.sleep(0.05)
    return data.upper()

async def main():
    # Trace async code
    async with trace_scope_async("async_trace.html") as graph:
        # Concurrent execution
        tasks = [process_data(i) for i in range(10)]
        results = await gather_traced(*tasks)
        print(f"Processed {len(results)} items concurrently")

    # Get async statistics
    from callflow_tracer.async_tracer import get_async_stats
    stats = get_async_stats(graph)
    print(f"Max concurrent tasks: {stats['max_concurrent_tasks']}")
    print(f"Efficiency: {stats['efficiency']:.2f}%")

# Run it
asyncio.run(main())
```

Async Features:
- 🔄 Concurrent Execution Tracking: See which tasks run in parallel
- ⏱️ Await Time Analysis: Separate active time from wait time
- 📊 Concurrency Metrics: Max concurrent tasks, timeline events
- 🎯 gather_traced(): Drop-in replacement for asyncio.gather with tracing
Compare two versions of your code side-by-side:
```python
from callflow_tracer import trace_scope
from callflow_tracer.comparison import export_comparison_html

# Before optimization
def fibonacci_slow(n):
    if n <= 1:
        return n
    return fibonacci_slow(n-1) + fibonacci_slow(n-2)

# After optimization (memoization)
_cache = {}
def fibonacci_fast(n):
    if n in _cache:
        return _cache[n]
    if n <= 1:
        return n
    result = fibonacci_fast(n-1) + fibonacci_fast(n-2)
    _cache[n] = result
    return result

# Trace both versions
with trace_scope() as graph_before:
    result = fibonacci_slow(20)

with trace_scope() as graph_after:
    result = fibonacci_fast(20)

# Generate comparison report
export_comparison_html(
    graph_before, graph_after,
    "optimization_comparison.html",
    label1="Before (Naive)",
    label2="After (Memoized)",
    title="Fibonacci Optimization"
)
```

Open optimization_comparison.html to see:
- ✅ Side-by-Side Graphs: Visual comparison of call patterns
- 📈 Performance Metrics: Time saved, percentage improvement
- 🟢 Improvements: Functions that got faster (green highlighting)
- 🔴 Regressions: Functions that got slower (red highlighting)
- 📋 Detailed Table: Function-by-function comparison
- 🎯 Summary Stats: Added/removed/modified functions
Detect memory leaks with comprehensive tracking and visualization:
```python
from callflow_tracer.memory_leak_detector import detect_leaks, track_allocations

# Method 1: Context Manager
with detect_leaks("leak_report.html") as detector:
    # Your code here
    data = []
    for i in range(1000):
        data.append([0] * 1000)  # Potential leak
        detector.take_snapshot(f"Iteration_{i}")

# Method 2: Decorator
@track_allocations
def process_data():
    leaked_objects = []
    for i in range(100):
        leaked_objects.append([0] * 10000)
    return leaked_objects

result = process_data()
```

Memory Leak Detection Features:
- 🔍 Object Tracking: Track every object allocation
- 📊 Growth Patterns: Detect continuous memory growth
- 🔄 Reference Cycles: Find circular references
- 📈 Memory Snapshots: Compare memory state over time
- 💡 Top Consumers: Identify memory-hungry code
- 📋 Beautiful Reports: HTML visualization with charts
What You Get:
- Memory growth charts
- Object type distribution
- Suspected leak detection
- Reference cycle identification
- Snapshot comparisons
- Actionable recommendations
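Growth detection of this kind can be approximated with the standard library's tracemalloc. The sketch below is illustrative only, not the library's implementation:

```python
import tracemalloc

# Hedged sketch of snapshot-based growth detection with the stdlib's
# tracemalloc (callflow-tracer's detector builds richer reports on top).
tracemalloc.start()
before = tracemalloc.take_snapshot()

leaky = []
for _ in range(1000):
    leaky.append([0] * 1000)  # simulated leak: memory grows every iteration

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")  # sorted by largest size increase
for stat in stats[:3]:
    print(stat)  # shows file:line with the biggest allocation deltas
```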
Common Leak Scenarios Detected:
- ✅ Cache that never evicts entries
- ✅ Event listeners never removed
- ✅ Database connections not closed
- ✅ Closures capturing large data
- ✅ Reference cycles in objects
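The reference-cycle scenario can be reproduced and detected with the standard library's gc module (an illustrative sketch; the library's detector adds reporting and visualization on top):

```python
import gc

# Hedged sketch: reference-cycle detection with the stdlib gc module
# (illustrative only -- callflow-tracer's detector adds reporting on top).
class Node:
    def __init__(self):
        self.ref = None

gc.collect()             # clear any pre-existing garbage first
a, b = Node(), Node()
a.ref, b.ref = b, a      # create a reference cycle
del a, b                 # refcounts never hit zero; only the GC can free them

unreachable = gc.collect()  # number of unreachable objects found
print(f"Collected {unreachable} unreachable objects")
```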
Combine tracing and profiling for comprehensive analysis:
```python
from callflow_tracer import trace_scope, profile_section, export_html
from callflow_tracer.flamegraph import generate_flamegraph

def application():
    # Your application code
    process_data()
    analyze_results()

# Trace and profile together
with profile_section("Application") as perf_stats:
    with trace_scope() as graph:
        application()

# Export call graph with profiling data
export_html(
    graph,
    "callgraph.html",
    title="Application Analysis",
    profiling_stats=perf_stats.to_dict()
)

# Export flamegraph
generate_flamegraph(
    graph,
    "flamegraph.html",
    title="Performance Flamegraph",
    color_scheme="performance",
    show_stats=True
)
```

You get:
- callgraph.html: Interactive network showing function relationships + CPU profile
- flamegraph.html: Stacked bars showing time distribution + statistics
```python
from fastapi import FastAPI, HTTPException, status
from pydantic import BaseModel, Field
from contextlib import asynccontextmanager
from callflow_tracer import trace_scope
from callflow_tracer.integrations.fastapi_integration import setup_fastapi_tracing

# Define Pydantic models
class Item(BaseModel):
    name: str = Field(..., min_length=3, max_length=50)
    price: float = Field(..., gt=0)
    in_stock: bool = True

class ItemResponse(Item):
    id: int
    created_at: str

# Setup tracing with lifespan
@asynccontextmanager
async def lifespan(app: FastAPI):
    # Startup
    global _cft_scope
    _cft_scope = trace_scope("fastapi_trace.html")
    _cft_scope.__enter__()
    yield
    # Shutdown
    _cft_scope.__exit__(None, None, None)

# Create FastAPI app
app = FastAPI(title="My API", lifespan=lifespan)

# Setup automatic tracing
setup_fastapi_tracing(app)

# Add CORS middleware
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

database = {}  # in-memory store for the example

# Define endpoints
@app.get("/items/{item_id}", response_model=ItemResponse)
async def get_item(item_id: int):
    if item_id not in database:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND,
            detail=f"Item {item_id} not found"
        )
    return {"id": item_id, **database[item_id]}

@app.post("/items", response_model=ItemResponse, status_code=status.HTTP_201_CREATED)
async def create_item(item: Item):
    new_id = max(database.keys(), default=0) + 1
    database[new_id] = item.dict()
    return {"id": new_id, **database[new_id]}
```

Run it:

```bash
uvicorn app:app --reload
# Visit http://localhost:8000/docs for interactive API docs
# Trace saved to fastapi_trace.html
```

Flask:

```python
from flask import Flask, jsonify, request
from callflow_tracer import trace_scope
from callflow_tracer.integrations.flask_integration import setup_flask_tracing

app = Flask(__name__)

# Setup automatic tracing
setup_flask_tracing(app)

# Initialize trace scope
trace_context = trace_scope("flask_trace.html")
trace_context.__enter__()

database = {}  # in-memory store for the example

@app.route('/api/users/<int:user_id>')
def get_user(user_id):
    user = database.get(user_id)
    if not user:
        return jsonify({"error": "User not found"}), 404
    return jsonify(user)

@app.route('/api/users', methods=['POST'])
def create_user():
    data = request.get_json()
    user_id = len(database) + 1
    database[user_id] = data
    return jsonify({"id": user_id, **data}), 201

if __name__ == '__main__':
    try:
        app.run(debug=True)
    finally:
        trace_context.__exit__(None, None, None)
```

Django:

```python
# settings.py
MIDDLEWARE = [
    'callflow_tracer.integrations.django_integration.CallFlowTracerMiddleware',
    # ... other middleware
]

# views.py
from django.http import JsonResponse
from callflow_tracer.integrations.django_integration import trace_view
from myapp.models import User  # your app's model

@trace_view
def user_list(request):
    users = User.objects.all()
    return JsonResponse({'users': list(users.values())})

@trace_view
def user_detail(request, user_id):
    try:
        user = User.objects.get(id=user_id)
        return JsonResponse(user.to_dict())
    except User.DoesNotExist:
        return JsonResponse({'error': 'User not found'}, status=404)
```

SQLAlchemy:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from callflow_tracer import trace_scope
from callflow_tracer.integrations.sqlalchemy_integration import setup_sqlalchemy_tracing

# Create engine
engine = create_engine('sqlite:///example.db')
Base = declarative_base()

# Define model
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String)

Base.metadata.create_all(engine)

# Setup tracing
setup_sqlalchemy_tracing(engine)

# Use with trace scope
with trace_scope("sqlalchemy_trace.html"):
    Session = sessionmaker(bind=engine)
    session = Session()

    # Queries will be traced
    users = session.query(User).filter(User.name.like('%John%')).all()

    # Inserts will be traced
    new_user = User(name="John Doe", email="john@example.com")
    session.add(new_user)
    session.commit()
```

psycopg2:

```python
import psycopg2
from callflow_tracer import trace_scope
from callflow_tracer.integrations.psycopg2_integration import setup_psycopg2_tracing

# Connect to PostgreSQL
conn = psycopg2.connect(
    dbname="mydb",
    user="user",
    password="password",
    host="localhost"
)

# Setup tracing
setup_psycopg2_tracing(conn)

# Use with trace scope
with trace_scope("postgres_trace.html"):
    cursor = conn.cursor()

    # Queries will be traced with execution time
    cursor.execute("SELECT * FROM users WHERE age > %s", (18,))
    users = cursor.fetchall()

    cursor.execute("""
        INSERT INTO users (name, email, age)
        VALUES (%s, %s, %s)
    """, ("Jane Doe", "jane@example.com", 25))

    conn.commit()
    cursor.close()
```

- Open VS Code
- Press Ctrl+Shift+X (or Cmd+Shift+X on Mac)
- Search for "CallFlow Tracer"
- Click Install
- Open any Python file
- Right-click in the editor
- Select "CallFlow: Trace Current File"
- View the interactive visualization in the side panel
- One-Click Tracing: Trace entire files or selected functions
- Interactive Graphs: Zoom, pan, and explore call relationships
- 3D Visualization: View call graphs in 3D space
- Multiple Layouts: Switch between hierarchical, force-directed, circular, and timeline
- Export Options: Save as PNG or JSON
- Performance Profiling: Built-in CPU profiling
- Module Filtering: Filter by Python modules
- `CallFlow: Trace Current File` - Trace the entire file
- `CallFlow: Trace Selected Function` - Trace only the selected function
- `CallFlow: Show Visualization` - Open the visualization panel
- `CallFlow: Show 3D Visualization` - View in 3D
- `CallFlow: Export as PNG` - Export as an image
- `CallFlow: Export as JSON` - Export trace data
```json
{
    "callflowTracer.pythonPath": "python3",
    "callflowTracer.defaultLayout": "force",
    "callflowTracer.autoTrace": false,
    "callflowTracer.enableProfiling": true
}
```

```python
# In Jupyter notebook
from callflow_tracer import trace_scope, profile_section
from callflow_tracer.jupyter import display_callgraph

def my_function():
    return sum(range(1000))

# Trace and display inline
with trace_scope() as graph:
    result = my_function()

# Display interactive graph in notebook
display_callgraph(graph.to_dict(), height="600px")
```

Or use magic commands:

```python
%%callflow_cell_trace
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

result = fibonacci(10)
```

Profile CPU, memory, and I/O together:

```python
from callflow_tracer import profile_function, profile_section, get_memory_usage
import time
import random

@profile_function
def process_data(data_size: int) -> float:
    """Process data with CPU and memory profiling."""
    # Allocate memory
    data = [random.random() for _ in range(data_size)]
    # CPU-intensive work
    total = sum(data) / len(data) if data else 0
    # Simulate I/O
    time.sleep(0.1)
    return total

def analyze_performance():
    """Example using profile_section context manager."""
    with profile_section("Data Processing"):
        # Process different data sizes
        for size in [1000, 10000, 100000]:
            with profile_section(f"Processing {size} elements"):
                result = process_data(size)
                print(f"Result: {result:.4f}")

    # Get memory usage
    mem_usage = get_memory_usage()
    print(f"Memory usage: {mem_usage:.2f} MB")

if __name__ == "__main__":
    analyze_performance()

    # Export the profile data to HTML
    from callflow_tracer import export_html
    export_html("performance_profile.html")
```

After running the above code, you can view the performance data in an interactive HTML report that includes:
- Call hierarchy with timing information
- Memory usage over time
- Hotspots and bottlenecks
- Function execution statistics
```python
from callflow_tracer import trace, trace_scope

@trace
def calculate_fibonacci(n):
    if n <= 1:
        return n
    return calculate_fibonacci(n-1) + calculate_fibonacci(n-2)

@trace
def main():
    result = calculate_fibonacci(10)
    print(f"Fibonacci(10) = {result}")

# Trace everything and export to HTML
with trace_scope("fibonacci_trace.html"):
    main()
```

Or trace a whole pipeline with the context manager alone:

```python
from callflow_tracer import trace_scope

def load_data():
    return [1, 2, 3, 4, 5]

def clean_data(data):
    return [x * 2 for x in data if x > 2]

def analyze_data(data):
    return sum(data) / len(data)

def process_data():
    data = load_data()
    cleaned = clean_data(data)
    result = analyze_data(cleaned)
    return result

# Trace the entire process
with trace_scope("data_processing.html"):
    result = process_data()
    print(f"Analysis result: {result}")
```

After running your traced code, you'll get an interactive HTML file showing:
- Function Nodes: Each function as a colored node (color indicates performance)
- Call Relationships: Arrows showing which functions call which others
- Performance Metrics: Hover over nodes to see call counts and timing
- Interactive Controls: Filter by module, toggle physics, change layout
- Statistics: Total functions, call relationships, and execution time
```python
from callflow_tracer import trace_scope, export_json, export_html

with trace_scope() as graph:
    # Your code here
    my_application()

# Export to different formats
export_json(graph, "trace.json")
export_html(graph, "trace.html", title="My App Call Flow")
```

Trace only specific functions with the decorator:

```python
from callflow_tracer import trace, trace_scope

# Only trace specific functions
@trace
def critical_function():
    # This will be traced
    pass

def regular_function():
    # This won't be traced
    pass

# Use context manager for broader tracing
with trace_scope("selective_trace.html"):
    critical_function()  # Traced
    regular_function()   # Not traced
```

Analyze the traced graph programmatically:

```python
from callflow_tracer import trace_scope, get_current_graph

with trace_scope("performance_analysis.html"):
    # Your performance-critical code
    optimize_algorithm()

# Get the graph for programmatic analysis
graph = get_current_graph()
for node in graph.nodes.values():
    if node.avg_time > 0.1:  # Functions taking > 100ms
        print(f"Slow function: {node.full_name} ({node.avg_time:.3f}s avg)")
```

Customize the HTML output:

```python
from callflow_tracer import export_html

# Customize the HTML output
export_html(
    graph,
    "custom_trace.html",
    title="My Custom Title",
    include_vis_js=True  # Include vis.js from CDN (requires internet)
)
```

The library automatically truncates function arguments to 100 characters for privacy. For production use, you can modify the `CallNode.add_call()` method to further anonymize or exclude sensitive data.
```
callflow-tracer/
├── callflow_tracer/
│   ├── __init__.py                # Main API
│   ├── tracer.py                  # Core tracing logic
│   ├── exporter.py                # HTML/JSON export
│   ├── profiling.py               # Performance profiling
│   ├── flamegraph.py              # Flamegraph generation
│   ├── flamegraph_enhanced.py     # Enhanced flamegraph UI
│   └── jupyter.py                 # Jupyter integration
├── examples/
│   ├── flamegraph_example.py      # 7 flamegraph examples
│   ├── flamegraph_enhanced_demo.py # Enhanced features demo
│   ├── jupyter_example.ipynb      # Jupyter notebook examples
│   ├── jupyter_standalone_demo.py # Standalone Jupyter demo
│   ├── FLAMEGRAPH_README.md       # Flamegraph guide
│   └── JUPYTER_README.md          # Jupyter guide
├── tests/
│   ├── test_flamegraph.py         # Flamegraph tests (10 tests)
│   ├── test_flamegraph_enhanced.py # Enhanced features tests (10 tests)
│   ├── test_jupyter_integration.py # Jupyter tests (7 tests)
│   └── test_cprofile_fix.py       # CPU profiling tests
├── docs/
│   ├── API_DOCUMENTATION.md       # Complete API reference
│   ├── FEATURES_COMPLETE.md       # All features documented
│   ├── INSTALLATION_GUIDE.md      # Installation guide
│   └── USER_GUIDE.md              # User guide
├── CHANGELOG.md                   # Version history
├── TESTING_GUIDE.md               # Testing guide
├── QUICK_TEST.md                  # Quick test reference
├── ENHANCED_FEATURES.md           # Enhanced features guide
├── pyproject.toml                 # Package configuration
├── README.md                      # This file
└── LICENSE                        # MIT License
```
- Interactive Network: Zoom, pan, and explore your call graph
- 4 Layout Options:
- Hierarchical (top-down tree)
- Force-Directed (physics-based)
- Circular (equal spacing)
- Timeline (sorted by execution time)
- Module Filtering: Filter by Python module (FIXED!)
- Color Coding:
- 🔴 Red: Slow functions (>100ms)
- 🟢 Teal: Medium functions (10-100ms)
- 🔵 Blue: Fast functions (<10ms)
- Export Options: PNG images and JSON data
- Rich Tooltips: Detailed performance metrics
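The color thresholds above can be read as a simple mapping (a sketch of the documented thresholds, not the exporter's actual code):

```python
# Hedged sketch of the documented thresholds (red >100ms, teal 10-100ms,
# blue <10ms); the real palette lives inside the HTML exporter.
def node_color(avg_time_s: float) -> str:
    if avg_time_s > 0.1:      # slower than 100ms
        return "red"
    if avg_time_s >= 0.01:    # 10-100ms
        return "teal"
    return "blue"             # faster than 10ms

print(node_color(0.25), node_color(0.05), node_color(0.002))
```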
- Stacked Bar Chart: Width = time, Height = depth
- Statistics Panel: Key metrics at a glance
- 5 Color Schemes: Default, Hot, Cool, Rainbow, Performance
- Search Functionality: Find functions quickly
- SVG Export: High-quality vector graphics
- Interactive Zoom: Click to zoom, hover for details
- Optimization Tips: Built-in guidance
- Execution Time: Actual CPU time (FIXED!)
- Function Calls: Accurate call counts
- Hot Spots: Automatically identified
- Detailed Output: Complete cProfile data
- Health Indicators: Visual status
- Collapsible UI: Modern, clean interface
- Performance Impact: Tracing adds overhead. Use selectively for production code
- Thread Safety: The tracer is thread-safe and can handle concurrent code
- Memory Usage: Large applications may generate substantial trace data
- Privacy: Function arguments are truncated by default for security
- NEW_FEATURES_INDEX.md - Complete v0.3.0 feature index
- CLI_GUIDE.md - Command-line interface reference
- CODE_QUALITY_GUIDE.md - Code quality analysis guide
- PREDICTIVE_ANALYSIS_GUIDE.md - Predictive analytics guide
- CODE_CHURN_GUIDE.md - Code churn analysis guide
- INTEGRATIONS_GUIDE.md - Framework integrations guide
- v0_3_0_RELEASE_NOTES.md - Release notes
- FEATURE_MAPPING.md - Feature mapping and cross-reference
- Quick Test Guide - Fast testing reference
- Testing Guide - Comprehensive testing
- Enhanced Features - New features guide
- Changelog - Version history
- API Documentation - Complete API reference
- Features Documentation - All features explained
- Installation Guide - Setup and configuration
- Flamegraph Guide - Flamegraph documentation
- Jupyter Guide - Jupyter integration guide
- `examples/flamegraph_example.py` - 7 flamegraph examples
- `examples/flamegraph_enhanced_demo.py` - Enhanced features demo (12 examples)
- `examples/jupyter_example.ipynb` - Interactive Jupyter notebook
- `examples/jupyter_standalone_demo.py` - Standalone demos
- `tests/test_flamegraph.py` - 10 flamegraph tests
- `tests/test_flamegraph_enhanced.py` - 10 enhanced feature tests
- `tests/test_jupyter_integration.py` - 7 Jupyter tests
- `tests/test_cprofile_fix.py` - CPU profiling tests
```bash
# Test flamegraph functionality
python tests/test_flamegraph.py
python tests/test_flamegraph_enhanced.py

# Test Jupyter integration
python tests/test_jupyter_integration.py

# Test CPU profiling fix
python tests/test_cprofile_fix.py
```

Run the examples:

```bash
# Flamegraph examples (generates 7 HTML files)
python examples/flamegraph_example.py

# Enhanced flamegraph demo (generates 12 HTML files)
python examples/flamegraph_enhanced_demo.py

# Jupyter standalone demo (generates 5 HTML files)
python examples/jupyter_standalone_demo.py
```

All tests should pass with:
```
============================================================
RESULTS: X passed, 0 failed
============================================================
✓ ALL TESTS PASSED!
```
```python
generate_flamegraph(graph, "bottlenecks.html", color_scheme="performance")
# Wide RED bars = bottlenecks!
```

```python
export_html(graph, "flow.html", layout="hierarchical")
# See top-down execution flow
```

```python
# Before
with trace_scope() as before:
    unoptimized_code()

# After
with trace_scope() as after:
    optimized_code()

# Compare flamegraphs side by side
```

```python
# In notebook
with trace_scope() as graph:
    ml_pipeline()
display_callgraph(graph.to_dict())
```

- Performance Impact: Tracing adds ~10-30% overhead. Use selectively for production code
- Thread Safety: The tracer is thread-safe and can handle concurrent code
- Memory Usage: Large applications may generate substantial trace data
- Privacy: Function arguments are truncated by default for security
- Browser: Requires modern browser with JavaScript for visualizations
- Internet: CDN resources require internet connection (or use offline mode)
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
For major changes, please open an issue first to discuss.
See CONTRIBUTING.md for detailed guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.
- NetworkX: Graph operations
- vis.js: Interactive call graph visualizations
- D3.js: Flamegraph rendering
- cProfile: CPU profiling
- tracemalloc: Memory tracking
- Inspired by the need for better code understanding and debugging tools
- Built for developers who want to optimize their Python applications
- Community-driven improvements and feedback
- 📧 Email: rathodrajveer1311@gmail.com
- 🐛 Issues: GitHub Issues
- 📖 Documentation: GitHub Wiki
- 💬 Discussions: GitHub Discussions
If you find CallFlow Tracer useful, please star the repository on GitHub! ⭐
Happy Tracing! 🎉
CallFlow Tracer - Making Python performance analysis beautiful and intuitive
from callflow_tracer import trace_scope
from callflow_tracer.flamegraph import generate_flamegraph
with trace_scope() as graph:
your_amazing_code()
generate_flamegraph(graph, "amazing.html", color_scheme="performance")
# Find your bottlenecks in seconds! 🔥