This is a FastAPI-based service that provides AI-powered explanations of compiler assembly output for the Compiler Explorer website. The service uses Anthropic's Claude API to analyze source code and its compiled assembly, providing educational explanations of compiler transformations and optimizations.
The service is designed to run both locally for development and as an AWS Lambda function via the Mangum adapter. It provides intelligent analysis of compiler output, helping users understand how their code translates to assembly language.
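At a high level, that dual local/Lambda setup follows the usual FastAPI + Mangum pattern. A minimal sketch follows; the route here is a hypothetical placeholder, and the real application in `app/main.py` handles the `POST /` request described below:

```python
from fastapi import FastAPI
from mangum import Mangum

app = FastAPI()


@app.get("/healthz")
async def healthz() -> dict:
    # Hypothetical health-check route for illustration only; the real
    # service handles POST / with the explain payload shown later.
    return {"status": "ok"}


# Mangum adapts API Gateway/Lambda events to ASGI, so the same `app`
# object serves both `fastapi dev` locally and Lambda in production.
handler = Mangum(app)
```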
For detailed design documentation, see `claude_explain.md`.
See the source code for the current project structure. Key entry points:
- `app/main.py` - FastAPI application entry point
- `test-explain.sh` - Integration test script
Prerequisites:

- Python 3.13+
- uv package manager
Create a `.env` file (NOT in git) with your Anthropic API key:

```
ANTHROPIC_API_KEY=<your-key-here>
```
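How the key is consumed is up to the application code; as one hedged illustration (assuming python-dotenv, which the project may or may not actually use), the service could load it like this:

```python
import os

from dotenv import load_dotenv  # python-dotenv

# Reads .env from the working directory, if present, into the environment.
load_dotenv()

api_key = os.environ.get("ANTHROPIC_API_KEY")
if not api_key:
    raise RuntimeError("ANTHROPIC_API_KEY is not set; see the .env instructions above")
```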
Install dependencies, then start the development server:

```bash
# Install dependencies (including dev tools)
uv sync --group dev

# Start development server
uv run fastapi dev
```

The service will be available at http://localhost:8000.
```bash
# Basic test
./test-explain.sh

# Pretty formatted output
./test-explain.sh --pretty
```
```bash
# Run all tests
uv run pytest

# Run a specific test
uv run pytest app/explain_test.py::test_process_request_success

# Run pre-commit hooks (ruff linting/formatting, shellcheck)
uv run pre-commit run --all-files

# Manual linting
uv run ruff check
uv run ruff format
```
- Smart assembly filtering for large compiler outputs (see the sketch after this list)
- AWS CloudWatch metrics integration when deployed
- Local development with `.env` file configuration
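The actual filtering lives in `app/explain.py`; the sketch below only illustrates the idea, and the function name, heuristic, and line budget are assumptions, not the real implementation. It keeps labels and source-mapped instructions and truncates to a fixed budget so large outputs fit the model's context:

```python
def filter_asm(asm: list[dict], max_lines: int = 200) -> list[dict]:
    """Illustrative stand-in for the service's smart assembly filtering."""
    # Prefer labels (e.g. "square(int):") and instructions that map back
    # to a source line; fall back to everything if nothing matches.
    interesting = [
        line for line in asm
        if line.get("source") is not None or line.get("text", "").endswith(":")
    ]
    kept = interesting or asm
    # Hard cap so very large compiler outputs stay within budget.
    return kept[:max_lines]
```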
The service exposes a single endpoint: `POST /` (root path).

Example request body:

```json
{
  "language": "c++",
  "compiler": "g112",
  "code": "int square(int x) { return x * x; }",
  "compilationOptions": ["-O2"],
  "instructionSet": "amd64",
  "asm": [
    {
      "text": "square(int):",
      "source": null,
      "labels": []
    },
    {
      "text": " push rbp",
      "source": {
        "line": 1,
        "column": 21
      },
      "labels": []
    }
  ]
}
```
Example response:

```json
{
  "explanation": "The compiler generates efficient assembly...",
  "status": "success",
  "model": "claude-3-5-haiku-20241022",
  "usage": {
    "input_tokens": 123,
    "output_tokens": 456,
    "total_tokens": 579
  },
  "cost": {
    "input_cost": 0.000123,
    "output_cost": 0.000456,
    "total_cost": 0.000579
  }
}
```
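As a quick usage sketch, here is how a client might call a local dev server with the request shape above (using the requests library; the URL and timeout are assumptions for local development):

```python
import requests

payload = {
    "language": "c++",
    "compiler": "g112",
    "code": "int square(int x) { return x * x; }",
    "compilationOptions": ["-O2"],
    "instructionSet": "amd64",
    "asm": [
        {"text": "square(int):", "source": None, "labels": []},
        {"text": " push rbp", "source": {"line": 1, "column": 21}, "labels": []},
    ],
}

resp = requests.post("http://localhost:8000/", json=payload, timeout=60)
resp.raise_for_status()
body = resp.json()

print(body["explanation"])
# Usage and cost accompany every successful response.
print(body["usage"]["total_tokens"], body["cost"]["total_cost"])
```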
See `app/explain.py` for current limits and model configuration. The service includes configurable limits for input size, assembly processing, and response length.
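For illustration only, such limits often end up as a small settings object; every name and value below is a hypothetical placeholder, so check `app/explain.py` for the real configuration:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ExplainLimits:
    # Hypothetical names and values; the real limits are in app/explain.py.
    max_code_bytes: int = 16_384      # cap on submitted source size
    max_asm_lines: int = 1_000        # cap before assembly filtering
    max_output_tokens: int = 1_024    # cap on model response length
```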
The service is designed for AWS Lambda deployment behind API Gateway. See the Terraform configuration in the repository for infrastructure setup; the version of the service that runs in production is controlled by the Terraform in our infrastructure repository.