This repository contains the backend API for the F1 What-If Simulator. It is a high-performance asynchronous API built with Python, FastAPI, and Pydantic, designed to serve data and run machine learning-powered simulations for its React Frontend counterpart.
The API fetches data from the public OpenF1 API, processes it, and runs a simulation based on user-defined strategic changes.
- Asynchronous Performance: Built with FastAPI and `asyncio` to handle concurrent requests efficiently without blocking
- Data Validation: Robust request and response validation powered by Pydantic ensures data integrity
- Machine Learning Integration: Loads a pre-trained `scikit-learn` model to predict lap times as the core of the simulation engine
- Layered Architecture: A clean, decoupled architecture (API, Service, and Data Access layers) for high maintainability and testability
- Structured Logging: Production-ready structured logging with `structlog` for easy monitoring and debugging
- Containerized: Comes with a multi-stage `Dockerfile` for building lightweight, production-ready images
Follow these instructions to get a copy of the project up and running on your local machine for development and testing purposes.
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/f1-simulator-api.git
   cd f1-simulator-api
   ```

2. Create and activate a virtual environment:

   ```bash
   # For Unix/macOS
   python3 -m venv venv
   source venv/bin/activate

   # For Windows
   python -m venv venv
   .\venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables: this project uses a `.env` file for configuration. Copy the example file to get started:

   ```bash
   cp env.example .env
   ```

   No changes are needed in the `.env` file to run the application locally with default settings.
To start the Uvicorn server with live reloading:
```bash
uvicorn app.main:app --reload
```
The API will be available at http://127.0.0.1:8000.
Once the server is running, you can access the interactive Swagger UI documentation generated by FastAPI by navigating to http://127.0.0.1:8000/docs.
This interface allows you to explore and test all available endpoints directly from your browser.
You can also build and run the application using Docker for a more isolated environment.
Build the Docker image:

```bash
docker build -t f1-simulator-api .
```

Run the Docker container:

```bash
docker run -p 8000:8000 --env-file .env f1-simulator-api
```
The API will be accessible at http://127.0.0.1:8000.
Once the server is running, you can access:
- Interactive API Docs: http://localhost:8000/docs
- ReDoc Documentation: http://localhost:8000/redoc
- OpenAPI Schema: http://localhost:8000/openapi.json
- `GET /api/v1/health` - Service health status
- `GET /api/v1/drivers?season=2024` - Get all drivers for a season
- `GET /api/v1/tracks?season=2024` - Get all tracks for a season
- `POST /api/v1/simulate` - Run a what-if simulation
- `GET /api/v1/simulation/{simulation_id}` - Get simulation results
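To give a feel for the simulate endpoint, here is a sketch of what a request body might contain. The actual schema lives in `app/api/v1/schemas.py` as Pydantic models; the field names below are purely illustrative, and a plain dataclass stands in for Pydantic so the sketch has no dependencies:

```python
from dataclasses import dataclass

# Hypothetical request shape for POST /api/v1/simulate -- field names
# are invented for illustration; see app/api/v1/schemas.py for the
# real Pydantic definitions.
@dataclass
class SimulationRequest:
    season: int
    track_id: str
    driver_number: int
    pit_lap: int          # what-if change: lap of the hypothetical pit stop
    tyre_compound: str    # what-if change: compound fitted at that stop

    def __post_init__(self) -> None:
        # Mirrors the kind of validation Pydantic would enforce.
        if self.pit_lap < 1:
            raise ValueError("pit_lap must be >= 1")
        if self.tyre_compound not in {"SOFT", "MEDIUM", "HARD"}:
            raise ValueError("unknown tyre compound")

req = SimulationRequest(
    season=2024, track_id="monza", driver_number=1,
    pit_lap=18, tyre_compound="HARD",
)
```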
The API follows a strict layered architecture to ensure a clean separation of concerns:
- API Layer (`/app/api`): The entry point of the application. Handles HTTP requests, validates incoming data using Pydantic schemas, and delegates business logic to the service layer
- Service Layer (`/app/services`): Contains all the core business logic. It orchestrates the simulation process, calling the data access layer and the ML model as needed
- Data Access Layer (`/app/external`): A dedicated client for communicating with the external OpenF1 API. It uses an asynchronous HTTP client (`httpx`) and implements caching to reduce latency and external calls
- Core Layer (`/app/core`): Contains application-wide logic, such as configuration management, custom exception definitions, and logging setup
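The caching idea in the data-access layer can be sketched with a minimal in-memory cache around an injected async fetch function. This is only an illustration: the real client in `app/external/openf1_client.py` uses `httpx` and its own cache policy, and `fake_fetch` below is a stand-in so the sketch runs without network access:

```python
import asyncio
from typing import Any, Awaitable, Callable

class CachedClient:
    """Minimal sketch of a caching data-access client.

    `fetch` stands in for an httpx call to the OpenF1 API; it is
    injected so the sketch stays runnable offline.
    """

    def __init__(self, fetch: Callable[[str], Awaitable[Any]]) -> None:
        self._fetch = fetch
        self._cache: dict[str, Any] = {}

    async def get(self, path: str) -> Any:
        # Serve repeated requests from the cache to cut external calls.
        if path not in self._cache:
            self._cache[path] = await self._fetch(path)
        return self._cache[path]

async def demo() -> int:
    calls = 0

    async def fake_fetch(path: str) -> dict:
        nonlocal calls
        calls += 1
        return {"path": path}

    client = CachedClient(fake_fetch)
    await client.get("/v1/drivers")
    await client.get("/v1/drivers")  # cache hit: no second fetch
    return calls

print(asyncio.run(demo()))  # 1
```

Injecting the fetch function also makes the client trivial to unit-test, which fits the decoupling goal of the layered architecture.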
```text
app/
├── main.py                    # FastAPI app, middleware, exception handlers
├── api/                       # API layer (endpoints, schemas)
│   └── v1/
│       ├── endpoints.py       # Lean endpoint functions
│       └── schemas.py         # Pydantic models
├── services/                  # Business logic layer
│   └── simulation_service.py
├── core/                      # Configuration and utilities
│   ├── config.py              # Environment-based settings
│   ├── exceptions.py          # Custom business exceptions
│   └── logging_config.py
├── models/                    # ML model management
│   └── model_loader.py
└── external/                  # External API clients
    └── openf1_client.py
```
- Input Validation: All requests validated with Pydantic models
- Custom Exceptions: Business-specific error handling
- Structured Logging: JSON-formatted logs with request tracking
- CORS Configuration: Configurable allowed origins
- Rate Limiting: Built-in protection against abuse
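Business-specific exceptions might be structured along the following lines. The class names here are illustrative, not necessarily those in `app/core/exceptions.py`:

```python
class SimulatorError(Exception):
    """Base class for business errors; an exception handler in main.py
    can map subclasses to HTTP status codes and structured log events."""
    status_code = 500

class DriverNotFoundError(SimulatorError):
    status_code = 404

    def __init__(self, driver_number: int) -> None:
        super().__init__(f"Driver {driver_number} not found")

class UpstreamTimeoutError(SimulatorError):
    """Raised when the OpenF1 API does not answer within the timeout."""
    status_code = 504

try:
    raise DriverNotFoundError(44)
except SimulatorError as exc:
    print(exc.status_code, exc)  # 404 Driver 44 not found
```

Catching the shared base class in a single handler keeps endpoint functions lean, which matches the layered-architecture goal above.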
The API includes a machine learning component for lap time predictions:
- Model Loading: Automatic model loading with fallback to dummy model
- Feature Engineering: Historical data processing and feature extraction
- Prediction Pipeline: End-to-end prediction workflow
- Confidence Scoring: Quality assessment of predictions
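The "fallback to dummy model" behaviour could look roughly like this. It is a sketch only: the real loader lives in `app/models/model_loader.py`, and the constant lap time returned by the dummy is invented:

```python
from pathlib import Path

class DummyLapTimeModel:
    """Fallback used when no trained model file is available.

    Returns a flat, made-up lap time so the rest of the pipeline
    keeps working in development.
    """
    def predict(self, features: list[list[float]]) -> list[float]:
        return [90.0] * len(features)

def load_model(path: str):
    """Load the scikit-learn model, falling back to the dummy."""
    try:
        import joblib  # only needed when a real model file exists
        if Path(path).exists():
            return joblib.load(path)
    except Exception:
        pass  # corrupt file, missing dependency, etc. -> use the dummy
    return DummyLapTimeModel()

model = load_model("app/models/lap_time_predictor.joblib")
```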
This project uses Pytest for unit and integration testing.

To run the entire test suite:

```bash
pytest
```

To get a detailed coverage report:

```bash
pytest --cov=app
```
- `uvicorn app.main:app --reload` - Starts the development server with hot reloading
- `pytest` - Runs the test suite
- `pytest --cov=app` - Runs tests with coverage report
- `docker build -t f1-simulator-api .` - Builds the Docker image
- `docker run -p 8000:8000 --env-file .env f1-simulator-api` - Runs the Docker container
This project uses pre-commit hooks to ensure code quality. The hooks automatically run:
- Code formatting: Black and Ruff format
- Linting: Ruff for code quality checks
- File hygiene: Trailing whitespace, end-of-file fixes
- Tests: Pytest runs before pushing to ensure all tests pass
1. Install pre-commit:

   ```bash
   pip install pre-commit
   ```

2. Install the git hooks:

   ```bash
   pre-commit install
   pre-commit install --hook-type pre-push
   ```

- Automatic: Hooks run automatically on `git commit` and `git push`
- Manual: Run all checks manually:

  ```bash
  ./scripts/check-code.sh
  ```

- Individual hooks: Run specific hooks:

  ```bash
  pre-commit run black --all-files
  pre-commit run ruff --all-files
  ```
If you need to skip hooks (not recommended):
```bash
git commit --no-verify -m "Emergency fix"
```
The project includes several GitHub Actions workflows that run automatically:
When a branch is created with an issue number, the workflow automatically:
- Extracts issue numbers from branch names using patterns:
  - `FWI-BE-XXX` format
  - `feature/FWI-BE-XXX` format
  - `bugfix/FWI-BE-XXX` format
  - `hotfix/FWI-BE-XXX` format
  - `issue-XXX` format
- Moves issues to "In Progress" status in the GitHub project board
- Adds comments to issues with branch creation details
- Logs helpful information when no issue numbers are found
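The extraction step could be approximated with a regex along these lines. This is a sketch of the pattern matching only, not the workflow's actual script:

```python
import re

def extract_issue_numbers(branch: str) -> list[str]:
    """Pull issue identifiers such as FWI-BE-123 or issue-45 from a
    branch name like feature/FWI-BE-123-new-endpoint."""
    numbers = re.findall(r"FWI-BE-(\d+)", branch)
    numbers += re.findall(r"(?:^|/)issue-(\d+)", branch)
    return numbers

print(extract_issue_numbers("feature/FWI-BE-123"))  # ['123']
```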
How to use:
- Create branches with issue numbers: `feature/FWI-BE-123`, `bugfix/FWI-BE-456`
- The workflow will automatically move the issue to "In Progress" status
- Issue comments will include branch details and status updates
Triggers:
- Branch creation (push to new branch)
- Excludes `main`, `develop`, and `releases/**` branches
When a Pull Request is created, the workflow automatically:
- Extracts linked issues from PR title and body using patterns:
  - `FWI-BE-XXX` format
  - `#XXX` format
  - `closes #XXX`, `fixes #XXX`, `resolves #XXX`
- Moves issues to "Review" status in the GitHub project board
- Adds helpful comments to PRs with status updates
- Notifies when no linked issues are found
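The PR-side matching could be sketched similarly (again, an approximation of the patterns, not the workflow's real script):

```python
import re

def extract_linked_issues(text: str) -> set[str]:
    """Find issue references in a PR title/body: FWI-BE-123, #123,
    and closing keywords such as 'closes #123'."""
    refs = set(re.findall(r"FWI-BE-(\d+)", text))
    # The closing keyword is optional, so a bare '#123' also matches.
    refs |= set(re.findall(r"(?:closes|fixes|resolves)?\s*#(\d+)",
                           text, re.IGNORECASE))
    return refs

print(extract_linked_issues("FWI-BE-123: fix timeout, closes #123"))  # {'123'}
```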
Complete Workflow:

1. Start Development:
   - Create a branch with an issue number: `feature/FWI-BE-123`
   - Issue automatically moves to "In Progress" status
   - Issue gets a comment with branch details

2. Submit for Review:
   - Create a PR with an issue reference: `FWI-BE-123: Add new feature`
   - Issue automatically moves to "Review" status
   - PR gets a comment with the status update

3. Complete Work:
   - Merge the PR with closing keywords: `Closes #123`
   - Issue automatically moves to "Done" status
Triggers:
- PR opened, reopened, or synchronized
- Targets `main` and `develop` branches
- Linting & Formatting: Runs `black`, `ruff`, and `mypy`
- Testing: Runs all tests with pytest
- Code Quality: Ensures all pre-commit hooks pass
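A CI job covering these checks might be configured roughly as follows. This is a sketch, not the repository's actual workflow file, and the Python version is an assumption:

```yaml
name: CI
on:
  pull_request:
    branches: [main, develop]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"   # assumed; match the project's version
      - run: pip install -r requirements.txt
      - run: black --check app/ tests/
      - run: ruff check app/ tests/
      - run: mypy app/
      - run: pytest --cov=app
```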
```bash
# Run all tests
python3 -m pytest tests/ -v

# Run with coverage
python3 -m pytest --cov=app --cov-report=html

# Run specific test file
python3 -m pytest tests/test_simulation_service.py -v
```
The project uses several tools for code quality:
- Black: Code formatting
- Ruff: Linting and import sorting
- MyPy: Type checking
```bash
# Format code
black app/ tests/

# Lint code
ruff check app/ tests/

# Type check
mypy app/
```
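These tools are typically configured in `pyproject.toml`; a minimal sketch is shown below. The specific line length, rule selection, and Python version are assumptions, not the project's actual settings:

```toml
[tool.black]
line-length = 88

[tool.ruff]
line-length = 88

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting

[tool.mypy]
python_version = "3.11"
strict = true
```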
- Add schema to `app/api/v1/schemas.py`
- Add endpoint to `app/api/v1/endpoints.py`
- Add business logic to `app/services/`
- Add tests in `tests/`
| Variable | Description | Default |
|---|---|---|
| `HOST` | Server host | `0.0.0.0` |
| `PORT` | Server port | `8000` |
| `DEBUG` | Debug mode | `false` |
| `LOG_LEVEL` | Logging level | `INFO` |
| `LOG_FORMAT` | Log format | `json` |
| `OPENF1_API_URL` | OpenF1 API URL | `https://api.openf1.org` |
| `OPENF1_API_TIMEOUT` | API timeout (seconds) | `30` |
| `MODEL_PATH` | ML model path | `app/models/lap_time_predictor.joblib` |
Contributions, issues, and feature requests are welcome! We are excited to see the community get involved.
Please read our CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.
For support and questions:
- Create an issue in the repository
- Check the API documentation at `/docs`
- Review the logs for detailed error information
- Database integration for persistent storage
- Real-time WebSocket support
- Advanced ML model training pipeline
- Performance monitoring and metrics
- Authentication and authorization
- Rate limiting and API quotas