
🏎️ High-performance FastAPI backend for F1 What-If Simulator with ML-powered lap time predictions and OpenF1 data integration


donkasun/f1-what-if-simulator-api

F1 What-If Simulator: Backend API

Built with Python, FastAPI, Pydantic, Docker, and Pytest.

This repository contains the backend API for the F1 What-If Simulator: a high-performance asynchronous API built with Python, FastAPI, and Pydantic, designed to serve data and run machine-learning-powered simulations for its React frontend counterpart.

The API fetches data from the public OpenF1 API, processes it, and runs a simulation based on user-defined strategic changes.

🏎️ Features

  • Asynchronous Performance: Built with FastAPI and asyncio to handle concurrent requests efficiently without blocking
  • Data Validation: Robust request and response validation powered by Pydantic ensures data integrity
  • Machine Learning Integration: Loads a pre-trained scikit-learn model to predict lap times as the core of the simulation engine
  • Layered Architecture: A clean, decoupled architecture (API, Service, Data Access layers) for high maintainability and testability
  • Structured Logging: Production-ready structured logging with structlog for easy monitoring and debugging
  • Containerized: Comes with a multi-stage Dockerfile for building lightweight, production-ready images

🏎️ Getting Started

Follow these instructions to get a copy of the project up and running on your local machine for development and testing purposes.

Prerequisites

  • Python (v3.10 or later)
  • An API client like Insomnia or Postman to test the endpoints

Installation & Setup

  1. Clone the repository:

    git clone https://github.com/your-username/f1-simulator-api.git
    cd f1-simulator-api
  2. Create and activate a virtual environment:

    # For Unix/macOS
    python3 -m venv venv
    source venv/bin/activate
    
    # For Windows
    python -m venv venv
    .\venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Set up environment variables: This project uses a .env file for configuration. Copy the example file to get started:

    cp env.example .env

    No changes are needed in the .env file to run the application locally with default settings.

Running the Development Server

To start the Uvicorn server with live reloading:

uvicorn app.main:app --reload

The API will be available at http://127.0.0.1:8000.

Interactive API Documentation

Once the server is running, you can access the interactive Swagger UI documentation generated by FastAPI by navigating to:

http://127.0.0.1:8000/docs

This interface allows you to explore and test all available endpoints directly from your browser.

🐳 Running with Docker

You can also build and run the application using Docker for a more isolated environment.

Build the Docker image:

docker build -t f1-simulator-api .

Run the Docker container:

docker run -p 8000:8000 --env-file .env f1-simulator-api

The API will be accessible at http://127.0.0.1:8000.

📖 API Documentation

Once the server is running, the endpoints below can be explored interactively via the Swagger UI at http://127.0.0.1:8000/docs (FastAPI also serves a ReDoc view at http://127.0.0.1:8000/redoc by default).

🏁 API Endpoints

Health Check

  • GET /api/v1/health - Service health status

Drivers

  • GET /api/v1/drivers?season=2024 - Get all drivers for a season

Tracks

  • GET /api/v1/tracks?season=2024 - Get all tracks for a season

Simulations

  • POST /api/v1/simulate - Run a what-if simulation
  • GET /api/v1/simulation/{simulation_id} - Get simulation results
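As a sketch, a simulation request body might be assembled as below. Every field name here is hypothetical and invented for illustration; the real request schema is defined by the Pydantic models in app/api/v1/schemas.py.

```python
import json

# Hypothetical request body for POST /api/v1/simulate. The actual field
# names come from the Pydantic schemas in app/api/v1/schemas.py, not from
# this sketch.
payload = {
    "season": 2024,
    "driver_number": 1,
    "track": "monza",
    "strategy_changes": [{"lap": 20, "compound": "hard"}],
}

# Serialize to JSON for the request body.
body = json.dumps(payload)
```

The interactive docs at /docs show the authoritative request and response shapes.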

πŸ—οΈ Architecture

The API follows a strict layered architecture to ensure a clean separation of concerns:

  • API Layer (/app/api): The entry point of the application. Handles HTTP requests, validates incoming data using Pydantic schemas, and delegates business logic to the service layer
  • Service Layer (/app/services): Contains all the core business logic. It orchestrates the simulation process, calling the data access layer and the ML model as needed
  • Data Access Layer (/app/external): A dedicated client for communicating with the external OpenF1 API. It uses an asynchronous HTTP client (httpx) and implements caching to reduce latency and external calls
  • Core Layer (/app/core): Contains application-wide logic, such as configuration management, custom exception definitions, and logging setup
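The separation above can be sketched in plain Python. The class and method names below are illustrative stand-ins, not the actual interfaces in app/services and app/external:

```python
import asyncio

# Illustrative stand-ins for the real layers: the actual implementations
# live in app/external/openf1_client.py and app/services/simulation_service.py.

class OpenF1Client:
    """Data access layer: fetches raw data from the OpenF1 API (stubbed here)."""

    async def get_laps(self, session_key: int) -> list[dict]:
        # The real client issues async httpx requests and caches responses;
        # this stub returns canned data.
        return [{"lap_number": 1, "lap_duration": 92.3}]


class SimulationService:
    """Service layer: business logic that depends only on the client's interface."""

    def __init__(self, client: OpenF1Client) -> None:
        self.client = client

    async def average_lap_time(self, session_key: int) -> float:
        laps = await self.client.get_laps(session_key)
        return sum(lap["lap_duration"] for lap in laps) / len(laps)


async def main() -> float:
    # In the real app, FastAPI's dependency injection wires these together
    # at the API layer.
    service = SimulationService(OpenF1Client())
    return await service.average_lap_time(9158)


print(asyncio.run(main()))  # → 92.3
```

Because the service only depends on the client's interface, tests can substitute a fake client without touching the network.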

Detailed Structure

app/
├── main.py              # FastAPI app, middleware, exception handlers
├── api/                 # API layer (endpoints, schemas)
│   └── v1/
│       ├── endpoints.py # Lean endpoint functions
│       └── schemas.py   # Pydantic models
├── services/            # Business logic layer
│   └── simulation_service.py
├── core/                # Configuration and utilities
│   ├── config.py        # Environment-based settings
│   ├── exceptions.py    # Custom business exceptions
│   └── logging_config.py
├── models/              # ML model management
│   └── model_loader.py
└── external/            # External API clients
    └── openf1_client.py

πŸ›‘οΈ Security & Error Handling

  • Input Validation: All requests validated with Pydantic models
  • Custom Exceptions: Business-specific error handling
  • Structured Logging: JSON-formatted logs with request tracking
  • CORS Configuration: Configurable allowed origins
  • Rate Limiting: Built-in protection against abuse

🤖 Machine Learning

The API includes a machine learning component for lap time predictions:

  • Model Loading: Automatic model loading with fallback to dummy model
  • Feature Engineering: Historical data processing and feature extraction
  • Prediction Pipeline: End-to-end prediction workflow
  • Confidence Scoring: Quality assessment of predictions
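The loading-with-fallback step might look like the following sketch. The real logic lives in app/models/model_loader.py; `DummyModel` and its fixed baseline prediction are assumptions made for illustration:

```python
from pathlib import Path


class DummyModel:
    """Fallback used when no trained model is available on disk."""

    def predict(self, features):
        # Return a fixed baseline lap time (in seconds) for every sample.
        return [90.0 for _ in features]


def load_model(model_path: str):
    """Load the persisted scikit-learn model, falling back to DummyModel."""
    path = Path(model_path)
    if not path.exists():
        return DummyModel()
    try:
        import joblib  # scikit-learn models are typically persisted with joblib

        return joblib.load(path)
    except Exception:
        # Corrupt or incompatible model file: degrade gracefully.
        return DummyModel()


model = load_model("app/models/lap_time_predictor.joblib")
```

The fallback keeps the API functional (with obviously synthetic predictions) even when the trained model artifact is missing.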

🧪 Testing

This project uses Pytest for unit and integration testing.

To run the entire test suite:

pytest

To get a detailed coverage report:

pytest --cov=app
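A unit test in this suite's style might look like the following sketch. `total_race_time` is a hypothetical helper invented for illustration, not a function from this codebase:

```python
# Illustrative only; the project's real tests live in tests/.

def total_race_time(lap_times: list[float], pit_stop_loss: float, stops: int) -> float:
    """Hypothetical helper: total race time is the sum of lap times
    plus the time lost to pit stops."""
    return sum(lap_times) + pit_stop_loss * stops


def test_total_race_time():
    # 3 laps at 90 s each, plus one pit stop costing 22 s.
    assert total_race_time([90.0, 90.0, 90.0], 22.0, 1) == 292.0
```

Pytest discovers any `test_*` function in files named `test_*.py` automatically.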

⚑ Available Scripts

  • uvicorn app.main:app --reload - Starts the development server with hot reloading
  • pytest - Runs the test suite
  • pytest --cov=app - Runs tests with coverage report
  • docker build -t f1-simulator-api . - Builds the Docker image
  • docker run -p 8000:8000 --env-file .env f1-simulator-api - Runs the Docker container

🔧 Development

Pre-commit Hooks

This project uses pre-commit hooks to ensure code quality. The hooks automatically run:

  • Code formatting: Black and Ruff format
  • Linting: Ruff for code quality checks
  • File hygiene: Trailing whitespace, end-of-file fixes
  • Tests: Pytest runs before pushing to ensure all tests pass

Setup

  1. Install pre-commit:

    pip install pre-commit
  2. Install the git hooks:

    pre-commit install
    pre-commit install --hook-type pre-push

Usage

  • Automatic: Hooks run automatically on git commit and git push
  • Manual: Run all checks manually:
    ./scripts/check-code.sh
  • Individual hooks: Run specific hooks:
    pre-commit run black --all-files
    pre-commit run ruff --all-files

Skipping Hooks

If you need to skip hooks (not recommended):

git commit --no-verify -m "Emergency fix"

🤖 Automated Workflows

The project includes several GitHub Actions workflows that run automatically:

Branch Issue Management

When a branch is created with an issue number, the workflow automatically:

  • Extracts issue numbers from branch names using patterns:
    • FWI-BE-XXX format
    • feature/FWI-BE-XXX format
    • bugfix/FWI-BE-XXX format
    • hotfix/FWI-BE-XXX format
    • issue-XXX format
  • Moves issues to "In Progress" status in the GitHub project board
  • Adds comments to issues with branch creation details
  • Logs helpful information when no issue numbers are found

How to use:

  1. Create branches with issue numbers: feature/FWI-BE-123, bugfix/FWI-BE-456
  2. The workflow will automatically move the issue to In Progress status
  3. Issue comments will include branch details and status updates

Triggers:

  • Branch creation (push to new branch)
  • Excludes main, develop, and releases/** branches

PR Issue Management

When a Pull Request is created, the workflow automatically:

  • Extracts linked issues from PR title and body using patterns:
    • FWI-BE-XXX format
    • #XXX format
    • closes #XXX, fixes #XXX, resolves #XXX
  • Moves issues to "Review" status in the GitHub project board
  • Adds helpful comments to PRs with status updates
  • Notifies when no linked issues are found
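The PR-side extraction can be sketched similarly; again, the exact patterns are defined in the workflow, and this regex is only an approximation:

```python
import re

# Approximation of the PR-title/body extraction: matches FWI-BE-XXX,
# bare #XXX, and keyword forms like "closes #XXX".
LINKED_ISSUE_RE = re.compile(
    r"FWI-BE-(\d+)|(?:closes|fixes|resolves)?\s*#(\d+)",
    re.IGNORECASE,
)


def extract_linked_issues(text: str) -> list[int]:
    """Return every issue number referenced in a PR title or body."""
    issues = []
    for match in LINKED_ISSUE_RE.finditer(text):
        issues.append(int(match.group(1) or match.group(2)))
    return issues
```

A PR titled `FWI-BE-123: Add new feature` with `Closes #456` in the body would link issues 123 and 456.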

Complete Workflow:

  1. Start Development:

    • Create branch with issue number: feature/FWI-BE-123
    • Issue automatically moves to "In Progress" status
    • Issue gets comment with branch details
  2. Submit for Review:

    • Create PR with issue reference: FWI-BE-123: Add new feature
    • Issue automatically moves to "Review" status
    • PR gets comment with status update
  3. Complete Work:

    • Merge PR with closing keywords: Closes #123
    • Issue automatically moves to "Done" status

Triggers:

  • PR opened, reopened, or synchronized
  • Targets main and develop branches

CI/CD Pipeline

  • Linting & Formatting: Runs black, ruff, and mypy
  • Testing: Runs all tests with pytest
  • Code Quality: Ensures all pre-commit hooks pass

Running Tests

# Run all tests
python3 -m pytest tests/ -v

# Run with coverage
python3 -m pytest --cov=app --cov-report=html

# Run specific test file
python3 -m pytest tests/test_simulation_service.py -v

Code Quality

The project uses several tools for code quality:

  • Black: Code formatting
  • Ruff: Linting and import sorting
  • MyPy: Type checking

# Format code
black app/ tests/

# Lint code
ruff check app/ tests/

# Type check
mypy app/

Adding New Endpoints

  1. Add schema to app/api/v1/schemas.py
  2. Add endpoint to app/api/v1/endpoints.py
  3. Add business logic to app/services/
  4. Add tests in tests/

βš™οΈ Environment Variables

Variable            Description            Default
HOST                Server host            0.0.0.0
PORT                Server port            8000
DEBUG               Debug mode             false
LOG_LEVEL           Logging level          INFO
LOG_FORMAT          Log format             json
OPENF1_API_URL      OpenF1 API URL         https://api.openf1.org
OPENF1_API_TIMEOUT  API timeout (seconds)  30
MODEL_PATH          ML model path          app/models/lap_time_predictor.joblib
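These variables are read at startup in app/core/config.py. A minimal stdlib sketch of the same defaults follows, standing in for the project's actual Pydantic-based settings class:

```python
import os
from dataclasses import dataclass, field


def _env(name: str, default: str) -> str:
    """Read an environment variable, falling back to the documented default."""
    return os.getenv(name, default)


@dataclass(frozen=True)
class Settings:
    # default_factory defers the lookup until a Settings instance is created.
    host: str = field(default_factory=lambda: _env("HOST", "0.0.0.0"))
    port: int = field(default_factory=lambda: int(_env("PORT", "8000")))
    debug: bool = field(default_factory=lambda: _env("DEBUG", "false").lower() == "true")
    log_level: str = field(default_factory=lambda: _env("LOG_LEVEL", "INFO"))
    log_format: str = field(default_factory=lambda: _env("LOG_FORMAT", "json"))
    openf1_api_url: str = field(default_factory=lambda: _env("OPENF1_API_URL", "https://api.openf1.org"))
    openf1_api_timeout: int = field(default_factory=lambda: int(_env("OPENF1_API_TIMEOUT", "30")))
    model_path: str = field(default_factory=lambda: _env("MODEL_PATH", "app/models/lap_time_predictor.joblib"))


settings = Settings()
```

A `.env` file is loaded by the application before these lookups run, which is why copying `env.example` to `.env` is all the local setup needed.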

👥 Contributing

Contributions, issues, and feature requests are welcome! We are excited to see the community get involved.

Please read our CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests.

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

For support and questions:

  • Create an issue in the repository
  • Check the API documentation at /docs
  • Review the logs for detailed error information

🏁 Roadmap

  • Database integration for persistent storage
  • Real-time WebSocket support
  • Advanced ML model training pipeline
  • Performance monitoring and metrics
  • Authentication and authorization
  • Rate limiting and API quotas
