
FastAPI AI Production Boilerplate

Simple starter repo for your Machine Learning/AI projects


Use Case

  • Build and serve machine learning models via production-ready APIs
  • Create scalable and easily deployable AI/ML backend services
  • Develop AI Agent applications based on FastAPI
  • Support end-to-end model experimentation, serving, and deployment

Features

  • ✅ Built-in security and API endpoint protection
  • ✅ Lightweight Dockerfile following best practices
  • ✅ Router-based serving for ML models and AI agents
  • ✅ Dependency and environment management using uv
  • ✅ Simple logging using loguru
  • ✅ Kubernetes manifests: Deployment, Service, HPA, Ingress
  • ✅ Ready for production and educational use
  • ✅ Linting and formatting using ruff
  • ✅ Jupyter notebooks for ML experiments and building AI agents
  • ✅ Rate limiter and middleware
  • ✅ Thorough documentation for easy understanding
  • ⏳ MCP features (in progress)
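The rate-limiting feature above lives in the project's middleware; as a minimal, framework-free sketch of the underlying idea (the class name and numbers here are illustrative, not the repo's actual implementation), a per-client sliding-window limiter can be written as:

```python
from __future__ import annotations

import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per client key."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have fallen out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False


limiter = SlidingWindowLimiter(limit=3, window=60.0)
print([limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)])
# → [True, True, True, False]
```

In a FastAPI middleware, `allow()` would be called with the client IP before the request is passed downstream, returning HTTP 429 when it comes back `False`.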

Project Structure

root-project/
├── app/
│   ├── main.py                # FastAPI entrypoint
│   ├── logger.py              # Logging setup (loguru)
│   ├── middleware.py          # Middleware logging and rate limiter
│   ├── model/                 # Model artifacts (e.g., pickle files)
│   └── routers/               # API routers (chatbot, predict, etc.)
│       ├── agent.py           # Agent research endpoints
│       ├── chatbot.py         # Chatbot endpoints (file upload, entity extraction, etc.)
│       └── predict.py         # Prediction endpoints (ML, summarization, etc.)
├── data/                      # Dataset
├── k8s/                       # Kubernetes
└── notebook/                  # Jupyter notebooks for experiments

This structure makes code management and feature development easier.

  • For LLM work, use notebooks such as notebook/langgraph.ipynb for experiments.

  • For ML work, use notebooks like notebook/bayesian-regression.ipynb, the data folder for datasets, and the app/model folder for model artifacts and training/prediction code.

  • Model serving and API endpoints are organized in the app/routers folder.
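As a sketch of the model-artifact workflow described above (file names and the toy model here are illustrative, not the repo's actual code), a trained model can be pickled into the model folder and loaded again at serving time:

```python
import pickle
import tempfile
from pathlib import Path


# A stand-in "model" with a predict method; in practice this would be a
# trained scikit-learn estimator or similar.
class MeanModel:
    def __init__(self, mean: float):
        self.mean = mean

    def predict(self, xs):
        return [self.mean for _ in xs]


artifact_dir = Path(tempfile.mkdtemp())  # stands in for app/model/
artifact = artifact_dir / "model.pkl"

# Save the artifact after training
with artifact.open("wb") as f:
    pickle.dump(MeanModel(mean=4.2), f)

# Load it at serving time (e.g. inside a router in app/routers/)
with artifact.open("rb") as f:
    model = pickle.load(f)

print(model.predict([1, 2, 3]))  # → [4.2, 4.2, 4.2]
```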

For more details, see the FastAPI Documentation.

Installation & Setup

Make sure you have uv installed.

# Clone repository
git clone https://github.com/wahyudesu/fastapi-ai-template

cd fastapi-ai-template

# Development
uv venv

# Activate the virtual environment (Windows)
.venv\Scripts\activate
# On macOS/Linux: source .venv/bin/activate

uv sync

# Copy and edit the .env file
cp .env.example .env
# Edit .env according to your needs:
# set the security token, and the Groq API key if you use an LLM
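The variable names below are illustrative guesses at what .env.example contains; check the actual file in the repo for the real keys:

```
# Token used to protect the API endpoints (hypothetical name)
API_TOKEN=change-me

# Only needed if you use the LLM endpoints (hypothetical name)
GROQ_API_KEY=your-groq-api-key
```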

Linter

uv run ruff check

Run on local

uv run uvicorn app.main:app --reload

After running the command above, your FastAPI application will be available at http://localhost:8000.

You can access the interactive Scalar API documentation at http://localhost:8000/scalar.

You can also access the default Swagger UI documentation at http://localhost:8000/docs.

To access the MCP endpoint, go to http://localhost:8000/mcp.

Docker

Build the Docker image with:

docker build -t fastapi-app .

Run the Docker container locally with:

docker run -p 8000:80 fastapi-app
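The repository ships its own Dockerfile; as a rough sketch of the uv-based pattern it follows (the details below are assumptions, not the actual file), a lightweight image looks like:

```
FROM python:3.12-slim
WORKDIR /code

# Copy dependency metadata first so Docker layer caching works
COPY pyproject.toml uv.lock ./
RUN pip install --no-cache-dir uv && uv sync --frozen --no-dev

COPY app ./app

# The container listens on port 80, matching `docker run -p 8000:80`
CMD ["uv", "run", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```

After `docker run -p 8000:80 fastapi-app`, the API is reachable on the host at http://localhost:8000.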

Deployment

DigitalOcean · Google Cloud · Render · Railway

You can use virtually any cloud provider to deploy your FastAPI application. Before deploying, make sure you understand the basic concepts.

You can read more about deployment concepts here.

This project is developed for modern LLMOps/ML pipelines and is ready for deployment on both cloud platforms and VPS.

🤝 Contributing

  1. Fork this repository.
  2. Create your branch: git checkout -b my-new-feature.
  3. Commit your changes: git commit -m 'Add some feature'.
  4. Push to the branch: git push origin my-new-feature.
  5. After your pull request is merged, you can safely delete your branch.

⏭️ What's Next?

I built this repo to be as minimal and simple as possible, while still reasonably feature-complete, so that beginners can easily develop projects from it. If you find it useful, please star it and share it with friends who might need it. If it gets enough traction, I plan to build a more advanced follow-up version with additional features such as JWT security, an ORM, Grafana, and deeper ML integration, focused specifically on ML and LLMOps.

FAQ

Why FastAPI?
  • FastAPI is a modern, high-performance web framework for building APIs with Python. For AI apps, it serves as the interface between your AI models and the outside world, allowing external systems to send data to your models and receive predictions or processing results. What makes FastAPI particularly appealing is its simplicity and elegance: it provides everything you need without unnecessary complexity.
What is Uvicorn?
  • Uvicorn is a lightning-fast ASGI server implementation for Python, commonly used to run FastAPI applications in production. It enables asynchronous request handling and is well-suited for modern web frameworks.
Is this boilerplate connected to a database?
  • You can add a database such as PostgreSQL, MySQL, or SQLite depending on your use case. If you are only serving models, a database may not be necessary. This repository is designed to be as simple as possible so users can get started quickly.
How about security?
  • The project includes built-in security features such as API endpoint protection, authentication, and rate limiting. You can further enhance security by configuring environment variables and using HTTPS in production.
What can I develop with this?
  • It depends on your project use case. For serving AI or ML models, this boilerplate is more than sufficient. If you need more features, you can add observability and monitoring tools such as Opik, Comet, or MLflow.
