Full-Stack FastAPI + Next.js Template for AI/LLM Applications


Production-ready project generator for AI/LLM applications with 20+ enterprise integrations.
Built with FastAPI, Next.js 15, PydanticAI/LangChain, and everything you need for professional business applications.

Why This Template • Features • Demo • Quick Start • Architecture • AI Agent • Logfire • Documentation

Related Projects

Building advanced AI agents? Check out pydantic-deep - a deep agent framework built on pydantic-ai with planning, filesystem, and subagent capabilities.


🎯 Why This Template

Building AI/LLM applications requires more than just an API wrapper. You need:

  • Type-safe AI agents with tool/function calling
  • Real-time streaming responses via WebSocket
  • Conversation persistence and history management
  • Production infrastructure - auth, rate limiting, observability
  • Enterprise integrations - background tasks, webhooks, admin panels

This template gives you all of that out of the box, with 20+ configurable integrations so you can focus on building your AI product, not boilerplate.

Perfect For

  • 🤖 AI Chatbots & Assistants - PydanticAI or LangChain agents with streaming responses
  • 📊 ML Applications - Background task processing with Celery/Taskiq
  • 🏢 Enterprise SaaS - Full auth, admin panel, webhooks, and more
  • 🚀 Startups - Ship fast with production-ready infrastructure

AI-Agent Friendly

Generated projects include CLAUDE.md and AGENTS.md files optimized for AI coding assistants (Claude Code, Codex, Copilot, Cursor, Zed). They follow progressive-disclosure best practices: a concise project overview with pointers to detailed docs when needed.


✨ Features

🤖 AI/LLM First

  • PydanticAI or LangChain - Choose your preferred AI framework
  • WebSocket Streaming - Real-time responses with full event access
  • Conversation Persistence - Save chat history to database
  • Custom Tools - Easily extend agent capabilities
  • Multi-provider Support - OpenAI, Anthropic, OpenRouter
  • Observability - Logfire for PydanticAI, LangSmith for LangChain

⚡ Backend (FastAPI)

  • FastAPI + Pydantic v2 - High-performance async API
  • Multiple Databases - PostgreSQL (async), MongoDB (async), SQLite
  • Authentication - JWT + Refresh tokens, API Keys, OAuth2 (Google)
  • Background Tasks - Celery, Taskiq, or ARQ
  • Django-style CLI - Custom management commands with auto-discovery

🎨 Frontend (Next.js 15)

  • React 19 + TypeScript + Tailwind CSS v4
  • AI Chat Interface - WebSocket streaming, tool call visualization
  • Authentication - HTTP-only cookies, auto-refresh
  • Dark Mode + i18n (optional)

🔌 20+ Enterprise Integrations

| Category | Integrations |
|---|---|
| AI Frameworks | PydanticAI, LangChain |
| Caching & State | Redis, fastapi-cache2 |
| Security | Rate limiting, CORS, CSRF protection |
| Observability | Logfire, LangSmith, Sentry, Prometheus |
| Admin | SQLAdmin panel with auth |
| Events | Webhooks, WebSockets |
| DevOps | Docker, GitHub Actions, GitLab CI, Kubernetes |

🎬 Demo

FastAPI Fullstack Generator Demo


🚀 Quick Start

Installation

# pip
pip install fastapi-fullstack

# uv (recommended)
uv tool install fastapi-fullstack

# pipx
pipx install fastapi-fullstack

Create Your Project

# Interactive wizard (recommended)
fastapi-fullstack new

# Quick mode with options
fastapi-fullstack create my_ai_app \
  --database postgresql \
  --auth jwt \
  --frontend nextjs

# Use presets for common setups
fastapi-fullstack create my_ai_app --preset production   # Full production setup
fastapi-fullstack create my_ai_app --preset ai-agent     # AI agent with streaming

# Minimal project (no extras)
fastapi-fullstack create my_ai_app --minimal

Start Development

After generating your project, follow these steps:

1. Install dependencies

cd my_ai_app
make install

Windows Users: The make command requires GNU Make, which is not available by default on Windows. Install it via Chocolatey (choco install make), use WSL, or run the raw commands manually. Each generated project includes a "Manual Commands Reference" section in its README with all commands.

2. Start the database

# PostgreSQL (with Docker)
make docker-db

3. Create and apply database migrations

⚠️ Important: Both commands are required: db-migrate creates the migration file, and db-upgrade applies it to the database.

# Create initial migration (REQUIRED first time)
make db-migrate
# Enter message: "Initial migration"

# Apply migrations to create tables
make db-upgrade

4. Create admin user

make create-admin
# Enter email and password when prompted

5. Start the backend

make run

6. Start the frontend (new terminal)

cd frontend
bun install
bun dev


Quick Start with Docker

Run everything in Docker:

make docker-up       # Start backend + database
make docker-frontend # Start frontend

Using the Project CLI

Each generated project has a CLI named after your project_slug. For example, if you created my_ai_app:

cd backend

# The CLI command is: uv run <project_slug> <command>
uv run my_ai_app server run --reload     # Start dev server
uv run my_ai_app db migrate -m "message" # Create migration
uv run my_ai_app db upgrade              # Apply migrations
uv run my_ai_app user create-admin       # Create admin user

Use make help to see all available Makefile shortcuts.


📸 Screenshots

Chat Interface

Light and dark mode chat screenshots.

Authentication

Register and login screenshots.

Observability

Logfire (PydanticAI) and LangSmith (LangChain) dashboards.

Admin, Monitoring & API

Celery Flower, SQLAdmin panel, and API documentation screenshots.

πŸ—οΈ Architecture

graph TB
    subgraph Frontend["Frontend (Next.js 15)"]
        UI[React Components]
        WS[WebSocket Client]
        Store[Zustand Stores]
    end

    subgraph Backend["Backend (FastAPI)"]
        API[API Routes]
        Services[Services Layer]
        Repos[Repositories]
        Agent[AI Agent]
    end

    subgraph Infrastructure
        DB[(PostgreSQL/MongoDB)]
        Redis[(Redis)]
        Queue[Celery/Taskiq]
    end

    subgraph External
        LLM[OpenAI/Anthropic]
        Webhook[Webhook Endpoints]
    end

    UI --> API
    WS <--> Agent
    API --> Services
    Services --> Repos
    Services --> Agent
    Repos --> DB
    Agent --> LLM
    Services --> Redis
    Services --> Queue
    Services --> Webhook

Layered Architecture

The backend follows a clean Repository + Service pattern:

graph LR
    A[API Routes] --> B[Services]
    B --> C[Repositories]
    C --> D[(Database)]

    B --> E[External APIs]
    B --> F[AI Agents]
| Layer | Responsibility |
|---|---|
| Routes | HTTP handling, validation, auth |
| Services | Business logic, orchestration |
| Repositories | Data access, queries |
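
A minimal sketch of how one request flows through these layers. The model, schema, and provider names (User, UserCreate, get_user_service) are illustrative, and the import paths assume the generated layout shown later in this README:

from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from app.db.models import User              # assumed path in the generated project
from app.api.deps import get_user_service   # assumed provider in the generated deps.py

router = APIRouter()

class UserCreate(BaseModel):
    email: str

class UserRepository:
    """Repository layer: data access and queries only."""

    def __init__(self, db: AsyncSession) -> None:
        self.db = db

    async def get_by_email(self, email: str) -> User | None:
        result = await self.db.execute(select(User).where(User.email == email))
        return result.scalar_one_or_none()

class UserService:
    """Service layer: business logic, no HTTP or SQL details."""

    def __init__(self, repo: UserRepository) -> None:
        self.repo = repo

    async def email_is_taken(self, email: str) -> bool:
        return await self.repo.get_by_email(email) is not None

@router.post("/users")
async def create_user(
    payload: UserCreate,
    service: UserService = Depends(get_user_service),
) -> dict:
    """Route layer: HTTP handling and validation only."""
    if await service.email_is_taken(payload.email):
        raise HTTPException(status_code=409, detail="Email already registered")
    return {"email": payload.email}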

See Architecture Documentation for details.


🤖 AI Agent

Choose between PydanticAI or LangChain when generating your project, with support for multiple LLM providers:

# PydanticAI with OpenAI (default)
fastapi-fullstack create my_app --ai-agent --ai-framework pydantic_ai

# PydanticAI with Anthropic
fastapi-fullstack create my_app --ai-agent --ai-framework pydantic_ai --llm-provider anthropic

# PydanticAI with OpenRouter
fastapi-fullstack create my_app --ai-agent --ai-framework pydantic_ai --llm-provider openrouter

# LangChain with OpenAI
fastapi-fullstack create my_app --ai-agent --ai-framework langchain

# LangChain with Anthropic
fastapi-fullstack create my_app --ai-agent --ai-framework langchain --llm-provider anthropic

Supported LLM Providers

| Framework | OpenAI | Anthropic | OpenRouter |
|---|---|---|---|
| PydanticAI | ✓ | ✓ | ✓ |
| LangChain | ✓ | ✓ | - |

PydanticAI Integration

Type-safe agents with full dependency injection:

# app/agents/assistant.py
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext
from sqlalchemy.ext.asyncio import AsyncSession

@dataclass
class Deps:
    user_id: str | None = None
    db: AsyncSession | None = None

agent = Agent[Deps, str](
    model="openai:gpt-4o-mini",
    system_prompt="You are a helpful assistant.",
)

@agent.tool
async def search_database(ctx: RunContext[Deps], query: str) -> list[dict]:
    """Search the database for relevant information."""
    # Access user context and database via ctx.deps
    ...
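
Running the agent is a single awaited call; a minimal sketch with illustrative deps values:

# Run with per-request dependencies (values are illustrative)
result = await agent.run(
    "What can you tell me about this user?",
    deps=Deps(user_id="user-123"),
)
# result.output holds the final response (older pydantic-ai releases expose result.data)
print(result.output)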

LangChain Integration

Flexible agents with LangGraph:

# app/agents/langchain_assistant.py
from langchain.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

@tool
def search_database(query: str) -> list[dict]:
    """Search the database for relevant information."""
    ...

agent = create_react_agent(
    model=ChatOpenAI(model="gpt-4o-mini"),
    tools=[search_database],
    prompt="You are a helpful assistant.",
)
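
create_react_agent returns a compiled LangGraph graph; a minimal invocation sketch (the message content is illustrative):

# The graph consumes and returns a state dict with a "messages" list
response = agent.invoke(
    {"messages": [{"role": "user", "content": "Search for recent signups"}]}
)
print(response["messages"][-1].content)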

WebSocket Streaming

Both frameworks use the same WebSocket endpoint with real-time streaming:

from fastapi import APIRouter, WebSocket

router = APIRouter()

@router.websocket("/ws")
async def agent_ws(websocket: WebSocket):
    await websocket.accept()
    user_input = await websocket.receive_text()  # simplified; real frames carry JSON

    # Works with both PydanticAI and LangChain
    async for event in agent.stream(user_input):
        await websocket.send_json({
            "type": "text_delta",
            "content": event.content
        })
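
On the other side of the socket, a minimal Python client sketch; the dev-server address is an assumption, and the generated Next.js frontend does the same through its useWebSocket hook:

import asyncio
import json

import websockets  # third-party client: pip install websockets

async def chat() -> None:
    # ws://localhost:8000/ws assumes the default dev-server address
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        await ws.send("Hello, agent!")
        async for raw in ws:
            event = json.loads(raw)
            if event["type"] == "text_delta":
                print(event["content"], end="", flush=True)

asyncio.run(chat())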

Observability

Each framework has its own observability solution:

| Framework | Observability | Dashboard |
|---|---|---|
| PydanticAI | Logfire | Agent runs, tool calls, token usage |
| LangChain | LangSmith | Traces, feedback, datasets |

See AI Agent Documentation for more.


📊 Observability

Logfire (for PydanticAI)

Logfire provides complete observability for your application - from AI agents to database queries. Built by the Pydantic team, it offers first-class support for the entire Python ecosystem.

graph LR
    subgraph Your App
        API[FastAPI]
        Agent[PydanticAI]
        DB[(Database)]
        Cache[(Redis)]
        Queue[Celery/Taskiq]
        HTTP[HTTPX]
    end

    subgraph Logfire
        Traces[Traces]
        Metrics[Metrics]
        Logs[Logs]
    end

    API --> Traces
    Agent --> Traces
    DB --> Traces
    Cache --> Traces
    Queue --> Traces
    HTTP --> Traces
| Component | What You See |
|---|---|
| PydanticAI | Agent runs, tool calls, LLM requests, token usage, streaming events |
| FastAPI | Request/response traces, latency, status codes, route performance |
| PostgreSQL/MongoDB | Query execution time, slow queries, connection pool stats |
| Redis | Cache hits/misses, command latency, key patterns |
| Celery/Taskiq | Task execution, queue depth, worker performance |
| HTTPX | External API calls, response times, error rates |

LangSmith (for LangChain)

LangSmith provides observability specifically designed for LangChain applications:

| Feature | Description |
|---|---|
| Traces | Full execution traces for agent runs and chains |
| Feedback | Collect user feedback on agent responses |
| Datasets | Build evaluation datasets from production data |
| Monitoring | Track latency, errors, and token usage |

LangSmith is automatically configured when you choose LangChain:

# .env
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY=your-api-key
LANGCHAIN_PROJECT=my_project
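
Beyond the automatic LangChain traces, your own functions can be traced with the LangSmith SDK's traceable decorator; a minimal sketch (the function is illustrative):

from langsmith import traceable

@traceable(name="summarize_ticket")  # appears as its own run in LangSmith
def summarize_ticket(text: str) -> str:
    ...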

Configuration

Enable Logfire and select which components to instrument:

fastapi-fullstack new
# ✓ Enable Logfire observability
#   ✓ Instrument FastAPI
#   ✓ Instrument Database
#   ✓ Instrument Redis
#   ✓ Instrument Celery
#   ✓ Instrument HTTPX

Usage

# Automatic instrumentation in app/main.py
import logfire

logfire.configure()
logfire.instrument_fastapi(app)
logfire.instrument_asyncpg()
logfire.instrument_redis()
logfire.instrument_httpx()

# Manual spans for custom logic
with logfire.span("process_order", order_id=order.id):
    await validate_order(order)
    await charge_payment(order)
    await send_confirmation(order)

For more details, see Logfire Documentation.


πŸ› οΈ Django-style CLI

Each generated project includes a powerful CLI inspired by Django's management commands:

Built-in Commands

# Server
my_app server run --reload
my_app server routes

# Database (Alembic wrapper)
my_app db init
my_app db migrate -m "Add users"
my_app db upgrade

# Users
my_app user create --email admin@example.com --superuser
my_app user list

Custom Commands

Create your own commands with auto-discovery:

# app/commands/seed.py
from app.commands import command, info, success
import click

@command("seed", help="Seed database with test data")
@click.option("--count", "-c", default=10, type=int)
@click.option("--dry-run", is_flag=True)
def seed_database(count: int, dry_run: bool):
    """Seed the database with sample data."""
    if dry_run:
        info(f"[DRY RUN] Would create {count} records")
        return

    # Your logic here
    success(f"Created {count} records!")

Commands are automatically discovered from app/commands/ - just create a file and use the @command decorator.

my_app cmd seed --count 100
my_app cmd seed --dry-run

πŸ“ Generated Project Structure

my_project/
├── backend/
│   ├── app/
│   │   ├── main.py              # FastAPI app with lifespan
│   │   ├── api/
│   │   │   ├── routes/v1/       # Versioned API endpoints
│   │   │   ├── deps.py          # Dependency injection
│   │   │   └── router.py        # Route aggregation
│   │   ├── core/                # Config, security, middleware
│   │   ├── db/models/           # SQLAlchemy/MongoDB models
│   │   ├── schemas/             # Pydantic schemas
│   │   ├── repositories/        # Data access layer
│   │   ├── services/            # Business logic
│   │   ├── agents/              # AI agents with centralized prompts
│   │   ├── commands/            # Django-style CLI commands
│   │   └── worker/              # Background tasks
│   ├── cli/                     # Project CLI
│   ├── tests/                   # pytest test suite
│   └── alembic/                 # Database migrations
├── frontend/
│   ├── src/
│   │   ├── app/                 # Next.js App Router
│   │   ├── components/          # React components
│   │   ├── hooks/               # useChat, useWebSocket, etc.
│   │   └── stores/              # Zustand state management
│   └── e2e/                     # Playwright tests
├── docker-compose.yml
├── Makefile
└── README.md

Generated projects include version metadata in pyproject.toml for tracking:

[tool.fastapi-fullstack]
generator_version = "0.1.5"
generated_at = "2024-12-21T10:30:00+00:00"

βš™οΈ Configuration Options

Core Options

| Option | Values | Description |
|---|---|---|
| Database | postgresql, mongodb, sqlite, none | Async by default |
| ORM | sqlalchemy, sqlmodel | SQLModel for simplified syntax |
| Auth | jwt, api_key, both, none | JWT includes user management |
| OAuth | none, google | Social login |
| AI Framework | pydantic_ai, langchain | Choose your AI agent framework |
| LLM Provider | openai, anthropic, openrouter | OpenRouter only with PydanticAI |
| Background Tasks | none, celery, taskiq, arq | Distributed queues |
| Frontend | none, nextjs | Next.js 15 + React 19 |

Presets

| Preset | Description |
|---|---|
| --preset production | Full production setup with Redis, Sentry, Kubernetes, Prometheus |
| --preset ai-agent | AI agent with WebSocket streaming and conversation persistence |
| --minimal | Minimal project with no extras |

Integrations

Select what you need:

fastapi-fullstack new
# ✓ Redis (caching/sessions)
# ✓ Rate limiting (slowapi)
# ✓ Pagination (fastapi-pagination)
# ✓ Admin Panel (SQLAdmin)
# ✓ AI Agent (PydanticAI or LangChain)
# ✓ Webhooks
# ✓ Sentry
# ✓ Logfire / LangSmith
# ✓ Prometheus
# ... and more

📚 Documentation

| Document | Description |
|---|---|
| Architecture | Repository + Service pattern, layered design |
| Frontend | Next.js setup, auth, state management |
| AI Agent | PydanticAI, tools, WebSocket streaming |
| Observability | Logfire integration, tracing, metrics |
| Deployment | Docker, Kubernetes, production setup |
| Development | Local setup, testing, debugging |
| Changelog | Version history and release notes |

Star History

Star History Chart


πŸ™ Inspiration

This project is inspired by:


🤝 Contributing

Contributions are welcome! Please read our Contributing Guide for details.


📄 License

MIT License - see LICENSE for details.


Made with ❀️ by VStorm