A blazing-fast, local-first AI chat application inspired by T3 Chat's performance philosophy. Built with Next.js 15, TypeScript, and IndexedDB for instant responses and seamless offline functionality.
This project takes T3 Chat's core insight - that local-first architecture can deliver 2x ChatGPT speed - and implements it with modern tooling. Special thanks to @t3dotgg for the inspiration and the excellent breakdown in *How I Built T3 Chat in 5 Days*.
- ⚡ Instant Navigation - Local-first architecture with IndexedDB via Dexie
- 🤖 Multi-Provider Support - Anthropic, OpenAI, Groq, DeepSeek with easy model switching
- 🎨 Beautiful UI - Tailwind CSS with Catppuccin Macchiato theme
- 📝 System Prompts - Customizable prompts for different use cases
- 🔄 Real-time Streaming - Vercel AI SDK with edge runtime support
- 💾 Persistent Storage - All chats and messages stored locally
- 🚀 Optimized Performance - React optimizations for 60+ FPS rendering
I'm using a local-first streaming approach: IndexedDB via Dexie.js for persistence, with the Vercel AI SDK handling streaming responses.
Here's how it's implemented:
- Database Layer (`db.ts`) - sketched below:
  - Uses Dexie to manage IndexedDB
  - Two tables: `chats` and `messages`
  - Full CRUD operations for chats and messages
  - Proper indexing for efficient queries
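Here's a minimal sketch of what that layer looks like with Dexie (the table and field names are illustrative, not the project's exact schema):

```ts
// lib/db.ts - minimal Dexie setup sketch
import Dexie, { type Table } from 'dexie';

export interface Chat {
  id?: number;
  title: string;
  createdAt: Date;
}

export interface Message {
  id?: number;
  chatId: number;
  role: 'user' | 'assistant';
  content: string;
  createdAt: Date;
}

class ChatDatabase extends Dexie {
  chats!: Table<Chat, number>;
  messages!: Table<Message, number>;

  constructor() {
    super('mk3y-chat');
    this.version(1).stores({
      // '++id' = auto-incrementing primary key; the other fields are indexes
      chats: '++id, createdAt',
      messages: '++id, chatId, createdAt',
    });
  }
}

export const db = new ChatDatabase();
```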
- Reactive Data Layer (`usePersistentChat.ts`) - sketched below:
  - Uses `useLiveQuery` from `dexie-react-hooks` for reactive queries
  - Automatic UI updates when data changes in IndexedDB
  - Real-time chat and message loading
  - Proper message persistence
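A sketch of the reactive layer, assuming the schema above (the hook name matches the project; the internals are illustrative):

```ts
// hooks/usePersistentChat.ts - reactive query sketch
import { useLiveQuery } from 'dexie-react-hooks';
import { db, type Message } from '@/lib/db';

export function usePersistentChat(chatId: number) {
  // Re-runs automatically whenever the underlying IndexedDB rows change
  const messages = useLiveQuery(
    () => db.messages.where('chatId').equals(chatId).sortBy('createdAt'),
    [chatId],
  );

  async function appendMessage(role: Message['role'], content: string) {
    // Persist immediately; useLiveQuery picks up the change and re-renders
    await db.messages.add({ chatId, role, content, createdAt: new Date() });
  }

  return { messages: messages ?? [], appendMessage };
}
```

Because `useLiveQuery` re-runs whenever the underlying rows change, components never need to refetch manually.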
- Message Flow - sketched below:
  - Messages are stored locally in IndexedDB
  - UI renders directly from IndexedDB data
  - New messages are immediately persisted
  - Changes trigger automatic UI updates
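Putting the two together, the flow looks roughly like this (the `/api/chat` request shape and the helper below are assumptions for illustration):

```ts
// Sketch of the message flow: persist the user message, stream the reply,
// and update the assistant row as chunks arrive. Assumes the db.ts sketch above.
import { db } from '@/lib/db';

export async function sendMessage(chatId: number, content: string) {
  // 1. Persist the user message first - the UI updates instantly via useLiveQuery
  await db.messages.add({ chatId, role: 'user', content, createdAt: new Date() });

  // 2. Placeholder assistant row that streaming chunks will fill in
  const assistantId = await db.messages.add({
    chatId, role: 'assistant', content: '', createdAt: new Date(),
  });

  // 3. Stream from the API route and persist each chunk as it arrives
  const res = await fetch('/api/chat', {
    method: 'POST',
    body: JSON.stringify({ chatId, content }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
    // 4. Each write triggers a reactive re-render from IndexedDB
    await db.messages.update(assistantId, { content: text });
  }
}
```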
- Local-First Benefits:
  - Instant data availability
  - Real-time UI updates
  - Smooth user experience
The local-first approach is what makes the interface feel fast, especially when switching between chats.
- Chat deletion
- System prompts with customization
- Multi-model support (Anthropic, OpenAI, Groq, DeepSeek)
- Local persistence with IndexedDB
- File Upload Support - Drag-and-drop with multimodal AI integration
  - Image, PDF, and document support
  - In-chat previews with react-pdf
  - Thumbnail generation for images
- Tab Management - Multiple concurrent chats with easy switching
- Search Functionality - Full-text search across all chats and messages
- Improved Code Blocks - Syntax highlighting with better performance
- Cross-Device Sync - Optional P2P sync using WebRTC
  - Export/import for manual backup
  - Room-based sync with QR codes
  - Privacy-first, no central server
- Enhanced Model Selector - Favorites, comparison mode, quick switching
- Markdown Optimization - Chunked rendering for 60+ FPS
- Virtual Scrolling - Handle thousands of messages smoothly
- Easy Deployment - One-click deploy to Vercel, Docker, or self-hosted
- Environment Management - Guided setup with validation
- Authentication - Optional user accounts with data isolation
- Team Collaboration - Shared workspaces with permissions
- Analytics Dashboard - Usage metrics and model performance
- Plugin System - Extensible architecture for custom features
See the Implementation Plan for detailed technical specifications.
- Framework: Next.js 15 with App Router and Turbopack
- Language: TypeScript with strict mode
- Styling: Tailwind CSS with Catppuccin Macchiato theme
- Database: IndexedDB via Dexie.js for local-first persistence
- AI Integration: Vercel AI SDK with multiple provider support
- State Management: React hooks with reactive Dexie queries
- Package Manager: Bun for fast installs and builds
- Node.js 20+ (LTS recommended)
- Bun package manager (`npm install -g bun`)
- API keys for the AI providers you want to use:
  - `ANTHROPIC_API_KEY` - Get from Anthropic Console
  - `OPENAI_API_KEY` - Get from OpenAI Platform
  - `GROQ_API_KEY` - Get from Groq Cloud
  - `DEEPSEEK_API_KEY` - Get from DeepSeek Platform
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/mk3y-chat.git
  cd mk3y-chat
  ```

- Install dependencies:

  ```bash
  bun install
  ```

- Set up environment variables:

  ```bash
  cp .env.example .env
  ```

- Add your API keys to `.env`:

  ```env
  # Required: At least one AI provider key
  ANTHROPIC_API_KEY=your_anthropic_key
  OPENAI_API_KEY=your_openai_key
  GROQ_API_KEY=your_groq_key
  DEEPSEEK_API_KEY=your_deepseek_key

  # Optional: Analytics (if you want metrics)
  # POSTHOG_API_KEY=your_posthog_key
  # AXIOM_API_KEY=your_axiom_key
  ```

- Start the development server:

  ```bash
  bun run dev
  ```
- Open http://localhost:3000 in your browser.
Note: The app works with just one API key. Add multiple providers for model variety and fallback options.
- `bun run dev` - Start development server with Turbopack
- `bun run build` - Create production build
- `bun run start` - Start production server
- `bun run lint` - Run ESLint
- `bun run format` - Format code with Prettier
- `bun run test:format` - Check code formatting
```
mk3y-chat/
├── src/
│   ├── app/                     # Next.js app router pages and API routes
│   │   ├── api/chat/            # Streaming AI endpoint with edge runtime
│   │   └── page.tsx             # Main chat interface
│   ├── components/              # React components
│   │   ├── chat/                # Chat-related components
│   │   ├── ui/                  # Reusable UI components
│   │   └── settings/            # Settings and configuration
│   ├── hooks/                   # Custom React hooks
│   │   └── usePersistentChat.ts # Main chat data hook
│   ├── lib/                     # Core utilities
│   │   ├── db.ts                # Dexie database configuration
│   │   └── constants/           # Models and prompts configuration
│   └── types/                   # TypeScript definitions
```
Based on T3 Chat's architecture, this project implements several performance optimizations:
- Local-First Architecture: All data operations happen locally first via IndexedDB
- Reactive Queries: UI renders directly from local data using Dexie's reactive hooks
- Optimistic Updates: Changes appear immediately without waiting for network
- Smart Prefetching: Preload data for instant navigation
- Markdown Chunking: Stream and render markdown in blocks for smooth updates
- Edge Runtime: API routes use edge runtime for faster streaming (see the sketch below)
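For illustration, an edge-runtime streaming route with the Vercel AI SDK might look like this (the AI SDK v4-style `streamText`/`toDataStreamResponse` calls and the model choice are assumptions, not the project's exact code):

```ts
// app/api/chat/route.ts - illustrative sketch of an edge streaming route
import { streamText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// Run on the edge runtime so tokens start streaming with minimal latency
export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: anthropic('claude-3-5-sonnet-latest'), // provider/model are illustrative
    messages,
  });

  // Streams tokens to the client as they are generated
  return result.toDataStreamResponse();
}
```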
```bash
docker build -t mk3y-chat .
docker run -p 3000:3000 --env-file .env mk3y-chat
```
See the deployment guide for detailed instructions.
I welcome contributions!
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- T3 Chat by @t3dotgg - Performance inspiration and architecture patterns
- Dexie.js - Making IndexedDB actually usable
- Vercel AI SDK - Streaming AI responses
- Catppuccin - Beautiful color scheme
MIT License - Build something awesome!
Made with ❤️ by 1337Hero
⭐ Star us on GitHub