⚑ Faster Chat


About

A blazing-fast, local-first AI chat application inspired by T3 Chat's performance philosophy. Built with Next.js 15, TypeScript, and IndexedDB for instant responses and seamless offline functionality.

This project takes T3 Chat's core insight - that a local-first architecture can make the UI feel roughly twice as fast as ChatGPT - and implements it with modern tooling. Special thanks to @t3dotgg for the inspiration and the excellent breakdown in How I Built T3 Chat in 5 Days.

Features

  • ⚑ Instant Navigation - Local-first architecture with IndexedDB via Dexie
  • πŸ€– Multi-Provider Support - Anthropic, OpenAI, Groq, DeepSeek with easy model switching
  • 🎨 Beautiful UI - Tailwind CSS with Catppuccin Macchiato theme
  • πŸ“ System Prompts - Customizable prompts for different use cases
  • πŸ”„ Real-time Streaming - Vercel AI SDK with edge runtime support
  • πŸ’Ύ Persistent Storage - All chats and messages stored locally
  • πŸš€ Optimized Performance - React optimizations for 60+ FPS rendering

Structure

I use a local-first streaming approach: chats and messages live in IndexedDB (via Dexie.js), and responses stream in through the Vercel AI SDK.

Here's how it's implemented:

  1. Database Layer (db.ts):

    • Uses Dexie to manage IndexedDB
    • Two tables: chats and messages
    • Full CRUD operations for chats and messages
    • Indexes on the fields used for lookups (see the schema sketch after this list)
  2. Reactive Data Layer (usePersistentChat.ts):

    • Uses useLiveQuery from dexie-react-hooks for reactive queries
    • Automatic UI updates when data changes in IndexedDB
    • Real-time chat and message loading
    • Messages are persisted as part of the same flow (see the hook sketch below)
  3. Message Flow:

    • Messages are stored locally in IndexedDB
    • UI renders directly from IndexedDB data
    • New messages are immediately persisted
    • Changes trigger automatic UI updates
  4. Local-First Benefits:

    • Instant data availability
    • Real-time UI updates
    • Smooth user experience
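
For reference, here is a minimal sketch of what the Dexie layer in db.ts might look like. The table names follow the description above, but the exact fields and indexes are illustrative, not the repository's actual schema:

```ts
// db.ts: minimal Dexie schema sketch (fields and indexes are illustrative)
import Dexie, { type Table } from "dexie";

export interface Chat {
  id?: number; // "++id" below makes this an auto-incremented primary key
  title: string;
  createdAt: Date;
}

export interface Message {
  id?: number;
  chatId: number; // points at the owning chat
  role: "user" | "assistant" | "system";
  content: string;
  createdAt: Date;
}

export class ChatDB extends Dexie {
  chats!: Table<Chat, number>;
  messages!: Table<Message, number>;

  constructor() {
    super("faster-chat");
    // Only the listed fields are indexed; "chatId" and "createdAt" back
    // the per-chat message queries described above.
    this.version(1).stores({
      chats: "++id, createdAt",
      messages: "++id, chatId, createdAt",
    });
  }
}

export const db = new ChatDB();

// Example CRUD helper: delete a chat and its messages atomically.
export const deleteChat = (chatId: number) =>
  db.transaction("rw", db.chats, db.messages, async () => {
    await db.messages.where("chatId").equals(chatId).delete();
    await db.chats.delete(chatId);
  });
```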

The local-first approach is what makes the interface feel fast, especially when switching between chats.
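
The reactive layer follows the same spirit. Here is a sketch of the core pattern; aside from useLiveQuery itself, the names are illustrative:

```ts
// usePersistentChat.ts: core pattern only (the real hook does more)
import { useLiveQuery } from "dexie-react-hooks";
import { db, type Message } from "@/lib/db";

export function usePersistentChat(chatId: number) {
  // useLiveQuery re-runs the query and re-renders whenever the underlying
  // IndexedDB tables change, so there is no manual cache invalidation.
  const messages = useLiveQuery(
    () => db.messages.where("chatId").equals(chatId).sortBy("createdAt"),
    [chatId],        // re-subscribe when the active chat changes
    [] as Message[], // default value while the first query resolves
  );

  // Writes go straight to IndexedDB; the live query above updates the UI.
  const appendMessage = (role: Message["role"], content: string) =>
    db.messages.add({ chatId, role, content, createdAt: new Date() });

  return { messages, appendMessage };
}
```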

Roadmap

βœ… Completed

  • Delete chats functionality
  • System prompts with customization
  • Multi-model support (Anthropic, OpenAI, Groq, DeepSeek)
  • Local persistence with IndexedDB

🚧 In Progress / Planned Features

Phase 1: File Uploads & Enhanced UX

  • File Upload Support - Drag-and-drop with multimodal AI integration
    • Image, PDF, and document support
    • In-chat previews with react-pdf
    • Thumbnail generation for images
  • Tab Management - Multiple concurrent chats with easy switching
  • Search Functionality - Full-text search across all chats and messages
  • Improved Code Blocks - Syntax highlighting with better performance

Phase 2: Sync & Collaboration

  • Cross-Device Sync - Optional P2P sync using WebRTC
    • Export/import for manual backup
    • Room-based sync with QR codes
    • Privacy-first, no central server
  • Enhanced Model Selector - Favorites, comparison mode, quick switching

Phase 3: Performance & Deployment

  • Markdown Optimization - Chunked rendering for 60+ FPS
  • Virtual Scrolling - Handle thousands of messages smoothly
  • Easy Deployment - One-click deploy to Vercel, Docker, or self-hosted
  • Environment Management - Guided setup with validation

Phase 4: Enterprise Features

  • Authentication - Optional user accounts with data isolation
  • Team Collaboration - Shared workspaces with permissions
  • Analytics Dashboard - Usage metrics and model performance
  • Plugin System - Extensible architecture for custom features

See Implementation Plan for detailed technical specifications.

Tech Stack

  • Framework: Next.js 15 with App Router and Turbopack
  • Language: TypeScript with strict mode
  • Styling: Tailwind CSS with Catppuccin Macchiato theme
  • Database: IndexedDB via Dexie.js for local-first persistence
  • AI Integration: Vercel AI SDK with multiple provider support
  • State Management: React hooks with reactive Dexie queries
  • Package Manager: Bun for fast installs and builds

Prerequisites

  • Bun (used for installs, builds, and the dev server)
  • An API key for at least one supported provider (Anthropic, OpenAI, Groq, or DeepSeek)

Getting Started

  1. Clone the repository:

     git clone https://github.com/1337hero/faster-next-chat.git
     cd faster-next-chat

  2. Install dependencies:

     bun install

  3. Set up environment variables:

     cp .env.example .env

  4. Add your API keys to .env:

     # Required: at least one AI provider key
     ANTHROPIC_API_KEY=your_anthropic_key
     OPENAI_API_KEY=your_openai_key
     GROQ_API_KEY=your_groq_key
     DEEPSEEK_API_KEY=your_deepseek_key

     # Optional: analytics (if you want metrics)
     # POSTHOG_API_KEY=your_posthog_key
     # AXIOM_API_KEY=your_axiom_key

  5. Start the development server:

     bun run dev

  6. Open http://localhost:3000 in your browser.

Note: The app works with just one API key. Add multiple providers for model variety and fallback options.

Development Commands

  • bun run dev - Start development server with Turbopack
  • bun run build - Create production build
  • bun run start - Start production server
  • bun run lint - Run ESLint
  • bun run format - Format code with Prettier
  • bun run test:format - Check code formatting

Project Structure

faster-chat/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ app/              # Next.js app router pages and API routes
β”‚   β”‚   β”œβ”€β”€ api/chat/     # Streaming AI endpoint with edge runtime
β”‚   β”‚   └── page.tsx      # Main chat interface
β”‚   β”œβ”€β”€ components/       # React components
β”‚   β”‚   β”œβ”€β”€ chat/         # Chat-related components
β”‚   β”‚   β”œβ”€β”€ ui/           # Reusable UI components
β”‚   β”‚   └── settings/     # Settings and configuration
β”‚   β”œβ”€β”€ hooks/            # Custom React hooks
β”‚   β”‚   └── usePersistentChat.ts  # Main chat data hook
β”‚   β”œβ”€β”€ lib/              # Core utilities
β”‚   β”‚   β”œβ”€β”€ db.ts         # Dexie database configuration
β”‚   β”‚   └── constants/    # Models and prompts configuration
β”‚   └── types/            # TypeScript definitions

Performance Features

Based on T3 Chat's architecture, this project implements several performance optimizations:

  • Local-First Architecture: All data operations happen locally first via IndexedDB
  • Reactive Queries: UI renders directly from local data using Dexie's reactive hooks
  • Optimistic Updates: Changes appear immediately without waiting for the network (sketched after this list)
  • Smart Prefetching: Preload data for instant navigation
  • Markdown Chunking: Stream and render markdown in blocks for smooth updates
  • Edge Runtime: API routes use edge runtime for faster streaming
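
To make the optimistic-update flow concrete, here is a sketch under the same illustrative schema as above. It reads the response as plain text for brevity; the AI SDK's data stream format needs real parsing (for example via its client-side helpers) in practice:

```ts
// Optimistic, local-first send: persist first, stream into IndexedDB after.
// Illustrative only; response parsing is simplified to plain text.
import { db } from "@/lib/db";

export async function sendAndStream(chatId: number, content: string) {
  // 1. Persist the user message immediately. The live query re-renders the
  //    UI at once, so nothing waits on the network.
  await db.messages.add({ chatId, role: "user", content, createdAt: new Date() });

  // 2. Insert an empty assistant row to stream into.
  const assistantId = await db.messages.add({
    chatId, role: "assistant", content: "", createdAt: new Date(),
  });

  // 3. Stream the reply, patching the stored row as chunks arrive. Each
  //    update flows back to the UI through the same reactive query.
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages: [{ role: "user", content }] }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let text = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
    await db.messages.update(assistantId, { content: text });
  }
}
```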

Deployment

Quick Deploy to Vercel

Deploy with Vercel

Docker

docker build -t faster-chat .
docker run -p 3000:3000 --env-file .env faster-chat

Self-Hosted

See the deployment guide for detailed instructions.

Contributing

I welcome contributions!

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

Acknowledgments

  • @t3dotgg and T3 Chat for the performance philosophy and the "How I Built T3 Chat in 5 Days" write-up that inspired this project

License

MIT License - Build something awesome!


Made with ❀️ by 1337Hero
⭐ Star us on GitHub