A modern AI-powered chat application built with Stream Chat, OpenAI, and live web search. This full-stack app provides an intelligent writing assistant for content creation, research, and real-time collaboration.
Live Demo: https://ai-chat-assistant-1-sct6.onrender.com/
Maintainer: Janki Parmar
## Table of Contents

- Features
- Architecture
- Project Structure
- Prerequisites
- Quick Start (Local)
- Environment Variables
- Running the App
- Deployment (Render – used in this project)
- How GetStream Works in This App
- AI Agent System
- UI Stack
- API Endpoints
- Security
- Troubleshooting
- Contributing
- License
## Features

- Real-time Chat: Powered by GetStream for reliable, low-latency messaging.
- AI Writing Assistant: OpenAI integration for content, summaries, and rewriting.
- Live Web Search: Uses the Tavily API to fetch current information.
- Modern UI: React + Tailwind with light/dark mode.
- Prompt Library: Ready-made prompts (business, content, communication, creative).
- Agent Management: Create/stop AI agents per channel with auto-cleanup.
- JWT Auth: Secure, short-lived tokens issued by the backend.
- Responsive: Mobile-first design and accessible components.
## Architecture

### Backend (nodejs-ai-assistant)

- Node.js/Express server
- Stream Chat server-side auth & channel utilities
- OpenAI for AI responses
- Tavily for web search
- Agent lifecycle with automatic cleanup on inactivity

### Frontend (react-stream-ai-assistant)

- React + TypeScript
- Stream Chat React components
- Tailwind CSS + shadcn/ui for styling
- Vite dev/build tooling
## Project Structure

```text
chat-ai-app/
├─ nodejs-ai-assistant/           # Backend (Express, Stream, OpenAI, Tavily)
│  ├─ src/                        # Source code (controllers, routes, services)
│  ├─ package.json
│  ├─ .env.example
│  └─ (tsconfig.json | js files)
│
└─ react-stream-ai-assistant/     # Frontend (React, Vite, Tailwind, Stream Chat)
   ├─ src/
   ├─ index.html
   ├─ package.json
   ├─ .env.example
   └─ vite.config.ts
```

Folder names above match this README. If your repo differs, keep the env keys and commands the same but adjust the paths.
## Prerequisites

- Node.js 20+
- npm or yarn
- Accounts/Keys: GetStream, OpenAI, Tavily
## Quick Start (Local)

Clone and enter the project root:

```bash
git clone <your-repository-url>
cd chat-ai-app
```

### Backend setup

```bash
cd nodejs-ai-assistant
cp .env.example .env   # then fill in values
npm install
npm run dev            # or: npm start
```

Default dev server: http://localhost:3000

### Frontend setup

```bash
cd ../react-stream-ai-assistant
cp .env.example .env   # then fill in values
npm install
npm run dev
```

Vite dev server: http://localhost:8080 (or the URL shown in the terminal)
## Environment Variables

Create `.env` files in both the backend and frontend folders.

### Backend (nodejs-ai-assistant/.env)

```bash
# GetStream (https://getstream.io/dashboard)
STREAM_API_KEY=your_stream_api_key_here
STREAM_API_SECRET=your_stream_api_secret_here

# OpenAI (https://platform.openai.com/api-keys)
OPENAI_API_KEY=your_openai_api_key_here

# Tavily (https://tavily.com)
TAVILY_API_KEY=your_tavily_api_key_here

# Optional
PORT=3000
NODE_ENV=development
CORS_ORIGIN=http://localhost:8080
TOKEN_TTL_SECONDS=3600
```

### Frontend (react-stream-ai-assistant/.env)

```bash
# Stream Chat public key for the browser
VITE_STREAM_API_KEY=your_stream_api_key_here

# Backend URL
VITE_BACKEND_URL=http://localhost:3000
```

Ensure `VITE_BACKEND_URL` points to your running backend (local or deployed).
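A missing key usually surfaces later as an opaque runtime error, so it can help to fail fast at backend startup. A minimal sketch, assuming a hypothetical `requireEnv` helper (not part of this codebase):

```javascript
// Hypothetical startup check for the backend env keys listed above.
const REQUIRED_KEYS = [
  "STREAM_API_KEY",
  "STREAM_API_SECRET",
  "OPENAI_API_KEY",
  "TAVILY_API_KEY",
];

function requireEnv(env, keys) {
  const missing = keys.filter((k) => !env[k] || env[k].trim() === "");
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
  // Optional settings fall back to the defaults used in this README.
  return {
    ...env,
    PORT: env.PORT || "3000",
    CORS_ORIGIN: env.CORS_ORIGIN || "http://localhost:8080",
  };
}

// Usage at server startup:
// const config = requireEnv(process.env, REQUIRED_KEYS);
```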
## Running the App

### Backend

```bash
cd nodejs-ai-assistant
npm run dev   # or: npm start
```

### Frontend

```bash
cd react-stream-ai-assistant
npm run dev
```

Open your browser to the printed Vite URL (e.g., http://localhost:8080).
## Deployment (Render – used in this project)

Your live app is hosted here: https://ai-chat-assistant-1-sct6.onrender.com/

You can deploy both backend and frontend on Render as separate services:

### 1. Backend (Web Service)

- Push your repo to GitHub.
- In Render, New > Web Service → connect your repo.
- Root Directory: `nodejs-ai-assistant`
- Environment: Node
- Build Command:
  - If TypeScript: `npm install && npm run build`
  - If JavaScript only: `npm install`
- Start Command:
  - If TypeScript build: `npm run start`
  - If JavaScript only: `npm start` (or `node server.js`)
- Environment Variables: add
  - `STREAM_API_KEY`
  - `STREAM_API_SECRET`
  - `OPENAI_API_KEY`
  - `TAVILY_API_KEY`
  - `PORT=3000` (Render sets `PORT` automatically; your app should use it)
  - `CORS_ORIGIN` = URL of your frontend Render site
- Deploy. Note the backend URL Render gives you.

### 2. Frontend (Static Site)

- In Render, New > Static Site → connect the same repo.
- Root Directory: `react-stream-ai-assistant`
- Build Command: `npm install && npm run build`
- Publish Directory: `dist`
- Environment Variables:
  - `VITE_STREAM_API_KEY` = your Stream API key
  - `VITE_BACKEND_URL` = the Render backend URL from step 1
- Deploy. Visit the static site URL (this is your public app).

Tip: If you prefer a single "monolithic" Web Service, you can serve the built frontend from Express. In that case, point Express to the frontend `dist/` directory and deploy only one service.
## How GetStream Works in This App

### Core Concepts

- Client – the browser SDK handles realtime messaging and presence
- Channels – chat rooms where users send messages
- Users – authenticated entities with Stream-issued JWTs from your backend
- Messages – text, attachments, reactions, threads
- Tokens – short-lived JWTs signed server-side
### Integration Flow

```mermaid
graph TD
    A[Frontend React App] --> B[Stream Chat React Components]
    B --> C[Stream Chat API]
    C --> D[Backend Node.js Server]
    D --> E[OpenAI API]
    D --> F[Tavily Web Search]
    D --> G[AI Agent Management]
```
## AI Agent System

### Lifecycle

1. Create – agent is created per channel when requested
2. Initialize – wires up OpenAI + web search context
3. Handle – processes messages and responds
4. Search – Tavily fetches current info when needed
5. Cleanup – auto-disposes after inactivity
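The lifecycle above can be sketched as an in-memory registry with one agent per channel; `AgentRegistry` and its timeout are illustrative, not the actual backend code:

```javascript
// Hypothetical per-channel agent bookkeeping with inactivity cleanup.
class AgentRegistry {
  constructor(disposeAfterMs = 15 * 60 * 1000) {
    this.disposeAfterMs = disposeAfterMs;
    this.agents = new Map(); // channelId -> { lastActive, timer }
  }

  start(channelId) {
    if (this.agents.has(channelId)) return this.agents.get(channelId);
    const agent = { channelId, lastActive: Date.now(), timer: null };
    this.agents.set(channelId, agent);
    this.touch(channelId);
    return agent;
  }

  // Called on every message the agent handles; resets the cleanup timer.
  touch(channelId) {
    const agent = this.agents.get(channelId);
    if (!agent) return;
    agent.lastActive = Date.now();
    clearTimeout(agent.timer);
    agent.timer = setTimeout(() => this.stop(channelId), this.disposeAfterMs);
    agent.timer.unref?.(); // don't keep the process alive just for cleanup
  }

  stop(channelId) {
    const agent = this.agents.get(channelId);
    if (!agent) return false;
    clearTimeout(agent.timer);
    this.agents.delete(channelId);
    return true;
  }

  status(channelId) {
    return this.agents.has(channelId) ? "running" : "stopped";
  }
}
```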
### Backend Routes (typical)

- `POST /start-ai-agent` – start the agent for a channel
- `POST /stop-ai-agent` – stop/clean up the agent
- `GET /agent-status` – check agent state
- `POST /token` – issue a Stream JWT for a user
## UI Stack

- Radix UI primitives
- Tailwind CSS utilities
- shadcn/ui components
- Lucide React icons
- Dark/Light theme support
## API Endpoints

- `GET /` – health check
- `POST /start-ai-agent` – initialize the AI agent for a channel
- `POST /stop-ai-agent` – stop and clean up the AI agent
- `GET /agent-status` – current status
- `POST /token` – generate a user auth token for Stream Chat
Exact request/response shapes may vary depending on your implementation, but the route purposes match the above.
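For illustration, a frontend could address these routes with a small helper like the one below; the body and query-parameter shapes (`channel_id`, `?channel=`) are assumptions, since the exact contract depends on your backend:

```javascript
// Hypothetical client-side helper for the agent routes above.
function buildAgentRequest(baseUrl, action, channelId) {
  const routes = {
    start: { path: "/start-ai-agent", method: "POST" },
    stop: { path: "/stop-ai-agent", method: "POST" },
    status: { path: `/agent-status?channel=${encodeURIComponent(channelId)}`, method: "GET" },
  };
  const route = routes[action];
  if (!route) throw new Error(`Unknown action: ${action}`);
  return {
    url: `${baseUrl}${route.path}`,
    options: {
      method: route.method,
      headers: { "Content-Type": "application/json" },
      // Only POST routes carry a JSON body in this sketch.
      ...(route.method === "POST" ? { body: JSON.stringify({ channel_id: channelId }) } : {}),
    },
  };
}

// Usage (e.g. with VITE_BACKEND_URL):
// const { url, options } = buildAgentRequest("http://localhost:3000", "start", "general");
// const res = await fetch(url, options);
```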
## Security

- JWT Auth – short-lived tokens from the backend
- CORS – restrict to your frontend domain
- Secrets in `.env` – never commit keys
- Input Validation – validate all server inputs
- Token Expiry/Refresh – avoid long-lived tokens in the browser
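The CORS point can be made concrete with a small allow-list check. In an Express backend this is usually handled by the `cors` middleware configured with `origin: process.env.CORS_ORIGIN`, so the helper below is purely illustrative:

```javascript
// Return the CORS response headers for an allowed origin, or null to
// signal that no CORS headers should be sent (the browser then blocks it).
function corsHeadersFor(requestOrigin, allowedOrigins) {
  if (!allowedOrigins.includes(requestOrigin)) return null;
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
  };
}
```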
## Troubleshooting

- CORS errors: Ensure `CORS_ORIGIN` (backend) matches your deployed frontend URL.
- Unauthorized with Stream: Make sure the frontend uses tokens issued by your backend; do not embed the API secret in the browser.
- OpenAI/Tavily failures: Double-check keys and usage limits in the respective dashboards.
- Frontend can't reach backend: Confirm `VITE_BACKEND_URL` is the public backend URL.
- Render build loops: Clear the build cache and verify the build command and Node version.
## Contributing

- Fork this repo
- Create a feature branch
- Make changes
- Add tests if applicable
- Open a PR
## License

This project is licensed under the MIT License. See LICENSE for details.
Β© 2025 Janki Parmar