# AI Chat Interface with Real-Time Responses

A full-stack application featuring an LLM-powered chat interface, built with React and FastAPI. It includes built-in load testing to evaluate performance under various conditions.

## Features
- 🤖 AI-powered chat interface
- 🎨 Modern UI with Tailwind CSS
- 📊 Built-in load testing capabilities
- 🔄 Real-time response handling
- 🌐 FastAPI backend with async support
## Prerequisites

- Python 3.8+
- Node.js 18.x+
- npm or yarn
- A Databricks workspace (for AI model serving)
## Installation

- Clone the repository:

  ```bash
  git clone <repository-url>
  ```

- Create and activate a Python virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: .\venv\Scripts\activate
  ```

- Install Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Navigate to the client directory:

  ```bash
  cd client
  ```

- Install dependencies:

  ```bash
  npm install
  ```

- Build the production version:

  ```bash
  npm run build
  ```

## Running the App

- For development with hot reload:
  ```bash
  # Terminal 1 - Frontend
  cd client
  npm start

  # Terminal 2 - Backend
  hypercorn app:app --bind 127.0.0.1:8000
  ```

- For production:

  ```bash
  hypercorn app:app --bind 127.0.0.1:8000
  ```

- For Databricks Apps deployment:
  a. Install the Databricks CLI:

     ```bash
     brew install databricks
     ```

  b. Create the app in your workspace:

     ```bash
     databricks apps create chat-app
     ```

  c. Create an `app.yaml` file in the root directory:

     ```yaml
     command:
       - "hypercorn"
       - "app:app"
       - "--bind"
       - "127.0.0.1:8000"

     env:
       - name: "SERVING_ENDPOINT_NAME"
         valueFrom: "serving_endpoint"
     ```
  The `app.yaml` configuration uses Hypercorn as the ASGI server to run the FastAPI application. The `env` section defines `SERVING_ENDPOINT_NAME`, whose value (`serving_endpoint`) is configured during app creation in Databricks, securely storing and accessing sensitive values; a sketch of how the backend can read this variable follows the deployment steps below. For details on creating an app in Databricks, refer to the Databricks Apps documentation.
  d. Sync your local files to the Databricks workspace:

     ```bash
     # Add node_modules/ and venv/ to .gitignore first if not already present
     databricks sync --watch . /Workspace/Users/<your-email>/chat-app
     ```

  e. Deploy the app:

     ```bash
     databricks apps deploy chat-app --source-code-path /Workspace/Users/<your-email>/chat-app
     ```
The application will be available at your Databricks Apps URL:
- Production URL: https://chat-app-[id].cloud.databricksapps.com
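As referenced above, here is roughly how the backend might pick up the `SERVING_ENDPOINT_NAME` variable that `app.yaml` injects (a minimal sketch using only the standard library; the exact handling in `app.py` may differ):

```python
import os

# Injected by Databricks Apps from the "serving_endpoint" resource
# declared in the env section of app.yaml.
SERVING_ENDPOINT_NAME = os.getenv("SERVING_ENDPOINT_NAME")

if SERVING_ENDPOINT_NAME is None:
    raise RuntimeError("SERVING_ENDPOINT_NAME is not set; check the env section of app.yaml")
```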
## Load Testing

The application includes built-in load testing capabilities. To run a load test locally:

```bash
curl "http://localhost:8000/api/load-test?users=200&spawn_rate=2&test_time=10"
```

You can also run load tests against the deployed app from the Databricks Apps UI.
Parameters:

- `users`: Number of concurrent users (default: 10)
- `spawn_rate`: Users to spawn per second (default: 2)
- `test_time`: Duration of the test in seconds (default: 30)
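The same request can be scripted; below is a quick sketch using the third-party `requests` library (it assumes the endpoint runs the test before responding, so the call may block for roughly `test_time` seconds):

```python
import requests

# Parameters for the built-in load tester (defaults shown above).
params = {"users": 200, "spawn_rate": 2, "test_time": 10}

# Allow a timeout comfortably larger than test_time.
resp = requests.get("http://localhost:8000/api/load-test", params=params, timeout=60)
resp.raise_for_status()
print(resp.text)
```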
Best practices:

- **Gradual Scaling**
  - Start with smaller numbers and gradually increase (see the sketch after this list)
  - Monitor system performance metrics
  - Watch for error rates and response times

- **Production Testing**
  - Schedule load tests during off-peak hours
  - Alert relevant team members before large-scale tests
  - Monitor application logs and metrics during tests

- **Testing Scenarios**

  ```
  https://chat-app-[id].cloud.databricksapps.com/api/load-test?users=200&spawn_rate=10&test_time=30
  https://chat-app-[id].cloud.databricksapps.com/api/load-test?users=1000&spawn_rate=100&test_time=30
  ```
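To apply the gradual-scaling advice programmatically, a loop like the following could step up the load between runs (an illustrative sketch; the base URL, step values, and pauses are placeholders to adapt):

```python
import time
import requests

BASE_URL = "http://localhost:8000"  # or your Databricks Apps URL

# Step the user count up gradually, inspecting results between runs.
for users in (10, 50, 100, 200):
    params = {"users": users, "spawn_rate": 2, "test_time": 10}
    resp = requests.get(f"{BASE_URL}/api/load-test", params=params, timeout=120)
    resp.raise_for_status()
    print(f"users={users}: {resp.text[:200]}")
    time.sleep(5)  # let the system settle before the next run
```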
## Project Structure

```
chatbot-app/
├── app.py              # FastAPI backend application
├── load_tester.py      # Load testing endpoint
├── requirements.txt    # Python dependencies
├── client/             # React frontend
│   ├── src/            # Source code
│   ├── public/         # Static assets
│   ├── build/          # Static frontend files
│   └── package.json    # Node.js dependencies
└── .env                # Environment variables
```
## API Endpoints

- `GET /api/`: Health check endpoint
- `POST /api/chat`: Chat endpoint for AI interactions
- `GET /api/load-test`: Load testing endpoint
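For orientation, the routes wired up in `app.py` look roughly like this (a simplified, self-contained sketch, not the actual implementation; the request model and response shapes are assumptions):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str  # hypothetical request shape; see app.py for the real model

@app.get("/api/")
async def health_check():
    return {"status": "ok"}

@app.post("/api/chat")
async def chat(req: ChatRequest):
    # The real app forwards the message to the Databricks serving
    # endpoint named by SERVING_ENDPOINT_NAME; echoed here as a stub.
    return {"reply": f"echo: {req.message}"}

@app.get("/api/load-test")
async def load_test(users: int = 10, spawn_rate: int = 2, test_time: int = 30):
    # The real endpoint (load_tester.py) runs the simulated load here.
    return {"users": users, "spawn_rate": spawn_rate, "test_time": test_time}
```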
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request