A full-stack intelligent agent combining machine learning models (data processing + inference) with a modern, type-safe web interface (FastAPI backend + Next.js frontend). The repository contains separate backend and frontend modules and is built with Python, TypeScript, and common data libraries.
AI-Assistant is an intelligent agent project that demonstrates a production-oriented layout: asynchronous FastAPI backend for serving inference and data pipelines, and a Next.js + TypeScript frontend for a responsive UI. It’s intended as a starter template or reference for deploying a small ML-enabled web service.
- Low-latency AI inference served from FastAPI (async).
- Data processing pipelines using NumPy / Pandas (a toy sketch follows this list).
- Type-safe frontend using Next.js + TypeScript and responsive UI components.
- Clear separation between backend and frontend for independent development and deployment.
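To make the pipeline idea concrete, here is a toy preprocessing step in the spirit of the feature above; the column names and cleaning rules are invented for illustration, not taken from the repository:

```python
# Toy data-processing step for illustration; column names and rules
# are invented, not taken from the repository's actual pipelines.
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Drop incomplete rows and add a normalized copy of one feature column."""
    cleaned = df.dropna(subset=["feature"]).copy()
    values = cleaned["feature"].to_numpy(dtype=float)
    # Guard against zero std so normalization never divides by zero.
    cleaned["feature_norm"] = (values - values.mean()) / (values.std() or 1.0)
    return cleaned

df = pd.DataFrame({"feature": [1.0, 2.0, np.nan, 4.0]})
print(preprocess(df))
```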
- Backend: FastAPI (Python), Uvicorn.
- Frontend: Next.js with TypeScript (Bootstrap for UI).
- Data & ML: NumPy, Pandas (example libs shown in repo).
Typical structure (reflects this repository):

```
AI-Assistant/
├─ Backend_assistantai/    # FastAPI backend code
├─ Frontend_AssistantAI/   # Next.js frontend code
├─ .gitignore
└─ readme.md
```
Adjust paths if your local folder names differ. The repo contains backend and frontend folders that should be started independently.
- Python 3.10+ (or compatible 3.x line)
- Node.js 18+ and npm/yarn (for Next.js dev server)
- Git (to clone the repo)
Note: Commands below assume you are at the repository root.
- Change into the backend directory:

  ```bash
  cd Backend_assistantai
  ```

- Create & activate a virtual environment:

  ```bash
  python -m venv venv
  # macOS / Linux
  source venv/bin/activate
  # Windows (PowerShell)
  venv\Scripts\Activate.ps1
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the backend (development):

  ```bash
  uvicorn main:app --reload
  ```

This should start FastAPI (Uvicorn) on the default port (8000). Adjust the module path if your main app file has a different name.
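For orientation, here is a minimal sketch of what a `main.py` entry point could look like. The endpoint paths, model names, and "inference" logic below are illustrative assumptions, not the repository's actual code:

```python
# main.py: hypothetical minimal entry point, for illustration only;
# the real Backend_assistantai app may be organized differently.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="AI-Assistant backend")

class PredictRequest(BaseModel):
    values: list[float]

class PredictResponse(BaseModel):
    mean: float
    std: float

@app.get("/health")
async def health() -> dict[str, str]:
    # Liveness probe; handy for the curl check shown later in this README.
    return {"status": "ok"}

@app.post("/predict", response_model=PredictResponse)
async def predict(req: PredictRequest) -> PredictResponse:
    # Stand-in "inference": summary statistics computed with NumPy.
    arr = np.asarray(req.values, dtype=float)
    return PredictResponse(mean=float(arr.mean()), std=float(arr.std()))
```

With an app shaped like this, `uvicorn main:app --reload` also serves interactive OpenAPI docs at http://localhost:8000/docs, which FastAPI generates automatically.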
- Change into the frontend directory:

  ```bash
  cd ../Frontend_AssistantAI
  ```

- Install packages:

  ```bash
  npm install
  # or
  yarn
  ```

- Run the dev server:

  ```bash
  npm run dev
  # or
  yarn dev
  ```

This will start the Next.js dev server (commonly on http://localhost:3000). Update the package.json scripts if yours differ.
Create a `.env` file at the root of the backend (and the frontend, if your code requires one). Example placeholders:

```env
# Backend
PORT=8000
DATABASE_URL=postgresql://user:pass@localhost:5432/dbname
SECRET_KEY=your-secret-key

# Optional ML/3rd-party keys
OPENAI_API_KEY=sk-...
```
Adjust names & values to match the actual code that reads environment values.
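One common pattern for loading these values in the backend is `python-dotenv` plus `os.environ`; this is a sketch under that assumption, and the variable names simply mirror the placeholders above rather than the repo's confirmed configuration:

```python
# config.py: illustrative settings loader. The variable names mirror the
# README placeholders and are not confirmed against the actual backend code.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read .env from the current working directory, if present

PORT = int(os.environ.get("PORT", "8000"))
DATABASE_URL = os.environ.get("DATABASE_URL", "")
SECRET_KEY = os.environ.get("SECRET_KEY", "")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")  # optional; may be None
```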
- Work on backend API endpoints inside `Backend_assistantai/`.
- Work on UI components & pages inside `Frontend_AssistantAI/`.
- Keep API contracts (paths, request/response JSON) documented and stable while iterating on the UI.
- Use type hints + Pydantic models in the backend and TypeScript interfaces in the frontend to maintain type safety (the backend sketch above shows one such Pydantic model).
- Add/extend unit tests in the backend (e.g., using `pytest`) and frontend (e.g., `jest`/`testing-library`) as needed; see the test sketch after this list.
- For API testing, tools like `httpie`, `curl`, or Postman are useful:

  ```bash
  # Example: check health endpoint
  curl http://localhost:8000/health
  ```

  (Replace `/health` with whatever endpoint the repo exposes.)
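As a starting point for backend tests, here is a sketch of a `pytest` file exercising the hypothetical endpoints from the `main.py` sketch earlier; it assumes that sketch, plus `httpx` installed for FastAPI's `TestClient`:

```python
# test_main.py: illustrative tests against the hypothetical main.py
# sketched above; adapt the imports and endpoints to the real backend.
from fastapi.testclient import TestClient  # requires httpx

from main import app

client = TestClient(app)

def test_health() -> None:
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}

def test_predict_returns_summary_stats() -> None:
    resp = client.post("/predict", json={"values": [1.0, 2.0, 3.0]})
    assert resp.status_code == 200
    assert resp.json()["mean"] == 2.0
```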
- Fork the repo.
- Create a feature branch:

  ```bash
  git checkout -b feat/your-feature
  ```

- Commit changes & open a PR describing the change and motivation.
- Follow a consistent code style: lint Python (e.g., `black`, `ruff`) and use TypeScript linting for the frontend.
- Add example API documentation (OpenAPI/Swagger is available with FastAPI — link it in README).
- Provide `docker-compose` for a local full-stack run (Postgres + backend + frontend).
- Add CI (GitHub Actions) for linting/testing on push/PR.
- Add a short demo GIF or screenshots showing the UI in action.