A web application for Question Answering over local documents using local Large Language Models (with the option to use GPT5.2-nano with your own API key). Users can upload PDFs and images, get summaries, and ask questions about their documents.
- File upload support for PDFs and images
- Document summarization
- Question answering based on documents
- PostgreSQL database integration
- Python 3.9+
- PostgreSQL
- Node.js 16+ and npm (for frontend)
- uv (Python package manager)
- Ollama
Install steps for all platforms (macOS, Linux, Windows) are in Setup below. To run with Docker instead of local setup, see the Docker guide.
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd doc-chat-
  ```
Install uv and Node.js (if not already installed):
uv (Python package manager):
- macOS/Linux:

  ```bash
  curl -LsSf https://astral.sh/uv/install.sh | sh
  ```

  then restart your shell or `source $HOME/.local/bin/env` (or add it to your PATH).
- Windows (PowerShell):

  ```powershell
  powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
  ```
- Other options (pip, Homebrew, etc.): see the uv docs.
Node.js and npm (for frontend):
- macOS: `brew install node` or install from nodejs.org.
- Linux: use your distro's package manager (e.g. `sudo apt install nodejs npm` on Debian/Ubuntu) or nodejs.org.
- Windows: download the LTS installer from nodejs.org.
Check versions: `uv --version` and `node --version` (Node 16+).
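If you'd rather check the Node requirement from a script, the version string parses easily (a small helper for illustration, not part of the project):

```python
def node_major(version_output: str) -> int:
    """Extract the major version from `node --version` output like 'v18.19.0'."""
    return int(version_output.strip().lstrip("v").split(".")[0])
```

For example, `node_major("v18.19.0") >= 16` confirms the prerequisite is met.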
- Create and activate a virtual environment:

  ```bash
  uv venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```
- Install dependencies:

  ```bash
  uv pip install -r requirements.txt
  ```
- Create a `.env` file in the root directory:

  ```
  DATABASE_URL=postgresql+asyncpg://user:password@localhost/dbname
  RAG_ENABLED=false
  DISABLE_AUTH=true
  ```
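These variables are read at startup. As a rough sketch of how they might be consumed (the variable names come from this README; the project's actual settings loader may differ):

```python
import os

def load_settings() -> dict:
    """Read the settings this README's .env defines. Boolean flags arrive
    as strings in the environment, so they need explicit parsing."""
    def as_bool(raw: str) -> bool:
        return raw.strip().lower() in {"1", "true", "yes"}

    return {
        "database_url": os.getenv(
            "DATABASE_URL",
            "postgresql+asyncpg://user:password@localhost/dbname",
        ),
        "rag_enabled": as_bool(os.getenv("RAG_ENABLED", "false")),
        "disable_auth": as_bool(os.getenv("DISABLE_AUTH", "true")),
    }
```

Note that `os.environ` does not read `.env` files by itself; the app (or a library such as python-dotenv) must load the file into the environment first.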
Database: PostgreSQL is used to store document metadata, parsed text, user settings (e.g. selected model, prompts), and conversation history. Ensure PostgreSQL is installed and running, then create a database (and optionally a user/password) for the app. The app creates tables on first run; it does not create the database or user.
Install & start PostgreSQL
- macOS (Homebrew): `brew install postgresql@16` then `brew services start postgresql@16`
- Linux: install the `postgresql` package for your distro (e.g. `sudo apt install postgresql`) and start the service.
- Windows: download the installer from postgresql.org/download/windows and run it. During setup, set a password for the `postgres` superuser. The PostgreSQL service usually starts automatically; you can manage it in Services (`services.msc`) or via pgAdmin.
Create a database and user
- macOS (Homebrew): usually your OS user already exists as a role. Run:

  ```bash
  createdb doc_chat
  ```

  In `.env` use: `DATABASE_URL=postgresql+asyncpg://YOUR_OS_USERNAME@localhost/doc_chat` (no password; replace `YOUR_OS_USERNAME` with your username).
- Linux: the PostgreSQL role matching your OS user (e.g. `ubuntu`) often does not exist. Create it first, then the database:

  ```bash
  sudo -u postgres createuser -s $USER
  createdb doc_chat
  ```

  The app connects over TCP and requires a password. Set one for your user:

  ```bash
  sudo -u postgres psql -c "ALTER USER $USER WITH PASSWORD 'dev';"
  ```

  In `.env` use your actual username (e.g. `ubuntu`). The `.env` file is not processed by the shell, so `$USER` will not expand; write the name explicitly: `DATABASE_URL=postgresql+asyncpg://ubuntu:dev@localhost/doc_chat`
- Windows: open SQL Shell (psql) or a terminal where `psql` is on PATH (e.g. `C:\Program Files\PostgreSQL\16\bin`). Connect as `postgres` (use the password you set during install), then create a dedicated user and database:

  ```sql
  CREATE USER doc_chat_user WITH PASSWORD 'your_password';
  CREATE DATABASE doc_chat OWNER doc_chat_user;
  ```

  In `.env` use: `DATABASE_URL=postgresql+asyncpg://doc_chat_user:your_password@localhost/doc_chat`
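Whichever platform you are on, it is easy to get the URL wrong (unexpanded `$USER`, missing database name). A small stdlib-only sanity check you could run before starting the app (illustrative, not part of the project):

```python
from urllib.parse import urlsplit

def check_database_url(url: str) -> dict:
    """Sanity-check a DATABASE_URL before starting the app.
    Catches the common mistake of leaving an unexpanded $USER in .env."""
    parts = urlsplit(url)
    assert parts.scheme == "postgresql+asyncpg", f"unexpected scheme: {parts.scheme}"
    # .env is not shell-expanded, so '$' in the username means it was left literal.
    assert parts.username and "$" not in parts.username, "write the username literally"
    assert parts.path.lstrip("/"), "missing database name"
    return {
        "user": parts.username,
        "host": parts.hostname or "localhost",
        "database": parts.path.lstrip("/"),
    }
```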
First run: Start the app (see Running the Application). Database tables are created automatically on first startup; the database and user must already exist (step 6).
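The create-on-first-run pattern boils down to idempotent table creation at startup. Illustrated here with stdlib sqlite3 and a made-up table (the app itself targets PostgreSQL, and its real schema will differ):

```python
import sqlite3

def ensure_schema(conn: sqlite3.Connection) -> None:
    """Create application tables if they don't exist yet.
    CREATE TABLE IF NOT EXISTS is idempotent, so this is safe on every
    startup; it cannot, however, create the database or user themselves."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS documents (
               id INTEGER PRIMARY KEY,
               filename TEXT NOT NULL,
               parsed_text TEXT
           )"""
    )
    conn.commit()
```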
If you want to use local LLMs via Ollama, you'll need to install and run Ollama separately:
- macOS/Linux:

  ```bash
  curl -fsSL https://ollama.ai/install.sh | sh
  ```
- Windows: download and run the installer from ollama.ai/download

Start Ollama:
- macOS/Linux: in a terminal, run `ollama serve`. This starts Ollama on `http://localhost:11434` (the default port the application expects).
- Windows: Ollama may start from the Start menu or system tray after install. To run from a terminal, open PowerShell or CMD and run `ollama serve`.
Models are auto-downloaded when first used. To pull in advance, run in a terminal (PowerShell or CMD on Windows):
```bash
ollama pull gemma3:1b
```

When running locally, the application will automatically connect to Ollama at http://localhost:11434. When using Docker, it connects to http://ollama:11434 within the container network.
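That URL selection amounts to an environment-variable lookup with a localhost default; a sketch, assuming the app honors an `OLLAMA_API_BASE_URL` override as this README describes:

```python
import os

def ollama_base_url() -> str:
    """Resolve the Ollama endpoint: OLLAMA_API_BASE_URL if set,
    otherwise the local default the application expects."""
    return os.getenv("OLLAMA_API_BASE_URL", "http://localhost:11434").rstrip("/")
```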
If you need to use a different Ollama URL, set the environment variable:
```
OLLAMA_API_BASE_URL=http://your-ollama-host:11434
```

From the project root (with your virtual environment activated and .env configured):
- macOS/Linux:

  ```bash
  ./startup.sh
  ```
- Windows: use WSL (Windows Subsystem for Linux) and run `./startup.sh` there, or run the steps manually: `cd frontend`, `npm install`, `npm run build`, then from the repo root copy `frontend/build` into `src/doc_chat/static`, then `uvicorn src.doc_chat.main:app --host 0.0.0.0 --port 8001`.
This builds the frontend, copies it into the backend static folder, and starts the API. Default port is 8001; set PORT to override.
For containerized deployment, see Docker guide.
The API will be available at http://localhost:8001.
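Once it is up, a quick stdlib-only smoke test from Python (adjust the base URL if you set PORT):

```python
from urllib.request import urlopen

def api_is_up(base: str = "http://localhost:8001") -> bool:
    """Return True if the interactive docs page answers with HTTP 200."""
    try:
        with urlopen(f"{base}/docs", timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, HTTP error
        return False
```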
Once the server is running, you can access:
- Swagger UI: http://localhost:8001/docs
- ReDoc: http://localhost:8001/redoc
```bash
pytest
```

The project uses:
- Ruff for linting
- Black for code formatting
- MyPy for type checking
Run the linter and formatter:
```bash
ruff check .
ruff format .
```

This project is licensed under the MIT License - see the LICENSE file for details.
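For reference, pytest discovers files named `test_*.py` and runs functions named `test_*` with bare asserts. A minimal, hypothetical example (the helper is a stand-in, not project code):

```python
def normalize_filename(name: str) -> str:
    """Stand-in for project code under test."""
    return name.strip().lower()

def test_normalize_filename():
    # pytest collects this automatically and reports the assert on failure.
    assert normalize_filename("  Report.PDF ") == "report.pdf"
```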