A Python demonstration project showcasing retrieval-augmented generation (RAG) using LangGraph, OpenAI embeddings, and a Weaviate vector store.
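At its core, a pipeline like this wires a retrieval step and a generation step into a small LangGraph state machine. The following is a minimal sketch of that shape only; the state fields, node names, and model (`gpt-4o-mini`) are illustrative assumptions rather than this repo's actual code.

```python
# Minimal sketch of a LangGraph RAG pipeline (illustrative only; node names,
# state fields, and the model are assumptions, not this repo's actual code).
from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph


class RAGState(TypedDict):
    question: str
    context: list[str]
    answer: str


def retrieve(state: RAGState) -> dict:
    # In the real project this would embed the question with OpenAI embeddings
    # and fetch the nearest document chunks from Weaviate.
    return {"context": ["<retrieved chunks go here>"]}


def generate(state: RAGState) -> dict:
    llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an assumption
    context_text = "\n".join(state["context"])
    prompt = f"Answer using only this context:\n{context_text}\n\nQuestion: {state['question']}"
    return {"answer": llm.invoke(prompt).content}


graph = StateGraph(RAGState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.add_edge(START, "retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
app = graph.compile()

# Example invocation (requires OPENAI_API_KEY in the environment):
# result = app.invoke({"question": "What is RAG?"})
# print(result["answer"])
```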
Prerequisites:

- Python 3.13.7 (see `.tool-versions`)
- uv package manager
- Docker and Docker Compose
To set up the project:

- Clone and install dependencies:

  ```
  uv sync --dev
  ```
- Set up environment variables. Copy the example environment file and fill in your API keys:

  ```
  cp .env.example .env
  ```

  Edit `.env` with your OpenAI API key and other required variables (see the sketch after this list for how they might be read).
- Start the Weaviate vector store:

  ```
  docker compose up -d
  ```
- Install pre-commit hooks:

  ```
  pre-commit install
  ```
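For the environment step above, here is a minimal sketch of reading those variables in Python, assuming python-dotenv (the application itself may load them differently):

```python
# Load variables from .env into the process environment.
# Assumes python-dotenv; the demo itself may use a different mechanism.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
openai_api_key = os.environ["OPENAI_API_KEY"]  # the key set during setup
```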
The local Weaviate instance started by Docker Compose exposes:

- URL: http://localhost:8080
- API Documentation: http://localhost:8080/v1
- GraphQL Playground: http://localhost:8080/v1/graphql
- Configuration: Supports OpenAI embeddings with automatic sparse vectors for text fields
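Once the container is running, a quick way to check the instance from Python is the v4 Weaviate client. This is a sketch under assumptions: default local ports, a hypothetical `Documents` collection, and a vectorizer configured for `near_text`:

```python
# Connectivity check against the local Weaviate instance started by docker compose.
# Assumes the default ports (8080 HTTP, 50051 gRPC) and the v4 Python client.
import weaviate

client = weaviate.connect_to_local()
print("Weaviate ready:", client.is_ready())

# "Documents" is a hypothetical collection name for illustration; the demo's
# actual schema may differ. near_text requires a vectorizer on the collection.
documents = client.collections.get("Documents")
response = documents.query.near_text(query="What is retrieval-augmented generation?", limit=3)
for obj in response.objects:
    print(obj.properties)

client.close()
```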
Run the main application:

```
uv run retrieval-demo
```

Development commands:

```
# Format code
uv run ruff format

# Check linting
uv run ruff check

# Run tests
uv run pytest

# Run all pre-commit hooks
pre-commit run --all-files
```

Manage the Docker services:

```
# Start services
docker compose up -d

# View logs
docker compose logs -f

# Stop services
docker compose down

# Reset Weaviate data
docker compose down -v
```