This is Till's version of Chat LangChain.
The original Chat LangChain is a chatbot focused on question answering over the LangChain documentation, built with LangChain, FastAPI, and Next.js.
From there I have modified it (and plan to modify it further) in the following ways:
- Running locally with Ollama ✅
- Offering multiple local models
- Offering different sources: another website (maybe the manual of another piece of software), a Wiki space...
- Providing functionality to evaluate performance, i.e. whether the answers are close to what we expect
- Find out if the `_scripts` directory is needed
- Server component should list the available models so the client can offer a choice
- Server component offers different brainz, i.e. sites that have been scraped and ingested
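For the planned evaluation functionality, one minimal starting point is to compare a generated answer against an expected one with a similarity score. The helper below is hypothetical (it is not part of this repo) and uses Python's stdlib `difflib` as a crude stand-in for a proper embedding-based or LLM-judged comparison:

```python
from difflib import SequenceMatcher


def answer_close_enough(answer: str, expected: str, threshold: float = 0.8) -> bool:
    """Crude check that a model answer resembles the expected answer.

    SequenceMatcher.ratio() returns a similarity in [0, 1]; a real
    evaluation would likely use embedding similarity instead, but this
    is enough to flag answers that drift far from the reference.
    """
    ratio = SequenceMatcher(None, answer.lower(), expected.lower()).ratio()
    return ratio >= threshold


# Identical strings score 1.0, so this passes the threshold.
print(answer_close_enough(
    "LangChain is a framework for LLM apps.",
    "LangChain is a framework for LLM apps.",
))  # True
```

The threshold of 0.8 is an arbitrary placeholder; it would need tuning against real question/answer pairs.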
- 2024-04-25 Renamed GitHub repo
- 2024-04-24 Move from poetry to a `requirements.txt` file, as poetry is too complex for a hobby programmer like me
- 2024-04-22 Simplify the original project and get it running fully locally
Running it locally:

- Create a local Python environment: `python3.12 -m venv .venv`
- Activate it with `source .venv/bin/activate`
- Install backend dependencies: `pip install -r requirements.txt`
- Make sure Ollama is running and has the provided models pulled. You can check with `ollama list`.
- Configure your constants in `backend/constants.py`
- Run `python backend/ingest.py` to ingest the LangChain docs into the Chroma vectorstore (only needs to be done once).
- Start the Python backend with `python backend/main.py`.
- Install frontend dependencies by running `cd ./frontend`, then `yarn`.
- Run the frontend with `yarn dev`.
- Open localhost:3000 in your browser.
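Taken together, the steps above amount to the following terminal session. This is a sketch, not a script from the repo: it assumes you start in the repository root and that Ollama is already installed with the models named in `backend/constants.py` pulled.

```shell
# One-time backend setup (run from the repository root)
python3.12 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
ollama list                  # sanity check: are the needed models pulled?
python backend/ingest.py     # one-time ingestion into the Chroma vectorstore

# Every run afterwards: backend in one terminal...
python backend/main.py

# ...and the frontend in another, then open http://localhost:3000
cd ./frontend
yarn
yarn dev
```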