
smart chat

This is Till's version of the Chat Langchain.

The original Chat Langchain is a chatbot specifically focused on question answering over the LangChain documentation. Built with LangChain, FastAPI, and Next.js.

Starting from that code base, I modified it (and plan to keep modifying it) in the following ways:

  • Running locally with Ollama ✅
  • Offering multiple local models
  • Offering different sources: another website (e.g. the manual of some other software), a wiki space...
  • Providing functionality to evaluate performance, i.e. whether the answers are close to what we expect.
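The evaluation idea could start very small: score how close a model's answer is to an expected answer. This is only a sketch of the concept, not the project's actual code; the helper name and the use of a simple string-similarity ratio are illustrative assumptions.

```python
from difflib import SequenceMatcher


def answer_score(answer: str, expected: str) -> float:
    """Crude closeness measure between a chatbot answer and the expected one.

    Returns a ratio in [0, 1]; 1.0 means the strings match exactly.
    A real evaluation would likely use embeddings instead, but this
    illustrates the shape of the check.
    """
    return SequenceMatcher(None, answer.lower(), expected.lower()).ratio()


# A near-match should score higher than an unrelated answer.
good = answer_score(
    "LangChain is a framework for LLM apps.",
    "LangChain is a framework for building LLM applications.",
)
bad = answer_score(
    "The weather is nice today.",
    "LangChain is a framework for building LLM applications.",
)
assert good > bad
```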

Todo / backlog

  • Find out if the _scripts directory is needed.
  • The server component should list the available models so the client can offer a choice.
  • The server component should offer different brainz, i.e. sites that have been scraped and ingested.

Done

  • 2024-04-25 Renamed the GitHub repo
  • 2024-04-24 Moved from Poetry to a requirements file, as Poetry is too complex for a hobby programmer like me.
  • 2024-04-22 Simplified the original project and got it running fully locally

Running locally

  1. Create a local Python environment: python3.12 -m venv .venv
  2. Activate it with source .venv/bin/activate
  3. Install backend dependencies: pip install -r requirements.txt.
  4. Make sure Ollama is running and has the needed models pulled. You can check with ollama list.
  5. Configure your constants in backend/constants.py
  6. Run python backend/ingest.py to ingest LangChain docs data into the Chroma vectorstore (only needs to be done once).
  7. Start the Python backend with python backend/main.py.
  8. Install frontend dependencies by running cd ./frontend, then yarn.
  9. Run the frontend with yarn dev.
  10. Open http://localhost:3000 in your browser.
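Before working through the steps above, a quick sanity check that the required tools are on the PATH can save time. This is only a sketch; the PY variable mirrors the Python version from step 1 and should be adjusted if you use a different interpreter.

```shell
# Check that the tools the steps above rely on are installed.
PY=python3.12   # interpreter from step 1; adjust to your local version
missing=0
for tool in "$PY" ollama yarn; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool"
    missing=1
  fi
done
# missing=0 means all prerequisites are available.
```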
