
# 🧮 localAGI 🧮

Fulltime nerd. Passionate developer. DevOps at heart.

That's me. :bowtie: Building AGI on local hardware.

Building containers for effectively running a local artificial general intelligence. 🦾

You want to run your own inference with ease? Good, you are awake.

Contact: Find me on the AGiXT Discord server or open an issue here.

## 🧗‍♀️ Motivation 🧗

I initially started this work for deployments of josh-XT/AGiXT.

The goal: reproducible software environments that spin up services on demand for testing and sky-netting, with streamlined Docker containers for quick, user-friendly usage.

🚀 CUDA enabled. 🖥️ BLAS enabled. 😏 Conda-less. 🧅 Matrix builds. 🏢 Multiarch builds. 🧒 🧑 🧓 For everyone.
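
The matrix and multiarch builds mentioned above can be sketched as a GitHub Actions workflow. This is a minimal illustration, not the actual pipeline used in these repos — the job name, CUDA versions, and build arguments are placeholders:

```yaml
# Hypothetical sketch of a matrix + multiarch container build.
name: matrix-build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        arch: [amd64, arm64]        # multiarch: one job per architecture
        cuda: ["11.7.1", "12.1.0"]  # matrix: one job per CUDA version
    steps:
      - uses: actions/checkout@v3
      - uses: docker/setup-qemu-action@v2    # emulation for non-native arches
      - uses: docker/setup-buildx-action@v2
      - uses: docker/build-push-action@v4
        with:
          platforms: linux/${{ matrix.arch }}
          build-args: |
            CUDA_VERSION=${{ matrix.cuda }}
```

Each matrix cell builds one image variant, so a single workflow covers every architecture/CUDA combination.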

## 🌺 Sharing is caring 🌺

With strong expertise in Docker and GitHub workflows, I want to test and follow AI-related projects in a comfortable manner.

I am working on an AI pipeline to share best practices across several repositories and projects.

🌟 If you like any of my work, leave a star! Thank you! 🌟

## State of work

The following projects are built using the AI pipeline.

My maintenance is focused on build stability and availability of the service containers. >200h of work. 50,000h of experience.

- Build passing == working
- (:heavy_check_mark:) == working soonish
- (WIP) == some unstable state

### Services for running inference

| Service | Release | Model types | Model quantisations | API | Original repo |
| --- | --- | --- | --- | --- | --- |
| FastChat | | e.g. Vicuna, T5 | T5, HF | OpenAI | lm-sys/FastChat |
| oobabooga | | LLaMA | HF, GGML, GPTQ | oobabooga | oobabooga/text-generation-webui |
| llama-cpp-python | | LLaMA | HF, GGML | OpenAI | abetlen/llama-cpp-python |
| llama.cpp | | LLaMA | HF, GGML | ? | ggerganov/llama.cpp |
| gpt4all | | see backend | ? | ? | nomic-ai/gpt4all |
| gpt4all-ui | | GPTJ, LLaMA, MPT (??) | GGML...? | ? | nomic-ai/gpt4all-ui |
| gpt4free | | | | | xtekky/gpt4free |
| stablediffusion2 | WIP | | | | |
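
Several of the services above expose an OpenAI-compatible API, so a single client works against any of them. The sketch below assembles a standard chat-completion payload; the endpoint URL and model name are assumptions (llama-cpp-python's server commonly listens on port 8000, but check your container's port mapping):

```python
import json

# Assumed endpoint of a locally running OpenAI-compatible server
# (e.g. llama-cpp-python); adjust host/port to your setup.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="local-model", temperature=0.7):
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("Hello, local AGI!")
body = json.dumps(payload)
# POST `body` to API_URL with Content-Type: application/json
# using your HTTP client of choice.
```

Because the wire format is shared, switching between FastChat, llama-cpp-python, or a hosted OpenAI endpoint only changes `API_URL` and the model name.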

### Services for using inference

| Service | Release | Original repo |
| --- | --- | --- |
| AGiXT | | josh-XT/AGiXT |
| AGiXT-Frontend | ✔️ | JamesonRGrieve/Agent-LLM-Frontend |
| gpt-code-ui | | ricklamers/gpt-code-ui |

### CLI tools and packages

For quantisation, CLI inference, etc.

| Tool | Release | Model types | Model quantisations | Original repo |
| --- | --- | --- | --- | --- |
| llama.cpp | | LLaMA | HF, GGML | ggerganov/llama.cpp |
| ggml | WIP | LLaMA | GGML | ggerganov/ggml |
| llama-gptq | (:heavy_check_mark:) | LLaMA | GPTQ | oobabooga/GPTQ-for-Llama, qwopqwop200/GPTQ-for-Llama |
| AutoGPTQ | WIP | LLaMA | GPTQ | PanQiWei/AutoGPTQ |

## Requests

Any? Contact me (currently on the AGiXT Discord).

## Things to consider

- conda is commercial software: its license terms restrict commercial use. We try to omit it in our builds, but compliance is your responsibility.
- NVIDIA images ship with their own licenses. Make sure you read them.
- The streamlit app collects heavy analytics even when running locally: events for every page load and form submission, including metadata on queries (like length) and browser and client information including host IPs. All of this is transmitted to a third-party analytics company, Segment.com.