
Commit 95168d0

Added Running Large Language Models (LLMs) Locally section.
1 parent 98508e5 commit 95168d0

File tree

1 file changed: +52 additions, -3 deletions


README.md

Lines changed: 52 additions & 3 deletions
@@ -27,7 +27,9 @@

- [Books](https://github.com/mikeroyal/Machine-Learning-Guide#books)

2. [ML Frameworks, Libraries, and Tools](https://github.com/mikeroyal/Machine-Learning-Guide#ML-frameworks-libraries-and-tools)
   - [Running Large Language Models (LLMs) Locally](#running-llms-locally)

3. [Algorithms](https://github.com/mikeroyal/Machine-Learning-Guide#Algorithms)

4. [PyTorch Development](https://github.com/mikeroyal/Machine-Learning-Guide#pytorch-development)
@@ -267,8 +269,6 @@
[Semantic Kernel (SK)](https://aka.ms/semantic-kernel) is a lightweight SDK enabling integration of AI Large Language Models (LLMs) with conventional programming languages. The SK extensible programming model combines natural-language semantic functions, traditional native-code functions, and embeddings-based memory, unlocking new potential and adding value to applications with AI.

[Pandas AI](https://github.com/gventuri/pandas-ai) is a Python library that integrates generative artificial intelligence capabilities into Pandas, making dataframes conversational.

[NCNN](https://github.com/Tencent/ncnn) is a high-performance neural network inference framework optimized for the mobile platform.
@@ -410,6 +410,55 @@ top of Chatbot UI Lite using Next.js, TypeScript, and Tailwind CSS. This version
[Sec-PaLM](https://cloud.google.com/blog/products/identity-security/rsa-google-cloud-security-ai-workbench-generative-ai) is a large language model (LLM) that accelerates the ability to help people who are responsible for keeping their organizations safe. These new models give people a more natural and creative way to understand and manage security.

### Running LLMs Locally
[Back to the Top](#table-of-contents)
* [A comprehensive guide to running Llama 2 locally](https://replicate.com/blog/run-llama-locally)
* [Leaderboard by lmsys.org](https://chat.lmsys.org/?leaderboard)
* [LLM-Leaderboard](https://github.com/LudwigStumpp/llm-leaderboard)
* [Open LLM Leaderboard by Hugging Face](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
* [Holistic Evaluation of Language Models (HELM)](https://crfm.stanford.edu/helm/latest/?groups=1)
* [TextSynth Server Benchmarks](https://bellard.org/ts_server/)
[LocalAI](https://localai.io/) is a self-hosted, community-driven, local OpenAI-compatible API: a drop-in replacement for OpenAI that runs LLMs on consumer-grade hardware with no GPU required. It serves ggml-compatible models such as llama, gpt4all, rwkv, whisper, vicuna, koala, gpt4all-j, cerebras, falcon, dolly, starcoder, and many others.
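Because LocalAI exposes an OpenAI-compatible API, any OpenAI-style HTTP client can talk to it. A minimal sketch using only the standard library, assuming LocalAI is listening on `localhost:8080` and that a model named `ggml-gpt4all-j` has been configured (the port and the model name are assumptions; adjust them to your setup):

```python
import json
from urllib import request

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions body, which LocalAI accepts as-is."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_localai(prompt: str, model: str = "ggml-gpt4all-j",
                base_url: str = "http://localhost:8080") -> str:
    """POST the request to LocalAI's chat endpoint and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

With a LocalAI server running, `ask_localai("What is a GGML model?")` would return the model's answer; because the wire format is the OpenAI one, the same code targets other compatible servers by changing `base_url`.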
[llama.cpp](https://github.com/ggerganov/llama.cpp) is a port of Facebook's LLaMA model in plain C/C++.
[ollama](https://ollama.ai/) is a tool for getting up and running with Llama 2 and other large language models locally.
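Besides its CLI, ollama also serves a local REST API (on port 11434 by default). A minimal sketch, assuming the server is running and the `llama2` model has already been pulled with `ollama pull llama2`:

```python
import json
from urllib import request

def build_generate_request(model: str, prompt: str) -> dict:
    """Build a body for ollama's generate endpoint; stream=False asks for a single JSON reply."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt: str, model: str = "llama2",
                    base_url: str = "http://localhost:11434") -> str:
    """POST the prompt to the local ollama server and return the generated text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = request.Request(f"{base_url}/api/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["response"]
```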
[Serge](https://github.com/serge-chat/serge) is a web interface for chatting with Alpaca through llama.cpp. It is fully self-hosted and dockerized, with an easy-to-use API.
[OpenLLM](https://github.com/bentoml/OpenLLM) is an open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLM with ease.
[Llama-gpt](https://github.com/getumbrel/llama-gpt) is a self-hosted, offline, ChatGPT-like chatbot powered by Llama 2. It is 100% private, with no data leaving your device.
[Llama2 webui](https://github.com/liltom-eth/llama2-webui) is a tool for running any Llama 2 model locally with a Gradio UI, on GPU or CPU, from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend for generative agents and apps.
[Llama2.c](https://github.com/karpathy/llama2.c) is a tool to train the Llama 2 LLM architecture in PyTorch and then run inference on it with one simple 700-line C file ([run.c](https://github.com/karpathy/llama2.c/blob/master/run.c)).
[Alpaca.cpp](https://github.com/antimatter15/alpaca.cpp) runs a fast ChatGPT-like model locally on your device. It combines the [LLaMA foundation model](https://github.com/facebookresearch/llama) with an [open reproduction](https://github.com/tloen/alpaca-lora) of [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), a fine-tuning of the base model to obey instructions (akin to the [RLHF](https://huggingface.co/blog/rlhf) used to train ChatGPT), and a set of modifications to [llama.cpp](https://github.com/ggerganov/llama.cpp) that add a chat interface.
[GPT4All](https://github.com/nomic-ai/gpt4all) is an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue, based on [LLaMa](https://github.com/facebookresearch/llama).
[MiniGPT-4](https://minigpt-4.github.io/) enhances vision-language understanding with advanced large language models.
[LoLLMS WebUI](https://github.com/ParisNeo/lollms-webui) is a hub for LLM (Large Language Model) models. It aims to provide a user-friendly interface for accessing and using various LLMs for a wide range of tasks, whether you need help with writing, coding, organizing data, generating images, or finding answers to your questions.
[LM Studio](https://lmstudio.ai/) is a tool to discover, download, and run local LLMs.
[Gradio Web UI](https://github.com/oobabooga/text-generation-webui) is a web UI for large language models. It supports transformers, GPTQ, llama.cpp (ggml/gguf), and Llama models.
[OpenPlayground](https://github.com/nat/openplayground) is a playground for running ChatGPT-like models locally on your device.
[Vicuna](https://vicuna.lmsys.org/) is an open-source chatbot trained by fine-tuning LLaMA. It reportedly achieves more than 90% of ChatGPT's quality and costs about $300 to train.
[Yeagar ai](https://github.com/yeagerai/yeagerai-agent) is a LangChain Agent creator designed to help you build, prototype, and deploy AI-powered agents with ease.
[KoboldCpp](https://github.com/LostRuins/koboldcpp) is an easy-to-use AI text-generation tool for GGML models. It is a single, self-contained distributable from Concedo that builds on llama.cpp and adds a versatile Kobold API endpoint, additional format support, backward compatibility, and a polished UI with persistent stories, editing tools, save formats, memory, world info, author's note, characters, and scenarios.
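KoboldCpp's Kobold API endpoint can be scripted in the same standard-library style. A hedged sketch, assuming the default port 5001 and the `/api/v1/generate` route; the exact parameter set varies between releases, so consult the KoboldCpp documentation for your build:

```python
import json
from urllib import request

def build_kobold_request(prompt: str, max_length: int = 80) -> dict:
    """Build a minimal Kobold API generation body: a prompt plus an output-length cap."""
    return {"prompt": prompt, "max_length": max_length}

def kobold_generate(prompt: str,
                    base_url: str = "http://localhost:5001") -> str:
    """POST the prompt to a local KoboldCpp server and return the generated text."""
    body = json.dumps(build_kobold_request(prompt)).encode()
    req = request.Request(f"{base_url}/api/v1/generate", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```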
# Algorithms
[Back to the Top](https://github.com/mikeroyal/Machine-Learning-Guide#table-of-contents)
