Run any open-source LLM, such as DeepSeek or Llama, as an OpenAI-compatible API endpoint in the cloud.
Updated Mar 18, 2025 - Python
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
LLM-PowerHouse: Unleash LLMs' potential through curated tutorials, best practices, and ready-to-use code for custom training and inferencing.
LLM (Large Language Model) FineTuning
LLMs and Machine Learning done easily
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
A list of LLMs Tools & Projects
This is a PHP library for Ollama. Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.
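Ollama exposes a small REST API on the local machine (by default at `http://localhost:11434`), which is what client libraries such as the PHP one above wrap. As a minimal sketch, the documented `/api/generate` endpoint can be called directly from Python; the model name `llama3` is an assumption and must already be pulled locally:

```python
import json
import urllib.request

# Assumed default local Ollama server; adjust host/port for your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for a non-streaming /api/generate call."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Send the request; requires a running Ollama server with the model pulled."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (needs a live server): generate("llama3", "Why is the sky blue?")
```

Setting `"stream": False` asks the server to return the whole completion as a single JSON object instead of a stream of chunks, which keeps the client code simple.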
Run open-source/open-weight LLMs locally behind OpenAI-compatible APIs
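"OpenAI-compatible" means the local server accepts the same request shape as OpenAI's `/v1/chat/completions` endpoint, so existing OpenAI client code works by just swapping the base URL. A minimal sketch of that payload follows; the base URL and model name are placeholders, not values from any specific project:

```python
# Assumed local endpoint; substitute whatever your server exposes.
BASE_URL = "http://localhost:8000/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# With the official openai client the same request would look like:
#   from openai import OpenAI
#   client = OpenAI(base_url=BASE_URL, api_key="unused")
#   reply = client.chat.completions.create(**build_chat_request("llama-3", "Hello"))
```

Because the request format is shared, the same code can target a cloud deployment or a local runner without changes beyond `BASE_URL`.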
Samples on how to build industry solutions leveraging generative AI capabilities on top of SAP BTP, integrated with SAP S/4HANA Cloud.
EmbeddedLLM: API server for embedded device deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU.
Fine-tune open-source large language models (LLMs) on e-commerce data, leveraging Amazon's sales data. Showcases a tailored solution for enhanced language understanding and generation with a focus on custom e-commerce datasets.
Read your local files and answer your queries
Multi-agent workflows with Llama3: A private on-device multi-agent framework
LocalPrompt is an AI-powered tool designed to refine and optimize AI prompts, helping users run locally hosted AI models like Mistral-7B for privacy and efficiency. Ideal for developers seeking to run LLMs locally without external APIs.
In this project, we leverage Weaviate, a vector database, to power our retrieval-augmented generation (RAG) application. Weaviate enables efficient vector similarity search, which is crucial for building effective RAG systems. Additionally, we use a local language model (LLM) and local embedding models.
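The vector similarity search at the heart of a RAG pipeline can be illustrated without any database: embed the query and the documents, then rank documents by cosine similarity. The toy vectors below stand in for embedding-model output (hypothetical values, for illustration only); a system like Weaviate does the same ranking at scale with an approximate index:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, doc_vecs, top_k=1):
    """Return indices of the top_k document vectors most similar to the query."""
    ranked = sorted(
        range(len(doc_vecs)),
        key=lambda i: cosine_similarity(query_vec, doc_vecs[i]),
        reverse=True,
    )
    return ranked[:top_k]

# Toy 2-d "embeddings": documents 0 and 2 point roughly the same way as the query.
docs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]]
retrieve([1.0, 0.0], docs, top_k=2)  # → [0, 2]
```

The retrieved documents are then stuffed into the LLM's prompt as context, which is the "augmented generation" half of RAG.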
Create a Small LLM using EleutherAI/gpt-neo-2.7B - Fine-Tune It for a Specialized Purpose and Leverage It as a Co-Pilot