Run serverless GPU workloads with fast cold starts on bare-metal servers, anywhere in the world
Updated Feb 23, 2025 - Go
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
A holistic way of understanding how Llama and its components run in practice, with code and detailed documentation.
A diverse, simple, and secure all-in-one LLMOps platform
An AI assisted kubectl helper
Implement RAG (using LangChain and PostgreSQL) for Go applications to improve the accuracy and relevance of LLM outputs
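A minimal sketch of the retrieval step behind such a RAG pipeline. The repository's LangChain and PostgreSQL/pgvector pieces are replaced here by hard-coded toy embeddings and an in-memory slice, purely for illustration; in practice the vectors would come from an embedding model and the nearest-neighbor search would run in the database.

```go
package main

import (
	"fmt"
	"math"
)

// doc pairs a text chunk with a (toy) embedding vector. Real embeddings
// would come from an LLM embedding endpoint and live in PostgreSQL.
type doc struct {
	text string
	vec  []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// retrieve returns the stored chunk most similar to the query vector;
// a RAG pipeline would prepend this chunk to the LLM prompt as context.
func retrieve(query []float64, docs []doc) string {
	best, bestScore := "", math.Inf(-1)
	for _, d := range docs {
		if s := cosine(query, d.vec); s > bestScore {
			best, bestScore = d.text, s
		}
	}
	return best
}

func main() {
	docs := []doc{
		{"Go's garbage collector is concurrent.", []float64{0.9, 0.1, 0.0}},
		{"PostgreSQL supports vector search via pgvector.", []float64{0.1, 0.9, 0.2}},
	}
	query := []float64{0.2, 0.8, 0.1} // toy embedding of the user's question
	fmt.Println(retrieve(query, docs))
}
```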
Interactive Fiction in the Age of AI
Go package and example utilities for using Ollama / LLMs
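As a sketch of what calling Ollama from Go with only the standard library can look like: the snippet below posts a non-streaming request to Ollama's `/api/generate` endpoint. The `baseURL` and model name are assumptions (a local server on the default port with the model already pulled); this is not necessarily the API surface the repository above wraps.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// generateRequest mirrors the body of Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse holds the one field we need from a non-streaming reply.
type generateResponse struct {
	Response string `json:"response"`
}

// buildBody marshals a non-streaming generation request.
func buildBody(model, prompt string) ([]byte, error) {
	return json.Marshal(generateRequest{Model: model, Prompt: prompt, Stream: false})
}

// ask sends one prompt to an Ollama server and returns the reply text.
func ask(baseURL, model, prompt string) (string, error) {
	body, err := buildBody(model, prompt)
	if err != nil {
		return "", err
	}
	resp, err := http.Post(baseURL+"/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}

func main() {
	// Assumes an Ollama server on the default port with the model pulled.
	reply, err := ask("http://localhost:11434", "llama3", "Why is the sky blue? One sentence.")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println(reply)
}
```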
AWS Go SDK examples for Amazon Bedrock
Inference Llama 2 in one file of pure Go
Go framework for language model-powered applications with composability and chaining. Inspired by LangChain.
Thor is a highly modular chat engine written in Go, with an emphasis on pluggable architecture and platform flexibility.
This repository is a work in progress (WIP).
Vectoria is an embedded vector database.
A declarative DSL (domain-specific language) for Inference-Driven Development (IDD) and testing on any codebase, in any programming language
Onefile can both serialize and deserialize code, enabling the conversion of project files into a single text file and vice versa for seamless integration with LLM queries.
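The round-trip idea behind such a tool can be sketched in a few lines of Go: concatenate every file under a marker line naming its path, then split on those markers to recover the files. The `===== FILE: path =====` marker format is an illustrative choice, not necessarily the one Onefile uses, and this sketch assumes no file content contains a marker-shaped line.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// marker introduces each file so an LLM (or the parser below) can tell
// where one file ends and the next begins.
const marker = "===== FILE: "

// serialize flattens a set of files into a single text blob.
func serialize(files map[string]string) string {
	paths := make([]string, 0, len(files))
	for p := range files {
		paths = append(paths, p)
	}
	sort.Strings(paths) // deterministic order
	var b strings.Builder
	for _, p := range paths {
		fmt.Fprintf(&b, "%s%s =====\n%s\n", marker, p, files[p])
	}
	return b.String()
}

// deserialize reverses serialize, recovering path -> content.
func deserialize(blob string) map[string]string {
	files := map[string]string{}
	var path string
	var lines []string
	flush := func() {
		if path == "" {
			return
		}
		// drop the blank line produced by the "\n" serialize appends
		if n := len(lines); n > 0 && lines[n-1] == "" {
			lines = lines[:n-1]
		}
		files[path] = strings.Join(lines, "\n")
	}
	for _, line := range strings.Split(blob, "\n") {
		if strings.HasPrefix(line, marker) {
			flush()
			path = strings.TrimSuffix(strings.TrimPrefix(line, marker), " =====")
			lines = nil
			continue
		}
		lines = append(lines, line)
	}
	flush()
	return files
}

func main() {
	src := map[string]string{
		"go.mod":  "module demo",
		"main.go": "package main",
	}
	blob := serialize(src)
	fmt.Print(blob)
	back := deserialize(blob)
	fmt.Println(back["go.mod"] == src["go.mod"] && back["main.go"] == src["main.go"])
}
```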
Open science package for LLM-powered semantic synthesis and precise extraction of information from unstructured texts.