
OpenLLaMA


Overview

A Rust LLaMA project to load, serve, and extend LLMs.

Key Objectives

  • Support both GGML and Hugging Face (HF) model formats
  • Provide a standard web server for inference
  • Support downloading HF models through hf-hub (see the sketch after this list)
  • Support Nvidia GPUs
  • Support AMD GPUs
  • Support macOS, Linux, Windows, etc.
  • Provide an OpenAI-compatible API
  • Support additional GPU vendors
  • Support LPCP (Large-scale Parallel Central Processing)
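
As a minimal sketch of the hf-hub objective above (not OpenLLaMA's own API), the hf-hub crate's blocking client can fetch a model file from the Hugging Face Hub; the repository id and filename below are illustrative placeholders.

```rust
use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a Hub client using the default cache location.
    let api = Api::new()?;

    // Point at a model repository; this id is only an example.
    let repo = api.model("TheBloke/Llama-2-7B-GGUF".to_string());

    // Download the file (or reuse the cached copy) and print its local path.
    let weights = repo.get("llama-2-7b.Q4_K_M.gguf")?;
    println!("model file cached at {}", weights.display());

    Ok(())
}
```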

Usage

Introduction

License

OpenLLaMA is licensed under the MIT License. For details, see LICENSE.

Note

The master branch may be in an unstable or even broken state during development. Please use releases instead of the master branch to get a stable set of binaries.

Star History

Star History Chart

About

A Rust project for LLMs, inspired by https://github.com/ggerganov/llama.cpp and https://github.com/ollama/ollama. You are welcome to join our conversations on Zulip: https://openllama.zulipchat.com/.
