-
Notifications
You must be signed in to change notification settings - Fork 83
Open
Labels
documentationImprovements or additions to documentationImprovements or additions to documentationenhancementNew feature or requestNew feature or request
Description
🌟 The World's First Self-Learning LLMs Optimized for Claude Code
Published the first-ever language models specifically optimized for Claude Code development workflows.
🎯 HuggingFace Repository
https://huggingface.co/ruv/ruvltra
🎯 What Makes This Special
This isn't just another code model. RuvLTRA introduces three breakthrough capabilities:
- 🧠 Self-Learning (SONA): Models continuously improve from interactions, learning YOUR coding patterns
- 🐝 Swarm-Optimized: Built for distributed multi-agent workflows with claude-flow coordination
- 🔄 Real-time Adaptation: <0.05ms adaptation latency - the model gets smarter as you code
📦 Published Models
All models available at: https://huggingface.co/ruv/ruvltra
| Model File | Size | Parameters | Use Case |
|---|---|---|---|
ruvltra-claude-code-0.5b-q4_k_m.gguf |
398 MB | 0.5B | ⭐ Claude Code workflows |
ruvltra-small-0.5b-q4_k_m.gguf |
398 MB | 0.5B | Edge devices, IoT |
ruvltra-medium-1.1b-q4_k_m.gguf |
669 MB | 1.1B | General purpose |
Quick Download
wget https://huggingface.co/ruv/ruvltra/resolve/main/ruvltra-claude-code-0.5b-q4_k_m.gguf✨ Key Features
Self-Learning Architecture (SONA)
User Interaction → Pattern Recognition → MicroLoRA Adaptation → Improved Model
↓
EWC++ Memory
(Prevents Forgetting)
Swarm Coordination
- Multiple RuvLTRA instances collaborating
- Shared learning across agents via HNSW (150x-12,500x faster)
- Byzantine fault-tolerant coordination
Hardware Efficiency
- Runs on 1GB RAM minimum
- Supports Apple Neural Engine, Metal, CUDA
- Edge-ready (Raspberry Pi compatible)
🔧 Code Changes
crates/ruvllm/src/hub/mod.rs- AddedHUGGINGFACE_API_KEYenv var supportcrates/ruvllm/src/hub/registry.rs- Updated to point toruv/ruvltraexamples/ruvLLM/src/bin/export.rs- Consistent token handlingdocs/adr/ADR-013-huggingface-publishing.md- Publishing strategy
📊 Quick Usage
use ruvllm::hub::ModelDownloader;
let path = ModelDownloader::new()
.download("ruv/ruvltra", None)
.await?;📋 Checklist
- Create consolidated
ruv/ruvltrarepository - Upload GGUF model files (Claude Code, Small, Medium)
- Create premium model card with badges, tutorials, architecture
- Update code to support HUGGINGFACE_API_KEY
- Update registry to point to ruv/ruvltra
- Create ADR-013 for publishing strategy
- Add Q8 quantization variants (future)
- Add larger model variants (3B, 7B)
- Automated CI/CD publishing
🔗 Links
- HuggingFace: https://huggingface.co/ruv/ruvltra
- ADR: docs/adr/ADR-013-huggingface-publishing.md
Metadata
Metadata
Assignees
Labels
documentationImprovements or additions to documentationImprovements or additions to documentationenhancementNew feature or requestNew feature or request