Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
Upload your data → Get a fine-tuned SLM. Free.
Automatically exported from code.google.com/p/audiotools
Azure IoT Operations with Edge AI in-a-Box
Fine-tuning open-source large and small instruct/chat language models (LLMs & SLMs) from the Hugging Face Model Hub using public datasets.
Tool-calling AI agents that run locally.
The core code used by the 'Zest' Natural Language to CLI commands tool from Spicy Lemonade
Kurtis is a fine-tuning, inference and evaluation tool built for SLMs (Small Language Models), such as Hugging Face's SmolLM2.
🧳 SayHalo – AI-powered (gen-AI) SLM aggregator for a seamless chat experience. 💬🤖 Built with Next.js, TypeScript, Tailwind & Resend. ⚡✨ Early access available now! 🔥
This repository documents the journey of building a Small Language Model (SLM) from the ground up using Python and PyTorch. The model is a decoder-only Transformer, trained on a custom corpus of academic papers and articles focused on artificial intelligence theory.
[ICLR 2026 Accepted paper] BeyondBench: Contamination-Resistant Evaluation of Reasoning in Language Models
Fine-tuned TinyLlama-1.1B (Decoder-Only) via 3-phase training (domain pretraining → instruction tuning → DPO) and T5-small (Encoder-Decoder) for summarization — both using LoRA.
Accoutre aims to equip SLMs with tools and measure the gains: a zero-build playground for skill orchestration and benchmarking.