finetuning
Here are 300 public repositories matching this topic...
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods, covering single- and multi-node GPU setups. Supports default & custom datasets for applications such as summarization and Q&A, plus a number of inference solutions such as HF TGI and vLLM for local or cloud deployment. Demo apps showcase Meta Llama for WhatsApp & Messenger.
Updated Nov 1, 2024 - Jupyter Notebook
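To make the PEFT approach in the entry above concrete, here is a minimal LoRA-style fine-tuning sketch using Hugging Face transformers, peft, and datasets. The checkpoint, dataset, and hyperparameters are illustrative assumptions, not the repository's defaults.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; gated, requires access
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters so only a small set of parameters trains.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

dataset = load_dataset("samsum", split="train")  # assumed summarization dataset

def tokenize(batch):
    text = [d + "\nSummary: " + s for d, s in zip(batch["dialogue"], batch["summary"])]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()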
H2O LLM Studio - a framework and no-code GUI for fine-tuning LLMs. Documentation: https://docs.h2o.ai/h2o-llmstudio/
Updated Nov 1, 2024 - Python
A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
Updated Nov 1, 2024 - Jupyter Notebook
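A minimal sketch of the AutoML-and-tuning workflow the entry above describes, using the FLAML AutoML class; the dataset and time budget are illustrative assumptions.

from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
# Search over learners and hyperparameters within the given time budget.
automl.fit(X_train, y_train, task="classification", time_budget=60, metric="accuracy")

print(automl.best_estimator, automl.best_config)
print("test accuracy:", accuracy_score(y_test, automl.predict(X_test)))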
Efficient Triton Kernels for LLM Training
Updated Nov 2, 2024 - Python
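For context on what a Triton kernel looks like, here is a minimal element-wise addition kernel in the style of the Triton tutorials; it is a generic sketch, not one of the repository's own fused LLM-training kernels.

import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

a = torch.randn(4096, device="cuda")
b = torch.randn(4096, device="cuda")
assert torch.allclose(add(a, b), a + b)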
Interact with your SQL database in natural language: LLMs translate questions into SQL queries.
Updated Jul 24, 2024 - Python
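A generic sketch of the natural-language-to-SQL pattern: put the schema in the prompt, ask an LLM for a query, then run it. The model name, schema, and database are illustrative assumptions and this is not the repository's own API.

import sqlite3
from openai import OpenAI

conn = sqlite3.connect("shop.db")  # assumed example database
schema = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, created_at TEXT);"
conn.execute(schema)

client = OpenAI()
question = "What is the total revenue per customer?"
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[
        {"role": "system",
         "content": f"Translate questions into SQLite SQL. Schema:\n{schema}\nReturn only SQL."},
        {"role": "user", "content": question},
    ],
)
sql = resp.choices[0].message.content.strip()
print(sql)
# Review generated SQL before executing it against real data.
for row in conn.execute(sql):
    print(row)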
A PyTorch Library for Meta-learning Research
Updated Jun 7, 2024 - Python
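To illustrate the meta-learning pattern behind the entry above, here is a compact MAML-style inner/outer loop written in plain PyTorch; the sine-regression task and all hyperparameters are illustrative assumptions, and this does not use the repository's own API.

import torch

# A tiny regression model expressed functionally so we can forward-pass with
# adapted ("fast") weights during the inner loop.
def forward(params, x):
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

params = [torch.randn(1, 40, requires_grad=True),
          torch.zeros(40, requires_grad=True),
          torch.randn(40, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

def sample_task():
    amp = torch.rand(1) * 4 + 0.1
    phase = torch.rand(1) * 3.14
    def draw(n):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw

for step in range(1000):
    meta_opt.zero_grad()
    for _ in range(4):  # tasks per meta-batch
        task = sample_task()
        x_s, y_s = task(10)
        x_q, y_q = task(10)
        # Inner loop: one gradient step on the support set; create_graph=True
        # keeps the graph so the outer loss can differentiate through adaptation.
        inner_loss = torch.nn.functional.mse_loss(forward(params, x_s), y_s)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loop: evaluate the adapted weights on the query set and
        # accumulate meta-gradients into the original parameters.
        torch.nn.functional.mse_loss(forward(fast, x_q), y_q).backward()
    meta_opt.step()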
Build, customize, and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our Discord community: https://discord.gg/TgHXuSJEk6
Updated Sep 23, 2024 - Python
Curated tutorials and resources for Large Language Models, Text2SQL, Text2DSL, Text2API, Text2Vis, and more.
Updated Oct 28, 2024
🎯 Task-oriented embedding tuning for BERT, CLIP, etc.
Updated Mar 11, 2024 - Python
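A minimal sketch of task-oriented embedding tuning via a contrastive objective, shown here with the sentence-transformers library rather than the repository's own API; the base model and toy training pairs are illustrative assumptions.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # assumed base model

# Positive (query, relevant document) pairs; other items in the batch act as negatives.
train_examples = [
    InputExample(texts=["how to reset my password", "Password reset instructions"]),
    InputExample(texts=["refund policy", "Returns and refunds are accepted within 30 days"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

# Pull matching pairs together in embedding space for the retrieval task.
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("tuned-embedding-model")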
The easiest and laziest way to build multi-agent LLM applications.
Updated Nov 1, 2024 - Python
Toolkit for fine-tuning, ablating and unit-testing open-source LLMs.
Updated Oct 25, 2024 - Python
Web UI for using and fine-tuning XTTS.
Updated Oct 17, 2024 - Python
A high-quality, stable OpenAI API endpoint for enterprises and developers. An OpenAI API proxy supporting ChatGPT API calls and the OpenAI API, including gpt-4 and gpt-3.5. No OpenAI key, OpenAI account, or USD bank card required; just call it directly. Stable and easy to use! 智增增
Updated Oct 15, 2024 - PHP
Fine-tuning large language models for GDScript generation.
Updated May 26, 2023 - Python
[IJCAI 2023 survey track] A curated list of resources for chemical pre-trained models
Updated Jun 17, 2023
Guide: Fine-tune GPT-2 XL (1.5 billion parameters) and GPT-Neo (2.7B) on a single GPU with Hugging Face Transformers and DeepSpeed
Updated Jun 14, 2023 - Python
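A sketch of the general pattern such a guide covers: a ZeRO/CPU-offload DeepSpeed config paired with the Hugging Face Trainer so a billion-parameter model fits on a single GPU. The config values and arguments are illustrative assumptions, not the guide's own settings.

import json
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

ds_config = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},  # push optimizer state to CPU RAM
    },
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f)

model = AutoModelForCausalLM.from_pretrained("gpt2-xl")
tokenizer = AutoTokenizer.from_pretrained("gpt2-xl")

args = TrainingArguments(
    output_dir="gpt2-xl-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,
    deepspeed="ds_config.json",  # hand the config to the Trainer
)
# trainer = Trainer(model=model, args=args, train_dataset=...)  # dataset omitted here
# Launch with the DeepSpeed launcher, e.g.: deepspeed train.py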
End-to-End recipes for pre-training and fine-tuning BERT using Azure Machine Learning Service
Updated Jun 12, 2023 - Jupyter Notebook
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
Updated Nov 2, 2024 - Go