opensource: update Alpaca

The LLaMA PR has been merged into the Hugging Face Transformers library:
huggingface/transformers#21955

cedrickchee committed Mar 17, 2023
1 parent 5abaf88 commit 2f98ee4
Showing 1 changed file with 1 addition and 1 deletion: README.md

```diff
@@ -421,7 +421,7 @@ could only deliver the GPT-NeoX 20B model despite all the free compute, etc.-->
 - An extensible retrieval system enabling you to augment bot responses with information from a document repository, API, or other live-updating information source at inference time, with open-source examples for using Wikipedia or a web search API.
 - A moderation model, fine-tuned from GPT-JT-6B, designed to filter which questions the bot responds to, also available on HuggingFace.
 - They collaborated with the tremendous communities at LAION and friends to create the Open Instruction Generalist (OIG) 43M dataset used for these models.
-- [Alpaca: A Strong Open-Source Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html) by Stanford - Alpaca was fine-tuned from the LLaMA model. Simon Willison wrote about [_Alpaca, and the acceleration of on-device large language model development_](https://simonwillison.net/2023/Mar/13/alpaca/). The team at Stanford just released the [Alpaca training code](https://github.com/tatsu-lab/stanford_alpaca#fine-tuning) for fine-tuning LLaMA with Hugging Face's transformers library. Also, the [PR implementing LLaMA models](https://github.com/huggingface/transformers/pull/21955) support in Hugging Face was approved yesterday.
+- [Alpaca: A Strong Open-Source Instruction-Following Model](https://crfm.stanford.edu/2023/03/13/alpaca.html) by Stanford - Alpaca was fine-tuned from the LLaMA model. Simon Willison wrote about [_Alpaca, and the acceleration of on-device large language model development_](https://simonwillison.net/2023/Mar/13/alpaca/). The team at Stanford just released the [Alpaca training code](https://github.com/tatsu-lab/stanford_alpaca#fine-tuning) for fine-tuning LLaMA with [Hugging Face's transformers library](https://huggingface.co/docs/transformers/main/en/model_doc/llama). ~Also, the [PR implementing LLaMA models](https://github.com/huggingface/transformers/pull/21955) support in Hugging Face was approved yesterday.~
 
 See [cedrickchee/awesome-transformer-nlp](https://github.com/cedrickchee/awesome-transformer-nlp) for more info.
```
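
Since the PR referenced in this commit is now merged, LLaMA checkpoints converted to the Hugging Face format can be loaded with the classes that PR added to `transformers`. A minimal sketch, assuming the original weights have already been converted (the checkpoint path below is a placeholder, not anything from this repo):

```python
# Minimal sketch: load a LLaMA checkpoint using the classes merged in
# huggingface/transformers#21955. Assumes the original weights were already
# converted to the Hugging Face format; "./llama-7b-hf" is a placeholder path.
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "./llama-7b-hf"  # placeholder: your converted checkpoint directory
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path)

# Generate a short completion to confirm the model loaded correctly.
prompt = "Below is an instruction that describes a task."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```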
