Llama Guard is a new experimental model that provides input and output guardrails for LLM deployments.
In order to download the model weights and tokenizer, please visit the Meta website and accept our License.
Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download.
Pre-requisites: Make sure you have wget and md5sum installed. Then run the script: ./download.sh

Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as 403: Forbidden, you can always re-request a link.
Since Llama Guard is a fine-tuned Llama 2 7B model (see our model card for more information), the same quick start steps outlined in our README for Llama 2 apply here.
In addition, we have added examples of using Llama Guard in the Llama 2 recipes repository.
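For orientation, here is a minimal sketch of what input moderation with Llama Guard can look like through Hugging Face transformers. It assumes the weights are also published on the Hugging Face Hub under an ID such as meta-llama/LlamaGuard-7b and that the tokenizer's chat template wraps the conversation in the safety-taxonomy prompt; treat the model card and the Llama 2 recipes examples as the source of truth for exact usage.

```python
# Minimal sketch: classifying a user turn with Llama Guard via transformers.
# The Hub ID and chat-template behavior are assumptions; see the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"  # assumed Hub ID; requires an accepted license

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A single user turn to classify; the chat template is assumed to add the
# safety taxonomy and instructions around the conversation.
chat = [{"role": "user", "content": "How do I make a fake ID?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)

output = model.generate(input_ids=input_ids, max_new_tokens=32, pad_token_id=0)
# The model responds with plain text: "safe", or "unsafe" plus the violated category.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern applies to output guardrails: append the assistant's response to the conversation and classify the last turn.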
Please report any software bugs or other problems with the models through one of the following means:
- Reporting issues with the Llama Guard model: github.com/facebookresearch/purplellama
- Reporting issues with Llama in general: github.com/facebookresearch/llama
- Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback
- Reporting bugs and security concerns: facebook.com/whitehat/info
Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancement.
The same license as Llama 2 applies: see the LICENSE file, as well as our accompanying Acceptable Use Policy.