From 38e116860851979f9a9c25dfa395e45555cb7206 Mon Sep 17 00:00:00 2001
From: Lucy Qiu
Date: Wed, 5 Jun 2024 15:27:38 -0700
Subject: [PATCH] Update llama readme, use main branch for llama3

Summary:
Use main branch for llama3. Looks like the warning was removed in D56358723?
See: https://github.com/pytorch/executorch/issues/3850

Differential Revision: D58212855
---
 examples/models/llama2/README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/examples/models/llama2/README.md b/examples/models/llama2/README.md
index 247575817d..56ab9f7186 100644
--- a/examples/models/llama2/README.md
+++ b/examples/models/llama2/README.md
@@ -1,5 +1,5 @@
 # Summary
-This example demonstrates how to run a [Llama 2] (https://llama.meta.com/llama2/) 7B or [Llama 3](https://ai.meta.com/llama/) 8B model on mobile via ExecuTorch. We use XNNPACK to accelerate the performance and 4-bit groupwise PTQ quantization to fit the model on a phone.
+This example demonstrates how to run a [Llama 2](https://llama.meta.com/llama2/) 7B or [Llama 3](https://ai.meta.com/llama/) 8B model on mobile via ExecuTorch. We use XNNPACK to accelerate the performance and 4-bit groupwise PTQ quantization to fit the model on a phone.
 
 For more details, see [Llama 2 repo](https://github.com/facebookresearch/llama) or [Llama 3 repo](https://github.com/facebookresearch/llama3).
 
@@ -107,6 +107,7 @@ If you want to deploy and run a smaller model for educational purposes. From `ex
 
 ### Option C: Download and export Llama 3 8B instruct model
 You can export and run the original Llama 3 8B instruct model.
+> :warning: **use the main branch**: Llama 3 is only supported on the ExecuTorch main branch (not release 2.0)
 
 1. Llama 3 pretrained parameters can be downloaded from [Meta's official Llama 3 repository](https://github.com/meta-llama/llama3/).