Commit message:

* Add initial hf huggingface GPU example
* Small fix
* Add llama3 gpu pytorch model example
* Add llama 3 hf transformers CPU example
* Add llama 3 pytorch model CPU example
* Fixes
* Small fix
* Small fixes
* Small fix
* Small fix
* Add links
* update repo id
* change prompt tuning url
* remove system header if there is no system prompt

Co-authored-by: Yuwen Hu <yuwen.hu@intel.com>
Co-authored-by: Yuwen Hu <54161268+Oscilloscope98@users.noreply.github.com>
1 parent 754b0ff · commit 8153c30 · 10 changed files with 763 additions and 0 deletions.
python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3/README.md (68 additions, 0 deletions)
# Llama3
In this directory, you will find examples of how to apply IPEX-LLM INT4 optimizations to Llama3 models. For illustration purposes, we use [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a reference Llama3 model.

## 0. Requirements
To run these examples with IPEX-LLM, there are some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Llama3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the environment:
```bash
conda create -n llm python=3.11
conda activate llm

pip install --pre --upgrade ipex-llm[all]  # install ipex-llm with the 'all' option

# transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
pip install transformers==4.37.0
```
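
Optionally, you can sanity-check the installation before running anything; the two commands below are a suggested check, not part of the original instructions:

```bash
# confirm that the pinned transformers version and ipex-llm are importable
python -c "import transformers; print(transformers.__version__)"
python -c "import ipex_llm; print('ipex-llm imported successfully')"
```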

### 2. Run
```bash
python ./generate.py --repo-id-or-model-path REPO_ID_OR_MODEL_PATH --prompt PROMPT --n-predict N_PREDICT
```

Arguments info:
- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Llama3 model (e.g. `meta-llama/Meta-Llama-3-8B-Instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'meta-llama/Meta-Llama-3-8B-Instruct'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.
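
For instance, an invocation that spells out all three arguments with their default values could look like this (illustrative only; every argument is optional):

```bash
python ./generate.py \
  --repo-id-or-model-path meta-llama/Meta-Llama-3-8B-Instruct \
  --prompt "What is AI?" \
  --n-predict 32
```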

> **Note**: When loading the model in 4-bit, IPEX-LLM converts linear layers in the model into INT4 format. In theory, a *X*B model saved in 16-bit requires approximately 2*X* GB of memory for loading, and ~0.5*X* GB of memory for further inference. For example, the 8B Instruct model would need roughly 16 GB to load and about 4 GB for inference.
>
> Please select the appropriate size of the Llama3 model based on the capabilities of your machine.
#### 2.1 Client
On a client Windows machine, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py
```

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py
```
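
If you are unsure how many physical cores each socket provides, `lscpu` reports it; this is an optional convenience check rather than part of the original example:

```bash
# look for the "Core(s) per socket" and "Socket(s)" fields
lscpu | grep -E 'Core\(s\) per socket|Socket\(s\)'
```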

#### 2.3 Sample Output
#### [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-------------------- Output (skip_special_tokens=False) --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
A question that gets to the heart of the 21st century!
Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that
```
python/llm/example/CPU/HF-Transformers-AutoModels/Model/llama3/generate.py (81 additions, 0 deletions)
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# You could tune the prompt based on your own model;
# the prompt format here follows https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3
DEFAULT_SYSTEM_PROMPT = """\
"""

def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
               system_prompt: str) -> str:
    prompt_texts = [f'<|begin_of_text|>']

    if system_prompt != '':
        prompt_texts.append(f'<|start_header_id|>system<|end_header_id|>\n{system_prompt}<|eot_id|>')

    for history_input, history_response in chat_history:
        prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n{history_input.strip()}<|eot_id|>')
        prompt_texts.append(f'<|start_header_id|>assistant<|end_header_id|>\n{history_response.strip()}<|eot_id|>')

    prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n{user_input.strip()}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n')
    return ''.join(prompt_texts)
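
# Illustrative note (not part of the original script): with an empty system prompt and
# a one-turn history, e.g.
#     get_prompt("Tell me more.", [("What is AI?", "AI is ...")], system_prompt="")
# the function returns <|begin_of_text|> followed by the user/assistant turns from the
# history (each wrapped in <|start_header_id|>...<|end_header_id|> ... <|eot_id|>), then
# the new user input, and finally an assistant header for the model to continue from.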

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Meta-Llama-3-8B-Instruct",
                        help='The huggingface repo id for the Llama3 model (e.g. `meta-llama/Meta-Llama-3-8B-Instruct`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load the model in 4-bit,
    # which converts the relevant layers in the model into INT4 format
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 load_in_4bit=True,
                                                 optimize_model=True,
                                                 trust_remote_code=True,
                                                 use_cache=True)

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output (skip_special_tokens=False)', '-'*20)
        print(output_str)
python/llm/example/CPU/PyTorch-Models/Model/llama3/README.md (68 additions, 0 deletions)
# Llama3
In this directory, you will find examples of how to use the IPEX-LLM `optimize_model` API to accelerate Llama3 models. For illustration purposes, we use [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a reference Llama3 model.

## Requirements
To run these examples with IPEX-LLM, there are some recommended requirements for your machine; please refer to [here](../README.md#recommended-requirements) for more information.

## Example: Predict Tokens using `generate()` API
In the example [generate.py](./generate.py), we show a basic use case for a Llama3 model to predict the next N tokens using the `generate()` API, with IPEX-LLM INT4 optimizations.
### 1. Install
We suggest using conda to manage the Python environment. For more information about conda installation, please refer to [here](https://docs.conda.io/en/latest/miniconda.html#).

After installing conda, create a Python environment for IPEX-LLM:
```bash
conda create -n llm python=3.11  # Python 3.11 is recommended
conda activate llm

pip install --pre --upgrade ipex-llm[all]  # install the latest ipex-llm nightly build with the 'all' option

# transformers>=4.33.0 is required for Llama3 with IPEX-LLM optimizations
pip install transformers==4.37.0
```

### 2. Run
After setting up the Python environment, you can run the example with the following steps.

#### 2.1 Client
On client Windows machines, it is recommended to run directly with full utilization of all cores:
```powershell
python ./generate.py --prompt 'What is AI?'
```
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.2 Server
For optimal performance on a server, it is recommended to set several environment variables (refer to [here](../README.md#best-known-configuration-on-linux) for more information) and run the example with all the physical cores of a single socket.

E.g. on Linux,
```bash
# set IPEX-LLM env variables
source ipex-llm-init

# e.g. for a server with 48 cores per socket
export OMP_NUM_THREADS=48
numactl -C 0-47 -m 0 python ./generate.py --prompt 'What is AI?'
```
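
To see which NUMA nodes and CPU ranges are available for the `numactl -C`/`-m` binding, `numactl --hardware` lists them; this is an optional check, not part of the original example:

```bash
# list NUMA nodes with their CPUs and memory to pick values for -C and -m
numactl --hardware
```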
More information about arguments can be found in the [Arguments Info](#23-arguments-info) section. The expected output can be found in the [Sample Output](#24-sample-output) section.

#### 2.3 Arguments Info
In the example, several arguments can be passed to satisfy your requirements:

- `--repo-id-or-model-path REPO_ID_OR_MODEL_PATH`: argument defining the Hugging Face repo id for the Llama3 model (e.g. `meta-llama/Meta-Llama-3-8B-Instruct`) to be downloaded, or the path to the Hugging Face checkpoint folder. It defaults to `'meta-llama/Meta-Llama-3-8B-Instruct'`.
- `--prompt PROMPT`: argument defining the prompt to be inferred (with the integrated prompt format for chat). It defaults to `'What is AI?'`.
- `--n-predict N_PREDICT`: argument defining the max number of tokens to predict. It defaults to `32`.

#### 2.4 Sample Output
#### [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
```log
Inference time: xxxx s
-------------------- Prompt --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
-------------------- Output (skip_special_tokens=False) --------------------
<|begin_of_text|><|start_header_id|>user<|end_header_id|>
What is AI?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
A question that gets to the heart of the 21st century!
Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that
```
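
The sample output keeps the chat-template special tokens because `generate.py` decodes with `skip_special_tokens=False`. If you prefer plain text, the decode call in the script can be changed as sketched below (a minor variation, not part of the original example):

```python
# decode without <|begin_of_text|>, <|eot_id|> and the header special tokens
output_str = tokenizer.decode(output[0], skip_special_tokens=True)
```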
python/llm/example/CPU/PyTorch-Models/Model/llama3/generate.py (82 additions, 0 deletions)
#
# Copyright 2016 The BigDL Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

import torch
import time
import argparse

from transformers import AutoModelForCausalLM, AutoTokenizer
from ipex_llm import optimize_model

# You could tune the prompt based on your own model;
# the prompt format here follows https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3
DEFAULT_SYSTEM_PROMPT = """\
"""

def get_prompt(user_input: str, chat_history: list[tuple[str, str]],
               system_prompt: str) -> str:
    prompt_texts = [f'<|begin_of_text|>']

    if system_prompt != '':
        prompt_texts.append(f'<|start_header_id|>system<|end_header_id|>\n{system_prompt}<|eot_id|>')

    for history_input, history_response in chat_history:
        prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n{history_input.strip()}<|eot_id|>')
        prompt_texts.append(f'<|start_header_id|>assistant<|end_header_id|>\n{history_response.strip()}<|eot_id|>')

    prompt_texts.append(f'<|start_header_id|>user<|end_header_id|>\n{user_input.strip()}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n')
    return ''.join(prompt_texts)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Predict Tokens using `generate()` API for Llama3 model')
    parser.add_argument('--repo-id-or-model-path', type=str, default="meta-llama/Meta-Llama-3-8B-Instruct",
                        help='The huggingface repo id for the Llama3 model (e.g. `meta-llama/Meta-Llama-3-8B-Instruct`) to be downloaded'
                             ', or the path to the huggingface checkpoint folder')
    parser.add_argument('--prompt', type=str, default="What is AI?",
                        help='Prompt to infer')
    parser.add_argument('--n-predict', type=int, default=32,
                        help='Max tokens to predict')

    args = parser.parse_args()
    model_path = args.repo_id_or_model_path

    # Load model
    model = AutoModelForCausalLM.from_pretrained(model_path,
                                                 trust_remote_code=True,
                                                 torch_dtype='auto',
                                                 low_cpu_mem_usage=True,
                                                 use_cache=True)

    # Enable IPEX-LLM optimization on the model with only one line of code
    model = optimize_model(model)
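    # Hedged note (not part of the original example): depending on the IPEX-LLM
    # version, optimize_model may also accept a low_bit argument, e.g.
    # optimize_model(model, low_bit="sym_int4"), to select the quantization
    # precision; the call above relies on the default INT4 optimization.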

    # Load tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

    # Generate predicted tokens
    with torch.inference_mode():
        prompt = get_prompt(args.prompt, [], system_prompt=DEFAULT_SYSTEM_PROMPT)
        input_ids = tokenizer.encode(prompt, return_tensors="pt")
        st = time.time()
        output = model.generate(input_ids,
                                max_new_tokens=args.n_predict)
        end = time.time()
        output_str = tokenizer.decode(output[0], skip_special_tokens=False)
        print(f'Inference time: {end-st} s')
        print('-'*20, 'Prompt', '-'*20)
        print(prompt)
        print('-'*20, 'Output (skip_special_tokens=False)', '-'*20)
        print(output_str)