1 parent 6372540 commit ded1a6a
examples/Aquila/Aquila-chat/README.md
@@ -131,7 +131,7 @@ python generate_chat_bminf.py
aquila-7b  model name; note it must be lowercase
aquila_experiment  experiment name; can be customized
```
-
+ **To launch deepspeed fine-tuning (using fine-tuning on a single V100 as an example), replace the previous step by running:**

**To launch LoRA fine-tuning (using fine-tuning on a single V100 as an example), replace the previous step by running:**
examples/Aquila/Aquila-chat/aquila_chat.py
@@ -7,7 +7,6 @@
import gc
gc.collect()
torch.cuda.empty_cache()
-import sys;sys.path.append("/data2/yzd/FlagAI/")
from flagai.auto_model.auto_loader import AutoLoader
from flagai.data.tokenizer import Tokenizer
from flagai.env_args import EnvArgs
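The removed line above pinned `sys.path` to a developer-local checkout (`/data2/yzd/FlagAI/`). Below is a minimal sketch of the resulting import section, assuming `flagai` and `torch` are installed in the active environment (for example via `pip install -e .` from the repository root), so no runtime path manipulation is needed; this is an illustration, not the full script.

```python
# Minimal sketch, assuming flagai and torch are installed in the environment
# (e.g. `pip install -e .` from the FlagAI repository root), so the imports
# resolve without any sys.path.append(...) workaround.
import gc

import torch

from flagai.auto_model.auto_loader import AutoLoader
from flagai.data.tokenizer import Tokenizer
from flagai.env_args import EnvArgs

# Release unreferenced Python objects and any cached CUDA memory before the
# script loads the model, mirroring the surrounding lines of aquila_chat.py.
gc.collect()
torch.cuda.empty_cache()
```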