
Commit 364aa18

feat: create an azure-ml pipeline for eval_prompts()
1 parent e7c86f5 commit 364aa18

File tree

1 file changed: +32 −0 lines

azureml/eval_prompts.yml

Lines changed: 32 additions & 0 deletions
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: >
  python -m autora.doc.pipelines.main eval-prompts
  ${{inputs.data_dir}}/data.jsonl
  ${{inputs.data_dir}}/all_prompt.json
  --model-path ${{inputs.model_path}}
  --param do_sample=${{inputs.do_sample}}
  --param temperature=${{inputs.temperature}}
  --param top_k=${{inputs.top_k}}
  --param top_p=${{inputs.top_p}}
code: ../src
inputs:
  data_dir:
    type: uri_folder
    path: azureml://datastores/workspaceblobstore/paths/data/sweetpea/
  # Currently models are loading faster directly from HuggingFace vs Azure Blob Storage
  # model_dir:
  #   type: uri_folder
  #   path: azureml://datastores/workspaceblobstore/paths/base_models
  model_path: meta-llama/Llama-2-7b-chat-hf
  temperature: 0.01
  do_sample: 0
  top_p: 0.95
  top_k: 1
# using a curated environment doesn't work because we need additional packages
environment: # azureml://registries/azureml/environments/acpt-pytorch-2.0-cuda11.7/versions/21
  image: mcr.microsoft.com/azureml/curated/acpt-pytorch-2.0-cuda11.7:21
  conda_file: conda.yml
display_name: autodoc_multi_prompts_prediction
compute: azureml:v100cluster
experiment_name: evaluation_multi_prompts
description: Run code-to-documentation generation on data_file for each prompt in prompts_file
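A job spec like this would typically be submitted with the Azure ML CLI v2 (the `ml` extension for `az`); the workspace and resource group names below are placeholders, not part of this commit:

  # Submit the command job defined in azureml/eval_prompts.yml
  az ml job create --file azureml/eval_prompts.yml \
      --workspace-name <workspace> --resource-group <resource-group>

  # Individual inputs can be overridden at submission time, e.g. to change the sampling temperature:
  az ml job create --file azureml/eval_prompts.yml \
      --workspace-name <workspace> --resource-group <resource-group> \
      --set inputs.temperature=0.7

This is a sketch of typical usage, not part of the committed file.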

0 commit comments
