Update Nemo Megatron #138

Merged 7 commits on Feb 9, 2023
examples/nemo/multi_node.yaml: 30 changes (21 additions, 9 deletions)
@@ -1,20 +1,32 @@
-run_name: # run-name-here
-cluster: # mcloud-cluster-name-here
+run_name: nemo-megatron-gpt-124m-gpu-16
+cluster: r0z0 # Update with your cluster here!
 gpu_num: 16
-image: nvcr.io/nvidia/nemo:22.09
+
+# For the latest NeMo container version, see https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo
+image: nvcr.io/nvidia/nemo:22.11
 
 env_variables:
-- key: PYTHONUNBUFFERED
-  value: '1'
-command: |
+  # Configure Python to not buffer stdout and stderr, so output shows up in console immediately
+- key: PYTHONUNBUFFERED
+  value: '1'
+
+integrations:
+- integration_type: apt_packages
+  # Install parallel to launch multiple processes per node with rank per process
+  packages:
+  - parallel
+
+command: |
   # getting the vocab, merge files for the tokenizer
   wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
   wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
 
-  apt update
-  apt install -y parallel
+  # Make sure to prepare and download the training data, as defined in NeMo documentation:
+  # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/gpt/gpt_training.html
 
-  seq 0 8 | parallel -u \ 'CUDA_VISIBLE_DEVICES={} RANK=$(( $NODE_RANK * 8 + {} )) python3 examples/nlp/language_modeling/megatron_gpt_pretraining.py \
+  # Make sure to update the training dataset path below
+  seq 0 7 | parallel -u \ 'CUDA_VISIBLE_DEVICES={} RANK=$(( $NODE_RANK * 8 + {} )) \
+  python3 examples/nlp/language_modeling/megatron_gpt_pretraining.py \
   --config-path=/workspace/nemo/examples/nlp/language_modeling/conf/ \
   --config-name=megatron_gpt_config.yaml \
   model.data.data_prefix=[1.0,/your_dataset_path_here/] \
... (remaining lines not shown)
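
One change above is worth unpacking: the old launch line used seq 0 8, which emits nine values (0 through 8) and therefore started nine processes on an eight-GPU node, while seq 0 7 starts exactly one process per GPU, with GNU parallel substituting the GPU index for {} and the global RANK derived from NODE_RANK. The sketch below unrolls that line into a plain bash loop, for illustration only; it assumes 8 GPUs per node, a NODE_RANK variable set by the platform, and the same paths as the YAML above.

  #!/usr/bin/env bash
  # Illustrative unrolling of `seq 0 7 | parallel -u '...'`: one training process per GPU.
  # Assumes 8 GPUs per node and NODE_RANK exported by the platform (0 on the first node, 1 on the second, ...).
  set -euo pipefail

  GPUS_PER_NODE=8
  for LOCAL_RANK in $(seq 0 $((GPUS_PER_NODE - 1))); do
    CUDA_VISIBLE_DEVICES=${LOCAL_RANK} \
    RANK=$(( NODE_RANK * GPUS_PER_NODE + LOCAL_RANK )) \
    python3 examples/nlp/language_modeling/megatron_gpt_pretraining.py \
      --config-path=/workspace/nemo/examples/nlp/language_modeling/conf/ \
      --config-name=megatron_gpt_config.yaml \
      model.data.data_prefix=[1.0,/your_dataset_path_here/] &   # one rank per GPU, in the background
  done
  wait   # block until all local ranks on this node have exited

With two nodes, NODE_RANK 0 and 1 give global ranks 0 through 15, covering all 16 GPUs requested by this config.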
examples/nemo/single_node.yaml: 22 changes (15 additions, 7 deletions)
@@ -1,16 +1,24 @@
-run_name: # run-name-here
-cluster: # mcloud-cluster-name-here
+run_name: nemo-megatron-gpt-124m-gpu-8
+cluster: r0z0 # Update with your cluster here!
 gpu_num: 8
-image: nvcr.io/nvidia/nemo:22.09
+
+# For the latest NeMo container version, see https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo
+image: nvcr.io/nvidia/nemo:22.11
 
 env_variables:
-- key: PYTHONUNBUFFERED
-  value: '1'
-command: |
-  # getting the vocab, merge files for the tokenizer
+  # Configure Python to not buffer stdout and stderr, so output shows up in console immediately
+- key: PYTHONUNBUFFERED
+  value: '1'
+
+command: |
+  # Getting the tokenizer vocab and merge files
   wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-vocab.json
   wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-merges.txt
 
+  # Make sure to prepare and download the training data, as defined in NeMo documentation:
+  # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/gpt/gpt_training.html
+
+  # Make sure to update the training dataset path below
   python3 examples/nlp/language_modeling/megatron_gpt_pretraining.py \
   --config-path=/workspace/nemo/examples/nlp/language_modeling/conf/ \
   --config-name=megatron_gpt_config.yaml \
... (remaining lines not shown)
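
Neither YAML shows how a run is actually submitted to the cluster. Assuming the usual MosaicML CLI workflow for these mcloud configs (an assumption; the CLI is not part of this diff), submission would look roughly like the following.

  # Assumed submission flow with the MosaicML CLI; not part of this diff.
  pip install mosaicml-cli                       # install the mcli client (assumed package name)
  mcli run -f examples/nemo/single_node.yaml     # launch the single-node (8-GPU) run
  mcli run -f examples/nemo/multi_node.yaml      # launch the multi-node (16-GPU) run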