
[Data] Additional args for Data + Train benchmark #37839

Merged

7 commits merged into ray-project:master on Jul 31, 2023

Conversation

scottjlee
Contributor

@scottjlee scottjlee commented Jul 26, 2023

Why are these changes needed?

As a follow-up to #37624, add the following additional parameters for the multi-node training benchmark:

  • File type (image or parquet)
  • Local shuffle buffer size
  • preserve_order (train config)

Also increase the default number of epochs to 10.
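The new parameters above might be wired into the benchmark's CLI roughly as follows. This is a minimal sketch: the flag names and defaults are assumptions for illustration, not the benchmark script's actual interface.

```python
import argparse

def parse_benchmark_args(argv=None):
    # Hypothetical flag names; the real benchmark script's CLI may differ.
    parser = argparse.ArgumentParser(description="Data + Train benchmark")
    parser.add_argument("--file-type", choices=["image", "parquet"], default="image",
                        help="Format of the input dataset files.")
    parser.add_argument("--local-shuffle-buffer-size", type=int, default=None,
                        help="Buffer size for local shuffling when iterating batches.")
    parser.add_argument("--preserve-order", action="store_true",
                        help="Set preserve_order in the train/execution config.")
    parser.add_argument("--num-epochs", type=int, default=10,
                        help="Number of training epochs (default raised to 10).")
    return parser.parse_args(argv)

args = parse_benchmark_args(["--file-type", "parquet", "--num-epochs", "10"])
print(args.file_type, args.num_epochs, args.preserve_order)
```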

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Scott Lee added 3 commits July 26, 2023 14:34
Signed-off-by: Scott Lee <sjl@anyscale.com>
Signed-off-by: Scott Lee <sjl@anyscale.com>
Signed-off-by: Scott Lee <sjl@anyscale.com>
@scottjlee scottjlee marked this pull request as ready for review July 27, 2023 20:18
Scott Lee added 2 commits July 27, 2023 16:20
Signed-off-by: Scott Lee <sjl@anyscale.com>
Signed-off-by: Scott Lee <sjl@anyscale.com>
@scottjlee
Contributor Author

Release test outputs:

  • `read_images_train_4_cpu`:
{"read_image_train_4workers": {"ray_TorchTrainer_fit": {"THROUGHPUT": 5.410256495549528}}, "success": 1}
  • `read_images_train_16_cpu`:
{"read_image_train_16workers": {"ray_TorchTrainer_fit": {"THROUGHPUT": 0.942105386854395}}, "success": 1}
  • `read_images_train_16_cpu_preserve_order`:
{"read_image_train_16workers": {"ray_TorchTrainer_fit": {"THROUGHPUT": 0.939200517950852}}, "success": 1}
  • `read_parquet_train_4_cpu`:
{"read_parquet_train_4workers": {"ray_TorchTrainer_fit": {"THROUGHPUT": 117.9580262429022}}, "success": 1}
  • `read_parquet_train_16_cpu`:
{"read_parquet_train_16workers": {"ray_TorchTrainer_fit": {"THROUGHPUT": 28.508951310541576}}, "success": 1}

Signed-off-by: Scott Lee <sjl@anyscale.com>
@stephanie-wang
Contributor

The changes look good, but aren't the throughputs here a bit odd? I'd expect throughput to go up, not down, when going from 4 to 16 CPUs.
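The posted numbers make the anomaly concrete: for the image-read benchmark, quadrupling the worker count cuts throughput to a fraction of the 4-worker figure instead of scaling it up.

```python
# Throughputs taken from the release test outputs posted above.
tput_4_workers = 5.410256495549528
tput_16_workers = 0.942105386854395

# Ideal scaling from 4 to 16 workers would be roughly 4x;
# instead, throughput drops to a small fraction of the 4-worker run.
ratio = tput_16_workers / tput_4_workers
print(f"16-worker throughput is {ratio:.2f}x the 4-worker throughput")
```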

cluster_env: app_config.yaml
cluster_compute: ../../air_tests/air_benchmarks/compute_gpu_4x4_gce.yaml

- name: read_images_train_16_cpu_preserve_order
cpu -> gpu right?

If yes, can you also update line 6364 (`- name: read_images_train_4_cpu`)?

cluster_env: app_config.yaml
cluster_compute: ../../air_tests/air_benchmarks/compute_gpu_4x4_gce.yaml

- name: read_parquet_train_4_cpu
ditto

cluster_env: app_config.yaml
cluster_compute: ../../air_tests/air_benchmarks/compute_gpu_2x2_gce.yaml

- name: read_parquet_train_16_cpu
ditto

)
result = torch_trainer.fit()

# Report the throughput of the last training epoch.
- metrics["ray.TorchTrainer.fit"] = list(result.metrics_dataframe["tput"])[-1]
+ metrics["ray_TorchTrainer_fit"] = list(result.metrics_dataframe["tput"])[-1]
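Pulling the final epoch's value out of the metrics DataFrame can also be written with direct pandas indexing. A small sketch with a stand-in DataFrame (the real one comes from `Trainer.fit()`):

```python
import pandas as pd

# Stand-in for result.metrics_dataframe produced by the trainer.
metrics_df = pd.DataFrame({"tput": [4.8, 5.1, 5.41]})

# list(df["col"])[-1] works, but .iloc[-1] expresses "last row" directly
# without materializing the whole column as a list.
last_epoch_tput = metrics_df["tput"].iloc[-1]
print(last_epoch_tput)
```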
How is this metrics_dataframe aggregated across training workers? Is it the sum of the individual tput values from each worker?
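The question above is about which aggregation convention the reported number uses. Two common conventions are sketched below with hypothetical per-worker figures; which one the benchmark actually uses is not stated in this thread.

```python
# Hypothetical per-worker throughputs (rows/s) for a 4-worker run.
per_worker_tput = [2.1, 1.9, 2.0, 2.2]

# Convention 1: sum across workers, giving cluster-wide throughput.
cluster_tput = sum(per_worker_tput)

# Convention 2: report only rank 0's value, which undercounts the
# cluster total when all workers process data at a similar rate.
rank0_tput = per_worker_tput[0]

print(cluster_tput, rank0_tput)
```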

Signed-off-by: Scott Lee <sjl@anyscale.com>

@c21 c21 left a comment


LG, let's debug the throughput performance separately.

@c21 c21 merged commit d6f8910 into ray-project:master Jul 31, 2023
NripeshN pushed a commit to NripeshN/ray that referenced this pull request Aug 15, 2023
arvind-chandra pushed a commit to lmco/ray that referenced this pull request Aug 31, 2023
vymao pushed a commit to vymao/ray that referenced this pull request Oct 11, 2023
4 participants