tests: Test that MPI can access view data #130

Merged
merged 7 commits into kokkos:develop from test/mpi-view-access on Feb 19, 2025

Conversation

@cwpearson (Collaborator) commented Dec 18, 2024

Merge these first, and rebase

When a GPU backend is enabled, this serves as a basic GPU-aware MPI test.
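
For reference, here is a minimal sketch of what such a test could look like. It is illustrative only, not the exact test added in this PR; the view size, value pattern, and checks are assumptions. Rank 0 fills a Kokkos::View and sends its raw data to rank 1, which receives into its own view and verifies the contents. With a GPU backend the views live in device memory, so this only passes when the MPI library is GPU-aware.

#include <mpi.h>
#include <Kokkos_Core.hpp>
#include <gtest/gtest.h>

TEST(MpiViewAccess, Basic) {
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  ASSERT_GE(size, 2);

  constexpr int n = 1024;  // illustrative size
  Kokkos::View<int *> v("v", n);  // default (possibly device) memory space

  if (0 == rank) {
    // Fill the view on the device, then fence before handing the pointer to MPI.
    Kokkos::parallel_for(n, KOKKOS_LAMBDA(const int i) { v(i) = i; });
    Kokkos::fence();
    MPI_Send(v.data(), n, MPI_INT, 1, 0, MPI_COMM_WORLD);
  } else if (1 == rank) {
    MPI_Recv(v.data(), n, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // Count mismatches on the device and check the total on the host.
    int errs = 0;
    Kokkos::parallel_reduce(
        n, KOKKOS_LAMBDA(const int i, int &e) { e += (v(i) != i); }, errs);
    EXPECT_EQ(errs, 0);
  }
}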

On my misconfigured system, running the tests fails like this:

Test command: /usr/bin/mpiexec "-n" "2" "./test-main"
...
3: ./test-main (KokkosComm 0.2.0)
3: [==========] Running 121 tests from 34 test suites.
3: [----------] Global test environment set-up.
3: [----------] 6 tests from TestGtest
3: [ RUN      ] TestGtest.all_fail_nonfatal
3: [       OK ] TestGtest.all_fail_nonfatal (0 ms)
3: [ RUN      ] TestGtest.0_fail_nonfatal
3: [       OK ] TestGtest.0_fail_nonfatal (0 ms)
3: [ RUN      ] TestGtest.1_fail_nonfatal
3: [       OK ] TestGtest.1_fail_nonfatal (0 ms)
3: [ RUN      ] TestGtest.all_fail_fatal
3: [       OK ] TestGtest.all_fail_fatal (0 ms)
3: [ RUN      ] TestGtest.0_fail_fatal
3: [       OK ] TestGtest.0_fail_fatal (0 ms)
3: [ RUN      ] TestGtest.1_fail_fatal
3: [       OK ] TestGtest.1_fail_fatal (0 ms)
3: [----------] 6 tests from TestGtest (0 ms total)
3: 
3: [----------] 1 test from MpiViewAccess
3: [ RUN      ] MpiViewAccess.Basic
3: sending buffer is 0x302010080-0x302810080
3: recving buffer is 0x302010080-0x302810080
3: [s1099653:2904613] Read -1, expected 8388608, errno = 14
3: [s1099653:2904612] *** Process received signal ***
3: [s1099653:2904612] Signal: Segmentation fault (11)
3: [s1099653:2904612] Signal code: Invalid permissions (2)
3: [s1099653:2904612] Failing at address: 0x302010080
3: [s1099653:2904612] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x75092be42520]
3: [s1099653:2904612] [ 1] /lib/x86_64-linux-gnu/libc.so.6(+0x1a67cd)[0x75092bfa67cd]
3: [s1099653:2904612] [ 2] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_btl_vader.so(+0x3244)[0x75092a83f244]
3: [s1099653:2904612] [ 3] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_pml_ob1.so(mca_pml_ob1_send_request_schedule_once+0x1b6)[0x75092a816556]
3: [s1099653:2904612] [ 4] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_pml_ob1.so(mca_pml_ob1_recv_frag_callback_ack+0x201)[0x75092a814811]
3: [s1099653:2904612] [ 5] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_btl_vader.so(mca_btl_vader_poll_handle_frag+0x95)[0x75092a843ae5]
3: [s1099653:2904612] [ 6] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_btl_vader.so(+0x7db1)[0x75092a843db1]
3: [s1099653:2904612] [ 7] /lib/x86_64-linux-gnu/libopen-pal.so.40(opal_progress+0x34)[0x75092c9d0714]
3: [s1099653:2904612] [ 8] /lib/x86_64-linux-gnu/libopen-pal.so.40(ompi_sync_wait_mt+0xbd)[0x75092c9dd38d]
3: [s1099653:2904612] [ 9] /usr/lib/x86_64-linux-gnu/openmpi/lib/openmpi3/mca_pml_ob1.so(mca_pml_ob1_send+0x9f8)[0x75092a813ee8]
3: [s1099653:2904612] [10] /lib/x86_64-linux-gnu/libmpi.so.40(PMPI_Send+0x123)[0x75092e925003]
3: [s1099653:2904612] [11] ./test-main(+0x30628)[0x5e74e8653628]
3: [s1099653:2904612] [12] ./test-main(+0x1d9c5f)[0x5e74e87fcc5f]
3: [s1099653:2904612] [13] ./test-main(+0x1c99e6)[0x5e74e87ec9e6]
3: [s1099653:2904612] [14] ./test-main(+0x1c9c75)[0x5e74e87ecc75]
3: [s1099653:2904612] [15] ./test-main(+0x1cc263)[0x5e74e87ef263]
3: [s1099653:2904612] [16] ./test-main(+0x1cf17c)[0x5e74e87f217c]
3: [s1099653:2904612] [17] ./test-main(+0x1da227)[0x5e74e87fd227]
3: [s1099653:2904612] [18] ./test-main(+0x1c9d60)[0x5e74e87ecd60]
3: [s1099653:2904612] [19] ./test-main(+0x21e3a)[0x5e74e8644e3a]
3: [s1099653:2904612] [20] /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x75092be29d90]
3: [s1099653:2904612] [21] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x75092be29e40]
3: [s1099653:2904612] [22] ./test-main(+0x2c5c5)[0x5e74e864f5c5]
3: [s1099653:2904612] *** End of error message ***
3: --------------------------------------------------------------------------
3: Primary job  terminated normally, but 1 process returned
3: a non-zero exit code. Per user-direction, the job has been aborted.
3: --------------------------------------------------------------------------
3: --------------------------------------------------------------------------
3: mpiexec noticed that process rank 0 with PID 0 on node s1099653 exited on signal 11 (Segmentation fault).
3: --------------------------------------------------------------------------
3/3 Test #3: test-main ........................***Failed    1.91 sec

67% tests passed, 1 tests failed out of 3

@cwpearson force-pushed the test/mpi-view-access branch from 7c0c4ca to 0074ad7 on December 18, 2024 at 22:48
@cedricchevalier19 (Member):

With Open MPI, we can check support using mpi-ext.h: https://www.open-mpi.org/faq/?category=runcuda#mpi-cuda-aware-support.
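
For illustration, a minimal sketch of such a check. It assumes Open MPI; OPEN_MPI, MPIX_CUDA_AWARE_SUPPORT, and MPIX_Query_cuda_support are the macros and query function described in the linked FAQ, and mpi_is_cuda_aware is a hypothetical helper name.

#include <mpi.h>
#if defined(OPEN_MPI) && OPEN_MPI
#include <mpi-ext.h>  // Open MPI extensions; defines MPIX_CUDA_AWARE_SUPPORT when the CUDA extension is present
#endif

// Hypothetical helper: true only when Open MPI was built CUDA-aware and
// also reports CUDA support at runtime.
bool mpi_is_cuda_aware() {
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
  return 1 == MPIX_Query_cuda_support();
#else
  return false;  // not Open MPI, or built without compile-time CUDA support
#endif
}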

@dssgabriel (Collaborator):

MPICH also provides a way to check for CUDA-aware support, but only at runtime (same interface as OpenMPI): https://www.mpich.org/static/docs/v4.3.x/www3/MPIX_Query_cuda_support.html
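
As a sketch, a runtime-only guard inside a test might look like the following. It assumes the MPI headers in use declare MPIX_Query_cuda_support (MPICH declares it directly; Open MPI via mpi-ext.h), and the skip message is illustrative.

// Skip GPU-aware tests when the MPI library reports no CUDA support at runtime.
if (1 != MPIX_Query_cuda_support()) {
  GTEST_SKIP() << "MPI reports no CUDA support; skipping GPU-aware test";
}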

@cwpearson force-pushed the test/mpi-view-access branch from 0074ad7 to 389a971 on January 27, 2025 at 19:22
@cwpearson (Collaborator, Author):

Once #136 goes in, we can optionally test certain extensions depending on the vendor.

@cwpearson force-pushed the test/mpi-view-access branch from 389a971 to 3680d08 on January 28, 2025 at 19:03
@cwpearson (Collaborator, Author) commented Jan 28, 2025

> MPICH also provides a way to check for CUDA-aware support, but only at runtime (same interface as OpenMPI): mpich.org/static/docs/v4.3.x/www3/MPIX_Query_cuda_support.html

I discovered two problems with MPICH on Ubuntu/Debian while working on this:

  1. On 24.04, the package maintainers linked against the Open MPI build of PMIx, so MPICH doesn't work unless the Open MPI package is also installed.
  2. On 22.04, the package maintainers added link-time optimization flags, so the package can't be used with nvcc.

@cwpearson (Collaborator, Author):

I tested this with both Open MPI and MPICH built with CUDA support.

@dssgabriel previously approved these changes Feb 19, 2025
@cwpearson force-pushed the test/mpi-view-access branch from c412caa to 0723a29 on February 19, 2025 at 15:38
@cwpearson (Collaborator, Author):

I'm merging this since it was previously accepted, just split into two PRs and rebased.

@cwpearson merged commit 40395c0 into kokkos:develop on Feb 19, 2025
8 checks passed
@cwpearson deleted the test/mpi-view-access branch on February 19, 2025 at 23:48