
MPI Refactor #831

Draft: wants to merge 24 commits into master

Conversation

@wilfonba (Contributor) commented May 8, 2025

Description

This PR refactors much of the MPI code to eliminate duplication and shorten the MPI-related portions of the codebase. Significant testing is needed to verify the correctness of the changes, but I'm opening this as a draft now so that people know what is changing and can start reviews and make suggestions early. A rough sketch of the consolidation idea follows.
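To make the intent concrete, here is a minimal, self-contained sketch of the pattern being pursued. It is not MFC code, and every name in it (exchange_halo, nbuf, and so on) is hypothetical: one generic halo-exchange routine, parameterized by the buffers and the neighbor ranks, stands in for a family of near-identical per-direction routines.

```fortran
! Hypothetical illustration only -- not MFC source. One generic routine
! handles the halo exchange that would otherwise be duplicated per
! coordinate direction and per physics module.
program halo_exchange_sketch
    use mpi
    implicit none

    integer, parameter :: nbuf = 8            ! illustrative halo buffer size
    integer :: ierr, rank, nranks, dir
    real(8) :: send_buf(nbuf), recv_buf(nbuf)

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

    send_buf = real(rank, 8)
    recv_buf = 0d0

    ! The same routine is reused for every direction; only the neighbor
    ! ranks passed in would differ in a real domain decomposition.
    do dir = 1, 3
        call exchange_halo(send_buf, recv_buf, nbuf, &
                           mod(rank + 1, nranks), &
                           mod(rank - 1 + nranks, nranks))
    end do

    call MPI_Finalize(ierr)

contains

    ! Generic halo exchange: the MPI calls (and, in a real code, the
    ! pack/unpack of ghost cells) live in one place.
    subroutine exchange_halo(sbuf, rbuf, n, dest, src)
        real(8), intent(in)  :: sbuf(:)
        real(8), intent(out) :: rbuf(:)
        integer, intent(in)  :: n, dest, src
        integer :: ierr

        call MPI_Sendrecv(sbuf, n, MPI_DOUBLE_PRECISION, dest, 0, &
                          rbuf, n, MPI_DOUBLE_PRECISION, src, 0, &
                          MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
    end subroutine exchange_halo

end program halo_exchange_sketch
```

Judging from the files touched (for example src/common/m_mpi_common.fpp in the coverage report below), the shared exchange logic appears to be centralized in common code; the sketch above only illustrates the parameterize-instead-of-duplicate pattern and does not reflect the PR's actual interfaces.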

Type of change


  • Something else

Scope

  • This PR comprises a set of related changes with a common goal

How Has This Been Tested?

Black lines in all videos mark processor boundaries and ghost-cell regions. A representative run command for these one-rank vs. multi-rank comparisons is noted after the list.

  • 2D advection one rank vs. multi-rank comparison (for good measure) -- This test uses a slightly modified version of the examples/2D_advection case file. It is run on 1 and 4 ranks with A100 GPUs. The video shows the advection of the volume fraction across MPI boundaries.
test.mp4
  • 3D advection one rank vs. multi-rank comparison (for good measure) -- This test uses a 3D analog of the examples/2D_advection case file. It is run on 1 and 8 ranks with A100 GPUs. The video shows a volume-fraction contour advecting through MPI boundaries for the 1- and 8-rank cases. The half of the sphere from the one-rank simulation is shown in red, and the half from the eight-rank simulation is in blue.
test.mp4
  • 3D IBM one rank vs. multi-rank comparison (simple geometry) -- This test uses a modified version of the examples/3D_ibm_bowshock case file (the sphere is moved so that it sits at a processor boundary). It is run with 1 and 8 ranks on A100 GPUs.

  • 2D IBM one rank vs. multi-rank comparison (simple geometry) -- This test uses a modified version of the examples/2D_ibm_cfl_dt case file (the circle is moved to a processor boundary and to break symmetry). It is run with 1 and 4 ranks on A100 GPUs. The video shows the density around a simple immersed boundary located at processor boundaries.

  • 3D IBM one rank vs. multi-rank comparison (complex geometry) -- This test uses the 3D_ibm_stl_ellipsoid case. It is run with 1 and 8 MPI ranks on A100 GPUs.

  • 2D IBM one rank vs. multi-rank comparison (complex geometry) -- This test uses a modified version of the examples/2D_ibm_stl_MFCCharacter case (the domain extents are changed to obtain a 2D MPI decomposition). It is run with 1 and 4 ranks on A100 GPUs.

  • 3D surface tension one rank vs. multi-rank comparison -- This test uses a modified version of the examples/3D_recovering_sphere case (symmetry is removed so that the color function undergoes a meaningful halo exchange, and the square is moved off center). It is run with 1 and 8 ranks on A100 GPUs. This video shows slices of the color function in all three dimensions with 1 and 4 ranks.

test.mp4
  • 2D surface tension one rank vs. multi-rank comparison -- This test uses a 2D analog of the examples/3D_recovering_sphere case with the square off center. It is run on 1 and 4 ranks with A100 GPUs. The video shows the volume fraction and color function with 1 and 4 ranks.
test.mp4
  • 3D QBMM one rank vs. multi-rank comparison -- This case is adapted from /examples/1D_qbmm. A high-pressure region is placed off-center in the middle of the bubble cloud to break symmetry. The video shows nV003 along three slices across the domain for the one- and eight-rank cases on A100 GPUs.
test.mp4
  • 2D QBMM one rank vs. multi-rank comparison -- This case is adapted from /examples/1D_qbmm. Two high-pressure regions are added to create blast waves and break symmetry. The video shows pressure and nV001 for the one- and four-rank cases on A100 GPUs.
test.mp4
  • EL Bubbles simulation and post-process verification -- This test uses the /examples/3D_lagrange_bubblescreen case. It is run on 1 and 8 ranks on A100 GPUs. The video shows the void fraction in the bubble cloud through three slices. The left column is one rank, and the right is eight ranks.
test.mp4
  • Verify that the existing 2-MPI-rank golden files are correct
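For anyone reproducing the one-rank vs. multi-rank comparisons above, a typical invocation is of the form ./mfc.sh run XXXX -n 4 --gpu -t simulation, mirroring the profiling command in the checklist below; note that the -n rank-count flag is an assumption here and should be checked against the MFC run documentation, and the one-rank baseline simply drops it.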

Checklist

  • I ran ./mfc.sh format before committing my code
  • New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
  • This PR does not introduce any repeated code (it follows the DRY principle)
  • I cannot think of a way to further condense this code or reduce the introduced line count

If your code changes any source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • Checked that the code compiles using NVHPC compilers
  • Checked that the code compiles using CRAY compilers
  • Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • Ran the code on MI200+ GPUs and ensured the new features performed as expected (the GPU results match the CPU results)
  • Ran an Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain text results to this PR
    MPIRefactor.txt
    https://drive.google.com/file/d/1pmM3s8q2UbqNmLsumdCs12u-6p3Tm_8C/view?usp=sharing
  • Ran an Omniperf profile using ./mfc.sh run XXXX --gpu -t simulation --omniperf, and have attached the output file and plain text results to this PR.
  • Ran my code on various numbers of GPUs (for example 1, 2, and 8) in parallel and made sure that the results scale similarly to runs without the new code/feature


codecov bot commented May 14, 2025

Codecov Report

Attention: Patch coverage is 48.52459% with 314 lines in your changes missing coverage. Please review.

Project coverage is 45.48%. Comparing base (2f8eef1) to head (da744d7).
Report is 3 commits behind head on master.

Files with missing lines                  | Patch % | Lines
src/common/m_mpi_common.fpp               | 41.69%  | 132 Missing and 26 partials ⚠️
src/simulation/m_mpi_proxy.fpp            | 8.00%   | 65 Missing and 4 partials ⚠️
src/common/m_boundary_common.fpp          | 69.15%  | 57 Missing and 9 partials ⚠️
src/post_process/m_data_input.f90         | 62.50%  | 5 Missing and 1 partial ⚠️
src/simulation/m_viscous.fpp              | 0.00%   | 0 Missing and 6 partials ⚠️
src/simulation/m_weno.fpp                 | 0.00%   | 4 Missing ⚠️
src/post_process/m_start_up.f90           | 75.00%  | 1 Missing and 2 partials ⚠️
src/post_process/m_global_parameters.fpp  | 83.33%  | 0 Missing and 1 partial ⚠️
src/simulation/m_ibm.fpp                  | 0.00%   | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #831      +/-   ##
==========================================
+ Coverage   43.47%   45.48%   +2.01%     
==========================================
  Files          68       67       -1     
  Lines       19766    18659    -1107     
  Branches     2375     2248     -127     
==========================================
- Hits         8593     8487     -106     
+ Misses       9726     8812     -914     
+ Partials     1447     1360      -87     

☔ View full report in Codecov by Sentry.