Draft · MPI Refactor #831
wilfonba wants to merge 24 commits into MFlowCode:master from wilfonba:MPIRefactor
Conversation
sbryngelson reviewed May 14, 2025
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@           Coverage Diff            @@
##           master     #831    +/-   ##
========================================
+ Coverage   43.47%   45.48%   +2.01%
========================================
  Files          68       67       -1
  Lines       19766    18659    -1107
  Branches     2375     2248     -127
========================================
- Hits         8593     8487     -106
+ Misses       9726     8812     -914
+ Partials     1447     1360      -87
Description
This PR refactors much of the MPI code to eliminate duplication and shorten the MPI-related portions of the codebase. Significant testing is needed to verify the changes' correctness, but I'm opening this as a draft now so that people know what's being changed and can start reviews and make suggestions early.
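As a rough illustration of the kind of consolidation such a refactor targets, here is a minimal sketch. The routine name s_exchange_halo and its arguments are hypothetical, not actual MFC subroutines; the point is that matched send/receive pairs that would otherwise be duplicated per direction and per variable can collapse into one generic helper.

```fortran
! Hypothetical sketch of a consolidated halo exchange. The name
! s_exchange_halo and its arguments are illustrative only; they do
! not correspond to actual MFC routines.
subroutine s_exchange_halo(buff_send, buff_recv, dest, src, tag, comm)
    use mpi
    implicit none
    real(kind=8), intent(in)  :: buff_send(:)   ! packed ghost-cell data to send
    real(kind=8), intent(out) :: buff_recv(:)   ! packed ghost-cell data to receive
    integer, intent(in) :: dest, src, tag, comm ! neighbor ranks, message tag, communicator
    integer :: ierr

    ! One MPI_Sendrecv replaces the matched send/receive pairs that would
    ! otherwise be repeated for every direction and every variable.
    call MPI_Sendrecv(buff_send, size(buff_send), MPI_DOUBLE_PRECISION, &
                      dest, tag,                                        &
                      buff_recv, size(buff_recv), MPI_DOUBLE_PRECISION, &
                      src, tag, comm, MPI_STATUS_IGNORE, ierr)
end subroutine s_exchange_halo
```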
Type of change
Scope
How Has This Been Tested?
Black lines in all videos show where processor boundaries and ghost cell regions are. A sketch of the run commands used for these comparisons follows the list.
- 2D advection one rank vs. multi-rank comparison -- This test uses the examples/2D_advection case file. It is run on 1 and 4 ranks with A100 GPUs. The video shows the advection of a volume fraction through MPI boundaries. (test.mp4)
- 3D advection one rank vs. multi-rank comparison -- This test uses the examples/2D_advection case file. It is run on 1 and 8 ranks with A100 GPUs. The video shows the advection of a volume fraction contour through MPI boundaries: the advecting sphere with 1 and 8 ranks, where the half of the sphere from the one-rank simulation is shown in red and the half from the eight-rank simulation is in blue. (test.mp4)
- 3D IBM one rank vs. multi-rank comparison (simple geometry) -- This test uses a modified version of the examples/3D_ibm_bowshock case file (the sphere is moved so it sits at a processor boundary). It is run with 1 and 8 ranks on A100 GPUs.
- 2D IBM one rank vs. multi-rank comparison (simple geometry) -- This test uses a modified version of the examples/2D_ibm_cfl_dt case file (the circle is moved so it sits at a processor boundary and breaks symmetry). It is run with 1 and 4 ranks on A100 GPUs. The video shows the density around a simple immersed boundary located at processor boundaries.
- 3D IBM one rank vs. multi-rank comparison (complex geometry) -- This test uses the 3D_ibm_stl_ellipsoid case. It is run with 1 and 8 MPI ranks on A100 GPUs.
- 2D IBM one rank vs. multi-rank comparison (complex geometry) -- This test uses a modified version of the examples/2D_ibm_stl_MFCCharacter case (the domain extents are changed to get a 2D MPI decomposition). It is run with 1 and 4 ranks on A100 GPUs.
- 3D surface tension one rank vs. multi-rank comparison -- This test uses a modified version of the examples/3D_recovering_sphere case (symmetry is removed so there is meaningful halo exchange of the color function, and the square is moved off-center). It is run with 1 and 8 ranks on A100 GPUs. The video shows slices of the color function in all three dimensions with 1 and 4 ranks. (test.mp4)
- 2D surface tension one rank vs. multi-rank comparison -- This test uses the examples/3D_recovering_sphere case in 2D with the square off-center. It is run on 1 and 4 ranks with A100 GPUs. The video shows the volume fraction and color function with 1 and 4 ranks. (test.mp4)
- 3D QBMM one rank vs. multi-rank comparison -- This test uses the /examples/1D_qbmm case. A high-pressure region is placed off-center in the middle of the bubble cloud to break symmetry. The video shows nV003 along three slices across the domain for the one- and eight-rank cases on A100 GPUs. (test.mp4)
- 2D QBMM one rank vs. multi-rank comparison -- This test uses the /examples/1D_qbmm case. Two high-pressure regions are added to create blast waves and break symmetry. The video shows pressure and nV001 for the one- and four-rank cases on A100 GPUs. (test.mp4)
- 3D Lagrangian bubble screen one rank vs. multi-rank comparison -- This test uses the /examples/3D_lagrange_bubblescreen case. It is run on 1 and 8 ranks on A100 GPUs. The video shows the void fraction in the bubble cloud through three slices; the left column is one rank, and the right is eight ranks. (test.mp4)
Checklist
- I ran ./mfc.sh format before committing my code.

If your code changes any source files (anything in src/simulation), to make sure the code is performing as expected on GPU devices, I have:
- Run the code with ./mfc.sh run XXXX --gpu -t simulation --nsys and attached the output file (.nsys-rep) and plain-text results to this PR: MPIRefactor.txt, https://drive.google.com/file/d/1pmM3s8q2UbqNmLsumdCs12u-6p3Tm_8C/view?usp=sharing
- Run the code with ./mfc.sh run XXXX --gpu -t simulation --omniperf and attached the output file and plain-text results to this PR.