
Add comments on MPI ports vs sockets #176

Merged (4 commits, Jul 5, 2022)

Conversation

MakisH (Member) commented on Jun 27, 2022:

It looks like users do not really understand the issue with MPI ports, and they do not understand how much of a compromise sockets are.
This adds a reference to Benjamin's dissertation (scaling results), but we may need to add even more information.

@uekerman what else could we improve here?

uekerman (Member) left a comment:


Looks good. I added a bit more.

Review comment on pages/docs/configuration/configuration-communication.md (outdated, resolved)
MakisH and others added 2 commits June 27, 2022 15:55
MakisH (Member, Author) commented on Jun 27, 2022:

@uekerman I added a few more clarifications, based on our discussion.

Scriptkiddi (Contributor) commented:

Would it be possible to add a quantification for "very large coupling meshes", e.g., "very large coupling meshes (> 1,000,000 vertices)", to help people decide quickly which category they fall into? I would guess that different fields have different understandings of what a very large mesh is.

fsimonis (Member) commented:

@Scriptkiddi The basic guideline is to use sockets.

The size of the communicated mesh depends on many factors. Vertex count is only part of the story: mesh connectivity, the defined mappings, the defined and exchanged data, watch integrals, and filtering strategies all impact the communicated size.

You will need to use the built-in profiling, or do your own.

The general guideline for the built-in profiling (which is missing in the docs) is:

  1. Start using sockets.
  2. Enable sync mode in the configuration.
  3. Run your full test case for a few time steps.
  4. Establish your baselines:
    • For the mesh transfer, check the `partition.sendGlobalMesh` events in `initialize`.
    • For the data transfer, check the `m2n.sendData` and `m2n.receiveData` events in `advance`.
  5. Figure out whether these impact the overall simulation time. If not, then you are done.
  6. Change to `<m2n:mpi />` and rerun the case.
  7. Compare the events from the rerun to the baseline established in step 4, and select the preferred method.

uekerman (Member) commented:

> Would it be possible to add a quantification for very large coupling meshes?

It mainly depends on how this relates to the computational cost of your solver. Good point, but hard to quantify 😕

Co-authored-by: Frédéric Simonis <simonisfrederic@gmail.com>
@MakisH MakisH merged commit 7401104 into master Jul 5, 2022
@MakisH MakisH deleted the mpi-ports-clarifications branch July 5, 2022 12:05
4 participants