Add comments on MPI ports vs sockets #176
Conversation
It looks like users do not really understand the issue with MPI ports, nor how much of a compromise sockets are. This adds a reference to Benjamin's dissertation (scaling results), but we may need to add even more information.
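To make the trade-off concrete in the docs, we could show the two `m2n` variants side by side. A minimal sketch, assuming preCICE v2-style syntax and hypothetical participant names `Fluid` and `Solid`:

```xml
<!-- Recommended default: communication over TCP/IP sockets.
     Works everywhere, but is the slower option for large communicated meshes. -->
<m2n:sockets from="Fluid" to="Solid" />

<!-- Alternative: MPI ports. Can scale better (see the dissertation's results),
     but relies on MPI_Open_port / MPI_Comm_accept, which many MPI
     installations and cluster setups do not support reliably. -->
<!-- <m2n:mpi from="Fluid" to="Solid" /> -->
```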
Looks good. I added a bit more.
Co-authored-by: Benjamin Uekermann <benjamin.uekermann@gmail.com>
@uekerman I added a few more clarifications, based on our discussion.
would it be possible to add a quantification for …
@Scriptkiddi The basic guideline is to use sockets. The size of the communicated mesh depends on many factors: vertex count is only part of the story. Mesh connectivity, defined mappings, defined and exchanged data, watch integrals, and filtering strategies all affect the communicated size. You will need to use the built-in profiling, or do your own. The general guideline for the built-in profiling (which is missing in the docs) is roughly the following.
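A minimal sketch of what that could look like, assuming preCICE v3's `<profiling>` configuration option (attribute names taken as illustrative):

```xml
<precice-configuration>
  <!-- Record all profiling events on every rank;
       the default mode only records fundamental events. -->
  <profiling mode="all" />

  <!-- ... data, meshes, participants, m2n ... -->
</precice-configuration>
```

After the run, the per-rank event files can be merged and summarized, e.g. with `precice-profiling merge` followed by `precice-profiling analyze <participant>` (assuming the v3 tooling); the communication events then show where the m2n exchange actually spends its time.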
It mainly depends on how this relates to the computational cost of your solver. Good point, but hard to quantify 😕
Co-authored-by: Frédéric Simonis <simonisfrederic@gmail.com>
@uekerman what else could we improve here?