Add comments on MPI ports vs sockets #176

Merged — 4 commits, Jul 5, 2022 (changes from 3 commits)
4 changes: 2 additions & 2 deletions pages/docs/configuration/configuration-communication.md
@@ -44,11 +44,11 @@ The alternative to TCP/IP sockets is MPI ports (an MPI 2.0 feature):
<m2n:mpi .../>
```

As the ports functionality is not a highly used feature of MPI, it has robustness issues for several MPI implementations ([for OpenMPI, for example](https://github.com/precice/precice/issues/746)). In principle, MPI gives you faster communication roughly by a factor of 10, but, for most applications, you will not feel any difference as both are very fast. We recommend using `sockets`.
In preCICE, we always start simulations in separate MPI communicators (remember: we start solvers in different terminals, with their own `mpirun` commands), a feature that greatly improves flexibility (solvers never need to be in the same MPI communicator). As the MPI ports functionality is not a widely used feature of MPI (at least not with separate `MPI_COMM_WORLD` communicators), it has robustness issues in several MPI implementations ([for OpenMPI, for example](https://github.com/precice/precice/issues/746)). In principle, MPI gives you faster communication, roughly by a factor of 10 (see [Benjamin Uekermann's dissertation](https://mediatum.ub.tum.de/doc/1320661/document.pdf), section 4.2.3), but for most applications you will not feel any difference, as both are very fast. We recommend using `sockets` by default, unless you are running performance-critical simulations with very large coupling meshes.

Which participant is `from` and which one is `to` makes almost no difference and cannot lead to deadlock. Only for massively parallel runs can it make a performance difference during initialization. For such cases, [ask us for advice](https://precice.discourse.group/new-topic).
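For reference, a minimal sockets-based `m2n` element might look like the sketch below; the participant names `Fluid` and `Solid` are placeholders for whatever names your configuration uses:

```xml
<!-- Sockets-based communication between two participants
     (participant names are illustrative, not prescribed) -->
<m2n:sockets from="Fluid" to="Solid" />
```

Swapping `from` and `to` here is safe; as noted above, it only matters for performance in massively parallel cases.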

The `exchange-directory` should point to the same location for both participants. We use this location to exchange hidden files with initial connection information. It defaults to `"."`, i.e. both participants need to be started in the same folder. We give some best practices on how to arrange your folder structure and start the coupled solvers [here](TODO).
The `exchange-directory` should point to the same location for both participants. We use this location to exchange hidden files with initial connection information. It defaults to `"."`, i.e. both participants need to be started in the same folder. We give some best practices on how to arrange your folder structure and start the coupled solvers in the page [running on a local machine](running-simple.html).
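As a sketch, if both participants are started in different folders but can see the same shared filesystem, you could point them to a common location (the path below is purely hypothetical):

```xml
<!-- Both participants must resolve exchange-directory to the same
     location; "/shared/coupling" is an example path, not a default -->
<m2n:sockets from="Fluid" to="Solid" exchange-directory="/shared/coupling" />
```

If you omit the attribute, the default `"."` applies, which is why starting both solvers in the same folder also works.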

{% important %}
If you face any problems with establishing the communication, have a look [at this Discourse post](https://precice.discourse.group/t/help-the-participants-are-not-finding-each-other/646/2).