Mpi4py detection #29
base: main
Conversation
Another thing I have thought of concerning the matching is that the returned string can contain more than just the implementation name. One potential way to guard against that is to have the mpich comparison first because, from my anecdotal tests, openmpi does not seem to include full library paths and so would not suffer from a substring check matching part of a path. |
I'm happy to reorder the checks. In general I think we need a better solution for this, though that isn't needed for this PR. I've created an issue: #30. |
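To illustrate the ordering concern, here is a made-up example, assuming the checks are bare substring tests on the lowercased version string (the string itself is hypothetical; real ones vary between builds):

```python
# Hypothetical MPICH version string that embeds a build path (illustrative
# only; real strings vary between builds and distributions).
version = (
    "MPICH Version: 4.1\n"
    "MPICH Configure: --prefix=/opt/openmpi-compat/mpich-4.1\n"
).lower()

# An Open-MPI-first substring check could mis-detect this build:
assert "open" in version   # matches the embedded path, not the vendor
# Checking MPICH first avoids the false positive:
assert "mpich" in version
```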
I tried this branch with this MFE:

```python
import pytest


@pytest.mark.parallel(nprocs=5)  # run in parallel with 5 processes
def test_my_code_on_5_procs():
    print("hello")
```

and got an error.
Running the mpiexec command directly from my terminal did not give any problems.
Is there a way that I can obtain more information about the error from pytest? |
Can you rerun with the …? |
Unfortunately I think that this is because some OpenMPI distributions don't work with calling mpiexec in this way. Since forking mode is calling mpiexec in a subprocess, those distributions fail. Very unhelpfully, it seems to work fine on some distributions. For example on Arch Linux (and CI!) I can run this without issue. |
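For context, a rough sketch of what forking mode presumably does here (assumed, not the plugin's actual code; the test file name and process count are taken from the MFE above):

```python
# Sketch: pytest itself runs serially and launches the parallel test via
# mpiexec in a subprocess (assumed behaviour, illustrative command line).
import subprocess
import sys

subprocess.run(
    ["mpiexec", "-n", "5", sys.executable, "-m", "pytest",
     "test_mfe.py::test_my_code_on_5_procs"],
    check=True,  # raise if the MPI launcher exits with a non-zero code
)
```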
Looks good. Thanks!
This was using a docker image with ubuntu:25.10. Which docker image do you recommend I use? archlinux:latest? I will also try switching to mpich. |
Is there a particular reason that you do not want to use MPI "on the outside"? i.e. launching pytest itself under MPI, something like `mpiexec -n 5 pytest`. |
Some of my tests are hanging indefinitely so I wanted to explore other ways around it. I think it has to do with how some of pytest's resources are getting cleaned up. |
This sounds like a slightly different problem. And this PR thread probably isn't the best place to discuss this. Would you be able to create a discussion on the Firedrake repo explaining your issue in more detail? @JHopeCollins and I would be very happy to help you there. |
Uses mpi4py to perform MPI implementation detection in forking mode. This does not require MPI initialization or finalization. I have also taken the liberty of adding a few extra version string comparisons to find the backend implementation, just in case there is some other weird Open MPI name that pops up.

Unfortunately, mpi4py only returns a string, which then has to be checked to work out which implementation it describes.

I have noticed that MPICH, Open MPI, and Microsoft MPI all like to have their name as the first thing in the returned string, so it could be possible to use `version.startswith("open")` instead of the `in` check that is currently implemented. However, that is not necessarily guaranteed, and if the format changes this version should hopefully still function.
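A minimal sketch of the detection described above, assuming plain substring checks on the lowercased version string (the function name, return values, and exact match strings are illustrative, not necessarily the plugin's actual code; `MPI.Get_library_version()` and `mpi4py.rc` are real mpi4py APIs):

```python
# Minimal sketch of backend detection via mpi4py (assumed logic).
import mpi4py

# MPI_Get_library_version may be called before MPI_Init, so turn off
# mpi4py's automatic initialization/finalization on import.
mpi4py.rc.initialize = False
mpi4py.rc.finalize = False

from mpi4py import MPI


def detect_mpi_implementation() -> str:
    # Returns a free-form, implementation-defined string, e.g.
    # "Open MPI v4.1.4, ..." or "MPICH Version: 4.1 ...".
    version = MPI.Get_library_version().lower()
    # Check MPICH first: its version string can embed build paths, which
    # might contain "open"; Open MPI strings do not normally include paths.
    if "mpich" in version:
        return "mpich"
    if "open mpi" in version:
        return "openmpi"
    if "microsoft mpi" in version:
        return "msmpi"
    raise RuntimeError(f"Unrecognised MPI implementation:\n{version}")
```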
fixes #27, fixes #28