Restore ability to use MPI implementations with unknown ABIs #574

Open
@sloede

Based on comments by @vchuravy, and as far as I understand from the code here, with the new MPIPreferences/"build-less" approach there is currently no way to use MPI.jl with MPI backends that are not already known to MPI.jl. IMHO this is very unfortunate, since up to the current release of MPI.jl it was possible to use virtually any MPI implementation and have the auto-detection mechanism figure out the ABI.
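
For context, this is roughly how the new mechanism is configured (a minimal sketch; keyword names as I understand them from MPIPreferences). The ABI must resolve to one of the known ones, and detection fails for anything else:

```julia
using MPIPreferences

# Point MPI.jl at the system MPI library. MPIPreferences tries to
# identify the ABI as one of the known ones (MPICH, OpenMPI,
# MicrosoftMPI, MPItrampoline) and errors for anything else, e.g. MPT.
MPIPreferences.use_system_binary(;
    library_names = ["libmpi"],  # shared-library names to search for
    abi = nothing,               # nothing => auto-detect among known ABIs
)
```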

Using Julia with an unknown MPI implementation currently seems to be possible only via MPItrampoline. While MPItrampoline is a great tool and will certainly (hopefully) make things much smoother in the future, it is still comparatively new and has not yet taken hold in most supercomputing centers. Therefore, HPC systems with incompatible MPI ABIs (such as HPE's MPT, which is not compatible with any other MPI ABI) are precluded from using MPI.jl.

Since the current MPI.jl release still works technically flawlessly with unknown MPI implementations (at least on our system with HPE's MPT), I strongly suggest that, for the time being, we restore the ability to support MPI ABIs other than the big 3 + MPItrampoline. Ideally, there would be a (non-exported?) function to trigger the generation of an MPI constants file that one could either feed locally into one's own MPI.jl installation (e.g., via preferences) or use as the basis for a PR adding a new officially supported ABI to MPI.jl (where appropriate); see the sketch below. Otherwise it becomes much harder to support Julia with MPI on systems such as HLRS's Hawk, where the default MPI implementation is MPT and most available parallel tools (such as HDF5) are provided for MPT.
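
To illustrate the idea (this is not existing MPI.jl API, just a sketch of what such a generation step could do, modeled on the old build-time auto-detection): compile a tiny C program against the target MPI and dump the constants into a Julia file. The constant list, casts, and file names below are placeholders only:

```julia
# Tiny C program that prints MPI constants in Julia syntax.
# MPT uses MPICH-style integer handles, so an int cast suffices here;
# a real generator would have to handle pointer-style handles too.
prog = """
#include <stdio.h>
#include <mpi.h>
int main(void) {
    printf("const MPI_COMM_WORLD = %d\\n", (int)MPI_COMM_WORLD);
    printf("const MPI_INT = %d\\n", (int)MPI_INT);
    return 0;
}
"""

mktempdir() do dir
    src = joinpath(dir, "gen_consts.c")
    exe = joinpath(dir, "gen_consts")
    write(src, prog)
    run(`mpicc $src -o $exe`)             # mpicc of the target MPI on PATH
    write("mpi_consts.jl", read(`$exe`))  # constants file to feed into MPI.jl
end
```

The resulting file could then either be wired into the local MPI.jl via a preference, or be submitted in a PR as a new officially supported ABI.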

cc @luraess
