
added env option OPENPMD_HDF5_ALIGNMENT #830

Merged 5 commits into openPMD:dev on Dec 4, 2020
Conversation

@guj (Contributor) commented Dec 2, 2020

Added the environment option OPENPMD_HDF5_ALIGNMENT. When it is set to a positive number of bytes, H5Pset_alignment(m_fileAccessProperty, 0, bytes) is called for parallel HDF5 access.

@ax3l (Member) commented Dec 4, 2020

Let's also document this in the table in docs/source/backends/hdf5.rst

Do you think we can add a reasonable default for this? [1]

> For MPI IO and other parallel systems, choose an alignment which is a multiple of the disk block size.

On Cori [2] the recommendation seems to be to align to Lustre stripes.

@ax3l self-assigned this Dec 4, 2020
@guj (Contributor, Author) commented Dec 4, 2020

> Let's also document this in the table in docs/source/backends/hdf5.rst
>
> Do you think we can add a reasonable default for this? [1]
>
> For MPI IO and other parallel systems, choose an alignment which is a multiple of the disk block size.
>
> On Cori [2] the recommendation seems to be to align to Lustre stripes.

I saw that you changed the documentation.
I would like to add that 1 is not a good default: it means 1 byte.

My suggestion is that this option should not be set by default; let HDF5 use its own default behavior. The two known cases where this option should be set explicitly are:
(1) on Lustre systems, aligning to the stripe size is recommended;
(2) on Summit, the recommendation is to use this option with 16 MB.

Does that sound clear? I can make the change in the doc, or you can do it.
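The two recommendations above can be sketched as a usage example. The values and the output path are illustrative and site-specific, not prescribed by this PR.

```shell
# Summit: the recommendation above is 16 MB alignment.
export OPENPMD_HDF5_ALIGNMENT=$((16 * 1024 * 1024))   # 16777216 bytes

# Lustre: align to the stripe size of the output directory instead, e.g.
#   lfs getstripe -S /path/to/output   # prints the stripe size in bytes
echo "OPENPMD_HDF5_ALIGNMENT=${OPENPMD_HDF5_ALIGNMENT}"
```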

@ax3l (Member) commented Dec 4, 2020

Thanks for taking a look!
I read in the docs: https://support.hdfgroup.org/HDF5/doc/RM/H5P/H5Pset_alignment.htm

> Default values for threshold and alignment are one, implying no alignment

Agreed:

> My suggestion is that by default this option shall not be setup, ...

The logic does not set it up by default now either (see the `> 1` check).

Feel free to push to the PR once more to change it though.

@guj (Contributor, Author) commented Dec 4, 2020

> Thanks for taking a look!
> I read in the docs: https://support.hdfgroup.org/HDF5/doc/RM/H5P/H5Pset_alignment.htm
>
> > Default values for threshold and alignment are one, implying no alignment

I meant there is no need for openPMD-api to set it again. It would only make sense to set a default if this function will always be called by openPMD-api with a specific value (other than 1).

> Agreed:
>
> > My suggestion is that by default this option shall not be setup, ...
>
> The logic does not set it up by default now either (see the `> 1` check).
>
> Feel free to push to the PR once more to change it though.

OK, will update the doc.

@ax3l merged commit 5f19473 into openPMD:dev on Dec 4, 2020
@ax3l added the MPI label on Dec 4, 2020
@ax3l (Member) commented Jan 4, 2021

I think we can also safely increase the alignment sizes to at least 4 MB. In the past I used quite large numbers to match the filesystem block size, and also increased the collective buffer sizes to a few hundred MB: ComputationalRadiationPhysics/libSplash#108
