Spack with AFQMC #2237
Comments
@naromero77 has the most experience with QMCPACK and spack, I think.
The input I need from you @fdmalone and @mmorale3 isn't about spack in particular; it is essentially about any known issues, conflicts, or dependencies with the build process other than what I listed above. If there are, those should be added to the spack package so that other users don't run into the same problems when trying to build. For example, are there any issues with particular compilers that you are aware of for particular versions of qmcpack, or any other dependencies that should be added? If not, I can talk to @naromero77 about getting my changes into the spack package itself.
gcc >= 6.1, clang >= 4.1, and Intel 2019 should all work, and I believe they are tested on CDash. For CPU builds, both real and complex builds should work. For CUDA, the code should be complex build only. Only very recent versions of mvapich should be used; openmpi and mpich should be fine.
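For reference, those compiler floors could be encoded directly in the qmcpack package; a minimal sketch, assuming the variant is named `afqmc` (the directive names are standard Spack, the version bounds are taken from the comment above, and the `%intel@:18` mapping for "Intel 2019" is approximate):

```python
# Sketch only: these directives would sit in the Qmcpack class body of
# var/spack/repos/builtin/packages/qmcpack/package.py.
conflicts('%gcc@:6.0',   when='+afqmc', msg='AFQMC requires GCC >= 6.1')
conflicts('%clang@:4.0', when='+afqmc', msg='AFQMC requires Clang >= 4.1')
conflicts('%intel@:18',  when='+afqmc', msg='AFQMC requires Intel >= 2019')
```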
How recent for mvapich are we talking?
This needs to be tested. Shared memory was not working correctly before mvapich2 v2.3. The fix was patched into LLNL compilers as of 2.3, but I don’t know if and when the fix was added to their official release.
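If that 2.3 cutoff holds up, it could be expressed as a dependency conflict in the same package; a sketch, again assuming the `afqmc` variant name:

```python
# Sketch only: reject mvapich2 releases older than 2.3 when AFQMC is enabled,
# since shared memory was reportedly broken before that version.
conflicts('^mvapich2@:2.2', when='+afqmc',
          msg='AFQMC requires mvapich2 >= 2.3 (shared-memory fix)')
```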
I think it is very reasonable to support only newer/the newest versions here, unless we know of existing users or systems that need to use old versions.
@fdmalone oxygen.ornl.gov, which runs most of the nightlies, only has AFQMC enabled for complex builds.
Is there any reason we shouldn't invert this rule so that we always build AFQMC unless cuda+real? Do the builds work without sparse MKL available?
For develop I believe everything should build, including cuda+real. There may not be tests for all of these build options, but that's a separate issue in a way. Setting this up on CDash would be helpful, as I only use gcc on a daily basis. For the available release versions cuda+real should be avoided, but the CPU code should at least build for real and complex. Note that real builds are currently only of practical use for chemistry applications (the k-point code is preferred over supercell calculations), and there aren't many tests for this. I believe sparse MKL is not a hard requirement, but the code may be much slower without it. @mmorale3 can correct me here.
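In package terms, inverting the rule while respecting the release-version caveat above could look roughly like the sketch below; the `cuda` and `complex` variant names are assumed to match the existing qmcpack package, and the upper version bound is only a placeholder:

```python
# Sketch only: allow +afqmc generally, but block the CUDA + real combination
# for released versions, where it reportedly does not build; develop is unaffected.
# The '@:3.9' bound is illustrative, not an exact cutoff.
conflicts('+afqmc', when='@:3.9 +cuda~complex',
          msg='CUDA AFQMC requires the complex build in released versions')
```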
Note: support for AFQMC in the spack package was added via spack/spack#14882
I have been using spack for all of my build purposes, and as I am now doing some AFQMC calculations, I noticed that AFQMC is not currently supported with the spack install. I have added it to my spack fork and would like to push it into the actual spack package.
However, I wanted to check here to see if there are any build conflicts or other issues that I am not aware of that should be included in the spack package. Currently I have:
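Roughly, the additions amount to a new variant plus a handful of constraints; the following is an illustrative sketch rather than the exact diff (the variant name, dependencies, and base class are assumptions):

```python
# Illustrative sketch of the qmcpack package.py additions; not the exact diff.
from spack import *


class Qmcpack(CMakePackage):
    # ... existing versions, variants, and dependencies ...

    variant('afqmc', default=False,
            description='Install with AFQMC support')

    # AFQMC needs MPI and a parallel HDF5 build (assumed constraint set).
    conflicts('~mpi', when='+afqmc')
    depends_on('hdf5+mpi', when='+afqmc')
```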
Is anyone aware of any other conflicts that should be addressed? @fdmalone suggested that since the code has changed so much since 3.7.0, it may be better to throw a conflict if you try to build with afqmc for any version prior to 3.7.0. Spack goes all the way back to 3.1.0, so the afqmc build could in principle be supported for all of those versions with spack if desired.
Are there any strong opinions on this? Once I get feedback, I would like to add this to the spack package.
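For the record, the version cutoff suggested above would reduce to a single directive, something like:

```python
# Sketch only: restrict the afqmc variant to 3.7.0 and newer, per the suggestion above.
conflicts('+afqmc', when='@:3.6',
          msg='AFQMC support in the Spack package requires qmcpack 3.7.0 or newer')
```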