Notes:
======
This is a version of gtfock that aims to use the modified AxpyInterface of the
Elemental distributed dense linear algebra package
(original: https://github.com/elemental/Elemental)
(ongoing work on AxpyInterface: https://github.com/sg0/elemental).
The original gtfock uses the PNNL Global Arrays toolkit and was automatically
exported from code.google.com/p/gtfock.

================================================================
A. Setup
================================================================

export WORK_TOP=$PWD

================================================================
B. Installing Global Arrays using B0 or B1
================================================================

# Skip this step if Global Arrays has already been installed.

================================================================
B0. Installing Global Arrays on MPI-3
================================================================

1. Installing armci-mpi

cd $WORK_TOP
git clone git://git.mpich.org/armci-mpi.git || \
    git clone http://git.mpich.org/armci-mpi.git
cd armci-mpi
git checkout mpi3rma
./autogen.sh
mkdir build
cd build
../configure CC=mpiicc --prefix=$WORK_TOP/external-armci
make -j12
make install

2. Installing Global Arrays

cd $WORK_TOP
wget http://hpc.pnl.gov/globalarrays/download/ga-5-3.tgz
tar xzf ga-5-3.tgz
cd ga-5-3
# Carefully set the MPI compiler wrappers that you want to use.
./configure CC=mpiicc MPICC=mpiicc CXX=mpiicpc MPICXX=mpiicpc \
    F77=mpiifort MPIF77=mpiifort FC=mpiifort MPIFC=mpiifort \
    --with-mpi --with-armci=$WORK_TOP/external-armci --prefix=$WORK_TOP/GAlib
make -j12 install

================================================================
B1. Installing Global Arrays on ARMCI
================================================================

cp config_ga.py $WORK_TOP
# "openib" means using InfiniBand.
python config_ga.py download openib

================================================================
C. Installing the OptErd library
================================================================

cd $WORK_TOP
svn co https://github.com/Maratyszcza/OptErd/trunk OptErd
cd $WORK_TOP/OptErd
make -j12 prefix=$WORK_TOP/ERDlib
make install
# The ERD libraries will be installed in $WORK_TOP/ERDlib.

================================================================
D. Installing GTFock
================================================================

cd $WORK_TOP
svn co http://gtfock.googlecode.com/svn/trunk gtfock
cd $WORK_TOP/gtfock/

# Change the following variables in make.in:
BLAS_INCDIR      = /opt/intel/mkl/include/
BLAS_LIBDIR      = /opt/intel/mkl/lib/intel64/
BLAS_LIBS        = -lmkl_intel_lp64 -lmkl_core -lmkl_intel_thread -lpthread -lm
SCALAPACK_INCDIR = /opt/intel/mkl/include/
SCALAPACK_LIBDIR = /opt/intel/mkl/lib/intel64/
SCALAPACK_LIBS   = -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
MPI_LIBDIR       = /opt/mpich2/lib/
MPI_LIBS         =
MPI_INCDIR       = /opt/mpich2/include

# Set GA_TOP, ARMCI_TOP and ERD_TOP, which indicate where the Global Arrays,
# ARMCI and ERD libraries are installed, respectively.
# For example, if you installed Global Arrays using B0:
export GA_TOP=$WORK_TOP/GAlib
export ARMCI_TOP=$WORK_TOP/external-armci
export ERD_TOP=$WORK_TOP/ERDlib

# Compile
make

# The gtfock libraries will be installed in $WORK_TOP/gtfock/install/.
# The example SCF code can be found in $WORK_TOP/gtfock/pscf/.
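# Before running make, it can help to confirm that the installations from
# steps B-D are where the exported variables point. The check below is only an
# illustrative sketch, not part of GTFock's build system; the lib/ subdirectory
# layout is an assumption and may differ (e.g. lib64/) on some systems.
for d in "$GA_TOP/lib" "$ARMCI_TOP/lib" "$ERD_TOP/lib"; do
    [ -d "$d" ] || echo "warning: expected library directory $d not found"
done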
================================================================
E. Running the example SCF code
================================================================

mpirun -np <nprocs> ./scf <basis> <xyz> \
    <nprow> <npcol> <np2> <ntasks> <niters>

# NOTE: nprow x npcol must be equal to nprocs
# NOTE: np2 x np2 x np2 must be smaller than nprocs
# NOTE: suggested values for ntasks: 3, 4, 5

# <nprocs>: number of MPI processes
# <basis>:  basis set file
# <xyz>:    xyz file
# <nprow>:  number of rows in the MPI process grid
# <npcol>:  number of columns in the MPI process grid
# <np2>:    number of MPI processes per dimension for purification (eigenvalue solve)
# <ntasks>: each MPI process has ntasks x ntasks tasks
# <niters>: number of SCF iterations

# For example, the following command runs the SCF code on graphene_12_54_114.xyz
# with 12 MPI processes for 10 iterations. The number of processes used for
# purification is 2 x 2 x 2 = 8.
mpirun -np 12 ./pscf/scf data/guess/cc-pvdz.gbs \
    data/graphene/graphene_12_54_114.xyz 3 4 2 4 10
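# The grid parameters interact: nprow x npcol must equal nprocs, and
# np2 x np2 x np2 must stay below nprocs (see the notes above). The sketch
# below is not part of gtfock; it uses illustrative variable names to check
# these constraints before launching the example job.
NPROCS=12; NPROW=3; NPCOL=4; NP2=2; NTASKS=4; NITERS=10
if [ $((NPROW * NPCOL)) -ne "$NPROCS" ]; then
    echo "error: nprow x npcol must equal nprocs"; exit 1
fi
if [ $((NP2 * NP2 * NP2)) -ge "$NPROCS" ]; then
    echo "error: np2 x np2 x np2 must be smaller than nprocs"; exit 1
fi
mpirun -np "$NPROCS" ./pscf/scf data/guess/cc-pvdz.gbs \
    data/graphene/graphene_12_54_114.xyz "$NPROW" "$NPCOL" "$NP2" "$NTASKS" "$NITERS"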