A library to automate integration and sampling tasks of quantities in the Brillouin zone of a crystal, oriented to high-performance computing.
This is a work in progress!
We strongly recommend the compilers contained in the Intel oneAPI toolkit, `ifort` or `ifx`. The GNU Fortran compiler `gfortran` is also an option. As of this release, intensive testing has been conducted only for these compilers. Future plans involve testing and creating builds for more compilers, with special emphasis on the Nvidia HPC compiler.
- Intel oneAPI Math Kernel Library (MKL), contained in the Intel oneAPI toolkit, or any other implementation of the BLAS and LAPACK libraries.
- Any implementation of the OpenMP and MPI libraries; we recommend those of the Intel oneAPI toolkit.
- Python 3 with the `re` and `glob` modules (both part of the standard library).
- Clone this repository, with the flag `--recurse-submodules`, to a destination of your choice:

  ```
  /path/of/your/choice$ git clone --recurse-submodules https://github.com/irukoa/SsTC.git
  ```

- Change directory to SsTC:

  ```
  /path/of/your/choice$ cd SsTC/
  ```
- Run `make`. This will create the (static) library file `./bin/libSsTC.a` and the module file `./bin/sstc.mod`:

  ```
  /path/of/your/choice/SsTC$ make
  ```

  The default build is done with the `ifort` compiler. The `config` folder contains several Makefiles for different compilers. The user can copy the Makefile of their choice into the root SsTC directory and run `make`.

- To uninstall, run `make uninstall` in the installation directory.
The idea behind SsTC is to automate the most common tasks involved in solid state physics. These include sampling (and sometimes integration) of functions of the form $C^{\alpha}(k;\beta)$, where $k$ is a point in the Brillouin zone (BZ), $\alpha$ is a set of integer indices and $\beta$ is a set of continuous external variables.
Sampling/integration tasks come in different flavours and are specified as Fortran derived types at the implementation level. These include:

- `type(SsTC_BZ_integral_task)`. Holds the task to integrate $C^{\alpha}(k;\beta)$ in the BZ.
- `type(SsTC_kpath_task)`. Holds the task to sample $C^{\alpha}(k;\beta)$ along a predefined path in the BZ.
- `type(SsTC_kslice_task)`. Holds the task to sample $C^{\alpha}(k;\beta)$ along a predefined slice of the BZ.
- `type(SsTC_sampling_task)`. Holds the task to sample $C^{\alpha}(k;\beta)$ on a regular grid, or along a predefined set of points in the BZ.
To create tasks one needs to employ the task constructors `SsTC_BZ_integral_task_constructor`, `SsTC_kpath_constructor`, `SsTC_kslice_task_constructor` and `SsTC_sampling_task_constructor`, respectively. While the constructors take different arguments related to the particular task being created (a k-path descriptor in the case of a kpath task... see the User's guide), all four take as inputs some common descriptors of the quantity $C^{\alpha}(k;\beta)$:
- The number of $\alpha$ indices, `N_int_ind`. An integer greater than 0.
- The number of possible values of each of the $\alpha_i$ indices. An integer array with elements greater than 0 and of size `N_int_ind`.
- The number of $\beta$ indices, `N_ext_vars`. An integer greater than 0.
- The starting and ending value of each of the $\beta_i$ indices and the number of steps between these two values. The first two are double precision arrays of size `N_ext_vars` and the last is an integer array with elements greater than 0 and of size `N_ext_vars`.
- A procedure pointer `g_calculator` to the user's implementation of $C^{\alpha}(k;\beta)$. This must be a function with the interface

  ```fortran
  function SsTC_global_calculator(task, system, k, error) result(u)
    class(SsTC_global_k_data), intent(in) :: task
    type(SsTC_sys), intent(in)            :: system
    real(kind=dp), intent(in)             :: k(3)
    logical, intent(inout)                :: error
    complex(kind=dp)                      :: u(product(task%integer_indices), product(task%continuous_indices))
  end function SsTC_global_calculator
  ```

  so the output is an array storing $C^{\alpha}(k;\beta)$ for each combination of the values $\alpha_i$ and $\beta_i$. It is suggested to iterate through all these possible combinations by using the SsTC jagged array indexing functions `SsTC_integer_memory_element_to_array_element` and `SsTC_continuous_memory_element_to_array_element` as

  ```fortran
  do i = 1, product(task%integer_indices)
    i_arr = SsTC_integer_memory_element_to_array_element(task, i)
    !i_arr(j) contains the particular value $\alpha_j$ has in this iteration.
    do r = 1, product(task%continuous_indices)
      r_arr = SsTC_continuous_memory_element_to_array_element(task, r)
      !r_arr(j) contains the particular iteration of the variable $\beta_j$,
      !with value task%ext_var_data(j)%data(r_arr(j)).
      u(i, r) = ...
    enddo
  enddo
  ```
The user can also pass as an input a crystalline system, which corresponds to the tight-binding representation of said system. This is a file given in the Wannier90 format, containing the Hamiltonian matrix elements and the position operator's matrix elements in the Wannier basis. A system is a derived type `type(SsTC_sys)`, which can be loaded from a file by using `SsTC_sys_constructor`. The user is then encouraged to employ the Wannier interpolation routines contained in the `local_k_quantities.F90` module to compute the Hamiltonian and Berry connection matrices and their derivatives in the Wannier gauge.
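Following the constructor signature used in the examples below, loading a system might be sketched as follows. The system label `"diamond"`, the directory `"./"` and the Fermi level are placeholder values for illustration only:

```fortran
program load_system_sketch

  USE MPI_F08
  use SsTC

  implicit none

  integer, parameter :: dp = 8
  integer :: ierror
  type(SsTC_sys) :: diamond

  call MPI_INIT(ierror)
  call SsTC_init()

  !"diamond" is a placeholder label: the constructor reads the
  !Wannier90-format tight-binding files for the system from the
  !given directory ("./" here); efermi is the system's Fermi level.
  diamond = SsTC_sys_constructor("diamond", "./", efermi=5.0_dp)

  call MPI_FINALIZE(ierror)

end program load_system_sketch
```

The resulting `type(SsTC_sys)` variable is then passed to the sampling/integration and printing routines described below.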
Once a task is created, it can be sampled/integrated by using the task-specific routines `SsTC_sample_and_integrate_BZ_integral_task`, `SsTC_kpath_sampler`, `SsTC_sample_kslice_task` and `SsTC_sample_sampling_task`, which take as inputs a task of the corresponding type and a system of type `type(SsTC_sys)`. These will write to the particular result variable contained in the type (see the User's guide). Lastly, the results can be printed to files by using the routines `SsTC_print_BZ_integral_task`, `SsTC_print_kpath`, `SsTC_print_kslice` and `SsTC_print_sampling`. These will write a file for each possible combination of the task's indices (see the User's guide).
The sampling/integration routines are hybrid MPI+OpenMP routines. For these to work, the library must be initialized by calling `SsTC_init`, before any SsTC tasks/routines have been employed. For personal computers, we recommend running applications in the MPI singleton case (a single MPI rank).
Lastly, the user is free to employ SsTC routines inside OpenMP parallel regions or other parallel processes in their code (such as a parallelization over some or all external variables). In that case, we recommend limiting the number of nested active parallel levels with the `omp_set_max_active_levels` routine (to avoid unnecessary overhead) and checking that there is no interference between the user's intended use of their MPI directives and those of SsTC.
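As a minimal sketch of this recommendation: the outer loop over a hypothetical external-variable index and the choice of two nesting levels are illustrative assumptions, not SsTC API:

```fortran
program nesting_sketch

  use omp_lib

  implicit none

  integer :: ibeta

  !Allow two levels of active parallelism: the user's outer loop and
  !the OpenMP parallelism inside SsTC's sampling/integration routines.
  call omp_set_max_active_levels(2)

  !$omp parallel do
  do ibeta = 1, 10 !Hypothetical user-level loop, e.g. over an external variable.
    !...calls to SsTC tasks/routines would go here...
  enddo
  !$omp end parallel do

end program nesting_sketch
```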
- Include the line `use SsTC` in your application preamble.
- Before creating any SsTC task, sampling/integrating in the BZ or writing to files, make sure to have the MPI environment initialized and the line `call SsTC_init()` in your application after the MPI initialization call.
- To link SsTC to your program, the compilation command should have the form:

  ```
  /path/to/application/$ $(F90) $(F90FLAGS) my_application.F90 -I/path/of/your/choice/SsTC/bin /path/of/your/choice/SsTC/bin/libSsTC.a -o "my_application.x"
  ```

  where `$(F90) = mpiifort/mpiifx/mpif90`, and `$(F90FLAGS)` should include, at least, `-qopenmp -lmkl_intel_lp64 -lmkl_core -lmkl_gnu_thread -pthread` or the compiler-specific alternatives.
Note 1: SsTC uses double precision numbers for real and complex kinds.
Note 2: It is recommended that each task is defined within a BLOCK construct to help in derived-type finalization.
As an example, consider Example 1 in the User's guide:

```fortran
program example01

  USE OMP_LIB
  USE MPI_F08
  use SsTC

  implicit none

  integer, parameter :: dp = 8

  integer :: ierror

  type(SsTC_sys) :: dummy

  call MPI_INIT(ierror)

  call SsTC_init()

  dummy = SsTC_sys_constructor("dummy", "./", efermi=0.0_dp)

  block

    type(SsTC_BZ_integral_task) :: test_integral

    call SsTC_BZ_integral_task_constructor(task=test_integral, name="example01-test", &
                                           g_calculator=test_calculator, &
                                           method="rectangle", samples=(/33, 1, 1/), &
                                           N_int_ind=2, int_ind_range=(/3, 3/), &
                                           N_ext_vars=1, ext_vars_start=(/1.0_dp/), &
                                           ext_vars_end=(/10.0_dp/), ext_vars_steps=(/10/))

    call SsTC_sample_and_integrate_BZ_integral_task(task=test_integral, &
                                                    system=dummy)

    call SsTC_print_BZ_integral_task(task=test_integral, &
                                     system=dummy)

  end block

  call MPI_FINALIZE(ierror)

contains

  function test_calculator(task, system, k, error) result(u)
    class(SsTC_global_k_data), intent(in) :: task
    type(SsTC_sys), intent(in)            :: system
    real(kind=dp), intent(in)             :: k(3)
    logical, intent(inout)                :: error
    complex(kind=dp)                      :: u(product(task%integer_indices), product(task%continuous_indices))

    integer :: i, r, &           !Indices for memory layout.
               i_arr(2), r_arr(1) !Indices for array layout.

    u = cmplx(0.0, 0.0, dp)

    do i = 1, product(task%integer_indices)
      i_arr = SsTC_integer_memory_element_to_array_element(task, i)
      do r = 1, product(task%continuous_indices)
        r_arr = SsTC_continuous_memory_element_to_array_element(task, r)
        !C^{\alpha}(k;\beta) = (\alpha_1 + \alpha_2)*exp(k_x*\beta_1).
        u(i, r) = real(i_arr(1) + i_arr(2), dp)* &
                  cmplx(exp(k(1)*task%ext_var_data(1)%data(r_arr(1))), 0.0_dp, dp)
      enddo
    enddo

  end function test_calculator

end program example01
```
As another example, an application calculating the jerk current of the system GaAs, as in Example 2 of the User's guide, should look like:
```
/path/to/application/$ cat my_jerk_application.F90
```

```fortran
program my_jerk_application

  USE OMP_LIB
  USE MPI_F08
  use SsTC

  implicit none

  integer, parameter :: dp = 8

  integer :: ierror

  type(SsTC_sys) :: GaAs

  call MPI_INIT(ierror)

  call SsTC_init()

  GaAs = SsTC_sys_constructor("GaAs", "./", efermi=7.7414_dp)

  block

    type(optical_BZ_integral_task) :: jerk

    call jerk_current_constructor(optical_task=jerk, method="rectangle", samples=(/100, 100, 100/), &
                                  omegastart=0.0_dp, omegaend=10.0_dp, omegasteps=100)

    call SsTC_sample_and_integrate_BZ_integral_task(task=jerk, &
                                                    system=GaAs)

    call SsTC_print_BZ_integral_task(task=jerk, &
                                     system=GaAs)

  end block

  call MPI_FINALIZE(ierror)

end program my_jerk_application
```