**********************************************************************
TbSLAS, INSTALL FILE
**********************************************************************
This file provides step-by-step instructions for building and compiling
TbSLAS: the required libraries, and guides to building on a local machine
and on the LRZ Linux cluster. It also describes how to compile the example
codes included in the package and how to run the resulting executables.
**********************************************************************
Required Software
**********************************************************************
To compile TbSLAS, at least the following software is required:
1- C++ compiler
2- MPI: an implementation of the Message Passing Interface
3- BLAS: Basic Linear Algebra Subprograms routines
4- FFTW3: a C subroutine library for computing the discrete Fourier transform
5- PVFMM: a library for solving certain types of elliptic partial differential
equations.
-NOTE: before following the instructions to build TbSLAS, make sure that all
of the above libraries are installed and working on your machine.
**********************************************************************
Compilation of the examples
**********************************************************************
To compile the examples, take the following steps:
1- set the following environment variables:
> export PVFMM_DIR=PATH/TO/PVFMM/
> export TBSLAS_DIR=PATH/TO/TBSLAS/
2- 'cd' to the examples subdirectory and type make:
> cd examples
> make
The executable files will be placed in $TBSLAS_DIR/examples/bin/.
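The two-variable setup above is easy to get wrong silently; the following sketch checks that both variables are set before invoking make. The `require_var` helper is an illustrative addition (not part of TbSLAS); PVFMM_DIR and TBSLAS_DIR are the variables named above:

```shell
#!/bin/sh
# Sketch: fail early if the environment variables the examples' build
# relies on are missing. require_var is an illustrative helper name.
require_var() {
    name="$1"
    eval "value=\${$name:-}"        # look up the variable by name
    if [ -z "$value" ]; then
        echo "ERROR: $name is not set" >&2
        return 1
    fi
    echo "$name=$value"
}
# Usage (before running `make` in the examples subdirectory):
#   require_var PVFMM_DIR && require_var TBSLAS_DIR
```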
**********************************************************************
Building and compilation on Linux cluster (LRZ)
**********************************************************************
The compilation procedure on the LRZ cluster is the same as the
procedure on a local machine; the only difference is that the settings
for the required packages are loaded through the module mechanism:
> module load fftw
After loading the required modules, follow the procedure described in
'Compilation of the examples':
> export PVFMM_DIR=PATH/TO/PVFMM/
> export TBSLAS_DIR=PATH/TO/TBSLAS/
> cd examples
> make
-NOTE: if MPI, the Intel compiler, and the Math Kernel Library (which
includes BLAS) are not already loaded, use the following commands:
> module load mpi
> module load intel
> module load mkl
**********************************************************************
NOTEs on the required libraries (for LRZ cluster)
**********************************************************************
I- All the required libraries mentioned in the Required Software
section, except for PVFMM, are already installed on the LRZ cluster and
accessible through the module mechanism, so the user only needs to load
the corresponding environment variables with:
> module load DESIRED_MODULE
II- Installing PVFMM on the cluster is a bit tricky; the procedure is
as follows:
1- load the required modules:
> module load fftw
> module load gcc
2- 'cd' to the directory containing the package's source code and run
'./autogen.sh'. This generates a 'configure' script in the same directory,
which is then used to configure the compilation process.
> cd /PATH/TO/PVFMM
> ./autogen.sh
3- run 'configure' with the following flags and then invoke make:
> ./configure --prefix=DESIRED_DIR --with-fftw="$FFTW_BASE" FFLAGS="$MKL_F90_LIB"
> make
> make install
-NOTE: refer to the PVFMM Installation guide for more information and
other configuration options.
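The three steps above can be sketched as a single shell function. The guard on `FFTW_BASE` and `MKL_F90_LIB` is an illustrative addition (those are the variables the configure flags above assume the fftw and mkl modules export); the function name and its arguments are hypothetical:

```shell
#!/bin/sh
# Sketch: one-shot PVFMM build. Assumes the fftw and gcc modules are
# loaded; build_pvfmm and its arguments are illustrative names.
build_pvfmm() {
    src="$1"      # directory containing the PVFMM source code
    prefix="$2"   # installation prefix (the DESIRED_DIR above)
    # Refuse to start if the module environment is incomplete.
    if [ -z "${FFTW_BASE:-}" ] || [ -z "${MKL_F90_LIB:-}" ]; then
        echo "ERROR: load the fftw and mkl modules first" >&2
        return 1
    fi
    cd "$src" || return 1
    ./autogen.sh &&
    ./configure --prefix="$prefix" --with-fftw="$FFTW_BASE" \
                FFLAGS="$MKL_F90_LIB" &&
    make && make install
}
```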
**********************************************************************
Run the executables
**********************************************************************
I- run:
The executables are launched with mpirun/mpiexec. For example, after
compilation, the executable 'advection' is created in
'$TBSLAS_DIR/examples/bin/' and can be run as follows:
> cd $TBSLAS_DIR/examples/bin/
> mpirun -n 4 ./advection -N 8 -omp 8
This command runs the code in parallel with 4 MPI processes, each using
8 OpenMP threads, for a problem with N=8.
II- outputs:
The simulation results can be saved as .vtk files. To obtain them,
follow these steps before running the executable:
1- create the output directory where the files should be saved (if the
directory does not exist, the output will not be saved):
> cd $TBSLAS_DIR/examples
> mkdir results
2- set the following environment variable:
> export TBSLAS_RESULT_DIR=$TBSLAS_DIR/examples/results
3- run the program (in this case with 4 MPI processes and 8 threads per
process):
> mpirun -n 4 ./bin/advection -N 8 -omp 8
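Since output is silently dropped when the directory is missing, the two setup steps above can be sketched as one helper that creates the directory and exports the variable together. TBSLAS_RESULT_DIR is the variable named above; the `prepare_results` helper name is an illustrative addition:

```shell
#!/bin/sh
# Sketch: make sure the result directory exists and TBSLAS_RESULT_DIR
# points at it before launching mpirun. prepare_results is an
# illustrative helper name, not part of TbSLAS.
prepare_results() {
    dir="$1"
    mkdir -p "$dir"                  # create it; output is silently dropped otherwise
    export TBSLAS_RESULT_DIR="$dir"
    echo "TBSLAS_RESULT_DIR=$TBSLAS_RESULT_DIR"
}
# Usage:
#   prepare_results "$TBSLAS_DIR/examples/results"
#   mpirun -n 4 ./bin/advection -N 8 -omp 8
```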