Compilation
This page describes how to compile this toolbox.
To conduct parallel training with multiple devices/nodes, we use MPI to facilitate inter-process communication. Specifically, we use the OpenMPI software. In the gather layer, we use GPU-aware MPI subroutines, which are available since OpenMPI version 1.7.4.
For the reasons above, OpenMPI 1.7.4 or later is required to enable the parallel training functionality in this package (by specifying USE_MPI=ON in cmake). However, you can also compile without OpenMPI if you don't need parallel training.
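To check whether your installed OpenMPI meets this requirement before configuring, you can query its version. These are standard OpenMPI commands, not anything specific to this toolbox:
ompi_info | grep "Open MPI:"   # prints the installed OpenMPI version
mpirun --version               # alternative; OpenMPI's mpirun also reports its version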
So a full command specifying the MPI dependency looks like
cmake -DUSE_MPI=ON -DMPI_CXX_COMPILER=<the path to mpicxx binary>
Then use the make command:
make -j install
make -j runtest (optional)
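Putting the steps together, a complete configure-and-build run could look like the following sketch. It assumes an out-of-source build directory and that mpicxx is installed at /usr/local/openmpi/bin/mpicxx; adjust the paths for your system:
mkdir build && cd build
cmake -DUSE_MPI=ON -DMPI_CXX_COMPILER=/usr/local/openmpi/bin/mpicxx ..
make -j install
make -j runtest   # optional, runs the tests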
In this package, we use the CMake system for configuration and building. CMake will try to automatically detect the Matlab and Python libraries. If they are found, it will compile the corresponding interfaces by default. If it cannot find them, it will say so in cmake's output. You can then specify the library locations manually, using cmake or its textual GUI ccmake.
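For example, you can open ccmake on an existing build directory and edit the cached library paths interactively; this is the standard ccmake workflow rather than anything specific to this toolbox:
ccmake <path to the build directory>   # edit the cached variables, then press 'c' to configure and 'g' to generate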
For Python interfaces, we recommend using the Anaconda distribution to ease the installation of related Python packages.
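If CMake picks up the system Python instead of Anaconda's, you can point it at the Anaconda installation explicitly. The sketch below uses the stock CMake FindPython hint variables and assumes an install prefix of ~/anaconda with Python 2.7, which may differ from the variables this project's CMake scripts actually read; substitute your own paths:
cmake -DUSE_MPI=ON \
      -DPYTHON_EXECUTABLE=$HOME/anaconda/bin/python \
      -DPYTHON_INCLUDE_DIR=$HOME/anaconda/include/python2.7 \
      -DPYTHON_LIBRARY=$HOME/anaconda/lib/libpython2.7.so \
      <path to the source directory>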