Fastor V0.5
Fastor V0.5 is one hell of a release as it brings a lot of new features, fundamental performance improvements, improved flexibility when working with tensors, and many bug fixes:
New Features
- Improved IO formatting. Flexible, configurable formatting for all derived tensor classes
- Generic matmul function for AbstractTensors and expressions
- Introduce a new tensor type `SingleValueTensor` for tensors of any size and dimension that have all their values the same. It is extremely space efficient as it stores a single value under the hood, and it provides a more optimised route for certain linear algebra functions. For instance, matmul of a `Tensor` and a `SingleValueTensor` is O(n) and transpose is O(1) (see the sketch after this list)
- New evaluation methods for all expressions, `teval` and `teval_s`, that provide fast evaluation of higher order tensors
- `cast` method to cast a tensor to a tensor of a different data type
- `get_mem_index` and `get_flat_index` to generalise indexing across all tensor classes. Eval methods now use these
- Binary comparison operators for expressions that evaluate lazily, plus binary comparison operators for SIMDVectors (illustrated in the comparison sketch after this list)
- Constructing column major tensors is now supported by using `Tensor(external_data, ColumnMajor)` (see the layout sketch after this list)
- `tocolumnmajor` and `torowmajor` free functions
- `all_of`, `any_of` and `none_of` free function reducers that work on boolean expressions
- Fixed views now support the `noalias` feature
- `FASTOR_IF_CONSTEXPR` macro for C++17
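A minimal sketch of the `SingleValueTensor` feature described above. The constructor taking a single fill value and the `iota` initialiser are assumptions about the library's API rather than details spelled out in these notes; the `matmul` and `transpose` interoperability is what the notes describe:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    // An ordinary dense tensor
    Tensor<double,3,3> A;
    A.iota(1);

    // Hypothetical construction: every entry of B is 2.0, but only a
    // single scalar is stored under the hood
    SingleValueTensor<double,3,3> B(2.0);

    // matmul of a Tensor and a SingleValueTensor takes the O(n) route
    auto C = matmul(A, B);

    // transpose of a SingleValueTensor is O(1)
    auto Bt = transpose(B);

    print(C, Bt);
    return 0;
}
```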
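A comparison sketch for the lazy binary comparison operators and the `all_of`/`any_of`/`none_of` reducers; only tensor-to-tensor comparisons are shown, since those are what the notes confirm:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,2,2> a = {{1., 2.}, {3., 4.}};
    Tensor<double,2,2> b = {{0., 2.}, {5., 1.}};
    Tensor<double,2,2> c = {{9., 9.}, {9., 9.}};

    // Comparisons build lazy boolean expressions; nothing is materialised here
    auto expr = a > b;

    // The reducers consume boolean expressions
    bool some  = any_of(expr);    // true: a exceeds b in at least one entry
    bool every = all_of(a < c);   // true: every entry of a is below 9
    bool none  = none_of(b > c);  // true: b never exceeds c

    print(some, every, none);
    return 0;
}
```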
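A layout sketch combining the column major constructor with the `cast` method and the `tocolumnmajor`/`torowmajor` free functions. The return types of the two free functions are not stated in these notes, so `auto` is used, and the semantics noted in the comments are assumptions:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    // External buffer laid out column by column: columns (1,3) and (2,4)
    double data[4] = {1., 3., 2., 4.};

    // Construct a tensor from column major external data
    Tensor<double,2,2> A(data, ColumnMajor);  // logically [[1,2],[3,4]]

    // Free functions for switching the data ordering (semantics assumed)
    auto Acol = tocolumnmajor(A);
    auto Arow = torowmajor(Acol);

    // Cast to a tensor of a different data type
    Tensor<float,2,2> Af = A.cast<float>();

    print(A, Af);
    return 0;
}
```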
Performance and other key improvements
- `Tensor` class can now be treated as a compile time type, as it can be initialised as constexpr by defining the macro `FASTOR_ZERO_INITIALISE`
- Higher order einsum functions now dispatch to matmul whenever possible, which is much faster (see the einsum sketch after this list)
- Much faster generic permutation, contraction and einsum algorithms, now based on recursive templates, that beat the speed of hand-written C code. `CONTRACT_OPT` is no longer necessary
- A much faster loop-tiling based transpose function. It is at least 2X faster than implementations in other ET libraries
- Introducing a libxsmm backend for matmul. The switch from the in-built routines to libxsmm can be configured by the user using `BLAS_SWITCH_MATRIX_SIZE_S` for square matrices and `BLAS_SWITCH_MATRIX_SIZE_NS` for non-square matrices; the default sizes are 16 and 13 respectively. libxsmm brings substantial improvement for bigger matrices (see the configuration sketch after this list)
- Condensed unary ops and binary ops into a single, more maintainable macro
- `FASTOR_ASSERT` is now a macro to `assert`, which optimises better in release builds
- Optimised `determinant` for 4x4 cases. Determinant now works on all types and not just float and double
- `all` is now an alias to `fall`, which means many tensor view expressions can now be dispatched to tensor fixed views. The implication is that expressions like `a(all)` and `A(all,all)` can just return the underlying tensor instead of creating a view with unnecessary sequences and offsets. This is much faster (see the views sketch after this list)
- Specialised constructors for many view types that construct the tensor much faster
- Improved support for the `TensorMap` class to behave exactly the same as the `Tensor` class, including views, block indexing and so on (also shown in the views sketch after this list)
- Improved unit-testing under many configurations (debug and release)
- Many `Tensor` related methods and functionalities have been moved into separate files that are now usable by other tensor type classes
- Division of an expression by a scalar can now be dispatched to multiplication, which creates the opportunity for FMA
- Cofactor and adjoint can now fall back to a scalar version when SIMD types are not available
- Documentation is now available under Wiki pages
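An einsum sketch showing the dispatch described above: a two-tensor einsum over a single shared index is just a matrix product, so the einsum call below should now be routed to matmul internally. The enum-based `Index` notation used here follows Fastor's einsum interface:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

enum {I, J, K};

int main() {
    Tensor<double,3,4> A; A.iota(1);
    Tensor<double,4,5> B; B.iota(1);

    // A two-tensor einsum over a single shared index is a matrix-matrix
    // product and is now dispatched to matmul under the hood
    Tensor<double,3,5> C1 = einsum<Index<I,J>, Index<J,K>>(A, B);

    // Equivalent explicit call
    Tensor<double,3,5> C2 = matmul(A, B);

    print(C1, C2);
    return 0;
}
```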
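A views sketch for the `all`/`fall` aliasing and the improved `TensorMap`. The `TensorMap` constructor from a raw pointer is an assumption about its signature; the view expressions are the ones named in the notes:

```cpp
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,4,4> A; A.iota(0);

    // A(all,all) is now a fixed view and can simply hand back the underlying
    // tensor instead of building a view with runtime sequences and offsets
    Tensor<double,4,4> B = A(all, all);

    // A single column extracted through a view
    Tensor<double,4> col = A(all, 0);

    // TensorMap wraps external memory but now behaves like Tensor,
    // including views and block indexing (constructor signature assumed)
    double buffer[16] = {0};
    TensorMap<double,4,4> M(buffer);
    M(all, 1) = col;

    print(B, col);
    return 0;
}
```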
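A configuration sketch for the matmul backend switch. The two macros and their defaults (16 and 13) come from the notes; defining them before the include, and having libxsmm available in the build, are assumptions about the mechanism:

```cpp
// Raise the sizes at which Fastor hands matmul over to libxsmm
// (defaults are 16 for square and 13 for non-square matrices)
#define BLAS_SWITCH_MATRIX_SIZE_S  24   // threshold for square matrices
#define BLAS_SWITCH_MATRIX_SIZE_NS 16   // threshold for non-square matrices

#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    Tensor<double,32,32> A; A.iota(1);
    Tensor<double,32,32> B; B.iota(2);

    // Sizes above the switch points are routed to the libxsmm backend
    Tensor<double,32,32> C = matmul(A, B);
    print(C);
    return 0;
}
```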
Bug fixes
- Fix a bug in the `product` method of the Tensor class (99e3ff0)
- Fix AVX store bug in backend matmul 3k3 (8f4c6ae)
- Fix bug in tensor matmul for matrix-vector case (899c6c0)
- Fix a bug in SIMDVector under scalar mode with mixed types (f707070)
- Fix bugs with math functions on SIMDVector with size>256 not compiling (ca2c74d)
- Fix bugs with matrix-vector einsum (8241ac8, 70838d2)
- Fix a bug with strided_contraction when the second matrix disappears (4ff2ea0)
- Fix a bug in 4D tensor initializer_list constructor (901d8b1)
- Fixes to fully support SIMDVector fallback to scalar version
- and many more undocumented fixes
Key changes
- Complete re-architecture of the directory hierarchy of Fastor. Fastor should now be included as `#include <Fastor/Fastor.h>` (see the migration sketch below)
- The `TensorRef` class has been renamed to `TensorMap`
- Expressions now evaluate based on the type of their underlying derived classes rather than the tensor that they are getting assigned to
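A minimal migration sketch reflecting the include and rename changes above; the old `TensorRef` spelling appears only in a comment, and the constructor signature is assumed to be unchanged by the rename:

```cpp
// New include path after the directory re-architecture
#include <Fastor/Fastor.h>
using namespace Fastor;

int main() {
    double data[4] = {1., 2., 3., 4.};

    // Previously: TensorRef<double,2,2> a(data);
    TensorMap<double,2,2> a(data);
    print(a);
    return 0;
}
```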
There are many more major and minor undocumented changes.