This is a library of tensor decomposition and tensor-based methods. The project is under development and more methods will be added; the current methods are functional.
Use the package manager pip to install tensorlearn in Python.
pip install tensorlearn
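A quick sanity check after installation (the top-level module name matches the package name and is used throughout the examples below):

```python
import tensorlearn

# The functions documented below are called as tensorlearn.<name>(...).
print(tensorlearn.__name__)
```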
tensorlearn.cp_completion_als(tensor, samples, rank, iteration, cp_iteration=100)
This implementation of tensor completion is based on CP decomposition with a given fixed rank.
- tensor < array >: The given tensor to be decomposed.
- samples < array >: An array of 0s and 1s where 1s represent observed samples and 0s indicate missing entries. This array's size must match the dimensions of the tensor.
- rank < int >: The rank for CP decomposition.
- iteration < int >: The number of iterations of the ALS algorithm.
- cp_iteration < int >: The number of iterations used for the CP initialization.
- weights < array >: the vector of normalization weights (lambda) in CP decomposition
- factors < list of arrays >: factor matrices of CP decomposition
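A minimal usage sketch with synthetic data (the rank and iteration counts are illustrative; it is assumed that the inputs are numpy arrays and that the function returns the weights and factors as a tuple in the order documented above):

```python
import numpy as np
import tensorlearn

rng = np.random.default_rng(0)

# Synthetic 3-way tensor and a binary mask marking roughly 70% of entries as observed.
tensor = rng.random((8, 8, 8))
samples = (rng.random(tensor.shape) < 0.7).astype(int)

# Rank-3 CP completion with 50 ALS iterations.
weights, factors = tensorlearn.cp_completion_als(tensor, samples, 3, 50, cp_iteration=100)

# The completed tensor can be rebuilt from the returned CP factors (see cp_to_tensor below).
completed = tensorlearn.cp_to_tensor(weights, factors)
```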
tensorlearn.auto_rank_tt(tensor, epsilon)
This implementation of tensor-train decomposition determines the ranks automatically based on a given error bound, according to Oseledets (2011). Therefore, the user does not need to specify the ranks; instead, the user specifies an upper error bound (epsilon) that bounds the error of the decomposition. For more information and details, please see the tensor-train decomposition page.
- tensor < array >: The given tensor to be decomposed.
- epsilon < float >: The error bound of decomposition in the range [0,1].
- TT factors < list of arrays >: The list contains the numpy arrays of the factors (or TT cores) of the TT decomposition. The length of the list equals the number of dimensions of the given tensor.
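A short sketch with a random tensor (the epsilon value is illustrative; a random tensor compresses poorly, so this only demonstrates the call):

```python
import numpy as np
import tensorlearn

# Decompose a 4-way tensor with an error bound of 0.05.
tensor = np.random.default_rng(1).random((6, 7, 8, 9))
tt_factors = tensorlearn.auto_rank_tt(tensor, 0.05)

# One TT core per tensor dimension; the core shapes reveal the automatically chosen TT ranks.
for i, core in enumerate(tt_factors):
    print(f"core {i}: shape {core.shape}")
```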
tensorlearn.cp_als_rand_init(tensor, rank, iteration, random_seed=None)
This is an implementation of CANDECOMP/PARAFAC (CP) decomposition using the alternating least squares (ALS) algorithm with random initialization of the factors.
- tensor < array >: the given tensor to be decomposed
- rank < int >: number of ranks
- iteration < int >: the number of iterations of the ALS algorithm
- random_seed < int >: the seed of the random number generator for random initialization of the factor matrices
- weights < array >: the vector of normalization weights (lambda) in CP decomposition
- factors < list of arrays >: factor matrices of CP decomposition
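Example usage with synthetic data (the rank and iteration count are illustrative); the reconstruction error is computed with cp_to_tensor and tensor_frobenius_norm, both documented further below:

```python
import numpy as np
import tensorlearn

tensor = np.random.default_rng(2).random((5, 6, 7))

# Rank-4 CP decomposition with 100 ALS iterations and a fixed seed for reproducibility.
weights, factors = tensorlearn.cp_als_rand_init(tensor, 4, 100, random_seed=42)

# Rebuild the approximation and report its relative Frobenius error.
approx = tensorlearn.cp_to_tensor(weights, factors)
rel_error = (tensorlearn.tensor_frobenius_norm(tensor - approx)
             / tensorlearn.tensor_frobenius_norm(tensor))
print(rel_error)
```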
tensorlearn.tucker_hosvd(tensor, epsilon)
This implementation of Tucker decomposition computes the core tensor and factor matrices via higher-order SVD (HOSVD) for a given error bound (epsilon).
- tensor < array >: the given tensor to be decomposed
- epsilon < float >: The error bound of decomposition in the range [0,1].
- core_factor < array >: Core tensor factor of the Tucker decomposition
- factor_matrices < list >: A list of factor matrices of the Tucker decomposition
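A usage sketch (assuming, by analogy with auto_rank_tt, that epsilon bounds the decomposition error and that the core and factor matrices are returned as a tuple in the order documented above):

```python
import numpy as np
import tensorlearn

tensor = np.random.default_rng(3).random((10, 10, 10))

# Tucker decomposition with an error bound of 0.1.
core_factor, factor_matrices = tensorlearn.tucker_hosvd(tensor, 0.1)

# Rebuild the approximation from the core and factor matrices (see tucker_to_tensor below).
approx = tensorlearn.tucker_to_tensor(core_factor, factor_matrices)
print(core_factor.shape, [m.shape for m in factor_matrices])
```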
tensorlearn.tt_to_tensor(factors)
Returns the full tensor given the TT factors
- factors < list of numpy arrays >: TT factors
- full tensor < numpy array >
tensorlearn.tt_compression_ratio(factors)
Returns the data compression ratio for tensor-train decomposition.
- factors < list of numpy arrays >: TT factors
- Compression ratio < float >
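A sketch combining the two TT utilities above with auto_rank_tt (synthetic data; the values are illustrative):

```python
import numpy as np
import tensorlearn

tensor = np.random.default_rng(4).random((6, 7, 8, 9))
tt_factors = tensorlearn.auto_rank_tt(tensor, 0.1)

# Reconstruct the full tensor from its TT cores and check the relative error.
reconstructed = tensorlearn.tt_to_tensor(tt_factors)
rel_error = (tensorlearn.tensor_frobenius_norm(tensor - reconstructed)
             / tensorlearn.tensor_frobenius_norm(tensor))

# Data compression ratio of the TT representation.
ratio = tensorlearn.tt_compression_ratio(tt_factors)
print(rel_error, ratio)
```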
tensorlearn.cp_to_tensor(weights, factors)
Returns the full tensor given the CP factor matrices and weights.
- weights < array >: the vector of normalization weights (lambda) in CP decomposition
- factors < list of arrays >: factor matrices of the CP decomposition
- full tensor < array >
tensorlearn.tucker_to_tensor(core_factor, factor_matrices)
Returns the full tensor given the Tucker core factor and factor matrices.
- core_factor < array >: Core factor of Tucker decomposition
- factor_matrices < list of arrays >: factor matrices of Tucker decomposition
- full tensor < array >
tensorlearn.cp_compression_ratio(weights, factors)
Returns the data compression ratio for CP decomposition.
- weights < array >: the vector of normalization weights (lambda) in CP decomposition
- factors < list of arrays >: factor matrices of the CP decomposition
- Compression ratio < float >
tensorlearn.tucker_compression_ratio(core_factor, factor_matrices)
Returns the data compression ratio for Tucker decomposition.
- core_factor < array >: Core factor of Tucker decomposition
- factor_matrices < list of arrays >: factor matrices of Tucker decomposition
- Compression ratio < float >
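A combined sketch comparing the compression ratios of a CP model and a Tucker model of the same tensor (the rank, iteration count, and error bound are illustrative):

```python
import numpy as np
import tensorlearn

tensor = np.random.default_rng(5).random((12, 12, 12))

# CP model: rank 5, 50 ALS iterations.
weights, cp_factors = tensorlearn.cp_als_rand_init(tensor, 5, 50)

# Tucker model with an error bound of 0.1.
core_factor, factor_matrices = tensorlearn.tucker_hosvd(tensor, 0.1)

print(tensorlearn.cp_compression_ratio(weights, cp_factors))
print(tensorlearn.tucker_compression_ratio(core_factor, factor_matrices))
```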
tensorlearn.tensor_resize(tensor, new_shape)
This method reshapes the given tensor to a new shape. The new size must be greater than or equal to the original size. If the new shape results in a tensor with more elements, the extra entries are filled with zeros. This works similarly to numpy.ndarray.resize().
- tensor < array >: the given tensor
- new_shape < tuple >: new shape
- tensor < array >: tensor with new given shape
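A small sketch (assuming numpy-style zero padding as described above):

```python
import numpy as np
import tensorlearn

tensor = np.arange(6).reshape(2, 3)

# Grow the 2x3 array to 3x3; the three extra entries are expected to be filled with zeros.
resized = tensorlearn.tensor_resize(tensor, (3, 3))
print(resized)
```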
tensorlearn.unfold(tensor, n)
Unfolds the tensor with respect to dimension n.
- tensor < array >: tensor to be unfolded
- n < int >: dimension based on which the tensor is unfolded
- matrix < array >: unfolded tensor with respect to dimension n
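A small sketch (assuming 0-based dimension indexing and numpy arrays):

```python
import numpy as np
import tensorlearn

tensor = np.arange(24).reshape(2, 3, 4)

# For a (2, 3, 4) tensor, the unfolding along dimension 0 is expected to be a 2 x 12 matrix.
matrix = tensorlearn.unfold(tensor, 0)
print(matrix.shape)
```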
tensorlearn.tensor_frobenius_norm(tensor)
Calculates the Frobenius norm of the given tensor.
- tensor < array >: the given tensor
- frobenius norm < float >
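A quick check: the Frobenius norm of a tensor is the square root of the sum of its squared entries, so the result is expected to agree with numpy's norm of the flattened array.

```python
import numpy as np
import tensorlearn

tensor = np.random.default_rng(6).random((3, 4, 5))

print(tensorlearn.tensor_frobenius_norm(tensor))
print(np.linalg.norm(tensor.ravel()))  # expected to agree
```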
tensorlearn.error_truncated_svd(x, error)
This method conducts a compact SVD and returns the sigma (error)-truncated SVD of a given matrix. It is implemented using numpy.linalg.svd with full_matrices=False and is used in the TT-SVD algorithm in auto_rank_tt.
- x < 2D array >: the given matrix to be decomposed
- error < float >: the given error in the range [0,1]
- r, u, s, vh < int, numpy array, numpy array, numpy array >
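A sketch of how the returned values can be used; the factors are sliced to the returned rank r so the snippet works whether or not u, s, and vh are already truncated:

```python
import numpy as np
import tensorlearn

x = np.random.default_rng(7).random((20, 15))

# Error-truncated SVD with a 0.1 error tolerance.
r, u, s, vh = tensorlearn.error_truncated_svd(x, 0.1)

# Rank-r approximation of x built from the truncated factors.
approx = u[:, :r] @ np.diag(s[:r]) @ vh[:r, :]
print(r, approx.shape)
```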
tensorlearn.column_wise_kronecker(a, b)
Returns the column-wise Kronecker product (sometimes known as the Khatri-Rao product) of two given matrices.
- a,b < 2D array >: the given matrices
- column wise Kronecker product < array >
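A small sketch: for a 4 x 3 and a 5 x 3 matrix, the column-wise Kronecker (Khatri-Rao) product is 20 x 3, where column j of the result is the Kronecker product of column j of a and column j of b.

```python
import numpy as np
import tensorlearn

rng = np.random.default_rng(8)
a = rng.random((4, 3))
b = rng.random((5, 3))

c = tensorlearn.column_wise_kronecker(a, b)
print(c.shape)  # expected (20, 3)
```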