VectorWolf is (Tensor => Vector) + (reverse(Flow) => Wolf)
Syntax is almost the same as TensorFlow.
Faster than TensorFlow for smaller batch sizes.
| Performance (in sec) | VectorWolf | TensorFlow |
|---|---|---|
| Cancer Prediction (training data = 426, batch size = 20, epochs = 70) | 2.226 | 10.115 |
| Coffee Roasting (training data = 200000, batch size = 64, epochs = 10) | 1.238 | 47.201 |
| House Price Prediction (training data = 15117, batch size = 20, epochs = 50) | 5.259 | 85.520 |
- How to use
- Important Points
- Metric
- Classification
- Regression
- Activation functions
- Loss functions
- Optimizers
- Callbacks
- History
- Methods for Layer
- Methods for Model
- Other Methods
- Write your code in `main.cpp`. `main.cpp` is given as an example to demonstrate its usage.
- Ensure that the relative locations of the header (`.cpp`) files are correct w.r.t. `main.cpp`.
- This is meant to be used alongside Python, because visualisation and feature engineering are much easier in Python.
- Output processed data to .csv files and read them separately with VectorWolf.
- Train the model in VectorWolf, write the required data back into a .csv file, and read it again with Python for further checks.
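That CSV round trip can be sketched with plain C++ streams; `write_csv` and `read_csv` below are illustrative stand-ins for this workflow, not VectorWolf's documented helpers (those are listed under Other Methods):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

using D = double;

// Write a 2D dataset to a .csv file (one row per line, comma-separated).
void write_csv(const std::string& path, const std::vector<std::vector<D>>& data) {
    std::ofstream out(path);
    for (const auto& row : data) {
        for (size_t i = 0; i < row.size(); ++i)
            out << row[i] << (i + 1 < row.size() ? "," : "");
        out << "\n";
    }
}

// Read the same .csv file back into a 2D vector of D.
std::vector<std::vector<D>> read_csv(const std::string& path) {
    std::vector<std::vector<D>> data;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::vector<D> row;
        std::stringstream ss(line);
        std::string cell;
        while (std::getline(ss, cell, ',')) row.push_back(std::stod(cell));
        data.push_back(row);
    }
    return data;
}
```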
- To change the data type used for calculation, go to `basic.h` and change `using D = double` to the desired data type.
- Running on Windows PowerShell is faster than running on WSL + Ubuntu. Running on DevCpp is much faster (10x) than running on other terminals.
- .cpp files are treated as headers instead of .h files to get a performance boost (5x).
- The dimensions of the weight matrix are the transpose of what is used in TensorFlow.
- Go here for images about Running and Output.
- Losses printed by VectorWolf's BinaryCrossentropy are generally lower than TensorFlow's, even for the same set of weights and biases, because of inaccurate calculations due to capping at eps = 1e-15 (to prevent runtime errors). For example, this causes $-\log(e^{-18})$ to become $-\log(e^{-15})$, hence the value changes from 18 to 15.
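A minimal sketch of the clipping involved, assuming a simple clamp to [eps, 1 − eps] before taking the log (VectorWolf's internals may differ):

```cpp
#include <algorithm>
#include <cmath>

using D = double;

// Clamp a predicted probability into [eps, 1 - eps] before the log,
// as BinaryCrossentropy implementations commonly do to avoid log(0).
D clipped_neg_log(D p, D eps = 1e-15) {
    p = std::clamp(p, eps, 1.0 - eps);
    return -std::log(p);
}
```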
Parameters:-
- `y_true`: Actual results (necessary).
- `y_pred`: Predicted results (necessary).
- `print_`: Whether to print the result (set to true by default) (optional).

Use: Gets the evaluation metric values after model prediction.
Add `metrics.` before calling any of the Metric methods.
$\text{Accuracy} = \frac{\text{correct classifications}}{\text{total classifications}} = \frac{TP+TN}{TP+TN+FP+FN}$ returns -> `D`
$\text{Recall (or TPR)} = \frac{\text{correctly classified actual positives}}{\text{all actual positives}} = \frac{TP}{TP+FN}$ returns -> `D`
$\text{Precision} = \frac{\text{correctly classified actual positives}}{\text{everything classified as positive}} = \frac{TP}{TP+FP}$ returns -> `D`
$\text{F1-score}=2*\frac{\text{precision * recall}}{\text{precision + recall}} = \frac{2\text{TP}}{2\text{TP + FP + FN}}$ returns -> `D`
returns -> `void`. Use: Prints all the above metrics.
returns -> `vector<vector<int>>` ($2 \times 2$)
$\text{MAE}(y,\hat{y}) = \frac{1}{m} \cdot \sum\limits_{i=1}^{m}|y_i - \hat{y}_i|$ returns -> `D`
$\text{MSE}(y,\hat{y}) = \frac{1}{m} \cdot \sum\limits_{i=1}^{m}(y_i - \hat{y}_i)^2$ returns -> `D`
$\text{RMSE}(y,\hat{y}) = \sqrt{ \frac{1}{m} \cdot \sum\limits_{i=1}^{m}(y_i - \hat{y}_i)^2 }$ returns -> `D`
Parameter in layers.Dense().
Pass it as activation = "activation_name" (case independent).
$f(z) = z$
$f(z) = \max(0, z)$
$f(z) = \frac{1}{1+e^{(-z)}}$
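The three activations above written as plain functions (illustrative sketch, not VectorWolf's internal implementation):

```cpp
#include <algorithm>
#include <cmath>

using D = double;

// Linear, ReLU and sigmoid activations applied to a single pre-activation z.
D linear_act (D z) { return z; }
D relu_act   (D z) { return std::max(z, 0.0); }
D sigmoid_act(D z) { return 1.0 / (1.0 + std::exp(-z)); }
```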
Parameter in model.compile().
Pass it as loss = "loss_name" (case independent).
$\mathcal{L}(y, \hat{y}) = \frac{1}{m} \cdot \sum\limits_{i=1}^{m} (y_i - \hat{y}_i)^2$ Can also be passed as "mse".
$\mathcal{L}(y, \hat{y}) = -\frac{1}{m} \cdot \sum\limits_{i=1}^{m} \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]$ Can also be passed as "bce".
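Reference implementations of the two losses (a sketch only; the `eps` clipping mirrors the BinaryCrossentropy note in Important Points):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

using D = double;

// Mean squared error over m samples.
D mse(const std::vector<D>& y, const std::vector<D>& yhat) {
    D s = 0;
    for (size_t i = 0; i < y.size(); ++i)
        s += (y[i] - yhat[i]) * (y[i] - yhat[i]);
    return s / y.size();
}

// Binary cross-entropy with predictions clipped into [eps, 1 - eps].
D bce(const std::vector<D>& y, const std::vector<D>& yhat, D eps = 1e-15) {
    D s = 0;
    for (size_t i = 0; i < y.size(); ++i) {
        D p = std::clamp(yhat[i], eps, 1 - eps);
        s += y[i] * std::log(p) + (1 - y[i]) * std::log(1 - p);
    }
    return -s / y.size();
}
```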
Parameter in model.compile().
Pass it as optimizer = "optimizer_name" (case independent).
This is the default optimizer if not mentioned.
Hyperparameters:-
- `learning_rate`: Learning rate (set to 0.001 by default) (optional).
Hyperparameters:-
- `learning_rate`: Learning rate (set to 0.001 by default) (optional).
- `beta_1`: Exponential decay rate for the first moment (set to 0.9 by default) (optional).
- `beta_2`: Exponential decay rate for the second moment (variance) (set to 0.999 by default) (optional).
- `epsilon`: For numerical stability (prevents division by zero) (set to $10^{-7}$ by default) (optional).
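The update these hyperparameters drive, shown for a single scalar parameter in textbook Adam form (a sketch, not VectorWolf's internal code):

```cpp
#include <cmath>

using D = double;

// Running first/second moment estimates and step counter for one parameter.
struct AdamState { D m = 0, v = 0; int t = 0; };

// One Adam update step; hyperparameter names mirror the list above.
D adam_step(D param, D grad, AdamState& s,
            D lr = 0.001, D beta_1 = 0.9, D beta_2 = 0.999, D eps = 1e-7) {
    s.t += 1;
    s.m = beta_1 * s.m + (1 - beta_1) * grad;          // first moment
    s.v = beta_2 * s.v + (1 - beta_2) * grad * grad;   // second moment
    D m_hat = s.m / (1 - std::pow(beta_1, s.t));       // bias correction
    D v_hat = s.v / (1 - std::pow(beta_2, s.t));
    return param - lr * m_hat / (std::sqrt(v_hat) + eps);
}
```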
Parameter in model.fit().
Pass it as callbacks = {callback1, callback2, ...}.
Add Callback:: before the Callback methods.
returns -> `class Callback`. Use: Stops training when the given condition is true.
Parameters:-
- `monitor`: Parameter on which the condition is set (necessary).
- `mode`: Whether the given parameter is to be "min" or "max" (if not mentioned, automatically set as per parameter) (optional).
- `patience`: No. of epochs after which training stops, if the monitored parameter is not optimized (necessary).
Struct returned by model.fit().
Contains information about training the model.
Attributes:-
- `epoch`: `vector<int>` of epochs on which the model was trained.
- `history`: `map<string,vector<D>>`. Contains the history of losses and val_losses (if present).
- `params`: `map<string,int>`. Contains the number of epochs and steps_per_epoch of the trained model.
returns -> `class Layer`. Use: Creating a layer.
Parameters:-
- `units`: No. of units in that layer (necessary).
- `activation`: Activation (set to linear by default) (optional).
- `name`: Name of layer (set to "Layer 'layer_number'" by default) (optional).
returns -> `vector<vector<D>>`. Use: Getting the output Matrix from the layer, after giving input.
Parameters:-
- `x`: Input for the layer (necessary).
- `z_store`: If true, stores the Matrix `z`, i.e. just before applying the activation function on the Matrix (set to false by default) (optional).
returns -> `int`. Use: Prints layer name, type, units, and parameters for that layer. Returns the number of parameters in that layer.
Parameters:-
- `prev_units`: No. of units in the previous layer (necessary).
returns -> `string`. Use: Getting the name of the layer.
Parameters: None
returns -> `void`. Use: Setting/changing the name of the layer.
Parameters:-
- `name__`: Updated name of the layer (necessary).
returns -> `int`. Use: Getting the number of units in the layer.
Parameters: None
returns -> `vector<vector<D>>`. Use: Getting the weight Matrix of the layer.
Parameters: None
returns -> `void`. Use: Loading a weight matrix onto the layer.
Parameters:-
- `new_weight`: Updated weight matrix of the layer (necessary).
returns -> `vector<D>`. Use: Getting the bias vector of the layer.
Parameters: None
returns -> `void`. Use: Loading a bias vector onto the layer.
Parameters:-
- `new_bias`: Updated bias vector of the layer (necessary).
returns -> `string &`. Use: Getting the name of the activation function of the layer.
Parameters: None
returns -> `void`. Use: Prints layer_name, units, and all weights and biases for that layer.
Parameters:-
- `layer`: `class Layer` to be printed (necessary).
returns -> `class Model`. Use: Creating a model.
Parameters:-
- `input_param`: No. of features in the input of the training data (necessary).
- `vector<Layer>`: Vector of layers, input in the form `{ layers.Dense(units = ..., activation = "...", name = "..."), ... }` (optional).
returns -> `void`. Use: Adding a new layer to the model.
Parameters:-
- `new_layer`: New layer to be added to the model (necessary).
returns -> `void`. Use: Prints the layer_name, output shape and parameters for each layer in the model.
Parameters: None
returns -> `void`. Use: Setting the loss function and learning_rate for the model.
Parameters:-
returns -> `struct History`. Use: Training the model. Prints the current epoch number, the loss (and val_loss if available) for that epoch, and the time to run that epoch.
Parameters:-
- `x_train`: Input for training data (necessary).
- `y_train`: Output for given data (necessary).
- `epochs`: No. of epochs (set to 0, i.e. nothing happens, by default) (optional).
- `batch_size`: Input data divided into subsets (set to 32 by default) (if $\text{batch\_size} \nmid |\text{x\_train}|$, it wraps around from the start to compensate) (optional).
- `steps_per_epoch`: No. of steps per epoch (set to $\left\lceil \frac{|\text{x\_train}|}{\text{batch\_size}} \right\rceil$ by default) (optional).
- `validation_data`: Pair of `{X_test, y_test}` to compare test losses (optional).
- `callbacks`: Callbacks (optional).
- `shuffle`: Training data will be randomly shuffled before each epoch (set to true by default) (optional).
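The wrap-around behaviour of `batch_size` can be illustrated at the index level (a sketch only, not VectorWolf's batching code):

```cpp
#include <vector>

// Indices of batch `batch_no`: when batch_size does not divide n_samples,
// the final batch wraps back to the start of the data to stay full.
std::vector<int> batch_indices(int n_samples, int batch_size, int batch_no) {
    std::vector<int> idx(batch_size);
    for (int i = 0; i < batch_size; ++i)
        idx[i] = (batch_no * batch_size + i) % n_samples;
    return idx;
}
```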
returns -> `vector<D>`. Use: Predicting the output from an input Matrix based on the pre-compiled and fitted model. Returns the output vector.
Parameters:-
- `x`: Input to be predicted/processed (necessary).
returns -> `D`. Use: Calculating the loss on a given dataset based on previous training of the model.
Parameters:-
- `X_test`: Input for the dataset (necessary).
- `y_test`: Output for the dataset (necessary).
- `print_`: Whether to print the box for predict (set to true by default) (optional).
returns -> `void`. Use: Updating the no. of input features for the model.
Parameters:-
- `input_features_`: New no. of input features for the model.
returns -> `class Layer &`. Use: Getting a layer by reference, to make changes to it or get information.
Parameters:-
- `name_`: Name of the layer to be returned (necessary).
returns -> `vector<class Layer>`. Use: Getting all the layers in a model.
Parameters: None
returns -> `void`. Use: Prints `print(layer[i])` for all layers.
Parameters:-
- `model`: `class Model` to be printed (necessary).
returns -> `vector<vector<D>>`. Use: Reads input/output data from a .csv file and returns it as a 2D vector.
Parameters:-
- `path`: Absolute/relative path to the .csv file to be read (necessary).
- `header`: Whether headers (1st row) are to be filtered out separately (set to true by default) (optional).
- `dummy_replace`: `map<string,map<string,D>>`. For replacing dummy variables in columns, from string to D (optional).
- `null_values`: Vector of strings which are treated as NULL values (set to the empty string by default) (optional).
returns -> `void`. Use: Writing data onto a .csv file for further processing later.
Parameters:-
- `path`: Absolute/relative path to the .csv file to be written (necessary).
- `data`: Vector to be written into the .csv file (necessary).
returns -> `void`. Use: Printing the shape of a Matrix.
Parameters:-
- `M`: Matrix whose shape is to be printed (necessary).
returns -> `vector<vector<D>>`. Use: Getting the transpose of a Matrix.
Parameters:-
- `M`: Matrix to be transposed (necessary).
returns -> `vector<vector<D>>` or `vector<D>`. Use: Multiplying two Matrices (if their dimensions match), or a Matrix with a constant, or a Vector with a constant.
Parameters:-
- `a`: First Matrix or Vector (necessary).
- `b` or `c`: Second Matrix or constant (necessary).
returns -> `vector<vector<D>>` or `vector<D>`. Use: Element-wise product of two Matrices or Vectors (if their dimensions match).
Parameters:-
- `a`: First Matrix or Vector (necessary).
- `b`: Second Matrix or Vector (necessary).
returns -> `void`. Use: Printing the contents of a 1D vector, 2D vector, Layer or Model.
Parameters:-
- `vec` or `mat` or `layer` or `model`: Any one of the four (necessary).