froog: a gpu accelerated tensor library
homepage | documentation | pip

froog is an easy-to-read tensor library (31k pip installs!) with GPU acceleration via OpenCL and Apple Metal. Inspired by tinygrad and micrograd.

Installation

pip install froog

Features

Quick Example

Here's how you set up a simple multilayer perceptron for classification on MNIST. Looks pretty similar to PyTorch, right?

from froog.tensor import Tensor
from froog.ops import Linear
import froog.optim as optim

class mnistMLP:
  def __init__(self):
    self.l1 = Tensor(Linear(784, 128)) # layer 1
    self.l2 = Tensor(Linear(128, 10))  # layer 2

  def forward(self, x):
    # forward pass through both layers, then log-softmax for output log-probabilities
    return x.dot(self.l1).relu().dot(self.l2).logsoftmax()

model = mnistMLP() # create model
optimizer = optim.SGD([model.l1, model.l2], lr=0.001) # stochastic gradient descent optimizer
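
A training step would then do a forward pass, build a scalar loss, call backward, and let the optimizer update the weights. Below is a minimal sketch; the negative one-hot loss construction and the optimizer.step() call are assumptions modeled on tinygrad-style training loops, so consult froog's own examples for the exact training API.

import numpy as np

# hypothetical batch: 64 flattened MNIST images and integer labels
X = np.random.randn(64, 784).astype(np.float32)
labels = np.random.randint(0, 10, size=64)

# negative one-hot targets so that out.mul(y).mean() behaves like an NLL-style loss
y = np.zeros((64, 10), dtype=np.float32)
y[np.arange(64), labels] = -1.0

out = model.forward(Tensor(X))    # log-probabilities, shape (64, 10)
loss = out.mul(Tensor(y)).mean()  # assumed NLL-style loss construction
loss.backward()                   # backpropagate through the graph
optimizer.step()                  # assumed: applies the SGD update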

GPU Support

Device management is handled transparently: froog automatically selects from METAL, OPENCL, and CPU. To use the GPU:

from froog.tensor import Tensor
from froog import get_device
# Check if GPU is available
has_gpu = get_device() is not None and get_device().name != "CPU"
# Create a tensor
x = Tensor([1, 2, 3])
# Push to GPU if available
if has_gpu: x = x.to_gpu()
# Operations run on GPU automatically
y = x + x
z = y * y
# Bring back to CPU when needed
result = z.to_cpu()
print(result.data)

You can also check what devices are available:

from froog import get_available_devices
available_devices = get_available_devices()
print(f"Available devices: {available_devices}")

Or set a specific device:

from froog import set_device
set_device("METAL")  # or "OPENCL"

EfficientNet in froog!

(photo of a pug)

We have an implementation of EfficientNet v2 built entirely in froog using the official PyTorch weights! Running inference on this pug...

python3 models/efficientnet.py <https://optional_image_url>

***********output*************
inference 4.34 s

imagenet class: 254
prediction    : pug, pug-dog
probability   : 0.9402361
******************************

I recommend checking out the code; it's highly documented and pretty cool.

API

MATH

  • .add(y) - Addition with y
  • .sub(y) - Subtraction with y
  • .mul(y) - Multiplication with y
  • .div(y) - Division by y
  • .pow(y) - Power function (raise to power y)
  • .sum() - Sum all elements
  • .mean() - Mean of all elements
  • .sqrt() - Square root
  • .dot(y) - Matrix multiplication with y
  • .matmul(y) - Alias for dot
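
A quick sketch of a few of these math ops (values are arbitrary):

from froog.tensor import Tensor

a = Tensor([[1., 2.], [3., 4.]])
b = Tensor([[5., 6.], [7., 8.]])

c = a.add(b)  # elementwise addition
d = a.mul(b)  # elementwise multiplication
e = a.dot(b)  # 2x2 matrix multiplication
s = e.sum()   # scalar sum of all elements
print(c.data, d.data, e.data, s.data)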

MACHINE LEARNING

  • .relu() - Rectified Linear Unit activation
  • .sigmoid() - Sigmoid activation
  • .dropout(p=0.5, training=True) - Dropout regularization
  • .logsoftmax() - Log softmax function
  • .swish() - Swish activation function (x * sigmoid(x))
  • .conv2d(w, stride=1, groups=1) - 2D convolution
  • .im2col2dconv(w) - Image to column for convolution
  • .max_pool2d(kernel_size=(2,2)) - 2D max pooling
  • .avg_pool2d(kernel_size=(2,2)) - 2D average pooling
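
For the convolution and pooling ops, a rough sketch (the NCHW input layout, the (out_channels, in_channels, kH, kW) weight layout, and the output shapes are assumptions based on the conv2d signature above):

import numpy as np
from froog.tensor import Tensor

x = Tensor(np.random.randn(1, 3, 28, 28).astype(np.float32))  # one 3-channel 28x28 image
w = Tensor(np.random.randn(8, 3, 3, 3).astype(np.float32))    # 8 filters with 3x3 kernels

feat = x.conv2d(w, stride=1).relu()           # assumed output shape (1, 8, 26, 26) with no padding
pooled = feat.max_pool2d(kernel_size=(2, 2))  # assumed output shape (1, 8, 13, 13)
print(pooled.shape)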

TENSOR

  • Tensor.zeros(*shape) - Create tensor of zeros
  • Tensor.ones(*shape) - Create tensor of ones
  • Tensor.randn(*shape) - Create tensor with random normal values
  • Tensor.eye(dim) - Create identity matrix
  • Tensor.arange(start, stop=None, step=1) - Create tensor with evenly spaced values
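
For example:

from froog.tensor import Tensor

zeros = Tensor.zeros(2, 3)       # 2x3 tensor of zeros
ones  = Tensor.ones(3)           # vector of three ones
noise = Tensor.randn(4, 4)       # 4x4 tensor of standard-normal samples
ident = Tensor.eye(3)            # 3x3 identity matrix
ramp  = Tensor.arange(0, 10, 2)  # evenly spaced values 0, 2, 4, 6, 8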

TENSOR PROPERTIES

  • .shape - The shape of the tensor as a tuple
  • .size - Total number of elements in the tensor
  • .ndim - Number of dimensions (rank) of the tensor
  • .transpose - Transpose of the tensor
  • .dtype - Data type of the tensor
  • .is_gpu - Whether tensor is on GPU
  • .grad - Gradient of tensor with respect to some scalar value
  • .data - Underlying NumPy array (or GPU buffer)
  • .to_float() - Converts tensor to float32 data type
  • .to_int() - Converts tensor to int32 data type
  • .to_bool() - Converts tensor to boolean data type
  • .reshape(*shape) - Change tensor shape
  • .view(*shape) - Alternative to reshape
  • .pad2d(padding=None) - Pad 2D tensors
  • .flatten() - Returns a flattened 1D copy of the tensor
  • .unsqueeze(dim) - Add dimension of size 1 at specified position
  • .squeeze(dim=None) - Remove dimensions of size 1
  • .detach() - Returns a tensor detached from computation graph
  • .assign(x) - Assign values from tensor x to this tensor
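
A few of these in action (a small sketch; the printed values assume NumPy-style semantics):

from froog.tensor import Tensor

x = Tensor.randn(2, 3, 4)
print(x.shape)  # (2, 3, 4)
print(x.ndim)   # 3
print(x.size)   # 24 elements in total

y = x.reshape(6, 4)  # same data viewed as 6x4
z = y.flatten()      # 1D copy with 24 elements
w = z.unsqueeze(0)   # add a leading axis -> shape (1, 24)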

GPU

  • .to_cpu() - Moves tensor to CPU
  • .to_gpu() - Moves tensor to GPU
  • .gpu_() - In-place GPU conversion (modifies tensor)

AUTOGRAD

  • .backward(allow_fill=True) - Performs backpropagation
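
Tying it together, a minimal autograd round-trip (a sketch; the exact contents of .grad depend on froog's internals):

from froog.tensor import Tensor

x = Tensor.randn(3, 3)
y = Tensor.randn(3, 3)

out = x.dot(y).relu().sum()  # scalar output
out.backward()               # fills in gradients along the graph
print(x.grad)                # gradient of out with respect to x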
