TensorFusion.AI
Next-Generation GPU Virtualization and Pooling for Enterprises
Fewer GPUs, More AI Apps.
Explore the docs »
View Demo | Report Bug | Request Feature

♾️ Tensor Fusion


Tensor Fusion is a state-of-the-art GPU virtualization and pooling solution designed to maximize GPU cluster utilization.

This repo hosts the official TensorFusion website and documentation. For the core project, see the main repo: NexusGPU/tensor-fusion

🚀 Quick Start

Onboard Your Own AI Infra

Try it out

# Step 1: Install TensorFusion in Kubernetes
# (the release name, chart name, and namespace below are illustrative; check the chart repo for the exact values)
helm install tensor-fusion tensor-fusion --repo https://nexusgpu.github.io/tensor-fusion/ --namespace tensor-fusion-sys --create-namespace

# Step 2: Onboard GPU nodes into the TensorFusion cluster
kubectl apply -f https://raw.githubusercontent.com/NexusGPU/tensor-fusion/main/manifests/gpu-node.yaml

# Step 3: Check that the cluster and pool are ready
kubectl get gpupools -o wide && kubectl get gpunodes -o wide

# Step 4: Create an inference app that uses virtual, remote GPU resources from the TensorFusion cluster
kubectl apply -f https://raw.githubusercontent.com/NexusGPU/tensor-fusion/main/manifests/inference-app.yaml

# Then you can port-forward the service to test inference, or exec a shell in the app pod, for example:
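# (Illustrative commands only — the workload/service name "inference-app" and port 8000 are
#  assumptions; use whatever the sample inference-app.yaml actually creates.)
kubectl port-forward svc/inference-app 8000:8000
curl http://localhost:8000/v1/models   # hypothetical endpoint; depends on the serving framework

# Or open a shell in the app pod to inspect the virtual GPU (if the image ships nvidia-smi)
kubectl exec -it deploy/inference-app -- nvidia-smi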

💬 Discussion