TensorFusion.AI
Next-Generation GPU Virtualization and Pooling for Enterprises
Fewer GPUs, More AI Apps.
Explore the docs »
View Demo | Report Bug | Request Feature
TensorFusion is a state-of-the-art GPU virtualization and pooling solution designed to maximize GPU cluster utilization.
This repo hosts the TensorFusion official website and documentation. Check out the main repo for more information: NexusGPU/tensor-fusion
- Explore the demo account: Demo Console (work in progress)
- Run the following commands to try TensorFusion out in 3 minutes:
```bash
# Step 1: Install TensorFusion in Kubernetes
# (the release name, chart name, and namespace below are illustrative; see the main repo for the exact values)
helm install tensor-fusion tensor-fusion \
  --repo https://nexusgpu.github.io/tensor-fusion/ \
  --namespace tensor-fusion-sys --create-namespace

# Step 2: Onboard GPU nodes into the TensorFusion cluster
kubectl apply -f https://raw.githubusercontent.com/NexusGPU/tensor-fusion/main/manifests/gpu-node.yaml

# Step 3: Check that the cluster and pool are ready
kubectl get gpupools -o wide && kubectl get gpunodes -o wide

# Step 4: Create an inference app using virtual, remote GPU resources in the TensorFusion cluster
kubectl apply -f https://raw.githubusercontent.com/NexusGPU/tensor-fusion/main/manifests/inference-app.yaml

# Step 5: Forward the port to test inference, or exec into the app's shell (see the sketch below)
```
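For example, assuming the manifest above creates a Deployment named `inference-app` that serves HTTP on port 8000 (both the name and the port are assumptions, not guaranteed by the manifest), verification could look like this:

```bash
# Hypothetical resource name, port, and endpoint: adjust to whatever the inference-app manifest actually creates
kubectl port-forward deploy/inference-app 8000:8000 &
curl http://localhost:8000/           # send a test request to the inference app

# Or open a shell inside the Pod to inspect the virtual GPU environment
kubectl exec -it deploy/inference-app -- /bin/sh
```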
- Discord channel: https://discord.gg/2bybv9yQNk
- Discuss anything about TensorFusion: GitHub Discussions
- Contact us via WeCom (企业微信) for the Greater China region
- Email us: support@tensor-fusion.com
- Schedule 1:1 meeting with TensorFusion founders