Benchmarks for Multi-GPU Communication with MVAPICH2
Asynchronous learning example running on localhost
CRNN (Convolutional Recurrent Neural Network), with optional STN (Spatial Transformer Network), implemented in TensorFlow with multi-GPU support.
MobileNet built with TensorFlow
Code for py-R-FCN-multiGPU, maintained by bupt-priv
Lightweight Keras model for sketch image classification using the Quick, Draw! dataset
multi_gpu_infer: multi-GPU inference via multiprocessing or subprocesses
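A minimal sketch of this pattern, assuming PyTorch: one inference process per GPU, each pinned to its device with CUDA_VISIBLE_DEVICES before doing any CUDA work. The model and data are stand-ins, not the repo's actual code.

```python
# Illustrative sketch (not the repo's code): one inference process per GPU.
import os
import torch
import torch.multiprocessing as mp

def worker(gpu_id, shard):
    # Pin this process to a single GPU before any CUDA context is created.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    model = torch.nn.Linear(8, 2).eval().to("cuda")   # stand-in for a real model
    with torch.no_grad():
        for batch in shard:
            _ = model(batch.to("cuda"))

if __name__ == "__main__":
    mp.set_start_method("spawn")                      # safe start method with CUDA
    shards = [[torch.randn(4, 8)] for _ in range(2)]  # one data shard per GPU
    procs = [mp.Process(target=worker, args=(i, s)) for i, s in enumerate(shards)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```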
Original PyTorch implementation of Cross-lingual Language Model Pretraining.
Very minimal PyTorch boilerplate with wandb logging and multi-GPU support
Custom Iterable Dataset Class for Large-Scale Data Loading
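For context, a minimal sketch of such a class, assuming PyTorch's IterableDataset API: it streams lines from a large file and shards the work across DataLoader workers. The file path and round-robin sharding scheme are illustrative.

```python
# Illustrative sketch of a streaming dataset sharded across DataLoader workers.
import torch
from torch.utils.data import IterableDataset, DataLoader, get_worker_info

class LineStream(IterableDataset):
    def __init__(self, path):
        self.path = path  # hypothetical path to a large text file

    def __iter__(self):
        info = get_worker_info()
        wid = info.id if info else 0
        nworkers = info.num_workers if info else 1
        with open(self.path) as f:
            for i, line in enumerate(f):
                if i % nworkers == wid:   # round-robin shard per worker
                    yield line.rstrip("\n")

loader = DataLoader(LineStream("data.txt"), batch_size=32, num_workers=4)
```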
AI core library
Training Using Multiple GPUs
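A minimal sketch of single-node multi-GPU training, assuming PyTorch's nn.DataParallel; the repos listed here may use other mechanisms such as DistributedDataParallel.

```python
# Illustrative sketch: replicate the model across visible GPUs when present.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)    # splits each batch across visible GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 16, device=device), torch.randn(64, 4, device=device)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()                       # gradients are gathered onto the default GPU
opt.step()
```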
Leveraging Structural Indexes for High-Performance JSON Data Processing on GPUs
Recommendation Engine powered by Matrix Factorization.
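For illustration, a generic SGD-based matrix factorization sketch in NumPy (not this engine's actual code): it factors a partially observed rating matrix R into user factors P and item factors Q, so that P @ Q.T approximates the known ratings.

```python
# Illustrative matrix factorization via SGD; NaN entries in R mark missing ratings.
import numpy as np

def factorize(R, k=8, steps=200, lr=0.01, reg=0.02):
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    obs = np.argwhere(~np.isnan(R))           # indices of observed ratings
    for _ in range(steps):
        for u, i in obs:
            err = R[u, i] - P[u] @ Q[i]       # prediction error on one rating
            pu = P[u].copy()
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q
```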
Helps you submit multi-node, multi-GPU jobs with torchrun under Slurm
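A minimal sketch of the training-script side that torchrun launches, assuming PyTorch DistributedDataParallel: torchrun (however the Slurm job starts it) sets RANK, LOCAL_RANK, and WORLD_SIZE in the environment, and the script initializes its process group from those variables. The exact Slurm submission command is repo-specific.

```python
# Illustrative DDP entry point; run via torchrun, which sets the env vars below.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")            # reads RANK/WORLD_SIZE from the env
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    model = DDP(torch.nn.Linear(8, 2).cuda(), device_ids=[local_rank])
    # ... training loop ...
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```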
⚡ LLaMA-2 model experiment
Distributed_compy is a distributed computing library offering multi-threading, heterogeneous (CPU + multi-GPU), and multi-node support