chybhao666/TensorRT
This repository provides a Docker image of Ubuntu 16.04 + CUDA 9.0 + cuDNN 7.0 + TensorRT 3.0.


Author: Alex Cheng
E-mail: chybhao666@126.com

About Docker Engine Utility for NVIDIA GPUs


Please follow the instructions in https://github.com/NVIDIA/nvidia-docker/blob/master/README.md to install nvidia-docker.

About TensorRT


The following paragraphs are quoted from https://devblogs.nvidia.com/parallelforall/int8-inference-autonomous-vehicles-tensorrt/, written by Joohoon Lee.

TensorRT is a high-performance deep learning inference optimizer and runtime engine for production deployment of deep learning applications. Developers can optimize models trained in TensorFlow or Caffe to generate memory-efficient runtime engines that maximize inference throughput, making deep learning practical for latency-critical products and services like autonomous driving.

The latest TensorRT 3 release introduces a fully-featured Python API, which enables researchers and developers to optimize and serialize their DNN using familiar Python code. With TensorRT 3 you can deploy models either in Python, for cloud services, or in C++ for real-time applications such as autonomous driving software running on the NVIDIA DRIVE PX AI car computer.
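As a sketch of that Python workflow (TensorRT 3's legacy `trt.utils`/`trt.infer` interface, as shown in the quoted blog post; the file paths and output-layer name below are placeholders, not taken from this repository), optimizing a Caffe model into a serialized engine might look like:

```python
# Sketch only: requires the TensorRT 3 Python package inside the container.
import tensorrt as trt

logger = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)

# Build an optimized runtime engine from a Caffe deploy file and weights.
# "deploy.prototxt", "model.caffemodel" and "prob" are hypothetical names.
engine = trt.utils.caffe_to_trt_engine(
    logger,
    "deploy.prototxt",        # network description
    "model.caffemodel",       # trained weights
    1,                        # max batch size
    1 << 20,                  # max workspace size in bytes
    ["prob"],                 # output layer name(s)
    trt.infer.DataType.FLOAT) # build precision

# Serialize the engine for later deployment in Python or C++.
trt.utils.write_engine_to_file("model.engine", engine.serialize())
```

The serialized engine can then be reloaded for inference without repeating the optimization step.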

Pull the CUDA 9.0 + cuDNN 7.0 + TensorRT 3.0 GA Docker image

docker pull chybhao666/cuda9_cudnn7_tensorrt3.0:latest

Run the Docker image:

nvidia-docker run -it --net=host chybhao666/cuda9_cudnn7_tensorrt3.0:latest
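Once the container is running, a few generic checks can confirm the stack is in place (these are standard CUDA and Debian tools, not commands specific to this image, so exact output may vary):

```shell
# Inside the running container:
nvidia-smi                    # driver and GPU visible through nvidia-docker
nvcc --version                # CUDA toolkit version (expect 9.0)
dpkg -l | grep -i tensorrt    # TensorRT packages installed in the image
```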
