::::{grid}
:reverse:
:gutter: 2 1 1 1
:margin: 4 4 1 1

:::{grid-item}
:columns: 4
:::

:::{grid-item}
:columns: 8
:class: sd-fs-3

NVIDIA Triton Inference Server

:::

::::

Triton Inference Server is open-source inference serving software that streamlines AI inferencing.

<iframe width="560" height="315" src="https://www.youtube.com/embed/NQDtfSi5QF4" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

# Triton Inference Server

Triton Inference Server enables teams to deploy any AI model from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS FIL, and more. Triton supports inference across cloud, data center, edge, and embedded devices on NVIDIA GPUs, x86 and ARM CPUs, or AWS Inferentia. Triton Inference Server delivers optimized performance for many query types, including real-time, batched, ensemble, and audio/video streaming. Triton Inference Server is part of NVIDIA AI Enterprise, a software platform that accelerates the data science pipeline and streamlines the development and deployment of production AI.
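For concreteness, the sketch below shows one way to send a single inference request to a running Triton server with the `tritonclient` HTTP package. The server address, the model name `my_model`, and the tensor names, shape, and datatype are illustrative assumptions, not values taken from this page.

```python
# Minimal sketch: one HTTP inference request with the tritonclient package
# (pip install tritonclient[http]). Model name, tensor names, and shape
# are illustrative assumptions.
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor named "INPUT0" of shape [1, 16].
input0 = httpclient.InferInput("INPUT0", [1, 16], "FP32")
input0.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))
output0 = httpclient.InferRequestedOutput("OUTPUT0")

# Run inference against a hypothetical model named "my_model".
result = client.infer(model_name="my_model", inputs=[input0], outputs=[output0])
print(result.as_numpy("OUTPUT0"))
```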

Major features include:

- Support for multiple deep learning and machine learning framework backends, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, and RAPIDS FIL
- Concurrent model execution
- Dynamic batching
- Sequence batching and implicit state management for stateful models
- A Backend API for adding custom backends and pre/post-processing operations
- Model pipelines using ensembles or Business Logic Scripting (BLS)
- HTTP/REST and gRPC inference protocols based on the community-developed KServe protocol
- A C API and Java API that allow Triton to link directly into your application for edge and other in-process use cases
- Metrics indicating GPU utilization, server throughput, server latency, and more
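To illustrate the Backend API feature, here is a hedged sketch of a minimal `model.py` for Triton's Python backend. It only runs inside the Triton server runtime, and the tensor names `INPUT0`/`OUTPUT0` are assumed to match a corresponding `config.pbtxt`; they are not taken from this page.

```python
# Minimal sketch of a Triton Python-backend model (model.py).
# Assumes a config.pbtxt declaring one input "INPUT0" and one output
# "OUTPUT0"; both names are illustrative.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    """Identity model: copies INPUT0 to OUTPUT0 for every request."""

    def execute(self, requests):
        responses = []
        for request in requests:
            # Look up the input tensor by the name declared in config.pbtxt.
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Return the same data under the declared output tensor name.
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out0])
            )
        return responses
```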

Join the Triton and TensorRT community and stay current on the latest product updates, bug fixes, content, best practices, and more. Need enterprise support? NVIDIA global support is available for Triton Inference Server with the NVIDIA AI Enterprise software suite.

See the Latest Release Notes for updates on the newest features and bug fixes.