Triton Inference Server
Repositories
- onnxruntime_backend: The Triton backend for ONNX Runtime.
- server: The Triton Inference Server provides an optimized cloud and edge inferencing solution.
- openvino_backend: The Triton backend for OpenVINO.
- model_analyzer: Triton Model Analyzer is a CLI tool for analyzing the compute and memory requirements of models served by Triton Inference Server.
- python_backend: The Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
- backend: Common source, scripts, and utilities for creating Triton backends.
- fil_backend: The FIL backend for the Triton Inference Server.
- common: Common source, scripts, and utilities shared across all Triton repositories.
- third_party: Third-party source packages that are modified for use in Triton.
- dali_backend: The Triton backend for running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.
- pytorch_backend: The Triton backend for PyTorch TorchScript models.
- client: Triton Python and C++ client libraries and examples, plus client examples for Go, Java, and Scala.
- tensorrt_backend: The Triton backend for TensorRT.
- identity_backend: An example Triton backend that demonstrates most of the Triton Backend API.
- model_navigator: The Triton Model Navigator is a tool that automates the process of deploying models on the Triton Inference Server.
- tensorflow_backend: The Triton backend for TensorFlow 1 and TensorFlow 2.
- square_backend: A simple Triton backend used for testing.
- repeat_backend: An example Triton backend that demonstrates sending zero, one, or multiple responses for each request.
- checksum_repository_agent: The Triton repository agent that verifies model checksums.
- contrib: Community contributions to Triton that are not officially supported or maintained by the Triton project.
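Each of these backends serves models described by a Triton model configuration file (config.pbtxt). As a rough sketch, a configuration for a hypothetical image-classification model served through onnxruntime_backend might look like this (the model name, tensor names, and shapes are illustrative, not taken from any of the repositories above):

```protobuf
name: "my_onnx_model"      # hypothetical model name
backend: "onnxruntime"     # handled by onnxruntime_backend
max_batch_size: 8
input [
  {
    name: "INPUT0"         # illustrative tensor name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT0"        # illustrative tensor name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Swapping the `backend` field (e.g. to "python", "tensorrt", or "dali") is how a model directory is routed to one of the backend repositories listed above.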