Unified Model Serving Framework
Python + Inference: a model deployment library in Python. The simplest model inference server ever.
A REST API for Caffe using Docker and Go
This is a repository for a no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet framework.
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
This is a repository for an object detection inference API using the TensorFlow framework.
Serving AI/ML models in the open standard formats PMML and ONNX with both HTTP (REST API) and gRPC endpoints
Orkhon: ML Inference Framework and Server Runtime
K3ai is a lightweight, fully automated, AI-infrastructure-in-a-box solution that lets anyone experiment quickly with Kubeflow pipelines. K3ai suits anything from edge devices to laptops.
Deploy DL/ML inference pipelines with minimal extra code.
A standalone inference server for trained Rubix ML estimators.
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
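As a rough sketch of the PyTorch -> ONNX step such a converter performs, the standard route is `torch.onnx.export`; the model and input shape below are placeholders, not the actual CRAFT detector:

```python
# Minimal PyTorch -> ONNX export sketch (placeholder model, not CRAFT).
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)
model.eval()

# A dummy input fixes the graph's input shape during tracing.
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    # Allow a variable batch size at inference time.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=13,
)
```

The resulting `model.onnx` can then be compiled into a TensorRT engine, for example with `trtexec --onnx=model.onnx --saveEngine=model.plan`, and any of the three formats can be placed in a Triton model repository.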
Inference Server Implementation from Scratch for Machine Learning Models
Session-Based Real-Time Hotel Recommendation Web Application
Modelz is a developer-first platform for prototyping and deploying machine learning models.
Serve PyTorch inference requests using batching with Redis for faster performance.
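A minimal sketch of how Redis-backed batching like this can work; the queue name, job format, and stand-in model are illustrative assumptions, not this repository's actual protocol:

```python
# Batching worker sketch: drain up to BATCH_SIZE jobs from a Redis list,
# run one forward pass over the whole batch, return per-request results.
import json
import redis
import torch

BATCH_SIZE = 32
r = redis.Redis()
model = torch.nn.Linear(4, 2)  # stand-in for a real trained model
model.eval()

while True:
    # Block until at least one job arrives, then drain up to a batch.
    _, first = r.blpop("queue")
    jobs = [json.loads(first)]
    while len(jobs) < BATCH_SIZE:
        raw = r.lpop("queue")
        if raw is None:
            break
        jobs.append(json.loads(raw))

    # One forward pass over the batch instead of one per request.
    inputs = torch.tensor([j["input"] for j in jobs], dtype=torch.float32)
    with torch.no_grad():
        outputs = model(inputs)

    # Publish each result on a per-request response key.
    for job, out in zip(jobs, outputs):
        r.rpush("result:" + job["id"], json.dumps(out.tolist()))
```

The win comes from amortizing model overhead: one batched forward pass is typically much cheaper than 32 separate single-item passes.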
Client/server system to perform distributed inference on high-load systems.
An example of using Redis + RedisAI for a microservice that predicts consumer loan probabilities, with Redis as the feature and model store and RedisAI as the inference server.
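A minimal sketch of that pattern with the redisai-py client (newer client versions rename modelset/modelrun to modelstore/modelexecute); the key names, model file, and feature layout here are hypothetical:

```python
# RedisAI sketch: store a model and a feature tensor in Redis, then run
# inference server-side and read the result back. Keys are hypothetical.
import numpy as np
import redisai

con = redisai.Client(host="localhost", port=6379)

# Load a previously exported TorchScript model into Redis.
with open("loan_model.pt", "rb") as f:
    con.modelset("loan_model", "torch", "cpu", f.read())

# Store the applicant's feature vector, run the model, fetch the output.
features = np.array([[0.3, 0.7, 1.0, 0.0]], dtype=np.float32)
con.tensorset("features", features)
con.modelrun("loan_model", inputs=["features"], outputs=["probability"])
print(con.tensorget("probability"))
```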