OpenVINO™ Model Server

OpenVINO™ Model Server is a scalable, high-performance solution for serving machine learning models optimized for Intel® architectures. The server provides an inference service via a gRPC endpoint or REST API, making it easy to deploy new algorithms and AI experiments using the same architecture as TensorFlow Serving for any model trained in a framework that is supported by OpenVINO.

The server is implemented as a Python service using the gRPC interface library and the Falcon REST API framework, with data serialization and deserialization handled by TensorFlow, and OpenVINO™ as the inference execution provider. Model repositories may reside on a locally accessible file system (e.g. NFS), Google Cloud Storage (GCS), Amazon S3 or MinIO.

Review the Architecture concept document for more details.

A few key features:

Running the Server

Start using OpenVINO Model Server in 5 minutes or less:

# Download the latest Model Server image
docker pull openvino/ubuntu18_model_server:latest

# Download model into a separate directory
curl --create-dirs \
  https://download.01.org/opencv/2020/openvinotoolkit/2020.2/open_model_zoo/models_bin/3/face-detection-retail-0004/FP32/face-detection-retail-0004.xml -o model/face-detection-retail-0004.xml \
  https://download.01.org/opencv/2020/openvinotoolkit/2020.2/open_model_zoo/models_bin/3/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/face-detection-retail-0004.bin

# Start the container serving gRPC on port 9000
docker run -d -v $(pwd)/model:/models/face-detection/1 -e LOG_LEVEL=DEBUG -p 9000:9000 \
  openvino/ubuntu18_model_server /ie-serving-py/start_server.sh ie_serving model \
  --model_path /models/face-detection --model_name face-detection --port 9000 --shape auto

# Download the example client script
curl https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/client_utils.py -o client_utils.py \
     https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/face_detection.py -o face_detection.py \
     https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/client_requirements.txt -o client_requirements.txt

# Download an image to be analyzed
curl --create-dirs https://raw.githubusercontent.com/openvinotoolkit/model_server/master/example_client/images/people/people1.jpeg -o images/people1.jpeg

# Install client dependencies
pip install -r client_requirements.txt

# Create a folder for results
mkdir results

# Run inference and store results in the newly created folder
python face_detection.py --batch_size 1 --width 600 --height 400 --input_images_dir images --output_dir results

A more detailed description of the steps above can be found here.

More complete guides to using Model Server in various scenarios can be found here:

Advanced Configuration

Custom layer extensions

Performance tuning

Using FPGA (TBD)

gRPC API Documentation

OpenVINO™ Model Server gRPC API is documented in the protocol buffer files in tensorflow_serving_api. Note: The implementations for Predict, GetModelMetadata and GetModelStatus function calls are currently available. These are the most generic function calls and should address most of the usage scenarios.

predict function spec has two message definitions: PredictRequest and PredictResponse.

  • PredictRequest specifies the model spec and a map of input data serialized via TensorProto to a string format.
  • PredictResponse includes a map of outputs serialized by TensorProto and information about the model spec that was used.

get_model_metadata function spec has three message definitions: SignatureDefMap, GetModelMetadataRequest, GetModelMetadataResponse. A GetModelMetadata function call accepts model spec information as input and returns Signature Definition content in a format similar to TensorFlow Serving.

get_model_status function spec can be used to report all exposed versions, including their state in their lifecycle.

Refer to the example client code to learn how to use this API and submit the requests using the gRPC interface.

Using the gRPC interface is recommended for optimal performance due to its faster implementation of input data deserialization. gRPC achieves lower latency, especially with larger input messages like images.
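As an illustrative sketch (not the repository's own example client), a minimal Predict call over gRPC could look like the following. The address, model name and input tensor name ("data") are assumptions taken from the quickstart; query GetModelMetadata to confirm the real input and output names for your model. The grpcio, tensorflow and tensorflow-serving-api pip packages are required.

```python
# Illustrative sketch only -- the address, model name and input tensor
# name ("data") are assumptions; confirm them with GetModelMetadata.
import numpy as np

def grpc_predict(batch, address="localhost:9000", model_name="face-detection"):
    # Imported here so the sketch's pure-numpy part runs without gRPC installed
    import grpc
    from tensorflow import make_tensor_proto, make_ndarray
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel(address)
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = model_name
    # Serialize the numpy batch into a TensorProto under the input name
    request.inputs["data"].CopyFrom(make_tensor_proto(batch, shape=batch.shape))

    response = stub.Predict(request, timeout=10.0)
    # Deserialize the first returned output back into a numpy array
    first_output = next(iter(response.outputs))
    return make_ndarray(response.outputs[first_output])

# Dummy NCHW batch matching the quickstart's 600x400 face-detection input
batch = np.zeros((1, 3, 400, 600), dtype=np.float32)
```

Calling grpc_predict(batch) against a running server returns the model's first output as a numpy array.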

RESTful API Documentation

OpenVINO™ Model Server RESTful API follows the TensorFlow Serving REST API documentation.

Both the row and column formats of the requests are implemented. Note: Just like with gRPC, only the implementations for Predict, GetModelMetadata and GetModelStatus function calls are currently available.

Only numerical data types are supported.

Review the example clients below to learn more about how to connect and run inference requests.

The REST API is recommended when the primary goal is reducing the number of client-side Python dependencies and simplifying application code.
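A row-format Predict call can be sketched with the Python standard library alone. The URL (including the REST port) and the instance values below are illustrative assumptions; adjust them to your deployment and model.

```python
# Illustrative sketch using only the Python standard library. The URL,
# REST port and instance values are assumptions, not fixed defaults.
import json
from urllib import request as urlrequest

def build_row_payload(instances):
    # "instances" is the row-format key defined by the TF Serving REST API
    return json.dumps({"instances": instances})

def rest_predict(instances,
                 url="http://localhost:8000/v1/models/face-detection:predict"):
    req = urlrequest.Request(
        url,
        data=build_row_payload(instances).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlrequest.urlopen(req) as resp:
        # Row-format responses carry results under the "predictions" key
        return json.loads(resp.read())["predictions"]

payload = build_row_payload([[0.0, 1.0], [2.0, 3.0]])
```

The same payload in column format would use the "inputs" key instead of "instances", as described in the TensorFlow Serving REST API documentation.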

Usage Examples

References

OpenVINO™

TensorFlow Serving

gRPC

RESTful API

Inference at scale in Kubernetes

OpenVINO Model Server boosts AI

Troubleshooting

Server Logging

OpenVINO™ model server accepts 3 logging levels:

  • ERROR: Logs information about inference processing errors and server initialization issues.
  • INFO: Presents information about server startup procedure.
  • DEBUG: Stores information about client requests.

The default setting is INFO, which can be altered by setting environment variable LOG_LEVEL.

The captured logs will be displayed on the model server console. When using Docker containers or Kubernetes, the logs can be examined with the docker logs or kubectl logs commands, respectively.

It is also possible to save the logs to a local file system by configuring an environment variable LOG_PATH with the absolute path pointing to a log file. Please see example below for usage details.

docker run --name ie-serving --rm -d -v /models/:/opt/ml:ro -p 9001:9001 --env LOG_LEVEL=DEBUG --env LOG_PATH=/var/log/ie_serving.log \
 ie-serving-py:latest /ie-serving-py/start_server.sh ie_serving config --config_path /opt/ml/config.json --port 9001
 
docker logs ie-serving 

Model Import Issues

OpenVINO™ Model Server loads all defined model versions according to the set version policy. A model version is represented by a numerical directory in a model path, containing OpenVINO model files with .bin and .xml extensions.

Below is an example of an incorrect structure:

models/
├── model1
│   ├── 1
│   │   ├── ir_model.bin
│   │   └── ir_model.xml
│   └── 2
│       ├── somefile.bin
│       └── anotherfile.txt
└── model2
    ├── ir_model.bin
    ├── ir_model.xml
    └── mapping_config.json

In the above scenario, the server will detect only version 1 of model1. Directory 2 does not contain valid OpenVINO model files, so it will not be detected as a valid model version. model2 contains correct files, but they are not in a numerical directory, so the server will not detect any version in model2.
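For contrast, a correct layout places the OpenVINO .bin and .xml files inside numerical version directories:

```
models/
├── model1
│   ├── 1
│   │   ├── ir_model.bin
│   │   └── ir_model.xml
│   └── 2
│       ├── ir_model.bin
│       └── ir_model.xml
└── model2
    └── 1
        ├── ir_model.bin
        ├── ir_model.xml
        └── mapping_config.json
```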

When a new model version is detected, the server loads the model files and starts serving the new version. This operation might fail for the following reasons:

  • there is a problem accessing the model files (e.g. due to network connectivity issues to the remote storage or insufficient permissions)
  • the model files are malformed and cannot be imported by the Inference Engine
  • the model requires a custom CPU extension

In all those situations, the root cause is reported in the server logs or in the response from a call to GetModelStatus function.
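The version state can also be checked over the REST variant of GetModelStatus with the standard library alone; the host, port and model name below are assumptions for illustration.

```python
# Illustrative sketch only; host, port and model name are assumptions.
import json
from urllib import request

def model_status(url="http://localhost:8000/v1/models/face-detection"):
    with request.urlopen(url) as resp:
        return json.loads(resp.read())

# Each entry under "model_version_status" in the returned document reports
# a version number, its state (e.g. AVAILABLE or LOADING) and a status
# field carrying the error details when loading failed.
```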

A model version that is detected but not loaded will not be served and will report the LOADING status with the error message: Error occurred while loading version. When the model files become accessible or are fixed, the server will try to load them again on the next version update attempt.

At startup, the server enables the gRPC and REST API endpoints after all configured models and detected model versions are loaded successfully (reach the AVAILABLE state).

The server will fail to start if it cannot list the content of the configured model paths.

Client Request Issues

When the model server starts successfully and all the models are imported, errors can still occur in request handling for a few reasons. Information about the failure reason is passed to the gRPC client in the response and is also logged on the model server in DEBUG mode.

The possible issues could be:

  • An incorrect shape of the input data.
  • An incorrect input key name that does not match the tensor name or the input key name set in mapping_config.json.
  • Incorrectly serialized data on the client side.

Resource Allocation

RAM consumption depends on the size and number of models configured for serving. It should be measured experimentally, but as a rule of thumb each model consumes roughly as much RAM as the size of its weights file (the .bin file). Every version of a model creates a separate inference engine object, so it is recommended to mount only the desired model versions.

OpenVINO™ Model Server consumes all available CPU resources unless they are restricted by the operating system, Docker or Kubernetes.

Usage Monitoring

It is possible to track the usage of the models, including processing time, while DEBUG mode is enabled. With this setting, the model server logs store information about all incoming requests. You can parse the logs to analyze request volume, processing statistics and the most used models.

Inference Results Serialization

The model server employs a configurable serialization function.

Starting from the 2020.1 version, the default implementation is _prepare_output_with_make_tensor_proto. It employs the TensorFlow function make_tensor_proto. For most models it returns a TensorProto response with the inference results serialized to a string of bytes via a numpy tostring call. This method achieves low latency, especially for models with large outputs.

Prior to the 2020.1 version, serialization used the function _prepare_output_as_AppendArrayToTensorProto. Contrary to make_tensor_proto, it returns the inference results as a TensorProto object containing a list of numerical elements.

In both cases, the results can be deserialized on the client side with make_ndarray. If you're using TensorFlow's make_ndarray to read the output in your client application, the transition between those methods is transparent.

Set the environment variable SERIALIZATION_FUNCTION=_prepare_output_as_AppendArrayToTensorProto to enforce the legacy serialization method.
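To illustrate what the default serialization does on the wire (a simplified sketch of the equivalent numpy operations, not the server's actual code path): the result array travels as raw bytes plus a dtype and shape, and the client-side make_ndarray reverses exactly this kind of round trip.

```python
# Simplified illustration of the default serialization round trip; this
# is not the server's actual code, just the equivalent numpy operations.
import numpy as np

result = np.arange(6, dtype=np.float32).reshape(2, 3)  # pretend inference output
wire_bytes = result.tobytes()                          # server side: array -> raw bytes
restored = np.frombuffer(wire_bytes, dtype=np.float32).reshape(2, 3)

assert (restored == result).all()                      # client recovers identical data
```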

Known Limitations and Plans

  • Currently, Predict, GetModelMetadata and GetModelStatus calls are implemented using the TensorFlow Serving API. Classify, Regress and MultiInference are planned to be added.
  • output_filter is not effective in the Predict call; all outputs defined in the model are returned to the clients.

Contribution

Contribution Rules

All contributed code must be compatible with the Apache 2 license.

All changes must pass style, unit and functional tests.

All new features need to be covered by tests.

Building

Docker image with OpenVINO Model Server can be built with several options:

  • make docker_build_bin dldt_package_url=<url> - using Intel Distribution of OpenVINO binary package (ubuntu base image)
  • make docker_build_apt_ubuntu - using OpenVINO apt packages with ubuntu base image
  • make docker_build_ov_base - using public image of OpenVINO runtime base image
  • make docker_build_clearlinux - using clearlinux base image with DLDT package

Note: Images based on Ubuntu include OpenVINO 2020.1.
The clearlinux-based image includes 2019.3, to be upgraded soon.

Testing

make style to run linter tests

make unit to execute unit tests (it requires an OpenVINO installation followed by make install). Alternatively, unit tests can be executed in a container by running the script ./tests/scripts/unit-tests.sh

make test to execute full set of functional tests (it requires building the docker image in advance).

Contact

Submit a GitHub issue to ask a question, request a feature or report a bug.


* Other names and brands may be claimed as the property of others.
