Zihang Li edited this page May 6, 2020 · 26 revisions

Tengine Quick Start

Tengine is a lightweight deep neural network inference engine for edge and embedded environments, and an AIoT development kit built on an AI inference framework that is compatible with multiple operating systems and deep learning frameworks. This document demonstrates running a classification model (a TensorFlow MobileNet v1 model) on x86 Linux and Arm64 Linux, so that you can quickly get started with Tengine.

Compile Tengine on Linux x86

Download source code

$ git clone https://github.com/OAID/tengine/

Install tools and libraries

Before compiling Tengine, make sure that cmake and g++ are installed. If not, you can install them as follows:

$ sudo apt install cmake g++

Make

$ cd /path/to/Tengine
$ mkdir build
$ cd build
$ cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/x86.gcc.toolchain.cmake ..
$ make -j4 && make install

The compilation is successful if the libtengine.so file is generated.

qtang@tengine-train:~/github/Tengine/build$ ls
benchmark  CMakeCache.txt  CMakeFiles  cmake_install.cmake  examples  libtengine.so  Makefile  tests

Compile Tengine on Arm64

The compilation method is similar to that for x86.

Download source code

$ git clone https://github.com/OAID/tengine/

Install tools and libraries

$ sudo apt install cmake g++

Make

$ cd /path/to/Tengine
$ mkdir build
$ cd build
$ cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/arm64.native.gcc.toolchain.cmake ..
$ make -j4 && make install

The compilation is successful if the libtengine.so file is generated.

Model Conversion

Binary tools

  1. We provide a model conversion tool, convert_model_to_tm, which makes it convenient to convert TensorFlow / Caffe / MXNet / ONNX and other framework models into the Tengine model format (tmfile):

convert_model_to_tm

  2. If you compile Tengine from source, the tool is located at:
$ tree install/
install/
├── benchmark
│   ├── bench_mobilenet
│   └── bench_sqz
├── convert_tools
│   └── convert_model_to_tm (here!)

Model Zoo

We provide tmfile files for common models; you can find them here:

Tengine Model Zoo (password: hhgc)

Demo of model conversion

The following demo uses convert_model_to_tm to convert mobilenet.pb into a Tengine model:

$ ./convert_model_to_tm -f tensorflow -m ./mobilenet.pb -o ./mobilenet_tf.tmfile

The usage of the tool is as follows:

$ ./install/convert_tools/convert_model_to_tm -h

[Usage]: ./install/convert_tools/convert_model_to_tm [-h] [-f file_format] [-p proto_file] [-m model_file] [-o output_tmfile]


-f: source framework, such as caffe, caffe_single, onnx, mxnet, tensorflow, darknet, tflite. Here we set tensorflow.
-m: source model path
-o: output Tengine model path
// The "-p" option is only needed when the model has a separate proto file; when the model consists of a single file, only "-m" is set.

// Convert Caffe model
$ ./install/convert_tools/convert_model_to_tm -f caffe -p models/sqz.prototxt -m models/squeezenet_v1.1.caffemodel -o models/squeezenet.tmfile

// Convert TensorFlow model
$ ./install/convert_tools/convert_model_to_tm -f tensorflow -m models/squeezenet.pb -o models/squeezenet_tf.tmfile

It should be noted that convert_model_to_tm can only be compiled and run on Linux x86. It depends on the third-party library protobuf (>= 3.0) at compile time, which needs to be installed in advance.

$ sudo apt-get install libprotobuf-dev protobuf-compiler

After obtaining a Tengine model such as mobilenet.tmfile, you can use it to develop applications on various platforms.

Introduction of common API

Tengine's core API is as follows:

  • init_tengine

Initializes Tengine; this function should only be called once.

  • create_graph

Creates a Tengine computation graph.

  • prerun_graph

Pre-runs the graph and allocates the required resources.

  • run_graph

Runs inference on the Tengine graph.

  • postrun_graph

Stops running the graph and releases its resources.

  • destroy_graph

Destroys the graph.

postrun_graph and destroy_graph are called after model inference has finished, and are generally called one after the other.

Run mobilenet classification model using Tengine C++ API

Tengine provides C, C++, and Python APIs. Here we show how to run the MobileNet v1 network model for image classification using the Tengine C++ API, so that you can get started quickly. To help you become familiar with the Tengine C++ API, the code carries detailed comments so that you can develop your own code faster. The test picture is the well-known tiger cat image, cat.jpg.

Test code

The complete test code is as follows:

#include <unistd.h>
#include <cstdlib>    // strtoul
#include <iostream>
#include <functional>
#include <algorithm>
#include <fstream>
#include <iomanip>
#include <string>
#include <vector>

#include "tengine_operations.h"
#include "tengine_c_api.h"
#include "tengine_cpp_api.h"
#include "common_util.hpp"

const char* model_file = "./models/mobilenet.tmfile";
const char* image_file = "./tests/images/cat.jpg";
const char* label_file = "./models/synset_words.txt";

const float channel_mean[3] = {104.007, 116.669, 122.679};

using namespace TEngine;

int repeat_count = 100;
void LoadLabelFile(std::vector<std::string>& result, const char* fname)
{
    std::ifstream labels(fname);

    std::string line;
    if(labels.is_open())
    {
        while(std::getline(labels, line))
            result.push_back(line);
    }
}

void PrintTopLabels(const char* label_file, float* data)
{
    // load labels
    std::vector<std::string> labels;
    LoadLabelFile(labels, label_file);

    float* end = data + 1000;
    std::vector<float> result(data, end);
    std::vector<int> top_N = Argmax(result, 5);

    for(unsigned int i = 0; i < top_N.size(); i++)
    {
        int idx = top_N[i];
        if(labels.size())
            std::cout << std::fixed << std::setprecision(4) << result[idx] << " - \"" << labels[idx] << "\"\n";
        else
            std::cout << std::fixed << std::setprecision(4) << result[idx] << " - " << idx << "\n";
        
    }
}

void get_input_data(const char* image_file, float* input_data, int img_h, int img_w, const float* mean, float scale)
{
    image img = imread(image_file);

    image resImg = resize_image(img, img_w, img_h);
    resImg = rgb2bgr_premute(resImg);
    float* img_data = ( float* )resImg.data;
    int hw = img_h * img_w;
    for(int c = 0; c < 3; c++)
        for(int h = 0; h < img_h; h++)
            for(int w = 0; w < img_w; w++)
            {
                input_data[c * hw + h * img_w + w] = (*img_data - mean[c]) * scale;
                img_data++;
            }
}

int main(int argc, char* argv[])
{
    std::string device = "";
    std::string file_path = "";
    char* cpu_list_str = nullptr;

    int res;

    while((res = getopt(argc, argv, "p:d:f:r:")) != -1)
    {
        switch(res)
        {
            case 'p':
                cpu_list_str = optarg;
                break;

            case 'd':
                device = optarg;
                break;

            case 'f':
                file_path = optarg;
                break;

            case 'r':
                repeat_count = strtoul(optarg, NULL, 10);
                break;

            default:
                break;
        }
    }

    int img_h = 224;
    int img_w = 224;

    tengine::Net mobilenet;
    tengine::Tensor input_tensor;
    tengine::Tensor output_tensor;

    /* load model */
    mobilenet.load_model(NULL, "tengine", model_file);

    /* prepare input data */
    input_tensor.create(img_w, img_h, 3);
    get_input_data(image_file, (float* )input_tensor.data, img_h, img_w, channel_mean, 0.017);
    mobilenet.input_tensor("data", input_tensor);
    
    /* forward */
    mobilenet.run();

    /* get result */
    mobilenet.extract_tensor("fc7", output_tensor);

    /* after process */
    PrintTopLabels(label_file, (float*)output_tensor.data);
    
    std::cout << "--------------------------------------\n";
    std::cout << "ALL TEST DONE\n";

    return 0;
}

Compilation

The default build configuration compiles the demo programs automatically and stores them in ./install/examples/.

install/
├── benchmark
│   ├── bench_mobilenet
│   └── bench_sqz
├── examples
│   ├── classification
│   ├── mobilenet_ssd
│   └── synset_words.txt

Result

Put the test picture and the classification label file in the specified directories and run:

$ ./classification -m /path/to/mobilenet.tmfile -l /path/to/labels.txt -i /path/to/img.jpg -g 224,224 -s 0.017 -w 104.007,116.669,122.679

--------------------------------------
0.3465 - "tiger cat"
0.1609 - "tabby"
0.1564 - "weasel"
0.0844 - "Egyptian cat"
0.0258 - "bucket"
--------------------------------------
ALL TEST DONE

As you can see, the test picture has been successfully classified as a tiger cat. This concludes the basic getting started guide; you can explore the other functions on your own. We will also update various tutorial examples from time to time.

...\(^0^)/...233
