onnx
Here are 180 public repositories matching this topic...
Several parts of the op spec, such as the main op description and the attribute, input, and output descriptions, become part of any binary that consumes ONNX (e.g. onnxruntime), increasing its size with strings that play no part in executing or verifying a model.
Setting __ONNX_NO_DOC_STRINGS doesn't really help here since (1) it's not used in the SetDoc(string) overload (s
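The gap described above (a strip flag that one code path ignores) can be sketched in plain Python; all names here are illustrative stand-ins, not the real ONNX C++ API:

```python
# Hypothetical sketch of the pattern __ONNX_NO_DOC_STRINGS is meant to
# implement in the C++ schema code: when doc stripping is enabled,
# every SetDoc overload must discard the string, or the docs still end
# up embedded in the consuming binary.
STRIP_DOC_STRINGS = True  # stands in for defining __ONNX_NO_DOC_STRINGS

class OpSchema:
    def __init__(self):
        self.doc = ""

    def set_doc(self, doc):
        # Correct behavior: honor the strip flag on every code path.
        self.doc = "" if STRIP_DOC_STRINGS else doc
        return self

schema = OpSchema().set_doc("Relu takes one input tensor and ...")
```

If any overload skips the flag check, the doc string survives into the build even though nothing at runtime reads it.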
GCP QUICKSTART GUIDE
To get started quickly with this repo on a Google Cloud Platform (GCP) Deep Learning Virtual Machine (VM), follow the instructions below. New GCP users are eligible for a $300 free credit offer. Other quickstart options for this repo include our [Google Colab Notebook](https://colab.research.google.com/github/ultralytics/yolov3/blob
Platform (like ubuntu 16.04/win10): Windows 10
Python version: 3.7.4, mmdnn==0.2.5
Running scripts: mmconvert -f caffe -df keras -om test
I know that this command is not supposed to run without an input file being passed, but the error message is misleading and should be improved:
mmconvert: error: argument --srcFramework/-f: invalid choice: 'None' (choose from 'caffe', 'caffe2', 'cn
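The confusing message can be reproduced with a minimal argparse stand-in (option names copied from the error text; the real mmdnn parser may differ): when a missing value is stringified to "None" before validation, argparse reports an invalid choice rather than a missing required argument.

```python
import argparse

# Minimal stand-in for mmconvert's CLI, for illustration only.
def build_parser():
    parser = argparse.ArgumentParser(prog="mmconvert")
    parser.add_argument(
        "--srcFramework", "-f",
        choices=["caffe", "caffe2", "cntk", "keras"],
        help="source framework of the model to convert",
    )
    return parser

# Passing the literal string "None" (as a wrapper might after
# str()-ing a missing value) yields:
#   mmconvert: error: argument --srcFramework/-f: invalid choice: 'None' ...
# which hides the real problem: the argument was never supplied.
```

A friendlier fix is to validate for the missing value up front and print "argument --srcFramework/-f is required" before argparse ever sees the string "None".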
Bad sample for CUDA
Shouldn't this be session_options, not session_option?
The model zoo currently contains three models (ResNet50, SqueezeNet, and VGG19) that each have two variants, which is confusing to end consumers.
Ideally these should be de-duplicated. If de-duplication doesn't make sense, then each variant should state its differences beyond the framework of origin, and the variants should be organized so they all sit in the same sub-folder/path in the repo.
Following are pointers to t
Changes made in #1874 to test/models/mxnet/1rnn_layer_3lstm.json reveal an error in the fusion of a sequence of LSTM cells into a single-layer RNN for GPU.
The fused result should look like:

Instead, we currently see:
Can we add more instructions on how to migrate to tensorflow-onnx? It looks like that package only does TF -> ONNX, not ONNX -> TF. What am
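For the TF -> ONNX direction, the tensorflow-onnx (tf2onnx) CLI can be invoked roughly as below (the model path and opset number are placeholders); the reverse ONNX -> TF direction is handled by the separate onnx-tensorflow project, not by tf2onnx.

```shell
# Convert a TensorFlow SavedModel to ONNX with tf2onnx.
# ./my_saved_model and the opset value are placeholders.
pip install tf2onnx
python -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx --opset 12
```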
Typos in readme.md
Issue
In onnxmltools/convert/lightgbm/convert.py, inline comments indicate that only LGBMClassifier, LGBMRegressor, and Booster are supported:
This function produces an equivalent ONNX model of the given lightgbm model.
The supported lightgbm modules are
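The kind of type gate those comments describe can be sketched as follows; the lightgbm classes are stubbed out here, since the real converter imports them from the lightgbm package.

```python
# Sketch of the supported-model check described by the inline comments
# in onnxmltools/convert/lightgbm/convert.py (stubbed classes, not the
# real lightgbm or onnxmltools code).
class LGBMClassifier: pass
class LGBMRegressor: pass
class Booster: pass

SUPPORTED_MODELS = (LGBMClassifier, LGBMRegressor, Booster)

def check_supported(model):
    """Raise if `model` is not a lightgbm type the converter handles."""
    if not isinstance(model, SUPPORTED_MODELS):
        raise ValueError(
            "Unsupported lightgbm model type: %s" % type(model).__name__)
    return True
```

Anything outside that tuple (e.g. a raw sklearn estimator) is rejected before conversion begins.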
Add describe info
Is your feature request related to a problem? Please describe.
N/A
Describe the solution you'd like
It could help to have functions that describe the expected shapes of a model's inputs and outputs.
For example, in the case of image classification, the input's shape is tied to the size of the picture. Exposing it would make it easy to pre-process the picture without hardcoding the dimensions.
Gett
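In the onnx Python API this information lives under model.graph.input[...].type.tensor_type.shape; here is a pure-Python sketch (all names hypothetical) of what such a describe function could return:

```python
def describe_io(inputs, outputs):
    """Hypothetical helper sketching the requested feature: render the
    expected tensor shapes of a model's inputs and outputs.

    `inputs` and `outputs` map tensor names to shape tuples; None marks
    a dynamic dimension such as the batch size.
    """
    lines = []
    for kind, tensors in (("input", inputs), ("output", outputs)):
        for name, shape in tensors.items():
            dims = " x ".join("?" if d is None else str(d) for d in shape)
            lines.append("%s %s: %s" % (kind, name, dims))
    return "\n".join(lines)

# An image classifier with an NCHW input and a 1000-class output:
summary = describe_io({"image": (None, 3, 224, 224)},
                      {"logits": (None, 1000)})
```

With the shapes above, `summary` reads "input image: ? x 3 x 224 x 224" on one line and "output logits: ? x 1000" on the next, which is enough to drive pre-processing without hardcoded dimensions.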
I found that in examples/retinaface.cpp, if OMP acceleration is enabled, a memory leak seems to occur whenever a face is detected, but I cannot pinpoint the exact cause.
Notably, if the OMP directives in the qsort_descent_inplace function are commented out, the problem disappears.