vision
Here are 626 public repositories matching this topic...
In README.md, it says:
SETX PATH C:/Python27/;C:/Python27/Scripts/;C:/OpenCV2.3/opencv/build/x86/vc10/bin/;%PATH%
SETX PYTHONPATH C:/OpenCV2.3/opencv/build/python/2.7/;%PYTHONPATH%
however, the correct commands use backslashes:
SETX PATH C:\Python27;C:\Python27\Scripts;C:\OpenCV2.3\opencv\build\x86\vc10\bin;%PATH%
SETX PYTHONPATH C:\OpenCV2.3\opencv\build\python\2.7;%PYTHONPATH%
and also, it's only
Currently, the throttle output by the autopilot has no awareness of the client-side brake button or max-throttle setting. It would be nice to update the server to respect the client-side settings and brake control, which would make it easier to prevent and stop runaways.
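A server-side clamp along these lines could address this. The function and parameter names below are hypothetical (the actual client-settings API is not shown in the excerpt); this is only a sketch of the intended behavior:

```python
def apply_client_limits(throttle: float, max_throttle: float, brake_pressed: bool) -> float:
    """Clamp the autopilot's throttle to the client's limits.

    If the brake is pressed, force throttle to zero; otherwise cap it
    at the client's max-throttle setting (and floor it at zero).
    """
    if brake_pressed:
        return 0.0
    return max(0.0, min(throttle, max_throttle))


# Autopilot requests 0.9, but the client capped throttle at 0.5:
print(apply_client_limits(0.9, 0.5, brake_pressed=False))
# Brake pressed overrides everything:
print(apply_client_limits(0.9, 0.5, brake_pressed=True))
```

The key design point is that the clamp runs on the server, after the autopilot produces its output, so a runaway model can never out-vote the human's brake.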
The next stable release will probably be Aravis 1.0. I'm currently reviewing the API and making some corrections before the API freeze. That means there will be API breaks:
- arv_device_set_*_feature_value functions take a GError
- device_get_status moved to ArvCamera
I am planning to add other OpenCV functionality, such as feature detectors like ORB. Is there any documentation on things to look out for when adding such features?
Line 1137 of caffe.proto states, "By default, SliceLayer concatenates blobs along the 'channels' axis (1)."
Yet, the documentation on http://caffe.berkeleyvision.org/tutorial/layers/slice.html states, "The Slice layer is a utility layer that slices an input layer to multiple output layers along a given dimension (currently num or channel only) with given slice indices." which seems to be
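For context, a minimal Slice layer definition looks like the following (a prototxt sketch with illustrative blob names; `axis: 1` selects the channel dimension that both sources refer to):

```protobuf
layer {
  name: "slice"
  type: "Slice"
  bottom: "input"   # e.g. a blob of shape N x 6 x H x W
  top: "out1"       # first 3 channels
  top: "out2"       # remaining 3 channels
  slice_param {
    axis: 1         # slice along channels
    slice_point: 3  # split point within that axis
  }
}
```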