:metal: LabelImg is a graphical image annotation tool for labeling object bounding boxes in images
PyTorch tutorials and fun projects including neural talk, neural style, poem writing, anime generation
Experience, Learn and Code the latest breakthrough innovations with Microsoft AI
An absolute beginner's guide to Machine Learning and Image Classification with Neural Networks
Hi, @zhreshold
Do you have a plan to report object detection speed at the inference phase, e.g. FPS?
I am not sure how inference time should be measured, for example, whether it should include NMS or not.
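One way to resolve that ambiguity is to time both variants and report them side by side. A minimal sketch, assuming nothing about the library's API: `forward` and `nms` here are hypothetical stand-ins for the real detector forward pass and its NMS step, not actual GluonCV functions.

```python
import time

def measure_fps(infer, images, warmup=2):
    """Time an inference callable over a batch of images and return FPS.

    `infer` is whatever you want to benchmark: the raw forward pass,
    or forward pass + NMS, so the two numbers can be compared directly.
    """
    for img in images[:warmup]:      # warm-up runs are excluded from timing
        infer(img)
    start = time.perf_counter()
    for img in images:
        infer(img)
    elapsed = time.perf_counter() - start
    return len(images) / elapsed

# Hypothetical stand-ins for a real detector and its NMS step.
def forward(img):
    # Returns (score, box) pairs; a real model would run the network here.
    return [(0.9, (0, 0, 10, 10)), (0.4, (1, 1, 9, 9))]

def nms(dets, thresh=0.5):
    # Placeholder suppression: keep detections above the score threshold.
    return [d for d in dets if d[0] >= thresh]

images = [object()] * 50
fps_raw = measure_fps(forward, images)
fps_full = measure_fps(lambda img: nms(forward(img)), images)
print(f"raw forward: {fps_raw:.0f} FPS, forward + NMS: {fps_full:.0f} FPS")
```

Reporting both numbers makes it explicit whether a quoted FPS figure includes post-processing.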
Curated list of Machine Learning, NLP, Vision, Recommender Systems Project Ideas
It would be nice to extend the documentation with example interpretation scripts for segmentation models:
https://github.com/opencv/open_model_zoo/tree/master/intel_models/road-segmentation-adas-0001
https://github.com/opencv/open_model_zoo/blob/master/intel_models/instance-segmentation-security-0010/description/instance-segmentation-security-0010.md
Differentiable architecture search for convolutional and recurrent networks
Nudity detection with JavaScript and HTMLCanvas
This is the placeholder for information regarding the building of deepdetect on Ubuntu 16.04 LTS (expected future reference platform).
Build status: successful, tested with Caffe back-end on CPU
Thanks to @MartinThoma the correct way of doing it is below:
$ sudo apt-get remove libcurlpp0
$ cd [wherever]
$ git clone https://github.com/jpbarrette/curlpp.git
$ cd curlpp
$ cmake .
$ s
Hi.
I want to understand the embeddings of the USE model in detail; where can I find that information?
For example, ELMo's embeddings are described at https://tfhub.dev/google/elmo/2.
But in the case of USE, the only description at https://tfhub.dev/google/universal-sentence-encoder/2 is that the output is a 512-dimensional vector.
Where does the output come from?
I could find the output is
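For what it's worth, whatever the internals, the module's output is consumed as a plain dense vector, one 512-dimensional embedding per input sentence, typically compared with cosine/angular similarity. A minimal sketch where random vectors are placeholders standing in for real module outputs:

```python
import math
import random

random.seed(0)
# Placeholder 512-dimensional vectors standing in for USE outputs;
# a real run would get these from the TF Hub module instead.
emb_a = [random.gauss(0, 1) for _ in range(512)]
emb_b = [random.gauss(0, 1) for _ in range(512)]

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(emb_a, emb_a))  # a vector compared with itself gives 1.0
print(cosine_similarity(emb_a, emb_b))
```

This does not answer where the vector comes from internally, but it shows the contract the module docs commit to: fixed-size dense outputs meant for similarity comparisons downstream.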
Hi, I'm trying to do audio classification, but the files are too big to upload anywhere, and the data is sensitive as well. Is there a way to point the url field in tasks.json to local files?
Thanks in advance
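One common workaround, assuming the tool simply fetches whatever the url field points at over HTTP, is to serve the audio directory locally (e.g. run `python -m http.server 8000` from the data folder) and point tasks.json at localhost, so nothing leaves the machine. The exact tasks.json schema is an assumption here; the file name and field layout are illustrative:

```json
{
  "url": "http://localhost:8000/recordings/sample_001.wav"
}
```

This keeps the sensitive files on the local machine while still satisfying a URL-based loader.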
Labelbox is the fastest way to annotate data to build and ship computer vision applications.
High level network definitions with pre-trained weights in TensorFlow
A curated list of deep learning image classification papers and codes
Sandbox for training convolutional networks for computer vision
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
Use of Attention Gates in a Convolutional Neural Network / Medical Image Classification and Segmentation
Implementation of the EfficientNet model for Keras and TensorFlow Keras.
Food Classification with Deep Learning in Keras / Tensorflow
A (PyTorch) imbalanced dataset sampler for oversampling low-frequency classes and undersampling high-frequency ones.
Implementation code of the paper: FishNet: A Versatile Backbone for Image, Region, and Pixel Level Prediction, NeurIPS 2018
Practice on CIFAR-100 (ResNet, DenseNet, VGG, GoogLeNet, InceptionV3, InceptionV4, Inception-ResNetV2, Xception, ResNet in ResNet, ResNeXt, ShuffleNet, ShuffleNetV2, MobileNet, MobileNetV2, SqueezeNet, NASNet, Residual Attention Network, SENet)
PyTorch extensions for fast R&D prototyping and Kaggle farming
Official Implementation of 'Fast AutoAugment' in PyTorch.
A Guide to PyTorch Coding Style Based on Kaggle Dogs vs. Cats
Related files:
albumentations/augmentations/transforms.py (the [Deprecated] suffix)
tools/make_transforms_docs.py