# sgd

Here are 105 public repositories matching this topic.
Efficient, transparent deep learning in hundreds of lines of code.
Topics: deep-learning, neural-network, matlab, cnn, lstm, sgd, rnn, mlp, cudnn, softmax-layer, quasi-recurrent-neural-networks
Updated Apr 25, 2019 - MATLAB
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
Updated Nov 19, 2018 - Python
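As a hedged illustration of the ternary-gradient idea behind this entry (my own NumPy sketch, not the repository's code): each gradient coordinate is stochastically snapped to {-s, 0, +s} with s = max|g|, kept with probability |g_i|/s so the compressed gradient equals the true gradient in expectation.

```python
import numpy as np

def ternarize(grad, rng):
    """Stochastically quantize grad to {-s, 0, +s} with s = max|grad|.

    Each coordinate survives with probability |g_i| / s, which makes the
    quantizer unbiased: E[ternarize(grad)] == grad.
    """
    s = np.abs(grad).max()
    if s == 0.0:
        return np.zeros_like(grad)
    keep = rng.random(grad.shape) < np.abs(grad) / s
    return s * np.sign(grad) * keep

g = np.array([0.4, -0.1, 0.05, -0.4])
q = ternarize(g, np.random.default_rng(0))
print(q)  # every entry is one of -0.4, 0.0, +0.4
```

Only the sign pattern and the single scalar s need to be communicated, which is where the bandwidth saving comes from.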
MATLAB/Octave library for stochastic optimization algorithms: Version 1.0.17
Topics: machine-learning, big-data, optimization, linear-regression, machine-learning-algorithms, sgd, classification, logistic-regression, gradient, optimization-algorithms, online-learning, gradient-descent-algorithm, variance-reduction, stochastic-gradient-descent, newtons-method, stochastic-optimization-algorithms, quasi-newton
Updated Nov 20, 2018 - Terra
Keras implementation of AdamW, SGDW, NadamW, Warm Restarts, and Learning Rate multipliers
Updated May 7, 2020 - Python
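For context on what the "W" in SGDW decouples, here is a minimal sketch (my own illustration, not this Keras package's API) of one momentum step where the weight-decay term is applied directly to the weights instead of being folded into the gradient, following the AdamW/SGDW formulation.

```python
import numpy as np

def sgdw_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=1e-4):
    """One SGDW step: momentum accumulates only the loss gradient;
    the term lr * weight_decay * w is subtracted from the weights
    directly (decoupled weight decay), not added to grad."""
    velocity = momentum * velocity + grad
    w = w - lr * velocity - lr * weight_decay * w
    return w, velocity

w, v = np.ones(2), np.zeros(2)
w, v = sgdw_step(w, np.full(2, 0.5), v)
print(w)  # 1 - 0.01*0.5 - 0.01*1e-4*1 = 0.994999 per coordinate
```

With classic L2 regularization the decay term would pass through the momentum buffer; decoupling keeps the effective decay rate independent of the gradient history.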
Lua implementation of Entropy-SGD
Updated Apr 9, 2018 - Python
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (https://arxiv.org/abs/1802.00124)
Updated Nov 14, 2018 - Python
Distributed Learning by Pair-Wise Averaging
Topics: python, distributed-systems, deep-learning, pytorch, sgd, neural-networks, gossiping, asynchronous-learning
Updated Oct 31, 2017 - Python
R/Rcpp implementation of the 'Follow-the-Regularized-Leader' algorithm
Updated Mar 26, 2018 - R
Machine learning algorithms with Dart
Topics: dart, classifier, data-science, machine-learning, algorithm, linear-regression, machine-learning-algorithms, regression, hyperparameters, sgd, logistic-regression, softmax-regression, dartlang, stochastic-gradient-descent, softmax, lasso-regression, batch-gradient-descent, mini-batch-gradient-descent, softmax-classifier, softmax-algorithm, softmax-classifier, softmax-algorithm
Updated May 6, 2020 - Dart
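The batch-gradient-descent and mini-batch-gradient-descent topics above are the same update applied to differently sized samples. A self-contained NumPy sketch (synthetic data, purely illustrative) of mini-batch SGD on least-squares linear regression:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr, batch = 0.1, 20
for epoch in range(50):
    order = rng.permutation(len(X))                      # reshuffle every epoch
    for start in range(0, len(X), batch):
        b = order[start:start + batch]
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / batch    # MSE gradient on the batch
        w -= lr * grad

print(w)  # approaches [2.0, -1.0, 0.5]
```

Setting `batch = len(X)` recovers full-batch gradient descent; `batch = 1` gives classic per-sample SGD.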
Implementation of key neural-network concepts with NumPy
Topics: neural-network, numpy, cnn, dropout, mnist, sgd, regularization, deeplearning, xavier-initializer, relu, cross-entropy-loss, numpy-neuralnet-exercise
Updated Feb 6, 2018 - Python
Code for the paper "On Sampling Strategies for Neural Network-based Collaborative Filtering"
Updated Oct 1, 2017 - Python
A C++ toolkit for convex optimization (Logistic Loss, SVM, SVR, Least Squares, etc.), convex optimization algorithms (LBFGS, TRON, SGD, AdaGrad, CG, Nesterov, etc.), and classifiers/regressors (Logistic Regression, SVMs, Least Squares Regression, etc.)
Topics: machine-learning, machine-learning-algorithms, sgd, tron, logistic-regression, regularization, gradient-descent, support-vector-machines, optimization-algorithms, convex-optimization, lbfgs-algorithm, lbfgs, jensen
Updated Oct 11, 2018 - C++
Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent
Updated Oct 30, 2019 - Python
Riemannian stochastic optimization algorithms: Version 1.0.3
Topics: machine-learning, big-data, optimization, machine-learning-algorithms, constrained-optimization, sgd, manifold, nonlinear-optimization, optimization-algorithms, large-scale-learning, online-learning, stochastic-optimizers, variance-reduction, stochastic-gradient-descent, nonlinear-optimization-algorithms, stochastic-optimization, riemannian-manifold, riemannian-optimization, non-convex-optimization
Updated May 31, 2019 - MATLAB
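As a toy illustration of what makes such methods "Riemannian" (my sketch in NumPy, not the library's MATLAB code): on the unit sphere, each SGD step projects the Euclidean gradient onto the tangent space at the current point, takes the step, and then retracts the result back onto the manifold.

```python
import numpy as np

def sphere_sgd_step(x, egrad, lr=0.1):
    """Riemannian SGD step on the unit sphere S^{n-1}:
    1) project the Euclidean gradient onto the tangent space at x,
    2) take the gradient step,
    3) retract back onto the sphere by normalizing."""
    rgrad = egrad - (x @ egrad) * x        # tangent-space projection
    x_new = x - lr * rgrad
    return x_new / np.linalg.norm(x_new)   # retraction

# Minimize f(x) = x' A x on the sphere; the constrained minimizer is the
# eigenvector of A with the smallest eigenvalue.
A = np.diag([3.0, 2.0, 1.0])
x = np.ones(3) / np.sqrt(3.0)
for _ in range(500):
    x = sphere_sgd_step(x, 2 * A @ x)      # Euclidean gradient of x'Ax
print(x)  # converges to +/- [0, 0, 1]
```

The projection/retraction pair is what keeps the iterates feasible, so the constraint never has to be enforced by penalties.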
A Java library for Stochastic Gradient Descent (SGD)
Updated Jan 19, 2018 - Java
Implementation of factorization machines, supporting classification.
Updated Jun 7, 2018 - C++
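For reference, the second-order factorization machine score such libraries compute can be sketched in a few NumPy lines, using Rendle's O(kn) identity for the pairwise term (`fm_predict` is an illustrative helper, not this repository's interface):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """FM score: w0 + <w, x> + sum_{i<j} <V[i], V[j]> x_i x_j.

    The pairwise sum uses the identity
    0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ],
    costing O(k*n) instead of the naive O(k*n^2)."""
    pairwise = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    return w0 + w @ x + pairwise

x = np.array([1.0, 0.0, 2.0])
V = np.array([[0.1, 0.2], [0.3, -0.1], [0.0, 0.5]])
# Only the (0, 2) pair is active: <V[0], V[2]> * x0 * x2 = 0.1 * 1 * 2 = 0.2
print(fm_predict(x, w0=0.0, w=np.zeros(3), V=V))
```

For classification, this raw score is typically passed through a sigmoid and trained with log loss via SGD.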
Distributed Field-aware Factorization Machines based on a Parameter Server
Updated Jan 5, 2018 - C++
Adaptive Reinforcement Learning of curious AI basketball agents
Topics: nba, data-science, reinforcement-learning, simulation, random-forest, genetic-algorithm, basketball, pandas, seaborn, feature-selection, bayesian-network, artificial-intelligence, sgd, neural-networks, adaptive-learning, bayesian-inference, svm-classifier, curiosity, information-gain, one-hot-encode
Updated Oct 28, 2017 - Jupyter Notebook
NBSVM using SGD for fast processing of large datasets
Updated Mar 17, 2017 - Java
Simple MATLAB toolbox for deep learning networks: Version 1.0.3
Topics: machine-learning, deep-neural-networks, big-data, deep-learning, optimization, matlab, machine-learning-algorithms, sgd, convolutional-layers, convolutional-neural-networks, adam, adagrad, variance-reduction, stochastic-gradient-descent, forward-backward, stochastic-optimization, softmax-layer, relu-layer, sgd-optimizer, sgd-momentum
Updated Apr 16, 2019 - MATLAB
TensorFlow implementation of Entropy-SGD
Updated Aug 9, 2018 - Python
Telegram Bot that does Singaporean Stuff
Updated Apr 22, 2019 - Python
Vector quantization for stochastic gradient descent.
Updated Apr 13, 2020 - Python