LSTM and QRNN Language Model Toolkit for PyTorch (Python; updated Feb 12, 2022)
NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/
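Adaptive Gradient Clipping (AGC), used in the NFNets work above, rescales a parameter's gradient whenever its norm grows too large relative to the parameter's own norm. A minimal pure-Python sketch of that rule, with illustrative names and the per-parameter thresholding from Brock et al. (not the repository's actual API):

```python
import math

def agc_clip(weights, grads, clip=0.01, eps=1e-3):
    """Rescale grads so that ||g|| <= clip * max(||w||, eps).

    `weights` and `grads` are flat lists of floats for one parameter;
    `clip` and `eps` follow the AGC paper's lambda and epsilon roles.
    """
    w_norm = max(math.sqrt(sum(w * w for w in weights)), eps)
    g_norm = math.sqrt(sum(g * g for g in grads))
    max_norm = clip * w_norm
    if g_norm > max_norm:
        scale = max_norm / g_norm  # shrink, preserving direction
        return [g * scale for g in grads]
    return list(grads)  # small enough already; leave untouched
```

Unlike global-norm clipping, the threshold here scales with each parameter's magnitude, so small layers are not dominated by a single global cutoff.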
Efficient, transparent deep learning in hundreds of lines of code.
MATLAB/Octave library for stochastic optimization algorithms: Version 1.0.20
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
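TernGrad cuts communication cost by stochastically quantizing each gradient coordinate to three levels {-s, 0, +s}, unbiased in expectation. A hedged pure-Python sketch of that ternarization step (function and parameter names are illustrative, not the TensorFlow repository's API):

```python
import math
import random

def ternarize(grads, seed=0):
    """Map each coordinate to {-s, 0, +s} with s = max |g_i|.

    A coordinate keeps its sign with probability |g_i| / s, so the
    expected value of each quantized coordinate equals g_i.
    """
    rng = random.Random(seed)
    s = max(abs(g) for g in grads)
    if s == 0.0:
        return [0.0] * len(grads)
    return [math.copysign(s, g) if rng.random() < abs(g) / s else 0.0
            for g in grads]
```

Only the scalar s and two bits per coordinate need to be transmitted, which is the source of the communication savings.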
This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers"
Keras/TF implementation of AdamW, SGDW, NadamW, Warm Restarts, and Learning Rate multipliers
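AdamW's key change over L2-regularized Adam is decoupled weight decay: the decay term is subtracted directly in the update instead of being folded into the gradient. A single-parameter sketch of that update (names and defaults are illustrative, not the Keras/TF implementation above):

```python
import math

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    """One AdamW update for a scalar parameter w with gradient g at step t."""
    m = b1 * m + (1 - b1) * g           # first-moment EMA
    v = b2 * v + (1 - b2) * g * g       # second-moment EMA
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    # Decoupled decay: wd * w is added outside the adaptive rescaling.
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + wd * w)
    return w, m, v
```

Because the decay bypasses the adaptive denominator, its effective strength no longer depends on the gradient history, which is what makes warm restarts and learning-rate schedules behave predictably.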
Machine learning algorithms in Dart programming language
Lua implementation of Entropy-SGD
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124
A tour of different optimization algorithms in PyTorch.
Distributed Learning by Pair-Wise Averaging
Riemannian stochastic optimization algorithms: Version 1.0.3
R/Rcpp implementation of the 'Follow-the-Regularized-Leader' algorithm
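FTRL-proximal (the algorithm the R/Rcpp package above implements) keeps per-coordinate accumulators and derives the weight lazily via a closed-form proximal step that yields exact zeros under L1. A per-coordinate Python sketch for illustration, with made-up names and default hyperparameters:

```python
import math

class FTRLProximal:
    """One coordinate of the FTRL-proximal update (McMahan et al.)."""

    def __init__(self, alpha=0.1, beta=1.0, l1=1.0, l2=1.0):
        self.alpha, self.beta, self.l1, self.l2 = alpha, beta, l1, l2
        self.z = 0.0  # accumulated adjusted gradients
        self.n = 0.0  # accumulated squared gradients

    def weight(self):
        # Closed-form proximal step: L1 drives small z exactly to zero.
        if abs(self.z) <= self.l1:
            return 0.0
        sign = 1.0 if self.z > 0 else -1.0
        denom = (self.beta + math.sqrt(self.n)) / self.alpha + self.l2
        return -(self.z - sign * self.l1) / denom

    def update(self, g):
        w = self.weight()
        # sigma implements the per-coordinate learning-rate schedule.
        sigma = (math.sqrt(self.n + g * g) - math.sqrt(self.n)) / self.alpha
        self.z += g - sigma * w
        self.n += g * g
```

The soft threshold at l1 is why FTRL is popular for sparse, high-dimensional models such as click-through-rate predictors.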
Implementation of the key concepts of neural networks via NumPy

A C++ toolkit for convex optimization losses (logistic loss, SVM, SVR, least squares, etc.), convex optimization algorithms (L-BFGS, TRON, SGD, AdaGrad, CG, Nesterov, etc.), and classifiers/regressors (logistic regression, SVMs, least-squares regression, etc.)
Automatic and Simultaneous Adjustment of Learning Rate and Momentum for Stochastic Gradient Descent
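The paper above tunes the learning rate and momentum on the fly; for context, this is the classic SGD-with-momentum update that those two hyperparameters control (a generic sketch, not the paper's adjustment scheme):

```python
def sgd_momentum_step(w, g, buf, lr=0.01, momentum=0.9):
    """One SGD step for scalar parameter w with momentum buffer buf."""
    buf = momentum * buf + g  # velocity accumulates past gradients
    w = w - lr * buf          # step scaled by the learning rate
    return w, buf
```

Because the velocity compounds geometrically, the effective step size depends on both hyperparameters jointly, which is what motivates adjusting them simultaneously rather than independently.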