Thanks for your great work!
Could you please share how the batch size and the number of GPUs influence the results?
Also, how should I choose a suitable learning rate and batch size if I don't have enough GPUs?
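For context, my current workaround is gradient accumulation combined with the linear learning-rate scaling rule from Goyal et al. (2017). Below is just a sketch with made-up numbers (`reference_batch_size`, `reference_lr`, and the dummy model/loader are placeholders, not values from this repo); please correct me if your training setup handles this differently:

```python
import torch
import torch.nn as nn

# Hypothetical values for illustration -- not taken from this repo's config.
reference_batch_size = 256   # batch size the learning rate was tuned for
reference_lr = 0.1           # learning rate reported for that batch size
per_gpu_batch_size = 32      # what fits in memory on a single GPU
accumulation_steps = reference_batch_size // per_gpu_batch_size  # = 8

# Linear scaling rule (Goyal et al., 2017): scale the learning rate in
# proportion to the effective batch size. With gradient accumulation the
# effective batch size matches the reference, so the lr stays unchanged.
effective_batch_size = per_gpu_batch_size * accumulation_steps
lr = reference_lr * effective_batch_size / reference_batch_size

model = nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
loss_fn = nn.CrossEntropyLoss()

# Dummy loader yielding per-GPU-sized batches, just to make this runnable.
loader = [(torch.randn(per_gpu_batch_size, 10),
           torch.randint(0, 2, (per_gpu_batch_size,)))
          for _ in range(accumulation_steps * 2)]

for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y) / accumulation_steps  # average over accumulated steps
    loss.backward()                                   # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()      # update once per effective (reference-sized) batch
        optimizer.zero_grad()
```

Is this the right way to approximate multi-GPU training here, or do you recommend a different learning-rate adjustment?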
Thank you!