ADADELTA: An Adaptive Learning Rate Method

Matthew D. Zeiler
2012

Abstract

We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first-order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities, and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large-scale voice dataset in a distributed cluster environment.
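To make the update rule summarized in the abstract concrete, here is a minimal NumPy sketch of one per-dimension ADADELTA step. The function name adadelta_step and the state dictionary are illustrative choices, not part of the paper; the decay constant rho = 0.95 and conditioning term eps = 1e-6 follow the defaults reported in the paper.

```python
import numpy as np

def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    """One ADADELTA update; `state` carries the two running averages."""
    # Decaying average of squared gradients: E[g^2] <- rho*E[g^2] + (1-rho)*g^2
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad**2
    # Per-dimension step: ratio of RMS of past updates to RMS of gradients,
    # so no global learning rate is required.
    dx = -(np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps)) * grad
    # Decaying average of squared updates: E[dx^2] <- rho*E[dx^2] + (1-rho)*dx^2
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx**2
    return x + dx

# Toy run: minimize f(x) = x^2, whose gradient is 2x.
x = np.array([3.0])
state = {"Eg2": np.zeros_like(x), "Edx2": np.zeros_like(x)}
for _ in range(1000):
    x = adadelta_step(x, 2.0 * x, state)
print(x)  # x moves toward the minimum at 0
```

Note the characteristic slow start: with E[dx^2] initialized to zero, the first steps are on the order of sqrt(eps), and the step size grows as the accumulator of past updates fills in.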


Code References

pytorch/pytorch (1 file)
  torch/optim/adadelta.py
    L239: https://arxiv.org/abs/1212.5701

tensorflow/tensorflow (3 files)
  tensorflow/python/keras/optimizer_v1.py
    L401: method](http://arxiv.org/abs/1212.5701)
  tensorflow/python/keras/optimizer_v2/adadelta.py
    L62: [Zeiler, 2012](http://arxiv.org/abs/1212.5701)
  tensorflow/python/training/adadelta.py
    L30: [Zeiler, 2012](http://arxiv.org/abs/1212.5701)
    L31: ([pdf](http://arxiv.org/pdf/1212.5701v1.pdf))
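
As a usage illustration of the PyTorch implementation referenced above (torch/optim/adadelta.py), here is a short sketch. The linear model and random data are purely illustrative; rho=0.9 and eps=1e-6 shown here are PyTorch's documented defaults, and PyTorch additionally exposes an lr scaling factor (default 1.0) on top of the paper's learning-rate-free rule.

```python
import torch

# Illustrative model and data; any module/loss would do.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adadelta(model.parameters(), rho=0.9, eps=1e-6)

inputs = torch.randn(32, 10)
targets = torch.randn(32, 1)

# Standard optimization step: zero grads, backprop, apply ADADELTA update.
loss = torch.nn.functional.mse_loss(model(inputs), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```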