Neural Optimizer Search with Reinforcement Learning

Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc V. Le
2017

Abstract

We present an approach to automate the process of discovering optimization methods, with a focus on deep learning architectures. We train a Recurrent Neural Network controller to generate a string in a domain-specific language that describes a mathematical update equation built from a list of primitive functions, such as the gradient, the running average of the gradient, etc. The controller is trained with Reinforcement Learning to maximize the performance of a model after a few epochs. On CIFAR-10, our method discovers several update rules that outperform many commonly used optimizers, such as Adam, RMSProp, and SGD with and without Momentum, on a ConvNet model. We introduce two new optimizers, named PowerSign and AddSign, which we show transfer well and improve training on a variety of tasks and architectures, including ImageNet classification and Google's neural machine translation system.
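As a rough illustration of the two discovered update rules, the sketch below implements the basic PowerSign and AddSign updates described in the abstract: both scale the gradient by a factor that depends on whether the gradient g and its running average m agree in sign. The helper name, the decay rate beta, and the learning rate are illustrative assumptions, not the paper's exact settings or a reference implementation.

```python
import numpy as np

def sign_update(w, g, m, lr=0.01, beta=0.9, rule="powersign"):
    """One optimizer step on weights w given gradient g and running average m.

    PowerSign scales the gradient by e^(sign(g) * sign(m));
    AddSign scales it by (1 + sign(g) * sign(m)).
    Returns the updated weights and the updated running average.
    Hyperparameters here are assumptions for illustration only.
    """
    m = beta * m + (1.0 - beta) * g        # running average of the gradient
    agreement = np.sign(g) * np.sign(m)    # +1 if g and m agree in sign, -1 otherwise
    if rule == "powersign":
        scale = np.exp(agreement)          # PowerSign: e^(sign(g)*sign(m))
    else:
        scale = 1.0 + agreement            # AddSign: 1 + sign(g)*sign(m)
    return w - lr * scale * g, m

# Usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
for _ in range(100):
    g = 2.0 * w
    w, m = sign_update(w, g, m, rule="addsign")
print(w)  # values close to zero
```

The intuition behind both rules is the same: when the current gradient agrees in sign with its running average, the step is amplified; when they disagree, the step is damped (PowerSign) or zeroed out (AddSign).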


Code References

tensorflow/tensorflow (2 files)

tensorflow/python/keras/optimizer_v2/learning_rate_schedule.py — 2 references
https://arxiv.org/abs/1709.07417

tensorflow/python/keras/optimizer_v2/legacy_learning_rate_decay.py — 2 references
[Bello et al., 2017](http://proceedings.mlr.press/v70/bello17a.html)