SGDR: Stochastic Gradient Descent with Warm Restarts

Ilya Loshchilov, Frank Hutter
2016

Abstract

Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence of accelerated gradient schemes on ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results of 3.14% and 16.21% test error, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
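
The schedule behind these results anneals the learning rate within each run with a cosine, eta_t = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i)), and resets it to its maximum at every warm restart, with each restart period T_i growing by a factor T_mult. Below is a minimal Python sketch of that schedule; the concrete values eta_max = 0.1, eta_min = 0, T_0 = 10 epochs and T_mult = 2 are illustrative choices, not the paper's only settings.

```python
import math

def sgdr_lr(t_cur, t_i, eta_min=0.0, eta_max=0.1):
    """Cosine-annealed learning rate within one SGDR restart period.

    t_cur: epochs elapsed since the last restart (may be fractional).
    t_i:   length of the current restart period, in epochs.
    """
    return eta_min + 0.5 * (eta_max - eta_min) * (1.0 + math.cos(math.pi * t_cur / t_i))

# Illustrative run: first period T_0 = 10 epochs, each period twice as long as
# the previous one (T_mult = 2), so warm restarts happen after 10, 30, 70, ... epochs.
t_i, t_cur = 10, 0.0
for epoch in range(70):
    lr = sgdr_lr(t_cur, t_i)
    # ... train for one epoch with learning rate `lr` ...
    t_cur += 1.0
    if t_cur >= t_i:   # warm restart: the rate jumps back to eta_max
        t_cur = 0.0
        t_i *= 2       # T_mult = 2 lengthens every subsequent period
```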

Code References (2 repositories)

pytorch/pytorch (1 file)
  torch/optim/lr_scheduler.py (2 matching lines)
    L1386: https://arxiv.org/abs/1608.03983
    L2137: https://arxiv.org/abs/1608.03983

tensorflow/tensorflow (1 file)
  tensorflow/python/keras/optimizer_v2/learning_rate_schedule.py (4 matching lines)
    L565: See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),
    L660: See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),
    L794: with Warm Restarts. https://arxiv.org/abs/1608.03983
    L915: with Warm Restarts. https://arxiv.org/abs/1608.03983
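
Both files referenced above expose the SGDR schedule as a ready-made learning-rate scheduler. The following is a minimal usage sketch with the PyTorch class from torch/optim/lr_scheduler.py; the linear model and the hyperparameters T_0=10, T_mult=2 and eta_min=1e-4 are placeholder choices for illustration, not values taken from the paper.

```python
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = nn.Linear(10, 2)                                   # placeholder model
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)  # lr here acts as eta_max

# T_0: epochs in the first restart period; T_mult: growth factor of later periods.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2, eta_min=1e-4)

for epoch in range(70):
    # ... one epoch of training with `optimizer` ...
    scheduler.step()   # advances the cosine schedule; warm restarts happen automatically
```

On the TensorFlow side, the counterpart in the referenced learning_rate_schedule.py is the CosineDecayRestarts schedule (tf.keras.optimizers.schedules.CosineDecayRestarts), which takes the length of the first decay period in training steps rather than epochs.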