SGDR: Stochastic Gradient Descent with Warm Restarts

Ilya Loshchilov, Frank Hutter
ICLR 2017 (arXiv:1608.03983)

Abstract

Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
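
The schedule proposed in the paper anneals the learning rate within each run as eta_t = eta_min + 0.5 * (eta_max - eta_min) * (1 + cos(pi * T_cur / T_i)), where T_cur counts epochs since the last restart, T_i is the length of the current run, and T_i is multiplied by a factor T_mult after every restart. Below is a minimal Python sketch of that schedule; the function name sgdr_lr and the default values for eta_max, eta_min, t_0, and t_mult are illustrative choices, not taken from the paper or from any library.

import math

def sgdr_lr(epoch, eta_max=0.05, eta_min=0.0, t_0=10, t_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR-style schedule).

    epoch  : current epoch index (may be fractional for per-batch updates)
    t_0    : length of the first run, in epochs
    t_mult : factor by which each successive run is lengthened
    """
    t_i, t_cur = t_0, epoch
    # Skip over completed runs until t_cur falls inside the current run.
    while t_cur >= t_i:
        t_cur -= t_i
        t_i *= t_mult
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))

# With t_0=10 and t_mult=2 the rate restarts to eta_max at epochs 0, 10, 30, 70, ...
lrs = [round(sgdr_lr(e), 4) for e in range(40)]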

2 repositories
10 references

Code References

pytorch/pytorch (1 file)

torch/optim/lr_scheduler.py (2 references)
    https://arxiv.org/abs/1608.03983
    https://arxiv.org/abs/1608.03983
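
The citations in torch/optim/lr_scheduler.py belong to PyTorch's cosine-annealing schedulers, CosineAnnealingLR and CosineAnnealingWarmRestarts, which implement this schedule. A minimal usage sketch of the warm-restart variant follows; the placeholder model, the SGD hyperparameters, and the T_0/T_mult values are illustrative assumptions, not taken from the referenced file.

from torch import nn, optim
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

model = nn.Linear(10, 2)  # placeholder model; any nn.Module works
optimizer = optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
# T_0 is the length of the first run in scheduler steps; T_mult lengthens each later run,
# matching the T_i / T_mult quantities in the paper. eta_min is the floor of the cosine curve.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2, eta_min=0.0)

for epoch in range(30):
    # ... forward/backward passes and optimizer.step() for each batch go here ...
    scheduler.step()  # advance the warm-restart schedule once per epoch

Passing a fractional value instead, scheduler.step(epoch + batch_idx / num_batches), anneals within an epoch, which matches how the paper updates the rate at every batch.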
tensorflow/tensorflow (2 files)

tensorflow/python/keras/optimizer_v2/learning_rate_schedule.py (4 references)
    See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),
    See [Loshchilov & Hutter, ICLR2016](https://arxiv.org/abs/1608.03983),
    with Warm Restarts. https://arxiv.org/abs/1608.03983
    with Warm Restarts. https://arxiv.org/abs/1608.03983

tensorflow/python/keras/optimizer_v2/legacy_learning_rate_decay.py (4 references)
    [Loshchilov et al., 2017]
    [Loshchilov et al., 2017]
    [Loshchilov et al., 2017]
    [Loshchilov et al., 2017]
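
Both TensorFlow entries point at cosine-decay schedules derived from this paper: learning_rate_schedule.py defines the Keras-style schedule objects (CosineDecay, CosineDecayRestarts), while legacy_learning_rate_decay.py keeps older functional equivalents such as tf.compat.v1.train.cosine_decay_restarts. A minimal sketch of the Keras-style API follows; the hyperparameter values are illustrative assumptions, not defaults recommended by the paper.

import tensorflow as tf

# Warm-restart cosine schedule: decays over `first_decay_steps` steps, then restarts,
# with each run t_mul times longer and starting from m_mul times the previous peak rate.
schedule = tf.keras.optimizers.schedules.CosineDecayRestarts(
    initial_learning_rate=0.05,
    first_decay_steps=1000,
    t_mul=2.0,
    m_mul=1.0,
    alpha=0.0,  # final learning rate, as a fraction of initial_learning_rate
)

optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)

# A LearningRateSchedule is also directly callable with a step index:
lr_at_step_500 = float(schedule(500))

Compiling a model with this optimizer then recomputes the learning rate from the global step at every update.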