Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates.
Abstract
In this paper, we describe a phenomenon we name "super-convergence", where neural networks can be trained an order of magnitude faster than with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elements of super-convergence is training with one learning rate cycle and a large maximum learning rate. A primary insight that allows super-convergence training is that large learning rates regularize the training, hence requiring a reduction of all other forms of regularization in order to preserve an optimal regularization balance. We also derive a simplification of the Hessian-free optimization method to compute an estimate of the optimal learning rate. Experiments demonstrate super-convergence for the CIFAR-10/100, MNIST, and ImageNet datasets, and the ResNet, Wide ResNet, DenseNet, and Inception architectures. In addition, we show that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited. The architectures and code to replicate the figures in this paper are available at github.com/lnsmith54/super-convergence. See http://www.fast.ai/2018/04/30/dawnbench-fastai/ for an application of super-convergence used to win the DAWNBench challenge (https://dawn.cs.stanford.edu/benchmark/).
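The abstract's central recipe is a single learning rate cycle with a large maximum learning rate. The sketch below, in Python, illustrates that idea with a simple triangular schedule followed by a short final decay; it is not the authors' implementation (their code is at github.com/lnsmith54/super-convergence), and the function name and all parameter values are illustrative assumptions.

```python
# Minimal sketch of a one-cycle learning rate schedule (assumed parameters,
# not the authors' exact code).

def one_cycle_lr(step, total_steps, base_lr=0.1, max_lr=1.0, final_div=100.0):
    """Return the learning rate at `step` under a triangular one-cycle policy.

    The rate ramps linearly from base_lr up to a large max_lr over the first
    half of the cycle, back down to base_lr over the second half, then decays
    toward base_lr / final_div in a short final phase.
    """
    cycle_steps = int(0.9 * total_steps)   # main up/down cycle (assumed 90% of training)
    half = cycle_steps // 2
    if step < half:                        # linear warm-up to the large max_lr
        return base_lr + (max_lr - base_lr) * step / half
    if step < cycle_steps:                 # linear decay back to base_lr
        return max_lr - (max_lr - base_lr) * (step - half) / half
    # final phase: decay well below base_lr for the remaining steps
    frac = (step - cycle_steps) / max(1, total_steps - cycle_steps)
    return base_lr - (base_lr - base_lr / final_div) * frac


if __name__ == "__main__":
    total = 10_000
    for s in (0, 2_500, 4_500, 8_000, 9_999):
        print(s, round(one_cycle_lr(s, total), 4))
```

In practice, such a schedule is queried once per optimizer step; the paper's argument is that the time spent at the large maximum rate acts as a regularizer, so other regularizers (e.g., weight decay, dropout) are reduced to keep the overall regularization balanced.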