Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization
Abstract
We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
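For concreteness, the following is a minimal sketch of the basic (non-accelerated, non-proximal) stochastic dual coordinate ascent step that the paper's framework builds on, specialized to squared loss with L2 regularization (ridge regression). The function name, variable names, and default hyperparameters are illustrative choices rather than notation from the paper; the closed-form dual coordinate update shown is the standard one obtained by maximizing the dual objective over a single dual variable.

import numpy as np

def sdca_ridge(X, y, lam=0.1, n_epochs=20, seed=0):
    """Approximately solve min_w (1/n) * sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2
    with plain stochastic dual coordinate ascent (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)          # one dual variable per training example
    w = np.zeros(d)              # primal iterate, kept equal to X^T alpha / (lam * n)
    sq_norms = np.einsum("ij,ij->i", X, X)   # precomputed ||x_i||^2
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            # Closed-form maximization of the dual objective over coordinate i
            # for the squared loss: delta = (y_i - x_i^T w - alpha_i) / (1 + ||x_i||^2 / (lam * n)).
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]   # keep w consistent with the dual variables
    return w

# Tiny usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    print(np.round(sdca_ridge(X, y, lam=0.01), 2), np.round(w_true, 2))

The proximal and accelerated variants analyzed in the paper replace this plain coordinate step with a proximal dual update and wrap it in an inner-outer iteration scheme; the sketch above only illustrates the vanilla SDCA building block.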
Code References
tensorflow/tensorflow, file tensorflow/core/kernels/squared-loss.h, line 26:
// See page 23 of http://arxiv.org/pdf/1309.2375v2.pdf for the derivation of