Greedy function approximation: A gradient boosting machine.

Jerome H. Friedman
2001

Abstract

Function estimation/approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest-descent minimization. A general gradient descent “boosting” paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such “TreeBoost” models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire and of Friedman, Hastie and Tibshirani are discussed.
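The abstract's central idea is a stagewise additive expansion fit by steepest descent in function space: each stage fits a base learner to the negative gradient of the loss and adds a shrunken copy of it to the current model. The following minimal sketch illustrates that loop for squared-error loss (where the negative gradient is simply the residual); the function names, the shrinkage value, and the use of scikit-learn's DecisionTreeRegressor as base learner are our own illustrative choices, not the paper's notation.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def ls_boost(X, y, n_stages=100, learning_rate=0.1, max_depth=3):
    # F_0: the constant that minimizes squared loss is the mean of y.
    intercept = y.mean()
    F = np.full(y.shape, intercept, dtype=float)
    trees = []
    for _ in range(n_stages):
        residuals = y - F                      # negative gradient of (1/2)(y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        F += learning_rate * tree.predict(X)   # shrunken steepest-descent step
        trees.append(tree)
    return intercept, trees

def ls_boost_predict(intercept, trees, X, learning_rate=0.1):
    return intercept + learning_rate * sum(t.predict(X) for t in trees)

Swapping the residual computation for the negative gradient of another loss (absolute deviation, Huber, logistic likelihood) gives the other algorithms named in the abstract, with the tree-specific leaf refinements handled separately in the TreeBoost variants.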

1 repository
6 references

Code References

scikit-learn/scikit-learn
2 files
doc/modules/ensemble.rst (5 matches)
[Friedman2001]_. GBDT is an excellent model for both regression and
chapter on gradient boosting in [Friedman2001]_ and is related to the parameter
control the sensitivity with regards to outliers (see [Friedman2001]_ for
[Friedman2001]_ proposed a simple regularization strategy that scales
.. [Friedman2001] Friedman, J.H. (2001). :doi:`Greedy function approximation: A gradient
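The excerpts above point at the shrinkage ("learning rate") regularization and the outlier-sensitivity parameter of the Huber loss that scikit-learn's gradient boosting documentation attributes to [Friedman2001]. A hedged usage sketch of how those two knobs surface in scikit-learn's GradientBoostingRegressor; the dataset is synthetic and the parameter values are arbitrary examples, not recommendations.

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
model = GradientBoostingRegressor(
    loss="huber",        # robust loss; alpha controls sensitivity to outliers
    alpha=0.9,
    learning_rate=0.1,   # shrinkage factor scaling each tree's contribution
    n_estimators=200,
).fit(X, y)
print(model.predict(X[:3]))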
sklearn/_loss/loss.py (1 match)
# See formula before algo 4 in Friedman (2001), but we apply it to y_true,
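The formula just before Algorithm 4 in Friedman (2001) is a one-step robust location estimate for the Huber loss: the median plus the mean of deviations from the median clipped at the Huber threshold. The loss.py comment notes it is applied to y_true rather than to residuals. A minimal sketch of that expression, assuming a plain NumPy implementation with an illustrative function name (not scikit-learn's actual code):

import numpy as np

def huber_location(y, delta):
    # One-step estimate: start at the median, then add the mean of deviations
    # clipped at +/- delta, following the expression Friedman gives just
    # before Algorithm 4 (here applied directly to y, as the comment above notes).
    y = np.asarray(y, dtype=float)
    med = np.median(y)
    dev = y - med
    return med + np.mean(np.sign(dev) * np.minimum(delta, np.abs(dev)))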