Calibration of Machine Learning Models

Antonio Bella, Cèsar Ferri, José Hernández-Orallo, María José Ramírez-Quintana
2012

Abstract

The evaluation of machine learning models is a crucial step before their application, because it is essential to assess how well a model will behave for every single case. In many real applications it is important to know not only the "total" or "average" error of the model, but also how this error is distributed and how well confidence or probability estimates are made. Many current machine learning techniques achieve good overall results but assess the distribution of the error poorly. For these cases, calibration techniques have been developed as postprocessing techniques that improve the probability estimates or the error distribution of an existing model. This chapter presents the most common calibration techniques and calibration measures. Both classification and regression are covered, and a taxonomy of calibration techniques is established. Special attention is given to probabilistic classifier calibration.
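As an illustration of the postprocessing idea described in the abstract, the sketch below calibrates a probabilistic classifier with scikit-learn's CalibratedClassifierCV using sigmoid (Platt) scaling, one of the common calibration techniques of this kind, and compares Brier scores before and after calibration. The synthetic dataset, the Gaussian naive Bayes base model, and all parameter choices are illustrative assumptions, not part of the chapter.

    # Illustrative sketch (assumed setup): calibrate a naive Bayes classifier
    # by sigmoid (Platt) scaling and compare Brier scores before and after.
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.datasets import make_classification
    from sklearn.metrics import brier_score_loss
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Uncalibrated base model and its raw probability estimates.
    base = GaussianNB().fit(X_train, y_train)
    raw_probs = base.predict_proba(X_test)[:, 1]

    # Postprocessing step: fit a sigmoid mapping on cross-validated predictions.
    calibrated = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=5)
    calibrated.fit(X_train, y_train)
    cal_probs = calibrated.predict_proba(X_test)[:, 1]

    # A lower Brier score after calibration indicates better probability estimates.
    print("Brier score, uncalibrated:", brier_score_loss(y_test, raw_probs))
    print("Brier score, calibrated:  ", brier_score_loss(y_test, cal_probs))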


Code References

scikit-learn/scikit-learn
doc/modules/model_evaluation.rst (3 matching lines):

    loss and refinement loss [Bella2012]_. Calibration loss is defined as the mean
    .. [Bella2012] Bella, Ferri, Hernández-Orallo, and Ramírez-Quintana
    and applications." Hershey, PA: Information Science Reference (2012).
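The scikit-learn documentation cited above decomposes the Brier score into calibration loss and refinement loss. A related, minimal way to inspect calibration directly is sklearn.calibration.calibration_curve, which bins predicted probabilities and compares them with empirical frequencies; the labels and probabilities below are made up purely for illustration.

    # Assumed toy data: predicted probabilities vs. observed labels.
    import numpy as np
    from sklearn.calibration import calibration_curve

    y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
    y_prob = np.array([0.10, 0.30, 0.70, 0.90, 0.40, 0.60, 0.80, 0.20, 0.65, 0.95])

    # Mean predicted probability vs. fraction of positives per bin;
    # a well-calibrated model keeps these two columns close to each other.
    prob_true, prob_pred = calibration_curve(y_true, y_prob, n_bins=5)
    print(np.column_stack([prob_pred, prob_true]))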