Inter-Coder Agreement for Computational Linguistics

2008

Abstract

This article is a survey of methods for measuring agreement among corpus annotators. It exposes the mathematics and underlying assumptions of agreement coefficients, covering Krippendorff's alpha as well as Scott's pi and Cohen's kappa; discusses the use of coefficients in several annotation tasks; and argues that weighted, alpha-like coefficients, traditionally less used than kappa-like measures in computational linguistics, may be more appropriate for many corpus annotation tasks—but that their use makes the interpretation of the value of the coefficient even harder.
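
Scott's pi and Cohen's kappa both take the chance-corrected form (A_o - A_e) / (1 - A_e); they differ only in how the expected agreement A_e is estimated. The short Python sketch below is not from the article: the two label sequences and function names are invented for illustration.

from collections import Counter

def observed_agreement(coder1, coder2):
    # A_o: proportion of items the two coders label identically.
    return sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)

def expected_agreement_pi(coder1, coder2):
    # Scott's pi: chance agreement estimated from one pooled label distribution.
    total = len(coder1) + len(coder2)
    pooled = Counter(coder1) + Counter(coder2)
    return sum((count / total) ** 2 for count in pooled.values())

def expected_agreement_kappa(coder1, coder2):
    # Cohen's kappa: chance agreement estimated from each coder's own distribution.
    n = len(coder1)
    p1, p2 = Counter(coder1), Counter(coder2)
    return sum((p1[lab] / n) * (p2[lab] / n) for lab in set(coder1) | set(coder2))

def chance_corrected(a_o, a_e):
    return (a_o - a_e) / (1 - a_e)

coder1 = ["a", "a", "b", "c", "a", "b", "a", "c"]
coder2 = ["a", "b", "b", "c", "a", "a", "a", "c"]

a_o = observed_agreement(coder1, coder2)
print("pi    =", chance_corrected(a_o, expected_agreement_pi(coder1, coder2)))
print("kappa =", chance_corrected(a_o, expected_agreement_kappa(coder1, coder2)))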


Code References

scikit-learn/scikit-learn
sklearn/metrics/_classification.py
.. [2] R. Artstein and M. Poesio (2008). "Inter-coder agreement for
       computational linguistics". Computational Linguistics 34(4):555-596.
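
The citation above appears in the docstring of cohen_kappa_score, the function in this file that computes Cohen's kappa. A minimal usage sketch, assuming a recent scikit-learn install; the label sequences are invented:

from sklearn.metrics import cohen_kappa_score

coder1 = [1, 2, 3, 3, 2, 1, 1, 3, 2, 2]
coder2 = [1, 2, 3, 2, 2, 1, 3, 3, 2, 3]

# Unweighted kappa: every disagreement counts the same.
print(cohen_kappa_score(coder1, coder2))

# Linearly weighted kappa: disagreements between nearby labels are penalized
# less, in the spirit of the weighted coefficients the abstract argues for.
print(cohen_kappa_score(coder1, coder2, weights="linear"))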