
inter-rater agreement

Go back to the [[AI Glossary]]

A measurement of how often human raters agree when doing a task. If raters disagree, the task instructions may need to be improved. Also sometimes called inter-annotator agreement or inter-rater reliability. See also Cohen's kappa, which is one of the most popular inter-rater agreement measurements.
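Cohen's kappa, for example, compares the observed agreement rate between two raters against the agreement expected by chance from each rater's label frequencies. A minimal Python sketch (not from the glossary; the label names and ratings below are made up for illustration):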
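```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e is the agreement expected by chance.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters labelling ten emails as "spam" or "ham".
rater_1 = ["spam", "spam", "ham", "ham", "spam", "ham", "ham", "spam", "ham", "ham"]
rater_2 = ["spam", "ham", "ham", "ham", "spam", "ham", "spam", "spam", "ham", "ham"]

# Kappa of 1.0 means perfect agreement, 0 means chance-level agreement.
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # 0.58
```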
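A low kappa on a labelling task is a signal that the task instructions are ambiguous and may need to be improved.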
