Utility of weights for weighted kappa as a measure of interrater agreement on ordinal scale

Moonseong Heo

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Kappa statistics, unweighted or weighted, are widely used for assessing interrater agreement. The weights of the weighted kappa statistic, in particular, are defined in terms of the absolute and squared distances between raters' ratings. It is proposed that these weights can themselves be used to assess interrater agreement. Closed-form expectations and variances are obtained for the resulting agreement statistics, referred to as AI1 and AI2, which are functions of the absolute and squared distances in ratings between two raters, respectively. AI1 and AI2 are compared with the weighted and unweighted kappa statistics in terms of Type I error rate, bias, and statistical power using Monte Carlo simulations. The AI1 agreement statistic performs better than the other agreement statistics.
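
The abstract's central idea can be made concrete with a small numeric sketch. The paper's exact formulas for AI1 and AI2 are not reproduced on this page, so the snippet below assumes the standard weighted-kappa agreement weights for a k-category ordinal scale, w1(i, j) = 1 - |i - j|/(k - 1) (linear) and w2(i, j) = 1 - (i - j)^2/(k - 1)^2 (quadratic), and takes AI1 and AI2 to be the average linear and quadratic weights across subjects; function and variable names are illustrative, not from the paper.

import numpy as np

def agreement_indices(r1, r2, k):
    # Average agreement weights over subjects; r1, r2 are integer
    # ratings in 1..k given by two raters to the same subjects.
    # Assumed AI1/AI2-style definitions, not the paper's exact ones.
    d = np.abs(np.asarray(r1, float) - np.asarray(r2, float))
    ai1 = np.mean(1.0 - d / (k - 1))           # absolute-distance (linear) index
    ai2 = np.mean(1.0 - d**2 / (k - 1)**2)     # squared-distance (quadratic) index
    return ai1, ai2

def weighted_kappa(r1, r2, k, quadratic=False):
    # Cohen's weighted kappa with linear or quadratic agreement weights,
    # for comparison with the distance-based indices above.
    r1, r2 = np.asarray(r1, int), np.asarray(r2, int)
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[a - 1, b - 1] += 1.0 / len(r1)     # observed joint proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance-expected proportions
    i, j = np.indices((k, k))
    d = np.abs(i - j).astype(float)
    w = 1 - (d / (k - 1))**2 if quadratic else 1 - d / (k - 1)
    po, pe = (w * obs).sum(), (w * exp).sum()
    return (po - pe) / (1 - pe)

# Example: two raters scoring eight subjects on a 5-point scale.
r1 = [1, 2, 3, 3, 4, 5, 2, 4]
r2 = [1, 2, 2, 3, 5, 5, 3, 4]
print(agreement_indices(r1, r2, k=5))          # (AI1-style, AI2-style)
print(weighted_kappa(r1, r2, k=5),             # linear weighted kappa
      weighted_kappa(r1, r2, k=5, quadratic=True))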

Original language: English (US)
Pages (from-to): 205-222
Number of pages: 18
Journal: Journal of Modern Applied Statistical Methods
Volume: 7
Issue number: 1
State: Published - May 2008

Fingerprint

  • Ordinal Scale
  • Statistics
  • Statistical Power
  • Type I Error Rate
  • Statistic
  • Closed-form
  • Monte Carlo Simulation
  • Interrater agreement

Keywords

  • Bias
  • Interrater agreement
  • Kappa statistic
  • Statistical power
  • Type I Error rate

ASJC Scopus subject areas

  • Statistics, Probability and Uncertainty
  • Statistics and Probability

Cite this

Utility of weights for weighted kappa as a measure of interrater agreement on ordinal scale. / Heo, Moonseong.

In: Journal of Modern Applied Statistical Methods, Vol. 7, No. 1, 05.2008, p. 205-222.

ISSN: 1538-9472
Publisher: Wayne State University
Scopus record: http://www.scopus.com/inward/record.url?scp=77950364090&partnerID=8YFLogxK
Cited by in Scopus: http://www.scopus.com/inward/citedby.url?scp=77950364090&partnerID=8YFLogxK