The Neer classification system for proximal humeral fractures. An assessment of interobserver reliability and intraobserver reproducibility

M. L. Sidor, J. D. Zuckerman, T. Lyon, K. Koval, F. Cuomo, N. Schoenberg

Research output: Contribution to journal › Article

300 Citations (Scopus)

Abstract

The radiographs of fifty fractures of the proximal part of the humerus were used to assess the interobserver reliability and intraobserver reproducibility of the Neer classification system. A trauma series consisting of scapular anteroposterior, scapular lateral, and axillary radiographs was available for each fracture. The radiographs were reviewed by an orthopaedic shoulder specialist, an orthopaedic traumatologist, a skeletal radiologist, and two orthopaedic residents, in their fifth and second years of postgraduate training. The radiographs were reviewed on two different occasions, six months apart. Interobserver reliability was assessed by comparison of the fracture classifications determined by the five observers. Intraobserver reproducibility was evaluated by comparison of the classifications determined by each observer on the first and second viewings. Kappa (κ) reliability coefficients were used. All five observers agreed on the final classification for 32 and 30 per cent of the fractures on the first and second viewings, respectively. Paired comparisons between the five observers showed a mean reliability coefficient of 0.48 (range, 0.43 to 0.58) for the first viewing and 0.52 (range, 0.37 to 0.62) for the second viewing. The attending physicians obtained a slightly higher kappa value than the orthopaedic residents (0.52 compared with 0.48). Reproducibility ranged from 0.83 (the shoulder specialist) to 0.50 (the skeletal radiologist), with a mean of 0.66. Simplification of the Neer classification system, from sixteen categories to six more general categories based on fracture type, did not significantly improve either interobserver reliability or intraobserver reproducibility.

Original language: English (US)
Pages (from-to): 1745-1750
Number of pages: 6
Journal: Journal of Bone and Joint Surgery - Series A
Volume: 75
Issue number: 12
DOIs: 10.2106/00004623-199312000-00002
State: Published - Jan 1 1993
Externally published: Yes

Fingerprint

  • Shoulder Fractures
  • Orthopedics
  • Matched-Pair Analysis
  • Humerus
  • Physicians
  • Wounds and Injuries

ASJC Scopus subject areas

  • Surgery
  • Orthopedics and Sports Medicine

Cite this

The Neer classification system for proximal humeral fractures. An assessment of interobserver reliability and intraobserver reproducibility. / Sidor, M. L.; Zuckerman, J. D.; Lyon, T.; Koval, K.; Cuomo, F.; Schoenberg, N.

In: Journal of Bone and Joint Surgery - Series A, Vol. 75, No. 12, 01.01.1993, p. 1745-1750.

Research output: Contribution to journal › Article

@article{2a15d3d2a58a40969ad0855997dfc504,
title = "The Neer classification system for proximal humeral fractures. An assessment of interobserver reliability and intraobserver reproducibility",
abstract = "The radiographs of fifty fractures of the proximal part of the humerus were used to assess the interobserver reliability and intraobserver reproducibility of the Neer classification system. A trauma series consisting of scapular anteroposterior, scapular lateral, and axillary radiographs was available for each fracture. The radiographs were reviewed by an orthopaedic shoulder specialist, an orthopaedic traumatologist, a skeletal radiologist, and two orthopaedic residents, in their fifth and second years of postgraduate training. The radiographs were reviewed on two different occasions, six months apart. Interobserver reliability was assessed by comparison of the fracture classifications determined by the five observers. Intraobserver reproducibility was evaluated by comparison of the classifications determined by each observer on the first and second viewings. Kappa (κ) reliability coefficients were used. All five observers agreed on the final classification for 32 and 30 per cent of the fractures on the first and second viewings, respectively. Paired comparisons between the five observers showed a mean reliability coefficient of 0.48 (range, 0.43 to 0.58) for the first viewing and 0.52 (range, 0.37 to 0.62) for the second viewing. The attending physicians obtained a slightly higher kappa value than the orthopaedic residents (0.52 compared with 0.48). Reproducibility ranged from 0.83 (the shoulder specialist) to 0.50 (the skeletal radiologist), with a mean of 0.66. Simplification of the Neer classification system, from sixteen categories to six more general categories based on fracture type, did not significantly improve either interobserver reliability or intraobserver reproducibility.",
author = "Sidor, {M. L.} and Zuckerman, {J. D.} and T. Lyon and K. Koval and F. Cuomo and N. Schoenberg",
year = "1993",
month = "1",
day = "1",
doi = "10.2106/00004623-199312000-00002",
language = "English (US)",
volume = "75",
pages = "1745--1750",
journal = "Journal of Bone and Joint Surgery - American Volume",
issn = "0021-9355",
publisher = "Journal of Bone and Joint Surgery Inc.",
number = "12",

}

TY - JOUR

T1 - The Neer classification system for proximal humeral fractures. An assessment of interobserver reliability and intraobserver reproducibility

AU - Sidor, M. L.

AU - Zuckerman, J. D.

AU - Lyon, T.

AU - Koval, K.

AU - Cuomo, F.

AU - Schoenberg, N.

PY - 1993/1/1

Y1 - 1993/1/1

UR - http://www.scopus.com/inward/record.url?scp=0027717951&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0027717951&partnerID=8YFLogxK

U2 - 10.2106/00004623-199312000-00002

DO - 10.2106/00004623-199312000-00002

M3 - Article

C2 - 8258543

AN - SCOPUS:0027717951

VL - 75

SP - 1745

EP - 1750

JO - Journal of Bone and Joint Surgery - American Volume

JF - Journal of Bone and Joint Surgery - American Volume

SN - 0021-9355

IS - 12

ER -