Statistical power as a function of Cronbach alpha of instrument questionnaire items

Data analysis, statistics and modelling

Moonseong Heo, Namhee Kim, Myles S. Faith

Research output: Contribution to journal › Article

17 Citations (Scopus)

Abstract

Background: In countless clinical trials, outcome measurements rely on instrument questionnaire items, which often suffer from measurement error that in turn affects the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted by Cα, can be used as a measure of internal consistency of parallel instrument items developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on Cα have been lacking for various study designs. Methods: We formulate a statistical model for parallel items to derive power functions as a function of Cα under several study designs. To this end, we adopt a fixed true score variance assumption, as opposed to the usual fixed total variance assumption. This assumption is critical and practically relevant: it implies that smaller measurement errors are associated with higher inter-item correlations, and thus that greater Cα is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. Results: It is shown that Cα is the same as the test-retest correlation of the scale scores of parallel items, which enables testing the significance of Cα. Closed-form power functions and sample size determination formulas are derived in terms of Cα for all of the aforementioned comparisons. Power functions are shown to be increasing in Cα, regardless of the comparison of interest. The derived power functions are well validated by simulation studies, which show that the theoretical power is virtually identical to the empirical power. Conclusion: Regardless of research design or setting, developing and using instruments with greater Cα, or equivalently with greater inter-item correlations, is crucial for increasing the statistical power of trials that use questionnaire items to measure research outcomes. Discussion: Further development of power functions for binary or ordinal item scores, and under more general item correlation structures reflecting real-world situations, would be a valuable future study.
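The mechanism the abstract describes can be illustrated with a small Monte Carlo sketch. This is a toy under stated assumptions, not the paper's own derivation: items follow a parallel-items model X_ij = T_i + e_ij with a fixed true-score variance, and all parameter values and function names below are illustrative. Holding the true-score standard deviation fixed while shrinking the error standard deviation raises the inter-item correlation, hence the observed Cronbach alpha, and hence the empirical power of a two-sample comparison of scale-score means.

```python
# Toy sketch (illustrative assumptions, not the paper's exact model or formulas):
# parallel items X_ij = T_i + e_ij with FIXED true-score variance sigma_t^2.
# Smaller sigma_e -> higher inter-item correlation -> higher Cronbach alpha
# -> higher power for comparing scale-score (item-sum) means between two arms.
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(items):
    """Cronbach alpha for an (n subjects x k items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

def two_sample_reject(y1, y0, z_crit=1.96):
    """Welch-type two-sample test, normal approximation to the critical value."""
    se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
    return abs(y1.mean() - y0.mean()) / se > z_crit

def simulate_power(sigma_e, k=5, n=50, delta=0.5, sigma_t=1.0, n_sim=2000):
    """Mean observed alpha and empirical power for a two-sample comparison
    of scale scores (sums of k parallel items), with sigma_t held fixed."""
    rejections, alphas = 0, []
    for _ in range(n_sim):
        t0 = rng.normal(0.0, sigma_t, n)       # true scores, control arm
        t1 = rng.normal(delta, sigma_t, n)     # true scores, treated arm
        x0 = t0[:, None] + rng.normal(0.0, sigma_e, (n, k))  # item scores
        x1 = t1[:, None] + rng.normal(0.0, sigma_e, (n, k))
        alphas.append(cronbach_alpha(x0))
        rejections += two_sample_reject(x1.sum(axis=1), x0.sum(axis=1))
    return float(np.mean(alphas)), rejections / n_sim

# Fixed sigma_t = 1; shrinking sigma_e should raise both alpha and power.
for sigma_e in (2.0, 1.0, 0.5):
    a, power = simulate_power(sigma_e)
    print(f"sigma_e={sigma_e:.1f}  mean alpha={a:.2f}  empirical power={power:.2f}")
```

For k parallel items with inter-item correlation ρ = σ_T²/(σ_T² + σ_e²), the standard Spearman-Brown relation gives Cα = kρ/(1 + (k-1)ρ), so the simulated mean alphas can be checked against 0.56, 0.83, and 0.95 for the three error levels above.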

Original language: English (US)
Article number: 86
Journal: BMC Medical Research Methodology
Volume: 15
Issue number: 1
DOI: 10.1186/s12874-015-0070-6
State: Published - Oct 14 2015

Keywords

  • Coefficient alpha
  • Cronbach alpha
  • Effect size
  • Internal consistency
  • Reliability
  • Statistical power
  • Test-retest correlation

ASJC Scopus subject areas

  • Health Informatics
  • Epidemiology

Cite this

Statistical power as a function of Cronbach alpha of instrument questionnaire items Data analysis, statistics and modelling. / Heo, Moonseong; Kim, Namhee; Faith, Myles S.

In: BMC Medical Research Methodology, Vol. 15, No. 1, 86, 14.10.2015.

Research output: Contribution to journal › Article

@article{4b555b509d7f4e669653e56e9822776d,
title = "Statistical power as a function of Cronbach alpha of instrument questionnaire items Data analysis, statistics and modelling",
abstract = "Background: In countless clinical trials, outcome measurements rely on instrument questionnaire items, which often suffer from measurement error that in turn affects the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted by Cα, can be used as a measure of internal consistency of parallel instrument items developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on Cα have been lacking for various study designs. Methods: We formulate a statistical model for parallel items to derive power functions as a function of Cα under several study designs. To this end, we adopt a fixed true score variance assumption, as opposed to the usual fixed total variance assumption. This assumption is critical and practically relevant: it implies that smaller measurement errors are associated with higher inter-item correlations, and thus that greater Cα is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. Results: It is shown that Cα is the same as the test-retest correlation of the scale scores of parallel items, which enables testing the significance of Cα. Closed-form power functions and sample size determination formulas are derived in terms of Cα for all of the aforementioned comparisons. Power functions are shown to be increasing in Cα, regardless of the comparison of interest. The derived power functions are well validated by simulation studies, which show that the theoretical power is virtually identical to the empirical power. Conclusion: Regardless of research design or setting, developing and using instruments with greater Cα, or equivalently with greater inter-item correlations, is crucial for increasing the statistical power of trials that use questionnaire items to measure research outcomes. Discussion: Further development of power functions for binary or ordinal item scores, and under more general item correlation structures reflecting real-world situations, would be a valuable future study.",
keywords = "Coefficient alpha, Cronbach alpha, Effect size, Internal consistency, Reliability, Statistical power, Test-retest correlation",
author = "Moonseong Heo and Namhee Kim and Faith, {Myles S.}",
year = "2015",
month = "10",
day = "14",
doi = "10.1186/s12874-015-0070-6",
language = "English (US)",
volume = "15",
journal = "BMC Medical Research Methodology",
issn = "1471-2288",
publisher = "BioMed Central",
number = "1",

}

TY - JOUR

T1 - Statistical power as a function of Cronbach alpha of instrument questionnaire items Data analysis, statistics and modelling

AU - Heo, Moonseong

AU - Kim, Namhee

AU - Faith, Myles S.

PY - 2015/10/14

Y1 - 2015/10/14

AB - Background: In countless clinical trials, outcome measurements rely on instrument questionnaire items, which often suffer from measurement error that in turn affects the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted by Cα, can be used as a measure of internal consistency of parallel instrument items developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on Cα have been lacking for various study designs. Methods: We formulate a statistical model for parallel items to derive power functions as a function of Cα under several study designs. To this end, we adopt a fixed true score variance assumption, as opposed to the usual fixed total variance assumption. This assumption is critical and practically relevant: it implies that smaller measurement errors are associated with higher inter-item correlations, and thus that greater Cα is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. Results: It is shown that Cα is the same as the test-retest correlation of the scale scores of parallel items, which enables testing the significance of Cα. Closed-form power functions and sample size determination formulas are derived in terms of Cα for all of the aforementioned comparisons. Power functions are shown to be increasing in Cα, regardless of the comparison of interest. The derived power functions are well validated by simulation studies, which show that the theoretical power is virtually identical to the empirical power. Conclusion: Regardless of research design or setting, developing and using instruments with greater Cα, or equivalently with greater inter-item correlations, is crucial for increasing the statistical power of trials that use questionnaire items to measure research outcomes. Discussion: Further development of power functions for binary or ordinal item scores, and under more general item correlation structures reflecting real-world situations, would be a valuable future study.

KW - Coefficient alpha

KW - Cronbach alpha

KW - Effect size

KW - Internal consistency

KW - Reliability

KW - Statistical power

KW - Test-retest correlation

UR - http://www.scopus.com/inward/record.url?scp=84944076635&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84944076635&partnerID=8YFLogxK

U2 - 10.1186/s12874-015-0070-6

DO - 10.1186/s12874-015-0070-6

M3 - Article

C2 - 26467219

AN - SCOPUS:84944076635

VL - 15

JO - BMC Medical Research Methodology

JF - BMC Medical Research Methodology

SN - 1471-2288

IS - 1

M1 - 86

ER -