### Abstract

Background: In countless clinical trials, outcome measurement relies on instrument questionnaire items, which often suffer from measurement error that in turn reduces the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted C_α, can be used as a measure of internal consistency of parallel instrument items developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C_α have been lacking for various study designs. Methods: We formulate a statistical model for parallel items to derive power functions as a function of C_α under several study designs. To this end, we adopt a fixed true-score variance assumption, as opposed to the usual fixed total variance assumption. That assumption is critical and practically relevant for showing that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C_α is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: a one-sample comparison of pre- and post-treatment mean differences, a two-sample comparison of pre-post mean differences between groups, and a two-sample comparison of mean differences between groups. Results: It is shown that C_α equals the test-retest correlation of the scale scores of parallel items, which enables testing the significance of C_α. Closed-form power functions and sample size determination formulas are derived in terms of C_α for all of the aforementioned comparisons. Power functions are shown to be increasing in C_α, regardless of the comparison of interest.
The derived power functions are well validated by simulation studies, which show that the theoretical power is virtually identical to the empirical power. Conclusion: Regardless of research design or setting, to increase statistical power, the development and use of instruments with greater C_α, or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Discussion: Further development of power functions for binary or ordinal item scores, and under more general item correlation structures reflecting more real-world situations, would be a valuable future study.
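The mechanism the abstract describes can be illustrated with a short sketch: under classical test theory, reliability is the ratio of true-score variance to total variance, so holding the true-score variance fixed, a larger C_α implies a smaller total variance and hence greater power. The functions below are illustrative approximations only (the standard sample formula for C_α, and a normal-approximation two-sided two-sample z-test), not the paper's exact derivations; the variance-inflation relation `sigma_total^2 = sigma_true^2 / c_alpha` is an assumption made for this sketch.

```python
import numpy as np
from statistics import NormalDist

_STD_NORMAL = NormalDist()

def cronbach_alpha(items):
    """Cronbach alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

def power_two_sample(delta, sigma_true, c_alpha, n_per_group, sig_level=0.05):
    """Approximate power of a two-sided two-sample z-test on scale scores.
    Illustrative assumption (not the paper's exact formula): with the
    true-score variance held fixed, reliability c_alpha inflates the
    total variance as sigma_total^2 = sigma_true^2 / c_alpha."""
    sigma_total = sigma_true / np.sqrt(c_alpha)
    ncp = delta / (sigma_total * np.sqrt(2.0 / n_per_group))  # noncentrality
    z_crit = _STD_NORMAL.inv_cdf(1 - sig_level / 2)
    # P(reject) = P(Z > z_crit - ncp) + P(Z < -z_crit - ncp)
    return (1 - _STD_NORMAL.cdf(z_crit - ncp)) + _STD_NORMAL.cdf(-z_crit - ncp)
```

Because `sigma_total` shrinks as `c_alpha` grows, this power is increasing in `c_alpha`, mirroring the paper's qualitative result that greater internal consistency yields greater statistical power.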

Original language | English (US)
---|---
Article number | 86
Journal | BMC Medical Research Methodology
Volume | 15
Issue number | 1
DOIs | https://doi.org/10.1186/s12874-015-0070-6
State | Published - Oct 14 2015

### Keywords

- Coefficient alpha
- Cronbach alpha
- Effect size
- Internal consistency
- Reliability
- Statistical power
- Test-retest correlation

### ASJC Scopus subject areas

- Health Informatics
- Epidemiology

### Cite this

Heo, Moonseong; Kim, Namhee; Faith, Myles S. **Statistical power as a function of Cronbach alpha of instrument questionnaire items: Data analysis, statistics and modelling.** *BMC Medical Research Methodology*, vol. 15, no. 1, article 86, 2015. https://doi.org/10.1186/s12874-015-0070-6

Research output: Contribution to journal › Article

TY - JOUR

T1 - Statistical power as a function of Cronbach alpha of instrument questionnaire items Data analysis, statistics and modelling

AU - Heo, Moonseong

AU - Kim, Namhee

AU - Faith, Myles S.

PY - 2015/10/14

Y1 - 2015/10/14


KW - Coefficient alpha

KW - Cronbach alpha

KW - Effect size

KW - Internal consistency

KW - Reliability

KW - Statistical power

KW - Test-retest correlation

UR - http://www.scopus.com/inward/record.url?scp=84944076635&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84944076635&partnerID=8YFLogxK

U2 - 10.1186/s12874-015-0070-6

DO - 10.1186/s12874-015-0070-6

M3 - Article

C2 - 26467219

AN - SCOPUS:84944076635

VL - 15

JO - BMC Medical Research Methodology

JF - BMC Medical Research Methodology

SN - 1471-2288

IS - 1

M1 - 86

ER -