Context: Better peer review is needed, but no methods have been proven to improve its quality. Our objective was to determine whether written feedback to reviewers improves their subsequent reviews.

Methods: Eligible reviewers were randomized to intervention or control (receiving other reviewers' unscored reviews and the editor's decision letter). Study 1 (September 1998-September 2000) included reviewers with a median quality score of 3 or lower; study 2 (April 2000-January 2002), reviewers with a median score of 4 or lower. Study 1 was designed with a power of 0.80 to detect a difference in score of 1; study 2, with a power of 0.80 to detect a difference of 0.5. All reviewers were at a peer-reviewed journal (Annals of Emergency Medicine). The main outcome measure was the editor's routine quality rating (1-5) of all reviews (blinded to study enrollment).

Results: For study 1, 51 reviewers were eligible and randomized, and 35 had sufficient data (182 reviews) for analysis. The mean individual reviewer rating change was 0.16 (95% confidence interval [CI], -0.26 to 0.58) for control and -0.13 (-0.49 to 0.23) for intervention. For study 2, 127 reviewers were eligible and randomized, and 95 had sufficient data (324 reviews). Controls had a mean individual rating change of 0.12 (95% CI, -0.20 to 0.26) and intervention reviewers, 0.06 (-0.19 to 0.31).

Conclusions: In study 1, minimal feedback from editors on review quality had no effect on the subsequent performance of poor-quality reviewers, and the trend was toward a negative effect. In study 2, feedback to average reviewers was more extensive and supportive but produced no improvement in reviewer performance. Simple written feedback to reviewers seems to be an ineffective educational tool.