
Princesses Are Bigger Than Elephants: Effect Size as a Category Error in Evidence-Based Education

Adrian J. Simpson
Published 2018 · Psychology

Much of the evidential basis for recent policy decisions is grounded in effect size: the standardised mean difference in outcome scores between a study's intervention and comparison groups. This is interpreted as measuring the educational influence, importance or effectiveness of the intervention. This article shows that this is a category error at two levels. At the individual study level, the intervention plays only a partial role in effect size, so treating effect size as a measure of the intervention is a mistake. At the meta‐analytic level, the assumptions needed for a valid comparison of the relative effectiveness of interventions on the basis of relative effect size are absurd. While effect size continues to have a role in research design, as a measure of the clarity of a study, policy makers should recognise the lack of a valid role for it in practical decision‐making.
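The effect size at issue is the standardised mean difference (Cohen's d): the gap between group means divided by a pooled standard deviation. As a minimal illustrative sketch (not from the paper; the function and sample data are assumptions for illustration only), the Python below computes d and shows the abstract's first point in miniature: an identical raw gain produces a much larger d when the sample happens to be more homogeneous, so d reflects properties of the whole study, not of the intervention alone.

```python
import statistics

def cohens_d(treatment, comparison):
    """Standardised mean difference: the raw mean difference divided by
    the pooled (n-1 weighted) standard deviation of the two groups."""
    n_t, n_c = len(treatment), len(comparison)
    pooled_var = ((n_t - 1) * statistics.variance(treatment) +
                  (n_c - 1) * statistics.variance(comparison)) / (n_t + n_c - 2)
    return (statistics.mean(treatment) - statistics.mean(comparison)) / pooled_var ** 0.5

# Hypothetical scores: both comparisons show the same 5-point raw gain.
wide_control, wide_treated     = [50, 60, 70, 80, 90], [55, 65, 75, 85, 95]
narrow_control, narrow_treated = [65, 68, 70, 72, 75], [70, 73, 75, 77, 80]

print(cohens_d(wide_treated, wide_control))      # ~0.32: heterogeneous sample
print(cohens_d(narrow_treated, narrow_control))  # ~1.31: same raw gain, restricted range
```

In this toy example the restricted-range comparison yields a d roughly four times larger despite an identical raw gain, which is exactly the comparability problem the abstract describes.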
This paper references
A. Kluger (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. doi:10.1037/0033-2909.119.2.254
Trey Martindale (2005). Effects of an Online Instructional Application on Reading and Mathematics Standardized Test Scores. doi:10.1080/15391523.2005.10782442
Alan C. K. Cheung (2016). How Methodological Features Affect Effect Sizes in Education. doi:10.3102/0013189X16656615
Adrian Simpson (2017). The misdirection of public policy: comparing and combining standardised effect sizes. doi:10.1080/02680939.2017.1280183
J. Cohen (1962). The statistical power of abnormal-social psychological research: a review. doi:10.1037/h0045186
Steve Higgins (2016). Communicating comparative findings from meta-analysis in educational research: some examples and suggestions. doi:10.1080/1743727X.2016.1166486
R. Kirk (1996). Practical Significance: A Concept Whose Time Has Come. doi:10.1177/0013164496056005002
J. Levin (1997). Overcoming feelings of powerlessness in "aging" researchers: A primer on statistical power in analysis of variance designs. doi:10.1037//0882-7974.12.1.84
Karen H. Larwin (2013). Assessing the Impact of Testing Aids on Post-Secondary Student Performance: A Meta-Analytic Investigation. doi:10.1007/s10648-013-9227-1
P. Bobko (2001). Correcting the Effect Size of d for Range Restriction and Unreliability. doi:10.1177/109442810141003
V. Shute (1990). A Large-Scale Evaluation of an Intelligent Discovery World: Smithtown. doi:10.1080/1049482900010104
C. Fitz‐Gibbon (1984). Meta‐analysis: an explication. doi:10.1080/0141192840100202
Y. Lou (2001). Small Group and Individual Learning with Technology: A Meta-Analysis. doi:10.3102/00346543071003449
John W. Luiten (1980). A Meta-analysis of the Effects of Advance Organizers on Learning and Retention. doi:10.3102/00028312017002211
H. Eysenck (1984). Meta-Analysis: an Abuse of Research Integration. doi:10.1177/002246698401800106
Richard A. Berk (2011). Evidence-Based Versus Junk-Based Evaluation Research. doi:10.1177/0193841X11419281
F. Mathy (2013). Similarity-Dissimilarity Competition in Disjunctive Classification Tasks. doi:10.3389/fpsyg.2013.00026
L. Springer (1997). Effects of Small-Group Learning on Undergraduates in Science, Mathematics, Engineering, and Technology: A Meta-Analysis. doi:10.3102/00346543069001021
Saiying Steenbergen-Hu (2014). A meta-analysis of the effectiveness of intelligent tutoring systems on college students' academic learning. doi:10.1037/a0034752
Michael Schneider (2017). Variables Associated With Achievement in Higher Education: A Systematic Review of Meta-Analyses. doi:10.1037/bul0000098
Tim N. Höffler (2007). Instructional animation versus static pictures: A meta-analysis. doi:10.1016/j.learninstruc.2007.09.013
John Hattie (2017). Educators are not uncritical believers of a cult figure. doi:10.1080/13632434.2017.1343655
R. Bangert-Drowns (1991). Effects of Frequent Classroom Testing. doi:10.1080/00220671.1991.10702818
Stephen Gorard (2017). What works and what fails? Evidence from seven popular literacy ‘catch-up’ schemes for the transition to secondary school in England. doi:10.1080/02671522.2016.1225811
B. Wasik (1993). Preventing Early Reading Failure with One-to-One Tutoring: A Review of Five Programs. doi:10.2307/747888
R. F. Gray (1971). An Experimental Study of the Relationship of Homework to Pupil Success in Computation With Fractions. doi:10.1111/j.1949-8594.1971.tb15466.x
Chen-Lin C. Kulik (1981). Effects of Ability Grouping on Secondary School Students: A Meta-analysis of Evaluation Findings. doi:10.3102/00028312019003415
Thom Baguley (2009). Standardized or simple effect size: what should be reported? doi:10.1348/000712608X377117
R. Slavin (2002). Evidence-Based Education Policies: Transforming Educational Practice and Research. doi:10.3102/0013189X031007015
Rosanne A. Paschal (1984). The effects of homework on learning: A quantitative synthesis. doi:10.1080/00220671.1984.10885581
M. A. Ruiz-Primo (2002). On the evaluation of systemic science education reform: Searching for instructional sensitivity. doi:10.1002/tea.10027
John W. Jerrim (2018). Does Teaching Children How to Play Cognitively Demanding Games Improve Their Educational Attainment? doi:10.3368/jhr.53.4.0516.7952R


