Abstract
In the literature on repeated measures designs, it is common to find assumptions about the relative magnitude of pre-test and post-test score variances. Some researchers argue that the two variances are homoskedastic, justifying the use of the pooled standard deviation as the standardizer in effect size calculation. In contrast, others contend that post-test variance should be larger due to the influence of the treatment and therefore advocate standardizing solely on the pre-test standard deviation. We provide an empirical evaluation of these assumptions using a database of primary studies drawn from various meta-analyses in the field of clinical psychology. Specifically, we calculated the percentage of studies in which post-test score variance exceeds pre-test score variance, and vice versa. Additionally, we assessed the proportion of studies in which these differences are statistically significant. Finally, to gauge the practical relevance of this choice, we estimated the combined standardized mean change for each meta-analysis, comparing the results obtained with the pre-test standard deviation versus the pooled standard deviation as standardizer. We discuss the implications of the results of the study.
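For reference, the two standardizers under comparison can be written as follows. The abstract does not give the formulas, so this is a sketch assuming the standard formulations of the standardized mean change in the meta-analytic literature:

$$
d_{\mathrm{pre}} = \frac{\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}}}{S_{\mathrm{pre}}},
\qquad
d_{\mathrm{pooled}} = \frac{\bar{X}_{\mathrm{post}} - \bar{X}_{\mathrm{pre}}}{S_{\mathrm{pooled}}},
\qquad
S_{\mathrm{pooled}} = \sqrt{\frac{S_{\mathrm{pre}}^{2} + S_{\mathrm{post}}^{2}}{2}}.
$$

Because pre-test and post-test scores come from the same participants, testing the equality of the two variances requires a test for correlated variances. The abstract does not name the test used; the Morgan-Pitman test is one common choice and is shown here only as an assumption:

$$
t = \frac{(F - 1)\,\sqrt{n - 2}}{2\,\sqrt{F\,(1 - r^{2})}},
\qquad
F = \frac{S_{\mathrm{post}}^{2}}{S_{\mathrm{pre}}^{2}},
\qquad
df = n - 2,
$$

where $r$ is the pre-post correlation and $n$ the number of paired observations.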
Funding: MICIU/AEI/10.13039/501100011033 and FEDER funds, European Union, grant no. PID2022-137328NB-I00
| Poster | Evaluating assumptions on variance magnitude in repeated measures designs: an empirical evaluation |
|---|---|
| Authors | Manuel Jesús Albaladejo-Sánchez, José Antonio López-López, Julio Sánchez-Meca, Fulgencio Marín-Martínez, María Rubio-Aparicio, and Juan José López-García |
| Affiliation | University of Murcia (Spain) |
| Keywords | meta-analysis, effect size |