Description
Underpowered studies are ubiquitous in psychology and related disciplines. Meta-analysis can help alleviate this problem by combining the results of a set of primary studies, thereby increasing statistical power. However, this is not necessarily true under a random-effects model, which is currently the predominant approach to meta-analysis. In this study, we examined the statistical power of a sample of 141 meta-analyses on the effectiveness of clinical psychological interventions. Additionally, we compared the estimated statistical power of these meta-analyses with the power of the individual studies that comprised them, and computed the minimum number of primary studies needed to achieve 80% statistical power. To do so, we used several analytical approaches as well as a Monte Carlo approach, computing the power of random-effects meta-analyses under different values of the true effect size and different levels of heterogeneity. Our results show that under certain scenarios the test of the null hypothesis of no average effect is underpowered; these scenarios are characterised by small true effect sizes, high heterogeneity, and a small number of studies included in the meta-analysis. The statistical power of a meta-analysis can also be lower than the median, or even the maximum, power of the included primary studies. These results are discussed in light of the statistical basis of random-effects meta-analysis, and recommendations are made for applied researchers.

Funding: MICIU/AEI/10.13039/501100011033 and FEDER funds, European Union, grant no. PID2022-137328NB-I00
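The Monte Carlo idea described above can be sketched as follows. This is a minimal illustration, not the authors' actual simulation code: it assumes standardized mean differences as the effect size, equal per-arm sample sizes across studies, and the DerSimonian-Laird estimator of the between-study variance; all function and parameter names (`meta_power_mc`, `n_per_arm`, etc.) are invented for the example.

```python
import numpy as np

def meta_power_mc(k, delta, tau2, n_per_arm, n_sims=2000, seed=0):
    """Monte Carlo power of the random-effects z-test of H0: average effect = 0.

    k          number of primary studies in the meta-analysis
    delta      true average effect (standardized mean difference)
    tau2       between-study variance (heterogeneity)
    n_per_arm  sample size per arm in each primary study (assumed equal)
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        # Random-effects model: true study effects vary around delta
        theta = rng.normal(delta, np.sqrt(tau2), size=k)
        # Approximate within-study variance of a standardized mean difference
        v = 2.0 / n_per_arm + theta**2 / (4.0 * n_per_arm)
        # Observed study effects
        d = rng.normal(theta, np.sqrt(v))
        # DerSimonian-Laird estimate of tau^2
        w = 1.0 / v
        d_fe = np.sum(w * d) / np.sum(w)
        Q = np.sum(w * (d - d_fe) ** 2)
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2_hat = max(0.0, (Q - (k - 1)) / c)
        # Random-effects pooled estimate and its two-sided z-test at alpha = .05
        w_re = 1.0 / (v + tau2_hat)
        d_re = np.sum(w_re * d) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        if abs(d_re / se_re) > 1.96:
            rejections += 1
    return rejections / n_sims
```

Varying `k`, `delta`, and `tau2` in such a simulation reproduces the qualitative pattern the abstract reports: power drops when the true effect is small, heterogeneity is high, and few studies are included.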