Speakers
Description
Moderator analysis in meta-analysis is commonly used to study whether certain study characteristics can explain the heterogeneity in effect sizes. Understanding why effect sizes vary between contexts is important for selecting the right intervention for the right context and for guiding further research. To rely on the results of moderator analyses, the moderator effect estimates need to be unbiased; when publication bias is present, this cannot be guaranteed. Previous research has demonstrated that moderator effects in (mixed-effects) meta-regression may be either under- or overestimated, depending on the characteristics of the meta-analysis. In practice, one would like not only to understand the influence of publication bias on moderator effects but also to correct for it. For this purpose, we conducted an extensive simulation study to assess how well publication bias models can account for publication bias in moderator effect estimates. In total, 1,728 simulation scenarios were generated by varying the true effect sizes, the amount of heterogeneity, the number of studies in the meta-analysis, the primary study sample sizes, and the amount and type of publication bias. We focused on estimates from a meta-regression model with either a single binary or a single continuous moderator, obtained with the conventional mixed-effects meta-regression model that does not correct for publication bias and with several publication bias models. The included publication bias models were step-function selection models (Hedges, 1992; Hedges & Vevea, 1996), PET, PEESE, and PEESE-MRA, which allows for different amounts of publication bias at each level of the moderator (Stanley, 2008; Stanley & Doucouliagos, 2014). In this talk, we will present the main results from this simulation study and give recommendations on which of these models can correctly account for publication bias in meta-regression analysis and in which contexts they are applicable.
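To illustrate the PET and PEESE adjustments mentioned above, the following is a minimal sketch in Python, not the authors' simulation code: PET regresses the observed effect sizes on their standard errors via weighted least squares (weights 1/SE^2), PEESE uses the sampling variances (SE^2) as the predictor, and in both cases the intercept serves as the bias-corrected effect estimate. The simulated data (true effect 0.3 plus a small-study effect) are an assumption for demonstration only.

```python
import numpy as np

# Hypothetical data: 40 studies with a true effect of 0.3 and a
# small-study effect (effects increase with the standard error)
rng = np.random.default_rng(0)
k = 40
se = rng.uniform(0.05, 0.4, k)           # per-study standard errors
y = 0.3 + 0.5 * se + rng.normal(0.0, se)  # observed effect sizes

def wls_intercept(y, predictor, se):
    """Weighted least squares of y on the predictor with weights 1/SE^2.

    Returns the intercept, which is the bias-corrected effect estimate
    in the PET/PEESE framework.
    """
    X = np.column_stack([np.ones_like(predictor), predictor])
    W = np.diag(1.0 / se**2)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

pet = wls_intercept(y, se, se)        # PET: predictor is the standard error
peese = wls_intercept(y, se**2, se)   # PEESE: predictor is the variance
print(f"PET intercept: {pet:.3f}, PEESE intercept: {peese:.3f}")
```

The PEESE-MRA variant discussed in the talk would extend this design matrix with moderator terms and moderator-by-SE^2 interactions, allowing the estimated bias to differ across moderator levels.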