22–25 Jul 2025
EAM2025
Atlantic/Canary timezone

Recent Developments in Meta-Analysis

Not scheduled
1h 30m

Av. César Manrique, 38320 La Laguna, Santa Cruz de Tenerife

Speakers

Belén Fernández-Castilla, Franziska F. Rüffer, Hannelies de Jonge, Robbie van Aert, Rubén López Nicolás, Suzanne Jak (University of Amsterdam)

Description

Meta-analysis is a statistical technique that combines effect sizes from independent primary studies on the same topic, and is currently seen as the “gold standard” for synthesizing and summarizing results from multiple primary studies. The main research objectives of a meta-analysis are (i) estimating the average effect, (ii) assessing the heterogeneity of the true effect sizes, and, if the true effect size differs across studies, (iii) incorporating moderator variables in the meta-analysis to explain this heterogeneity.
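
For reference, the three objectives map onto the textbook random-effects and mixed-effects meta-analysis models (a standard formulation, not specific to any of the talks below):

$$y_i = \mu + u_i + \varepsilon_i, \qquad u_i \sim N(0, \tau^2), \qquad \varepsilon_i \sim N(0, v_i),$$

where $y_i$ is the observed effect size of study $i$ with known sampling variance $v_i$: objective (i) is estimating the average true effect $\mu$, objective (ii) is estimating and testing the between-study variance $\tau^2$, and objective (iii) replaces $\mu$ by $\beta_0 + \beta_1 x_i$ for a study-level moderator $x_i$, which gives the mixed-effects meta-regression model.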

All three research objectives are covered by the presentations in this symposium. Each presentation focuses on one or more key aspects of meta-analysis, such as transforming effect sizes, the robustness of meta-analytic conclusions, the consequences of publication bias, statistical modeling of complex meta-analytic data, and estimation and testing in moderator analyses.

The first two presentations focus on Meta-Analytic Structural Equation Modeling (MASEM), which has grown in popularity in recent years. The first presentation discusses how dichotomous variables can be included in MASEM by transforming standardized mean differences into point-biserial correlations. The second presentation introduces a new modeling approach for MASEM that mimics two-level structural equation models. The third presentation studies the state of the art of reporting practices in a large number of published meta-analyses and re-analyzes these meta-analyses to examine the robustness of their conclusions. The fourth and fifth presentations both focus on moderator analyses in meta-analysis. The fourth presentation compares different testing procedures for moderator analysis in a three-level meta-analysis model, and the fifth examines to what extent parameter estimation in moderator analyses is distorted by publication bias. The sixth presentation introduces a new method to correct for publication bias in multivariate and multilevel meta-analysis models.

Communication 3

Reporting Biases Analyses in Psychological Meta-analyses: Current Practices and Robustness of Conclusions

Abstract

Reporting biases are well-known phenomena that can undermine the credibility of published scientific findings and potentially distort meta-analytic effect estimates. These biases arise when the decision to publish or report results is influenced by their nature or direction. Traditionally, methods for assessing small-study effects and for evaluating the robustness of results against publication bias have been widely used to address this issue. In recent years, however, novel approaches to detecting and correcting for reporting biases have emerged and gained attention. This proliferation of methods presents challenges, as their sensitivity, specificity, and accuracy vary across conditions, with no single method consistently outperforming the others. Consequently, the wide availability of alternative methods could introduce researcher bias into these analyses, creating a paradox where reporting bias may itself be present in reporting bias assessments.

This project has two main aims. First, we investigated current practices in reporting bias analysis among recent meta-analyses. Second, we examined the potential impact of the variety of available approaches for reporting bias analyses on the robustness of conclusions.

We included meta-analyses published in Psychological Bulletin from January 2020 to May 10, 2024. The selected articles met the following criteria: (a) they included at least one meta-analysis, (b) they were not re-analyses of previously published meta-analyses, and (c) their unit of analysis in the synthesis was primary studies. Additionally, for the analyses related to the second aim, meta-analyses had to meet the following criteria: (a) the original data of the meta-analysis had to be openly available in a machine-readable format, (b) the meta-analysis had to be based on traditional standardized effect sizes for group differences or bivariate associations, and (c) reporting bias had to be assessed in the original meta-analysis. For the first aim, we collected data on the prevalence of reporting bias assessments, the methods used to assess reporting bias, the number of reporting bias methods applied, whether reporting bias assessments were pre-registered, any deviations from pre-registered protocols, and the conclusions reached regarding the presence of bias. For the second aim, we reanalyzed the primary data of the subset of meta-analyses meeting the second aim’s criteria using a set of pre-registered methods. The number of methods indicating the presence of bias, based on pre-registered criteria, was then counted, and these results were grouped according to the conclusions reached in the original meta-analyses.
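
To make the counting logic of the re-analysis concrete, here is a minimal sketch in R with the metafor package; the actual set of methods and decision criteria are those pre-registered by the authors, and a data frame dat with effect sizes yi and sampling variances vi is assumed:

# Apply several common reporting bias methods to one meta-analysis
# and tally how many of them signal possible bias.
library(metafor)

res <- rma(yi, vi, data = dat, method = "REML")   # random-effects model

flags <- c(
  egger = regtest(res)$pval < .05,    # Egger-type regression test for small-study effects
  rank  = ranktest(res)$pval < .05,   # Begg & Mazumdar rank correlation test
  trim  = trimfill(res)$k0 > 0        # trim-and-fill imputes "missing" studies
)

# Step function selection model; its printed output includes a
# likelihood ratio test of the selection parameters.
sel <- selmodel(res, type = "stepfun", steps = 0.025)

sum(flags)   # number of methods flagging possible reporting bias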

Communication 4

Comparing Type I error and power rates in meta-regression with multiple effect sizes: A study of analytical approaches

Abstract

Moderator analyses play a crucial role in meta-analysis, as they help to identify relationships between study characteristics and the effect size magnitude. When multiple effect sizes are reported within studies, various methods can be used to perform moderator analysis or meta-regression. These include three-level models (which may or may not account for variability in moderator effects across studies), Robust Variance Estimation (RVE) methods (with or without the wild bootstrapping technique), and multilevel models combined with RVE. In this study, we conducted a simulation to compare the performance of these methods in terms of Type I error rates and statistical power when performing meta-regressions, focusing specifically on qualitative moderator variables (such as study design or sample type). This focus arises from the common occurrence of unbalanced effect size distributions across moderator categories (i.e., most effect sizes belong to one category, while few belong to others), and it remains unclear which method performs best under these conditions. Additionally, we provide an empirical example of how these differences among methods affect real meta-analyses.

To simulate typical meta-analyses, we generated standardized mean differences under varying conditions, including the number of studies, the effect size differences across moderator categories, and the average number of outcomes per study, among other factors. We analyzed qualitative variables with two or three categories to represent study or effect size characteristics, and the effect sizes were distributed in balanced, unbalanced, or highly unbalanced ways across moderator categories. When simulating three categories, we also used Tukey’s multiple comparison correction to assess differences across categories.

Results showed that when the qualitative variable referred to effect size characteristics, the three-level model that did not account for moderator effect variability (the one commonly implemented in practice) had highly inflated Type I error rates, while the other methods maintained acceptable rates. Power was generally lower when the moderator referred to effect size characteristics and was minimally affected by unbalanced effect size distributions across categories. When the moderator referred to study characteristics, all methods exhibited acceptable Type I error rates, but power was inadequate, particularly when effect sizes were highly unbalanced. Across all conditions, three-level models combined with RVE provided the best balance between Type I error and power, although power remained very low.

In conclusion, this study suggests that, in the presence of multiple effect sizes within studies, multilevel models should always be applied with RVE correction when conducting meta-regressions. Additionally, further advancements are needed to generally improve power for detecting moderator effects.
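
A minimal sketch of the recommended combination in R with metafor is given below; the variable names are hypothetical, and the clubSandwich small-sample correction is one common implementation choice rather than necessarily the one used in the study:

# Three-level meta-regression (effect sizes nested within studies)
# followed by cluster-robust (RVE) inference on the moderator.
library(metafor)

m3 <- rma.mv(yi, vi, mods = ~ mod, random = ~ 1 | study/esid, data = dat)

# RVE with small-sample adjustment via the clubSandwich package:
robust(m3, cluster = dat$study, clubSandwich = TRUE)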

Communication 2

Fitting two-level structural equation models to meta-analytic data

Abstract

In a recent paper we presented a way of incorporating mean structures in meta-analytic structural equation modeling (MASEM). MASEM with means is applicable when the studies included in the meta-analysis used the same indicators, measured on the same scales. The meta-analytic data consist of the studies’ covariance matrices and mean vectors. The MASEM then restricts the vector of meta-analyzed means and covariances to the structure of the hypothesized SEM, and quantifies the heterogeneity of the model-implied covariances and means across studies. In this presentation we explain how the heterogeneity matrix of the model-implied means can be interpreted as what is often referred to as Σ_BETWEEN in two-level SEM, while the model-implied pooled covariance matrix can be interpreted as Σ_WITHIN. We illustrate how to fit SEM models to the heterogeneity matrix of the model-implied means in the R package OpenMx, and compare the results with those obtained from fitting two-level models directly on raw data in lavaan. These new modeling options have implications for meta-analytic research (e.g., extending the range of models that can be evaluated) as well as for two-level SEM (e.g., fitting models on summary statistics, flexibility in adding random effects).
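
For the two-level side of the comparison, a minimal lavaan sketch is shown below; the indicator names y1-y3 and the cluster variable study are hypothetical, and the OpenMx side (fitting SEMs to the MASEM heterogeneity and pooled matrices) follows the authors' paper and is not reproduced here:

# Two-level SEM on raw data, with studies as clusters. The within part
# corresponds conceptually to Sigma_WITHIN (pooled covariance structure)
# and the between part to Sigma_BETWEEN (heterogeneity of the means).
library(lavaan)

model <- '
  level: 1
    fw =~ y1 + y2 + y3    # within-study factor structure
  level: 2
    fb =~ y1 + y2 + y3    # between-study structure on the study means
'

fit <- sem(model, data = dat, cluster = "study")
summary(fit)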

Communication 1

Can We Include Dichotomous Variables in Meta-Analytic Structural Equation Modeling? Mind the Prevalence

Abstract

Meta-analytic structural equation modeling (MASEM) is a method to systematically synthesize results from primary studies, allowing researchers to simultaneously examine multiple relations among variables by fitting a structural equation model to the pooled correlations. Incorporating dichotomous variables (e.g., having a specific disease or not) into MASEM poses challenges. While primary studies that investigate the relation between a dichotomous and a continuous variable typically report standardized mean differences (e.g., Cohen’s d), standardized mean differences cannot be directly included in the specialized MASEM software; instead, MASEM typically uses correlation matrices as input. A proposed solution is to convert the standardized mean differences to point-biserial correlations. A complication arises because, in contrast to a standardized mean difference, the point-biserial correlation depends on the distribution of group membership. Through three Monte Carlo simulation studies, we investigated which conversion formula is suitable when one wants to include a dichotomous variable in MASEM. We varied the prevalence, the sampling plan, the within-study sample sizes, and the distribution of participants over the two groups. Our results show that which conversion is suitable, and which is not, depends on the aim of the meta-analyst. We have extended our freely available web application to fill the existing gap and to assist meta-analysts with their conversions.
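
For reference, one textbook conversion of a standardized mean difference d to a point-biserial correlation makes this dependence on group membership explicit; this is a common formula, not necessarily the variant the presentation ultimately recommends:

# r_pb = d / sqrt(d^2 + 1 / (p * (1 - p))), where p is the proportion
# of participants in group 1 (e.g., the prevalence of the disease).
d_to_rpb <- function(d, p) d / sqrt(d^2 + 1 / (p * (1 - p)))

d_to_rpb(0.5, 0.5)   # balanced groups: r of about 0.24
d_to_rpb(0.5, 0.1)   # 10% prevalence: r of about 0.15 (same d, smaller r)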

Communication 6

Correcting for publication bias in multivariate and multilevel meta-analysis: A multivariate step function selection model approach

Abstract

Univariate meta-analysis models assume that all effect sizes included in the meta-analysis are independent. This assumption is violated if, for example, a study reports two outcomes that are both of interest to the meta-analyst, or if a study reports multiple experiments administered by the same researchers in the same lab. Multivariate and multilevel meta-analysis models allow such dependent effect sizes to be modeled, and these models have recently gained popularity among meta-analysts in psychology.

One of the largest threats to multivariate and multilevel meta-analysis is publication bias, but there are currently no methods available that correct for publication bias in these models. Selection model approaches are nowadays frequently used to correct for publication bias in a meta-analysis. In this presentation, we extend the univariate step function selection model approach to multivariate and multilevel meta-analysis. We propose a strict and a more relaxed selection model, which assign a different publication probability to studies with only statistically significant outcomes and to studies with at least one significant outcome, respectively.

We illustrate how the multivariate step function selection model approach can be used in a sensitivity analysis by applying it to the data of a published multivariate and multilevel meta-analysis. Two simulation studies tailored to these applications show that the multivariate step function selection model approach outperforms multivariate and multilevel meta-analysis models that do not correct for publication bias. We conclude the presentation by offering guidance for applying the proposed method in practice and by discussing limitations of the method as well as opportunities for future development.
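
The multivariate extension itself is the new contribution of this talk; its univariate building block, however, is available in standard software. A sketch of the univariate step function selection model in R with metafor, assuming a data frame dat with yi and vi:

# Univariate step function selection model: studies receive different
# relative publication probabilities in the p-value intervals defined
# by the steps (here below .025, .025 to .05, and above .05).
library(metafor)

res <- rma(yi, vi, data = dat)                     # uncorrected model
sel <- selmodel(res, type = "stepfun", steps = c(0.025, 0.05))
sel   # bias-corrected average effect plus estimated selection weights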

Symposium title Recent Developments in Meta-Analysis
Coordinator Robbie C.M. van Aert
Affiliation Tilburg University
Keywords meta-analysis; MASEM; meta-regression; publication bias
Number of communications 6
Communication 1 Can We Include Dichotomous Variables in Meta-Analytic Structural Equation Modeling? Mind the Prevalence
Authors Hannelies de Jonge, Belén Fernández-Castilla, Suzanne Jak, & Kees-Jan Kan
Affiliation Leiden University
Communication 2 Fitting two-level structural equation models to meta-analytic data
Authors Suzanne Jak, Mike W.-L. Cheung
Affiliation University of Amsterdam
Communication 3 Reporting Biases Analyses in Psychological Meta-analyses: Current Practices and Robustness of Conclusions
Authors Rubén López Nicolás, Miguel A. Vadillo, Alejandro Sandoval-Lentisco
Communication 4 Comparing Type I error and power rates in meta-regression with multiple effect sizes: A study of analytical approaches
Authors Belén Fernández-Castilla, José A. López-López, María Rubio-Aparicio, Barbara González-Amado
Affiliation Universidad Nacional de Educación a Distancia
Communication 5 Correcting for Publication Bias in Moderator Effects: A Simulation Study
Authors Franziska F. Rüffer, Robbie C.M. van Aert, Marcel A.L.M. van Assen, Jelte M. Wicherts
Affiliation Tilburg University
Abstract Moderator analysis in meta-analysis is commonly used to study whether certain study characteristics can explain the heterogeneity in effect sizes. Understanding why effect sizes vary between contexts is important for selecting the right intervention for the right context and for guiding further research. In order to rely on the results from moderator analyses, the moderator effect estimates need to be unbiased. When publication bias is present, this cannot be guaranteed. Previous research has demonstrated that moderator effects in (mixed-effects) meta-regression may be either under- or overestimated, depending on the characteristics of the meta-analysis. In practice, one would not only like to understand the influence of publication bias on moderator effects but also how to correct for it. For this purpose, we have conducted an extensive simulation study to assess how well publication bias models can account for publication bias in moderator effect estimates. In total, 1026 simulation scenarios were generated by varying the true effect sizes, the amount of heterogeneity, the number of studies in the meta-analysis, the primary study sample sizes, and the amount and type of publication bias. We focused on generating estimates from a meta-regression model with either a single binary or a single continuous moderator, using the conventional mixed-effects meta-regression model that does not correct for publication bias as well as different publication bias models. The included publication bias models were step function selection models (Hedges, 1992; Hedges & Vevea, 1996), PET, PEESE, and a PEESE MRA that allows for different amounts of publication bias at each level of the moderator (Stanley, 2008; Stanley & Doucouliagos, 2014). In this talk, we will present the main results from this simulation study and give recommendations on which of these models can correctly account for publication bias in meta-regression analysis and in which contexts they are applicable. (A rough sketch of such bias-adjusted meta-regressions is given after this table.)
Communication 6 Correcting for publication bias in multivariate and multilevel meta-analysis: A multivariate step function selection model approach
Authors Robbie C.M. van Aert
Affiliation Tilburg University
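
As a rough sketch of the kinds of bias-adjusted meta-regressions compared in Communication 5, the code below fits PET- and PEESE-type models in R with metafor; the exact specifications (e.g., of the PEESE MRA with moderator-specific bias terms) follow the cited papers rather than this code, and a data frame dat with yi, vi, and a moderator mod is assumed:

# Publication-bias-adjusted meta-regression variants.
library(metafor)

dat$sei <- sqrt(dat$vi)                                   # standard errors

m_naive <- rma(yi, vi, mods = ~ mod, data = dat)          # no correction
m_pet   <- rma(yi, vi, mods = ~ mod + sei, data = dat)    # PET: adjust via SE
m_peese <- rma(yi, vi, mods = ~ mod + vi, data = dat)     # PEESE: adjust via variance
m_mra   <- rma(yi, vi, mods = ~ mod * vi, data = dat)     # bias term per moderator level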
