Reporting biases are well-known phenomena that can undermine the credibility of published scientific findings and distort meta-analytic effect estimates. These biases arise when the decision to publish or report results is influenced by the nature or direction of those results. Traditionally, this issue has been addressed with widely used methods for assessing small-study effects and for evaluating the robustness of results against publication bias. In recent years, however, novel approaches to detecting and correcting for reporting biases have emerged and gained attention. This proliferation of methods presents challenges: their sensitivity, specificity, and accuracy vary across conditions, and no single method consistently outperforms the others. Consequently, the wide availability of alternative methods could introduce researcher bias into these analyses, creating a paradox in which reporting bias may itself be present in reporting bias assessments.
This project has two main aims. First, we investigated current practices in reporting bias analysis among recent meta-analyses. Second, we examined how the variety of available approaches to reporting bias analysis can affect the robustness of conclusions.
We included meta-analyses published in Psychological Bulletin from January 2020 to May 10, 2024. The selected articles met the following criteria: (a) they included at least one meta-analysis, (b) they were not re-analyses of previously published meta-analyses, and (c) their unit of analysis in the synthesis was primary studies. For the analyses related to the second aim, meta-analyses additionally had to meet the following criteria: (a) the original data of the meta-analysis were openly available in a machine-readable format, (b) the meta-analysis was based on traditional standardized effect sizes for group differences or bivariate associations, and (c) reporting bias was assessed in the original meta-analysis. For the first aim, we collected data on the prevalence of reporting bias assessments, the methods used to assess reporting bias, the number of reporting bias methods applied, whether reporting bias assessments were pre-registered, any deviations from pre-registered protocols, and the conclusions reached regarding the presence of bias. For the second aim, we reanalyzed the primary data of the qualifying subset of meta-analyses using a set of pre-registered methods. We then counted the number of methods indicating the presence of bias, based on pre-registered criteria, and grouped these results according to the conclusions reached in the original meta-analyses.
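To illustrate what a single reporting bias check in such a reanalysis might look like, the following sketch implements Egger's regression test from first principles: each study's standardized effect (effect divided by its standard error) is regressed on its precision (the inverse standard error), and an intercept far from zero suggests small-study effects. The abstract does not name the pre-registered methods, so the choice of Egger's test, the hypothetical data, and the rough |t| > 2 flagging cutoff are all illustrative assumptions, not the authors' protocol.

```python
import math

def egger_test(effects, std_errors):
    """Egger's regression test for small-study effects.

    Regresses the standardized effect (effect / SE) on precision (1 / SE);
    an intercept that deviates from zero suggests funnel plot asymmetry.
    Returns (intercept, t_statistic, flagged), flagging when |t| > 2
    (a rough cutoff used here instead of a proper t-distribution p-value).
    """
    x = [1.0 / se for se in std_errors]                      # precision
    y = [eff / se for eff, se in zip(effects, std_errors)]   # standardized effect
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in residuals) / (n - 2)            # residual variance
    se_intercept = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    t = intercept / se_intercept
    return intercept, t, abs(t) > 2.0

# Hypothetical data: small studies (large SE) report inflated effects.
biased = egger_test([0.31, 0.39, 0.52, 0.58, 0.70], [0.1, 0.2, 0.3, 0.4, 0.5])
# Hypothetical data: effect size unrelated to study size.
unbiased = egger_test([0.21, 0.19, 0.22, 0.18, 0.20], [0.1, 0.2, 0.3, 0.4, 0.5])
print(biased[2], unbiased[2])  # prints: True False
```

In the reanalysis described above, several such methods would be applied to each meta-analytic dataset and the number of methods flagging bias tallied; this sketch shows only one of them.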