A Simple Method For Detecting And Adjusting Meta-Analyses For Publication Bias
F Richy, J Reginster
Keywords
adjustment methods, meta-analysis, publication bias
Citation
F Richy, J Reginster. A Simple Method For Detecting And Adjusting Meta-Analyses For Publication Bias. The Internet Journal of Epidemiology. 2005 Volume 3 Number 2.
Abstract
Our method treats each study in a funnel plot as exerting a “moment of force”, the product of its precision and its distance from a fulcrum, weighting on the global estimate provided by classic meta-analysis. When a significant imbalance is found (i.e. when the required moment of force exceeds the 95% confidence interval around equilibrium), the system can be counterbalanced by generating the “missing studies”, taken from the mirrored set of actual studies. The method was compared with the linear regression approach using a simulated dataset in which we intentionally induced publication bias, and subsequently on two datasets from previous meta-analyses by our group.
Background
The use of a biased set of studies, in terms of completeness or effect magnitude, has been regularly cited as the main criticism against meta-analysis since its development in the late seventies1,2,3,4. Indeed, using a non-representative set of studies is prone to lead to an overestimation of the “true” intervention effect. The initial report of publication bias dates to 1959, when Sterling noticed that 97% of articles published in four journals reported statistically significant findings, raising the likelihood that studies lacking significant differences were not being published5. Funnel plot techniques for evaluating the probability of publication bias were initiated in 1984 by Light and colleagues6 and were introduced for the first time in formal research in 1988 by Vandenbroucke and colleagues7. This graphic approach to publication bias has been developed mainly thanks to the work of Matthias Egger and colleagues3,8. Its grounds are scatter plots of the treatment effects estimated from individual studies (horizontal axis) against some measure of study size (vertical axis). Because precision in estimating the underlying treatment effect increases as a study's sample size increases, effect estimates from small studies scatter more widely at the bottom of the graph, with the spread narrowing among larger studies. In the absence of bias, the plot therefore resembles a symmetrical inverted funnel. Asymmetrical funnel plots may indicate publication bias or be due to exaggeration of treatment effects in small studies of low quality (small-study effects). Indeed, we have arguments favouring an additional probabilistic nature of the small-study effect, distinct from publication bias9.
Sterne and colleagues3,10 and Thornton and colleagues2 have recently reviewed the available methods to detect publication bias and, further, to adjust the global estimates of meta-analyses for it. Several researchers have proposed methods of various complexities and requirements to this end. The association between treatment effect size and its standard error, unrelated to sample size, is the keystone of two statistical methods, a rank correlation test11 and a regression method8, which have been proposed as a means of avoiding the subjectivity associated with visual assessment of funnel plots. However, the validity of these methods has been questioned12,13,14. Other methods, like trim and fill15, have been proposed to adjust the global estimate for funnel plot asymmetry. Trim and fill builds on the key idea behind the funnel plot: that in the absence of bias the plot would be symmetric about the summary effect. If there are more small studies on the right than on the left, the concern is that studies may be missing from the left. The trim and fill procedure imputes these missing studies, adds them to the analysis, and then re-computes the summary effect size. However, simulation studies have found that the trim and fill method may suffer from low specificity, i.e. it might detect “missing” studies in a substantial proportion of meta-analyses even in the absence of bias16. There is therefore a danger of over-correcting non-existent bias in response to funnel plot asymmetry arising from nothing more than random variation. Thus, the current consensus is that correction of treatment effect estimates for bias should be avoided, since such corrections may depend heavily on the assumptions made10,17.
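To make the key idea concrete, the following Python sketch illustrates the “fill” step only, under strong simplifying assumptions: the number of missing studies is supplied by hand rather than estimated by Duval and Tweedie's iterative trimming, and a fixed-effect summary is used throughout. The function names are ours, not from any published implementation.

```python
import numpy as np

def fixed_effect(effects, se):
    """Inverse-variance fixed-effect summary of log effect sizes."""
    w = 1.0 / se**2
    return np.sum(w * effects) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def naive_fill(effects, se, n_missing):
    """Illustration of the 'fill' step only: mirror the n_missing
    most extreme right-hand studies about the summary effect and
    re-estimate.  Duval and Tweedie estimate n_missing iteratively;
    here it is supplied by hand for clarity."""
    est, _ = fixed_effect(effects, se)
    rightmost = np.argsort(effects - est)[::-1][:n_missing]
    mirrored = 2.0 * est - effects[rightmost]        # reflect about the summary
    filled_effects = np.concatenate([effects, mirrored])
    filled_se = np.concatenate([se, se[rightmost]])  # same precision as the originals
    return fixed_effect(filled_effects, filled_se)
```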
The aim of this paper was to describe an innovative, easy and systematically applicable method for simultaneously detecting and correcting for publication bias in meta-analyses of published data.
Methods
Our method uses the principle of the “moment of force” (MF), familiar from elementary physics. The moment of a force is the product of the force magnitude and its distance from the fulcrum of the considered lever: the moment of the force f1 (in newtons) acting at distance d1 (in metres) is f1 × d1 newton-metres. For instance, a 10 N force at 2 m balances a 4 N force at 5 m on the other side, since 10 × 2 = 4 × 5. The balance is reached when the moment of the force f1 (MF1) equals the moment of the force f2 (MF2):
MF1 = MF2, i.e. f1 × d1 = f2 × d2

Generalised to a funnel plot, with the fulcrum placed at the Ln(global estimate), equilibrium requires:

∑[(1/Std ERR(Ln RRi)) × di] over studies below the global estimate = ∑[(1/Std ERR(Ln RRj)) × dj] over studies above it,

where di = |Ln(RRi) − Ln(global estimate)|.
Our approach for detecting and correcting for publication bias is based on this simple principle. In a classic funnel plot of relative risks or odds ratios, the Ln(estimate) is plotted against its precision (1/Std ERR(Ln estimate)). In our model, the fulcrum is located below the value of the Ln(global estimate) provided by the meta-analysis. Each study has a specific moment of force on the system, represented by the product of its precision (the force applied to the system) and its distance from the fulcrum (the difference between its Ln(estimate) and the Ln(global estimate)).
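As a minimal sketch of this definition, the following Python code computes the moment of force of each study and the net moment on the system, assuming the fulcrum sits at the Ln(global estimate) and that precision is 1/Std ERR as above. The function names, and the name X for the net moment, are ours; the paper's non-parametric 95% confidence interval around equilibrium is not detailed in the text, so the sketch stops at computing X.

```python
import numpy as np

def moments_of_force(ln_rr, se_ln_rr, ln_global):
    """Moment of each study about the fulcrum at the global estimate:
    precision (the force) times signed distance (the lever arm)."""
    precision = 1.0 / se_ln_rr            # force applied by the study
    distance = ln_rr - ln_global          # signed lever arm
    return precision * distance

def net_moment(ln_rr, se_ln_rr, ln_global):
    """The imbalance parameter X: zero when the moments below and
    above the global estimate balance exactly."""
    return float(np.sum(moments_of_force(ln_rr, se_ln_rr, ln_global)))
```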
A simple approach to determine the number of missing studies is to divide the imbalance parameter X (the net moment of force on the system) by the highest moment of force among the existing “negative” studies: Nmissing = |X| / max|MFnegative|.
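A sketch of this counting rule, together with the mirroring step described in the abstract, might look as follows. Which studies are reflected is not spelled out in the text, so reflecting the studies with the largest positive moments is our assumption, as is rounding 1.58 up to 2 with a ceiling.

```python
import numpy as np

def n_missing(X, mf):
    """Divide the imbalance X by the largest moment of force among
    the existing 'negative' studies, then round up (1.58 -> 2)."""
    largest_negative = np.max(np.abs(mf[mf < 0]))
    return int(np.ceil(abs(X) / largest_negative))

def generate_missing_studies(ln_rr, se_ln_rr, ln_global, k):
    """Counterbalance the system by mirroring k studies about the
    fulcrum (assumed: the k studies with the largest positive moments)."""
    mf = moments_of_force(ln_rr, se_ln_rr, ln_global)  # defined above
    idx = np.argsort(mf)[::-1][:k]
    return 2.0 * ln_global - ln_rr[idx], se_ln_rr[idx]
```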
We compared our results with a formal statistical test of funnel plot asymmetry8, shown to be more powerful than the rank correlation method3. In this linear regression method, the standard normal deviates of the Ln(RRi) (defined as Ln(RRi) divided by its standard error) are regressed against their precisions (1/Std error of the Ln(RRi)). The main assumption is that, in the absence of publication bias, the regression line runs through the origin (intercept = 0); the intercept is tested against zero at p < 0.10.
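For comparison, a minimal implementation of this regression test in Python (assuming SciPy ≥ 1.6 for the intercept_stderr attribute) could read:

```python
import numpy as np
from scipy import stats

def egger_test(ln_rr, se_ln_rr):
    """Regress the standard normal deviate SND = Ln(RR)/SE against
    precision 1/SE and test whether the intercept differs from zero."""
    snd = ln_rr / se_ln_rr
    precision = 1.0 / se_ln_rr
    res = stats.linregress(precision, snd)
    t = res.intercept / res.intercept_stderr
    p = 2.0 * stats.t.sf(abs(t), len(ln_rr) - 2)  # two-sided p for the intercept
    return res.intercept, p                       # asymmetry suspected if p < 0.10
```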
The validation of a method to detect publication bias remains difficult in the absence of a “gold standard” set of data in which a prevalent publication bias can be formally excluded. One approach would consist in working with paired data: large RCTs and their corresponding meta-analyses. However, this approach suffers from several limitations. Using such pairs induces variability in the appraisal of their concordance, since large RCTs may themselves be victims of publication bias, as outlined by the recent controversy over paroxetine18, and because of the lack of association between precision and sample size8. Furthermore, small studies and large ones may not reflect the same treatment effect, owing to differences in baseline risk among included patients, co-medications, duration, etc.3
In order to test our method in an evidence-based way, we initially applied it to a set of 20 studies that we generated using an iterative algorithm simulating the estimates and the precisions of twenty trials, under a condition of strict equivalence (difference < 0.0001) between the moments of force below and above the corresponding global estimate given by the meta-analysis (figure 1). As a second step, we applied our method to actual datasets extracted from our previous works on glucosamine and chondroïtin19 and on D-analogs20.
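The paper does not publish the generating algorithm, so the following sketch enforces the same equilibrium condition in a simpler way: after drawing twenty (Ln RR, SE) pairs, it shifts the last study's Ln(RR) in closed form so that the net moment about the fixed-effect estimate is exactly zero (the net moment is linear in any single Ln(RR), so one solve suffices). All names and distributional choices are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_balanced_trials(n=20, true_ln_rr=0.0):
    """Draw n trials, then shift the last Ln(RR) so the moments of
    force below and above the global estimate balance exactly."""
    se = rng.uniform(0.05, 0.5, n)              # assumed range of standard errors
    ln_rr = rng.normal(true_ln_rr, se)
    p, w = 1.0 / se, 1.0 / se**2
    g = np.sum(w * ln_rr) / np.sum(w)           # fixed-effect global estimate
    net = np.sum(p * (ln_rr - g))               # current imbalance X
    # the net moment is linear in ln_rr[-1]; solve for the shift that zeroes it
    ln_rr[-1] -= net / (p[-1] - p.sum() * w[-1] / w.sum())
    return ln_rr, se
```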
All development operations were done using Microsoft Excel and Visual Basic (® Microsoft Corporation, 1985-2005) and Statistica 7.0 (StatSoft, France, 2005). The meta-analytic calculations were performed using Comprehensive Meta-Analysis 1.0.25 and 2.0 (Biostat, USA, 2002-2006).
Results
Simulated set of data
The meta-analysis of this dataset provided the following statistics:
We sequentially removed studies #1 to #6 at the extreme left side of the distribution to simulate an increasing publication bias favouring the treatment effect. At each step we computed the corresponding global estimate and the tests for association and heterogeneity, and compared the linear regression test for publication bias to our method.
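Reusing fixed_effect and net_moment from the sketches above, this removal experiment can be scripted as follows (a sketch; the printed columns are ours):

```python
import numpy as np

def removal_sensitivity(ln_rr, se_ln_rr, max_remove=6):
    """Drop the leftmost (most negative) studies one at a time and
    recompute the global estimate and the net moment X."""
    order = np.argsort(ln_rr)                   # leftmost studies first
    for k in range(1, max_remove + 1):
        keep = np.setdiff1d(np.arange(len(ln_rr)), order[:k])
        g, _ = fixed_effect(ln_rr[keep], se_ln_rr[keep])
        X = net_moment(ln_rr[keep], se_ln_rr[keep], g)
        print(f"{k} removed: RR = {np.exp(g):.3f}, X = {X:.2f}")
```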
After removal of two or more studies, and using our method, the system was no longer balanced, since a moment of force of -6.02 was needed to restore equilibrium, which was outside the 95% confidence interval around the balance point (-3.80; 3.90). Using this dataset, our method displayed a better predictive ability than the linear regression method, which was not able to detect any publication bias.
When two negative studies were actually removed from the original set of 20 studies, the number of missing studies provided by our method was 1.58, rounded to 2. Accordingly, the following data were added:
The corresponding funnel plot became:
In the situation of a “massive” publication bias (six studies removed), the following studies were added:
The corresponding funnel plot became:
Application of the method to existing datasets
The first set of data we used came from our meta-analysis on glucosamine and chondroïtin19. Using this dataset and the outcome “responders to treatment vs placebo”, the linear regression method provided borderline results: intercept = 1.67 at p = 0.107.
Our method provided a global moment of force of X = -5.59. The 95% confidence interval around the equilibrium was (-9.90; 5.55), reflecting a non-significant publication bias. The number of missing studies was 5.59/9.90 = 0.56.
When applying the two methods to our dataset on the antifracture efficacy of the D-analogs alfacalcidol and calcitriol20, the linear regression method led to non-significant results (intercept = -0.03, p = 0.94), and our method provided a global moment of force of 5.78; the 95% confidence interval around the equilibrium was (-7.56; 7.70), also reflecting a non-significant publication bias. The number of estimated missing studies was 5.78/7.70 = 0.75.
Discussion
Concerns about publication bias have been regularly discussed since the mid-eighties. Because this bias may affect the two main components of evidence-based medicine (RCTs and their meta-analyses), it has been argued that all results of meta-analyses with asymmetrical funnel plots should be regarded with caution21,22. Although the development of methods for assessing publication bias has a reasonably long history, these methods are rarely used in practice. One possible reason for this lack of uptake is that previous approaches have involved modelling methods that are difficult to implement and require lengthy calculations.
Since the remarkable publications by Matthias Egger and colleagues8,23,24,25,26, the potentially serious consequences of publication bias have been recognised, and there have been repeated calls for worldwide registration of all ongoing clinical trials27,28,29,30. Unfortunately, it is unlikely that this will be widely instituted in the foreseeable future. This is why publication bias analysis and correction methods have to become essential, systematic parts of meta-analyses.
Some cautionary remarks are needed when assessing tests for publication bias. Since the vast majority of methods are based on the lack of symmetry in the funnel plot, and asymmetry might be due to factors other than publication bias3,31, the results produced by some methods may not always reflect detection of, or correction for, publication bias.
In the present paper, we developed and applied an innovative, simple method able to detect and correct for publication bias that does not rely on symmetry or correlation assumptions. On the basis of the present evidence, it might have several advantages over the linear regression or other, much more complicated methods; it may be the simplest, yet a highly effective, approach. Indeed, far more sophisticated methods have been developed, including trim-and-fill15, rank correlation methods11 and other likelihood maximisation methods, which may be beyond the expertise of non-mathematicians involved in meta-analysis. Importantly, our approach does not postulate that the distribution of the funnel plot has to be strictly symmetrical. We have to keep in mind that, when analysing funnel plots, we are generally dealing with restricted numbers of studies assessing a true treatment effect with a random error around it. The probability of obtaining a perfectly symmetrical funnel plot is actually low, even in the absence of publication bias, and this fact limits the power of tests based on this assumption. Our model bypasses this limitation, the only hypothesis being that the relative weights of positive and negative evidence are equal, whatever their distribution and level of significance. Using a non-parametric construction of the confidence interval around the fulcrum may be better adapted to variable distribution patterns than formally comparing the obtained values to a predefined distribution (e.g. chi-square). Last but not least, our method seems to be more powerful in predicting publication bias than the regression method, while allowing the meta-analysis estimates to be corrected easily. However, the nature of the selection mechanism, the range of variances of the effect size estimates, and the true underlying effect size have all been observed to influence the power of such tests. In many of the configurations in which there was low power, there was also relatively little bias in the summary effect size estimate. Nonetheless, the authors of these simulation studies stated that such tests should be interpreted with caution in small meta-analyses; in particular, bias cannot be ruled out if the test is not significant.
Any adjustment method should be used primarily as a form of sensitivity analysis, together with meta-regression techniques. A correction of the global estimate should be made only when covariates do not significantly impact it, since some may be confounding factors.
Finally, it should be noted that articles on publication bias are susceptible to such discrimination themselves: papers reporting a failure to find publication bias may have a lower chance of being submitted and accepted for publication.
Correspondence to
Professor Florent RICHY University of Liège CHU, B23 B-4000 SART-TILMAN BELGIUM Tel: +32 4 366 2581 Fax: +32 4 366 2812 Email: florent.richy@ulg.ac.be