How accurate are the conclusions drawn from meta-analysis?

Modified on Mon, 14 Oct at 1:04 PM

The findings derived from any meta-analysis are only as good as the individual research studies selected for the review. Whether study quality acts as a moderator is an empirical question. As stated in Visible Learning, Lipsey and Wilson (1993), for example, summarized 302 meta-analyses in psychology and education and found no differences between studies with random versus non-random designs (d = .46 vs. d = .41), or between high-quality (d = .40) and low-quality (d = .37) studies. Published studies showed an upward bias (d = .53) compared with unpublished studies (d = .39), although sample size was unrelated to effect size (d = -.03). Further, Sipe and Curlette (1996) found no relationship between the overall effect size of 97 meta-analyses (d = .34) and sample size, number of variables coded, or type of research design, and only a slight increase for published (d = .46) versus unpublished (d = .36) meta-analyses.
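To illustrate what a moderator comparison looks like in practice, the sketch below averages effect sizes within each level of a study-quality moderator. The individual d values are hypothetical, chosen only so the subgroup means land near the d = .40 versus d = .37 contrast reported above; they are not Lipsey and Wilson's data.

```python
# Hypothetical effect sizes (d) grouped by a study-quality moderator.
# These values are illustrative, not taken from any cited meta-analysis.
effect_sizes = {
    "high_quality": [0.42, 0.38, 0.45, 0.35],
    "low_quality":  [0.40, 0.33, 0.36, 0.39],
}

# Quality acts as a moderator only if the subgroup means differ meaningfully.
means = {group: sum(ds) / len(ds) for group, ds in effect_sizes.items()}
for group, mean_d in means.items():
    print(f"{group}: mean d = {mean_d:.2f}")
```

In a full moderator analysis the subgroup means would also be weighted by study precision and tested for a significant difference; the unweighted means here are only the simplest form of the comparison.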


There is one exception, which can be predicted from the principles of statistical power: if the effect sizes are close to zero, then the confidence that can be placed in the effect depends largely on the sample size (see Cohen, 1988, 1990). The aim should be to summarize all possible studies regardless of their design and then ascertain whether quality moderates the final conclusions. For example, in Visible Learning (2009) Hattie noted when quality was a moderator.
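Cohen's point about power and near-zero effects can be sketched with the standard normal-approximation formula for the sample size needed per group to detect an effect of size d. This is a rough sketch; the alpha and power values are conventional defaults, not figures from the text.

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group needed to detect effect size d
    in a two-sided two-sample comparison (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_power = NormalDist().inv_cdf(power)          # value for desired power
    return math.ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A near-zero effect demands a far larger sample to detect with confidence.
print(n_per_group(0.4))  # moderate effect: roughly a hundred per group
print(n_per_group(0.1))  # small effect: about sixteen times as many
```

Because the required sample grows with 1/d², an effect near zero can only be established with high confidence when samples are very large, which is why sample size matters most precisely for small effects.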


The research that John Hattie used in Visible Learning (2009) was based on over 1,100 meta-analyses drawn from carefully selected articles and books that met the criteria of having been conducted using rigorous methodological frameworks and robust analysis of the findings. Details supporting the validity, reliability and error associated with the tools used to measure each intervention were also critically reviewed. Further, the overwhelming majority of the research had been conducted by researchers who are considered experts in their field. Given these selection criteria, Professor John Hattie has full confidence in the integrity of the data used for his meta-analysis review.
