Science and technology | Reproducing research

Try again

A large study replicates psychology experiments, but not most of their results

NO MATTER how exciting a newly published scientific paper is, or how respectable its authors are, the research it describes is of little value if its results cannot be reproduced by others. Yet for a study to be published in a journal, it need only be peer-reviewed by a few independent workers in the field and approved by an editor. If the author’s methods and conclusions seem reasonable to them, it can enter the academic literature and often go unchallenged by replication—for verifying someone else’s work is far less glamorous than coming up with your own findings. As a consequence, researchers might reasonably wonder how much of their discipline’s literature they can actually rely on. And now a team of psychologists has provided an answer, at least for their own particular field: not as much as you might hope.

Brian Nosek of the University of Virginia, the project’s co-ordinator, arranged for 270 colleagues to replicate parts of 100 psychology studies published in 2008. The results have just been published in Science.

To reproduce every selected study in its entirety would have been too costly and time consuming, so the researchers (working in 90 teams) picked for replication a single result within a study that they thought crucial to the conclusions of the paper they were investigating. The teams then contacted the original authors of the papers, to make sure the all-important gritty details of the method used were kept the same, repeated the experiment that had led to the chosen result’s alleged discovery, and recalculated the size and significance of the effect in question using the data that the repeat experiment had yielded.

For anyone who hopes for concrete answers from single studies, the project’s findings will not make comfortable reading. In only 47 of the 100 studies was the stated size of the result being investigated matched (within a 95% confidence interval) by the replicated finding. In only 39 did the team doing the replicating think, subjectively, that they had reproduced the original conclusions. And there was a clear bias in the direction of errors. In 82 of the repetitions (including those where the difference between the old and new results was within the confidence interval) the value of the effect in the repeated study was smaller than that in the initial finding.

In truth, these results will surprise few of those involved in research, for whom bias at the heart of academic publishing is an open secret. High-profile journals are more likely to accept articles that show new, positive results than ones which demonstrate no correlation or effect. Since the careers of researchers depend on getting their work published, the temptation to massage things, for example by removing inconvenient outliers that those concerned persuade themselves are freak results, can be overwhelming.

To complicate matters, ignoring outliers is sometimes good statistical practice. But another trick, known as “p-hacking”, is dodgier. The p-value of a statistical calculation is the probability that an effect at least as large as the one researchers have seen in an experiment would crop up by chance if the hypothesis being tested were wrong (the “p” in question stands for probability).
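To make that definition concrete, here is a toy permutation test in Python. The numbers are invented for the illustration, not drawn from any of the studies in the project: the point is simply that the p-value counts how often chance alone would produce a difference at least as large as the one observed.

```python
# A toy illustration of what a p-value measures (all numbers invented):
# how often would a difference at least as large as the observed one
# appear if chance alone were at work?
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are scores from a treatment group and a control group.
treatment = np.array([5.1, 4.8, 5.6, 5.9, 4.9, 5.4, 5.7, 5.2])
control   = np.array([4.6, 5.0, 4.4, 4.9, 4.7, 5.1, 4.5, 4.8])
observed = treatment.mean() - control.mean()

# Permutation test: if the group labels were meaningless (the hypothesis
# is wrong), shuffling them should only rarely produce a difference as
# big as "observed". The fraction of shuffles that do is the p-value.
pooled = np.concatenate([treatment, control])
n_shuffles = 10_000
count = 0
for _ in range(n_shuffles):
    rng.shuffle(pooled)
    diff = pooled[:len(treatment)].mean() - pooled[len(treatment):].mean()
    if abs(diff) >= abs(observed):
        count += 1

print(f"observed difference: {observed:.2f}")
print(f"p-value (two-sided): {count / n_shuffles:.3f}")
```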

The maximum acceptable value of p varies from discipline to discipline (p < 0.05 is usually enough for psychology), and previous work has shown that, regardless of field, there is a curious tendency for published observations to cluster just below this maximum. One reason is researchers’ tendency to trawl their data after the event, looking for significant results, rather than (as statistical rigour requires) deciding in advance what it is they will test. Another is that different tests may produce different p-values, permitting people to go “test-shopping”.
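A rough sketch of why test-shopping matters, again on purely hypothetical data: if two groups are drawn from the same distribution, any “effect” between them is noise, yet running several tests and keeping whichever gives the smallest p-value pushes the rate of spurious “significant” findings above the nominal 5%.

```python
# Sketch of "test-shopping" on pure noise (hypothetical data): keeping
# the smallest p-value from several tests inflates the false-positive
# rate above the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments = 5_000
hits_honest, hits_shopped = 0, 0

for _ in range(n_experiments):
    # Two groups drawn from the same distribution: any "effect" is chance.
    a = rng.normal(size=20)
    b = rng.normal(size=20)

    p_t = stats.ttest_ind(a, b).pvalue                    # the pre-specified test
    p_u = stats.mannwhitneyu(a, b).pvalue                 # a second test, tried after the fact
    p_w = stats.ttest_ind(a, b, equal_var=False).pvalue   # and a third variant

    hits_honest  += p_t < 0.05
    hits_shopped += min(p_t, p_u, p_w) < 0.05

print(f"false-positive rate, single pre-specified test: {hits_honest / n_experiments:.3f}")
print(f"false-positive rate, best of three tests:       {hits_shopped / n_experiments:.3f}")
```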

Dr Nosek’s project, then, emphasises the virtue of trying to reproduce results. Outright fraud is rare in science, but the burnishing of reputation is as common as in any other field of human endeavour. The scientific method will thus never be perfect, and this is a timely reminder that no paper should ever be treated, by itself, as the final word on anything.

Correction. The original version of this article stated that 100 teams of researchers were involved. In fact, only 90 were, with some investigating more than one original result. We have amended the text accordingly.
