Science & technology | When science goes wrong (I)

Computer says: oops

Two studies, one on neuroscience and one on palaeoclimatology, cast doubt on established results. First, neuroscience and the reliability of brain scanning

NOBODY knows how the brain works. But researchers are trying to find out. One of the most eye-catching weapons in their arsenal is functional magnetic-resonance imaging (fMRI). In this, MRI scanners normally employed for diagnosis are used instead to study the brains of volunteers. By watching those brains as their owners carry out certain tasks, neuroscientists hope to get some idea of which bits of the brain specialise in doing what.

The results look impressive. Thousands of papers have been published, from workmanlike investigations of the role of certain brain regions in, say, recalling directions or reading the emotions of others, to spectacular treatises extolling the use of fMRI to detect lies, to work out what people are dreaming about or even to deduce whether someone truly believes in God.

But the technology has its critics. Many worry that dramatic conclusions are being drawn from small samples (the faff involved in fMRI makes large studies hard). Others fret about over-interpreting the tiny changes the technique picks up. A deliberately provocative paper published in 2009, for example, found apparent activity in the brain of a dead salmon. Now, researchers in Sweden have added to the doubts. As they reported in the Proceedings of the National Academy of Sciences, a team led by Anders Eklund at Linköping University has found that the computer programs used by fMRI researchers to interpret what is going on in their volunteers’ brains appear to be seriously flawed.

fMRI works by monitoring blood flow in the brain. The idea behind this is that thinking, like any other bodily function, is hard work. The neurons doing the thinking require oxygen and glucose, which are supplied by the blood. The powerful magnetic fields generated by an MRI machine are capable of distinguishing between the oxygenated and deoxygenated states of haemoglobin, the molecule which gives red blood cells their colour and which is responsible for shepherding oxygen around the body. Monitoring haemoglobin therefore monitors how much oxygen brain cells are using, which in turn is a proxy for how hard they are working.
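To make that proxy concrete, the sketch below (in Python, a classroom illustration rather than anything from a real analysis pipeline) models the blood-flow signal in the conventional way: a burst of neural activity convolved with a textbook “haemodynamic response function”. The double-gamma shape and all the numbers are assumptions made for the example, not the model of any particular scanner or package.

```python
import numpy as np
from scipy.stats import gamma

# Classroom double-gamma haemodynamic response function: an assumed,
# textbook shape for how blood oxygenation lags neural activity.
def hrf(t):
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

t = np.arange(0.0, 30.0)          # seconds after a burst of neural firing
activity = np.zeros(120)          # two minutes of scanning, sampled each second
activity[20:30] = 1.0             # ten seconds of hard thinking
bold = np.convolve(activity, hrf(t))[:120]   # the sluggish signal fMRI sees

print(f"activity begins at t=20s; BOLD signal peaks near t={bold.argmax()}s")
```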

I want to look inside your head

In an fMRI study, an image of a brain is divided into a large number of tiny “voxels”—three-dimensional versions of the familiar pixels that make up a digital image. Computer algorithms then hunt for changes both in individual voxels and in clumps, or “clusters”, of them. It was in that cluster-level analysis that Dr Eklund and his colleagues found the problems.
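The sketch below, in Python with made-up numbers in place of real scanner output, shows the general shape of such an analysis: threshold the map voxel by voxel, then group neighbouring survivors into clusters and report only the large ones. The threshold and the critical cluster size here are arbitrary choices for illustration, not those used by any real package.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Made-up statistical map: one t-value per voxel of a 64 x 64 x 40 scan
# (a real map would come out of an fMRI analysis package).
stat_map = rng.normal(size=(64, 64, 40))

# Voxel step: keep only voxels whose statistic crosses a threshold.
above = stat_map > 3.0

# Clump step: group neighbouring supra-threshold voxels into clusters.
labels, n_clusters = ndimage.label(above)
sizes = ndimage.sum(above, labels, index=range(1, n_clusters + 1))

# Only clusters exceeding some critical size get reported as activity.
# It is the statistics behind that critical size, Dr Eklund's team found,
# which rest on faulty assumptions.
critical_size = 5
big = [s for s in sizes if s >= critical_size]
print(f"{n_clusters} clusters found; {len(big)} large enough to report")
```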

To perform their test, they downloaded data from old fMRI studies—specifically, information from 499 resting volunteers who were being scanned while not thinking about anything in particular (these scans were intended for use as controls in the original papers). The researchers divided their trove arbitrarily into “controls” and “test subjects”, and ran the data through three different software packages commonly used to analyse fMRI images. Then they redivided them, in a different arbitrary way, and analysed those results in turn. They repeated this process until they had performed nearly 3m analyses in total.
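In outline, the procedure looks something like the Python sketch below. The data, the group size and the bare t-test are all stand-ins for illustration; the real study fed the redivided scans through the actual analysis packages.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for the downloaded resting-state scans: one summary
# value per voxel per volunteer (the real data were full fMRI time series).
n_subjects, n_voxels = 499, 1000
data = rng.normal(size=(n_subjects, n_voxels))

def one_arbitrary_split(data, group_size=20):
    """Label some resting volunteers 'controls' and others 'subjects',
    entirely arbitrarily, then compare the two groups voxel by voxel."""
    picks = rng.choice(len(data), size=2 * group_size, replace=False)
    controls = data[picks[:group_size]]
    subjects = data[picks[group_size:]]
    _, p = stats.ttest_ind(controls, subjects, axis=0)
    return p

# Redivide and re-analyse, over and over; the Swedish team did this,
# with the real analysis packages, nearly 3m times.
for trial in range(5):
    p_values = one_arbitrary_split(data)
    print(f"split {trial}: smallest p-value = {p_values.min():.3g}")
```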

Since all the “participants” in these newly conducted trials were, in fact, controls in the original trials, there ought to have been no discernible signal. All would presumably have been thinking about something, but since they were idling rather than performing a specific task there should have been no systematic difference between those arbitrarily labelled controls and those labelled subjects. In many cases, though, that is not what the analysis suggested. The software spat out false positives—claiming a signal where there was none—up to 70% of the time.

False positives can never be eliminated entirely. But the statistical standard in this sort of work is to accept no more than a one-in-20 probability that a given result is a fluke. The problem, says Dr Eklund, lies with erroneous statistical assumptions built into the algorithms. And in the midst of their inspection, his team turned up another flaw: a bug in one of the three software packages that was also generating false positives all on its own.
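The jump from one-in-20 to 70% deserves a word of explanation. A scan contains many thousands of voxels, so testing each one uncorrected would flag something spurious almost every time; the packages therefore apply corrections for multiple comparisons, and it is the assumptions behind the cluster-level versions of those corrections that Dr Eklund’s team found wanting. The Python sketch below, on purely synthetic null data, shows how fast uncorrected false positives mount up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# 1,000 synthetic "studies" in which, by construction, nothing is happening.
n_studies, n_voxels, n_per_group = 1000, 1000, 20
flagged = 0
for _ in range(n_studies):
    controls = rng.normal(size=(n_per_group, n_voxels))
    subjects = rng.normal(size=(n_per_group, n_voxels))
    _, p = stats.ttest_ind(controls, subjects, axis=0)
    # Count the study as a false positive if ANY voxel clears the
    # nominal one-in-20 threshold without correction.
    if (p < 0.05).any():
        flagged += 1

print(f"studies reporting a spurious signal: {flagged / n_studies:.0%}")
# With 1,000 voxels tested at once this is close to 100%, which is why
# the packages apply corrections; Dr Eklund's finding is that the
# cluster-level versions of those corrections are unreliable.
```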

The three packages investigated by the team are used by almost all fMRI researchers. Dr Eklund and his colleagues write that their results cast doubt on something like 40,000 published studies. After crunching the numbers, “we think that around 3,000 studies could be affected,” says Dr Eklund. But without revisiting each and every study, it is impossible to know which those 3,000 are.

Dr Eklund’s results blow a hole in a lot of psychological and neuroscientific work. They also raise the question of whether similar skeletons lurk in other closets. Fields from genomics to astronomy rely on computers to sift huge amounts of data before presenting summaries to their human masters. Few researchers are competent to check the assumptions on which such software is built, or to scour code for bugs—which, as programmers know, are virtually guaranteed to be present in any complicated piece of software.

There is another problem, says Dr Eklund: “it is very hard to get funding to check this kind of thing.” Those who control the purse strings are more interested in headline-grabbing discoveries, as are the big-name journals in which researchers must publish if they wish to advance their careers. That can leave the pedestrian—but vital—job of checking others’ work undone. This may be changing. Many areas of science, including psychology, are in the midst of a “replication crisis”, in which solid-seeming results turn out to be shaky when the experiments are repeated. Dr Eklund’s findings suggest more of this checking is needed, and urgently.

Correction: A previous version of this article misquoted Dr Eklund as saying “around 3,000 studies could simply be wrong”, rather than “we think that around 3,000 studies could be affected”. This has been updated.
