The Economist explains

How “fake news” could get even worse

Did Rick Astley just give up?

By H.H.

RICK ASTLEY is rightly famous. His 1987 single, “Never Gonna Give You Up”, has been played more than 330m times on YouTube. But in February last year Mr Astley indulged in a rather odd experiment. The singer, looking remarkably similar to his late-’80s self, covered his own hit song, but sang the whole thing in order of pitch. The song, a video of which appears on YouTube, proceeds in a rather unconventional manner. Mr Astley stammers out different lines in a jumble, going from dulcet bass tones to shrill trebles over a tortuous three-and-a-half minutes. The only coherent thing about the apparent stunt is the progression from low to high notes, though he drops back down at the very end. Did this actually happen?

No. Mr Astley did not rework his song. An artist called Mario Klingemann did, using clever software. The video is a particularly obvious example of generated media, made with quick and basic techniques. More sophisticated technology is on the verge of being able to generate credible video and audio of anyone saying anything. This is down to progress in an artificial intelligence (AI) technique called machine learning, which allows for the generation of imagery and audio. One particular set-up, known as a generative adversarial network (GAN), works by setting a piece of software (the generative network) to make repeated attempts to create images that look real, while a separate piece of software (the adversarial network) is set up in opposition. The adversary looks at the generated images and judges whether they are “real”, which is measured by similarity to those in the generative software’s training database. In trying to fool the adversary, the generative software learns from its errors. Generated images currently require vast computing power, and only work at low resolution. For now.
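For readers curious about the mechanics, the sketch below shows the adversarial contest in miniature, written in Python with the PyTorch library. It is an illustration only, not the software Mr Klingemann or any other artist used: the network sizes, names and training data are stand-ins, and real systems are vastly larger.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28   # assumed sizes for a toy greyscale image

# The forger: turns random noise into candidate images.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The adversary: outputs its estimate of the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One round of the contest, given a batch of real training images."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the adversary to accept real images and reject generated ones.
    fakes = generator(torch.randn(batch, latent_dim))
    d_loss = (criterion(discriminator(real_images), real_labels)
              + criterion(discriminator(fakes.detach()), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the adversary into answering "real".
    fakes = generator(torch.randn(batch, latent_dim))
    g_loss = criterion(discriminator(fakes), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example: one training round on a batch of 32 stand-in "real" images.
print(train_step(torch.rand(32, image_dim) * 2 - 1))
```

Run over many thousands of such rounds on genuine photographs, the generator gets steadily better at producing images the adversary cannot distinguish from the training set.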

Images and audio have been manipulated almost since their invention. But older fakes were made by snipping bits out of photographic negatives, combining them with others, and then making a fresh print. New techniques are very different. Doctoring images and audio by fiddling with film or using photo-editing software requires skill. But generative software can churn out forgeries automatically, based on simple instructions. Images generated in this fashion can fool not only humans but also computers. Some of the pixels in an image doctored using editing software will not match up with what might be expected in the real world. Because their creation requires convincing an adversary that checks for just such statistical anomalies, generated images contain none of these tell-tale signs of forgery: they are internally consistent, betraying few signs of tampering.
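A toy example in Python (a rough illustration, not a real forensic tool) shows the kind of statistical clue meant here: a patch pasted in from another source often carries a different noise level from the pixels around it, which even a crude block-by-block check can expose. A well-trained generative network, by contrast, learns to match those statistics everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "photograph": a flat scene plus sensor noise of standard deviation 2.
image = rng.normal(128.0, 2.0, size=(128, 128))

# Paste in a patch from a different, noisier source (a crude doctoring job).
image[32:64, 32:64] = rng.normal(128.0, 6.0, size=(32, 32))

# Estimate the noise level block by block; the doctored block stands out.
block = 32
for y in range(0, image.shape[0], block):
    for x in range(0, image.shape[1], block):
        patch = image[y:y + block, x:x + block]
        print(f"block at ({y:3d},{x:3d}): estimated noise std = {patch.std():.2f}")
```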

Such forgeries are unlikely to do much harm to politicians or other powerful actors. Material like that is scrutinised in public. Fakes will be identified by other means, such as by examining the circumstances depicted and cross-referencing them with a person’s movements. It is also likely that media will start to come with metadata (such as time, date and location of recording) to prove its veracity, as that is much harder to forge. But smaller-scale fakes—of a classmate doing something embarrassing, or a disliked co-worker saying something rude about the boss—that are not subject to the same amount of scrutiny will be much harder to check. Without a means to verify recordings, individuals may find themselves with few options but to distrust everything.
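To make the metadata idea concrete, here is a hedged sketch in Python of how a camera might bind capture details to a recording: the file and its metadata are hashed and signed with a key held by the device, so that any later change to either breaks the signature. The key, the field names and the use of a shared-secret HMAC rather than a public-key signature are all simplifications for illustration; no particular product works this way.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"secret-key-burned-into-the-camera"   # hypothetical per-device key

def sign_recording(media_bytes: bytes, metadata: dict) -> str:
    """Produce a tag binding the media file and its metadata together."""
    payload = media_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()

def verify_recording(media_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Check that neither the recording nor its metadata has been altered."""
    return hmac.compare_digest(sign_recording(media_bytes, metadata), signature)

meta = {"time": "2018-07-05T14:02:00Z", "location": "51.51N,0.09W"}
clip = b"...raw video bytes..."
tag = sign_recording(clip, meta)

print(verify_recording(clip, meta, tag))   # True: untouched recording
tampered = {**meta, "time": "2018-07-06T09:00:00Z"}
print(verify_recording(clip, tampered, tag))   # False: metadata was altered
```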

Correction (July 7th): An earlier version of this explainer misstated the method for faking film photographs. This has been amended.
