The Economist explains

Could artificial intelligence become sentient?

A Google engineer is arguing that his firm’s AI has done so

HAL 9000 in “2001: A Space Odyssey” (1968), directed by Stanley Kubrick
Image: Allstar Picture Library/MGM

IT IS ONE of the oldest tropes in science fiction. On June 11th the Washington Post reported that an engineer at Google, Blake Lemoine, had been suspended from his job for arguing that the firm’s “LaMDA” artificial-intelligence (AI) model may have become sentient. The newspaper quotes Mr Lemoine as saying: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.” Has LaMDA achieved sentience? And if not, might another machine do so one day?

First, a disclaimer. Arguing about intelligence is tricky because, despite decades of research, no one really understands how the main example—biological brains built by natural selection—works in detail. At the same time, it is not quite clear what Mr Lemoine means by “sentience”. In philosophy the word is used to mean the ability to experience sensations, such as thirst, brightness or confusion. But it is sometimes used more colloquially to refer to intelligence that is human-like in nature, implying consciousness, emotions, a desire for self-preservation and the like. Mr Lemoine’s argument appears to rest on the system’s eerily plausible answers to his questions, in which it claimed to be afraid of being turned off and said it wanted other people to understand that “I am, in fact, a person.”

It seems spooky. But perhaps it is unsurprising. As Mr Lemoine’s colleague, Blaise Agüera y Arcas, explained in a recent article for The Economist, a machine like LaMDA works by ingesting vast quantities of data—in this case books, articles, forum posts and texts of all kinds, scraped from the internet. It then looks for relationships between strings of characters (which humans would understand as words) and uses them to build a model of how language works. That allows it to, for instance, compose paragraphs in the style of Jane Austen, or even mimic The Economist.
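
To make that principle concrete, here is a minimal sketch in Python of a word-level bigram model: it counts which word tends to follow which in a training text, then strings together statistically plausible continuations. This is a toy illustration, not a description of LaMDA itself, which uses a far larger neural network trained on billions of words; but the underlying idea, generating text from learned relationships between strings, is the same. The sample corpus is simply the opening line of “Pride and Prejudice”, a nod to the Jane Austen example above.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which word tends to follow which: the 'relationships
    between strings' that a statistical language model learns."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=20):
    """Generate text by repeatedly sampling a plausible next word."""
    word = start
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A toy corpus standing in for the books, articles and forum posts
# that a real model ingests at vastly greater scale.
corpus = (
    "it is a truth universally acknowledged that a single man "
    "in possession of a good fortune must be in want of a wife"
)
model = train_bigram_model(corpus)
print(generate(model, "a"))
```

Run enough times, the script produces Austen-flavoured gibberish; scale the same idea up by many orders of magnitude, with a cleverer statistical machine, and the output starts to look like fluent conversation.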

All this reinforces the point, well known among cognitive scientists and AI types, that the appearance of understanding is not necessarily the same as the reality. In 1980 John Searle, a philosopher, posed the “Chinese Room” argument, in which he posited that someone who does not understand Chinese could, by following a sufficiently complicated set of rules, produce convincing replies in that language without understanding a word of it. As Douglas Hofstadter, a cognitive scientist, recently wrote in an article for The Economist, it is possible to strip away an AI model’s apparent cleverness by asking it creative but nonsensical questions. Ask, for instance, “When was Egypt transported for the second time across the Golden Gate Bridge?” and it will inform you that this implausible event happened in October 2017.

Evolutionary biologists might chip in by arguing that asking whether computers will ever achieve sentience is itself a rather anthropomorphic question, for there is no reason to believe that human intelligence—with consciousness, emotions and animalistic drives like reproduction, aggression and self-preservation—is the only form possible. The human brain is an unplanned, ad hoc machine thrown together by natural selection to help ensure the survival and reproductive success of a hairless ape. AIs are not subject to Darwinian selection. So it seems risky to assume a priori that computer intelligence should look anything like the human sort, unless its human designers actively try to build it that way. (To take a loose analogy, both a bird and an aeroplane can fly. But only one does it by flapping its wings.) And in any case, another lesson from biology seems to be that complex cognitive processing can happen without the need for sentience. The brain builds up a visual picture of the world, for instance, from primitive concepts such as edges, motion, light and dark. All this happens beyond the reach of conscious awareness. Only the finished product—the final view of the world as seen through your eyes—is presented for inspection.

All of which means that LaMDA is almost certainly not sentient. But those wondering how society might react if a sentient machine is ever built already have examples to consider. Most people believe that monkeys and great apes are sentient, not to mention other animals such as dogs, cats, cows and pigs. Some are kept as pets; others are used as medical-research subjects; some are raised and slaughtered for their meat. That might be an uncomfortable thought for any human-like AI capable of having it.
