The Economist explains

How machine learning works

By T.C.

THE standard joke about artificial intelligence (AI) is that, like nuclear fusion, it has been the future for more than half a century now. In 1958 the New York Times reported that the Perceptron, an early AI machine developed at the Cornell Aeronautical Laboratory with military money, was "the embryo of an electronic computer that [the American Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence". Nearly six decades later, self-aware battleships remain conspicuous by their absence. Yet alongside the hype there has been spectacular progress: computers are now better than any human at chess and Go, for instance, and can process human speech and read even messy handwriting. Automated telephone-response systems may be infuriating, but they would seem like magic to someone from the 1950s. AI is in the news again because of impressive progress in the past few years in a particular subfield called machine learning. But what exactly is it?

Machine learning is exactly what it sounds like: an attempt to perform a trick that even very primitive animals are capable of, namely learning from experience. Computers are hyper-literal, ornery beasts: anyone who has tried programming one will tell you that the trouble is they do exactly what you tell them to, stupid mistakes and all. For tasks that can be boiled down into simple, unambiguous rules – crunching through difficult mathematics, for instance – that is fine. For woollier jobs it is a serious problem, especially because humans themselves may struggle to articulate clear rules. In 1964 Potter Stewart, a US Supreme Court judge, found it impossible to set out a legally watertight definition of pornography. Frustrated, he famously wrote that, although he could not define porn as such, "I know it when I see it." Machine learning aims to help computers discover such fuzzy rules for themselves, without being explicitly instructed at every step by human programmers.
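To make that contrast concrete, here is a minimal sketch in Python; the data and the "spamminess" rule are invented purely for illustration. The first function encodes a rule a programmer has written by hand, while the second infers a comparable rule from labelled examples, which is the essence of machine learning.

```python
# Illustrative sketch only: the data and the rule are invented for this example.

# The traditional approach: a programmer writes the rule explicitly.
def is_spammy_by_rule(exclamation_marks: int) -> bool:
    return exclamation_marks > 3          # a human chose the number 3

# The machine-learning approach: infer a rule from labelled examples.
examples = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]

def learn_threshold(data):
    """Try every candidate threshold and keep the one that best fits the examples."""
    best_threshold, best_correct = 0, -1
    for t in range(0, 11):
        correct = sum((count > t) == label for count, label in data)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

threshold = learn_threshold(examples)
print(f"learned rule: spammy if exclamation marks > {threshold}")  # the machine finds a workable cut-off itself
```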


There are many different kinds of machine learning. But the one grabbing headlines at the moment is called “deep learning”. It uses artificial neural networks – simplified computer simulations of how biological neurons behave – to extract rules and patterns from sets of data. Show a neural network enough pictures of cats, for instance, or have it listen to enough German speech, and it will be able to tell you whether a picture it has never seen before shows a cat, or whether a sound recording is in German. The general approach is not new (the Perceptron, mentioned above, was one of the first neural networks). But the ever-increasing power of computers has allowed deep-learning machines to simulate billions of neurons. At the same time, the huge quantity of information available on the internet has given the algorithms an unprecedented amount of data to chew on. The results can be impressive. Facebook's DeepFace algorithm, for instance, is about as good as a human being at recognising specific faces, even if they are poorly lit or seen from a strange angle. E-mail spam is much less of a problem than it used to be, because the vast quantities of it circulating online have allowed computers to learn what a spam e-mail looks like, and to divert it before it ever reaches your inbox.
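Very roughly, the training loop behind such systems looks like the sketch below. It is a toy illustration in Python, not any firm's actual code: it trains a tiny neural network, with a handful of simulated neurons rather than billions, to learn a simple pattern (the XOR function) from four labelled examples, adjusting its connection weights a little on every pass so that its guesses better match the answers.

```python
# Illustrative sketch only: a tiny neural network learning a toy pattern (XOR).
# The architecture, data and learning rate are invented for this example.
import numpy as np

rng = np.random.default_rng(0)

# Four labelled examples of the XOR pattern, which no single straight line can separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small two-layer network: 2 inputs -> 4 hidden "neurons" -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # Forward pass: the network makes its current guesses.
    hidden = sigmoid(X @ W1 + b1)
    guess = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error (backpropagation).
    grad_out = guess - y
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# After training, the guesses should be close to the true answers 0, 1, 1, 0.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```

The difference between this toy and the systems described above is chiefly one of scale: millions of weights rather than a couple of dozen, and vast troves of examples rather than four.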

Big firms like Google, Baidu and Microsoft are pouring resources into AI development, aiming to improve search results, build computers you can talk to, and more. A wave of startups wants to use the techniques for everything from spotting tumours in medical images to automating back-office work such as the preparation of sales reports. The appeal of automated voice- and facial-recognition to spies and policemen is obvious, and they, too, are taking a keen interest. This rapid progress has spawned prophets of doom, who worry that computers could become cleverer than their human masters and perhaps even displace them. Such worries are not entirely without foundation. Even now, scientists do not really understand how the brain works. But there is nothing supernatural about it – and that implies that building something similar inside a machine should be possible in principle. Some conceptual breakthrough, or the steady rise in computing power, might one day give rise to hyper-intelligent, self-aware computers. But for now, and for the foreseeable future, deep-learning machines will remain pattern-recognition engines. They are not going to take over the world. But they will shake up the world of work.

Dig deeper:
How to ensure the benefits of AI outweigh the risks (May 2015)
New technologies are bringing sweeping change to labour markets (October 2014)

Update: This blog post has been amended to remove the news peg.
