Culture | Artificial intelligence

Machines for thinking

Computers will get smarter, but with humans in charge

Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. By John Markoff. Ecco; 400 pages; $26.99.

The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. By Pedro Domingos. Basic Books; 352 pages; $29.99. Allen Lane; £20.

Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence. By Jerry Kaplan. Yale University Press; 256 pages; $35, £20.

ARTIFICIAL INTELLIGENCE (AI) is quietly everywhere, powering Google’s search engine, Amazon’s recommendations and Facebook’s facial recognition. It is how post offices decipher handwriting and banks read cheques. But several books in recent years have spewed fire and brimstone, claiming that algorithms are poised to obliterate white-collar knowledge-work in the 21st century, just as automation displaced blue-collar manufacturing work in the 20th. Some people go further, arguing that artificial intelligence threatens the human race. Elon Musk, an American entrepreneur, says that developing the technology is “summoning the demon”.

Now several new books serve as replies. In “Machines of Loving Grace”, John Markoff of the New York Times focuses on whether researchers should build true artificial intelligence that replaces people, or aim for “intelligence augmentation” (IA), in which the computers make people more effective. This tension has been there from the start. In the 1960s, in one corner of Stanford University, John McCarthy, a pioneer of the field, was gunning for AI (a term he had coined in 1955), while across campus Douglas Engelbart, the inventor of the computer mouse, aimed at IA. Today, some Google engineers try to improve search engines so that people can find information better, while others develop self-driving cars to eliminate drivers altogether.

Mr Markoff focuses on the personalities, since technology depends on the values of its creators. The human element makes the subject accessible. (His chapter on the history of AI is superb.) But he spends little time on how AI actually works, and philosophical themes, such as what it means to rely on machines, are raised only to be dropped too soon.

At the start, AI was about coding the rules of logic into software, but that approach failed at bigger tasks. A different one fed computers data and had them use probability to infer, say, which film to recommend. It worked poorly at first, but improved as computers grew more powerful and were fed more data. Called “machine learning”, it is why computer translation and speech recognition are no longer so laughable.
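
For the technically curious, the idea can be shown in miniature. The sketch below is not drawn from any of the three books; the viewers, the film and the numbers are invented for illustration. It shows probabilistic inference at its crudest: counting past outcomes to estimate how likely a viewer is to enjoy a recommendation.

```python
# A toy illustration of probabilistic inference for recommendations.
# The viewing histories and the film are invented for illustration.

# Invented data: (likes_science_fiction, enjoyed_the_recommended_film)
history = [
    (True, True), (True, True), (True, False),
    (False, False), (False, True), (False, False),
]

def p_enjoys_given(likes_sci_fi: bool) -> float:
    """Estimate P(enjoyed the film | taste) by simple counting."""
    matching = [enjoyed for liked, enjoyed in history if liked == likes_sci_fi]
    return sum(matching) / len(matching)

print(f"P(enjoys | likes sci-fi)     = {p_enjoys_given(True):.2f}")   # 0.67
print(f"P(enjoys | dislikes sci-fi)  = {p_enjoys_given(False):.2f}")  # 0.33
```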

Pedro Domingos’s “The Master Algorithm” focuses on explaining to a general reader how machine learning works. The book does a good job of examining the field’s five main techniques: symbolic reasoning, connections modelled on the brain’s neurons, evolutionary algorithms that test variation, Bayesian inference (updating probabilities as new information arrives) and systems that learn by analogy.
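
The Bayesian strand in particular fits in a few lines. The sketch below illustrates Bayes’ rule with invented numbers rather than an example from Mr Domingos’s book: a prior belief about a viewer’s taste is updated once a single piece of evidence arrives.

```python
# A minimal sketch of Bayesian inference: update a prior belief with evidence.
# The hypothesis, the evidence and the numbers are invented for illustration.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Hypothesis: a viewer will enjoy a recommended film (prior belief: 30%).
# Evidence: a five-star rating of a similar film, given 80% of the time by
# viewers who would enjoy the recommendation and 20% of the time by others.
belief = bayes_update(prior=0.30, p_evidence_if_true=0.80, p_evidence_if_false=0.20)
print(f"Updated belief: {belief:.2f}")  # 0.63
```

Real systems chain many thousands of such updates, but the arithmetic is the same.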

The subject is meaty and the author, a professor at the University of Washington, has a knack for introducing concepts at the right moment. But in his zeal to simplify, Mr Domingos constantly invents metaphors that grate or confuse. He prefers imaginary examples to actual, cutting-edge work when explaining the technology, which is a missed opportunity.

A terrific balance between delightful stories and thoughtful analysis is found in Jerry Kaplan’s relatively short book, “Humans Need Not Apply”. An entrepreneur and AI expert (he is one of the personalities in Mr Markoff’s story), Mr Kaplan has done some serious thinking about how AI will transform business, jobs and, most interestingly, the law. The book glimmers with originality and verve.

He starts from the idea that the technology creates “forged labourers” and “synthetic intellects” that will do the jobs of people. (One lawyer has already set up the firm Robot, Robot & Hwang.) He delves into fascinating areas such as legal liability when robots err. There is a deep look at the growing income inequality between a small cadre of Silicon Valley elites (himself included) and the rest.

Others have raised these issues, but Mr Kaplan is unique in devising solutions. To the problem of skills not being well matched to the needs of businesses, he proposes a “job mortgage”. Companies would agree to hire a person in future in return for a tax break; the person would take out a loan against that future income to pay for the training. In this way, educational institutions would get clearer economic signals about which skills to teach.

To lessen income inequality, Mr Kaplan gets even more inventive. Companies would get tax breaks if their shares were broadly owned, using a measure he bases on the Gini coefficient. The American government would let people choose the firms where some of their Social Security (national pension) funds would be invested. Spreading stock ownership, Mr Kaplan reckons, would diffuse the gains from companies that, using AI, make oodles of money but employ few people.
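
The Gini coefficient at the heart of that idea is simple to compute. The sketch below applies the standard formula to hypothetical shareholdings; it illustrates the statistic Mr Kaplan borrows, not the precise measure proposed in the book.

```python
# The standard Gini coefficient, applied to hypothetical shareholdings.
# An illustration of the measure Mr Kaplan borrows from, not his proposal.

def gini(holdings: list) -> float:
    """0 means shares are spread evenly; values near 1 mean one owner holds nearly all."""
    values = sorted(holdings)
    n = len(values)
    total = sum(values)
    # Rank-weighted formula over the sorted values.
    weighted = sum((rank + 1) * v for rank, v in enumerate(values))
    return (2 * weighted) / (n * total) - (n + 1) / n

concentrated = [0.0] * 9 + [100.0]   # one shareholder owns everything
broad = [10.0] * 10                  # ownership spread evenly across ten holders
print(f"Concentrated ownership: {gini(concentrated):.2f}")  # 0.90
print(f"Broadly held shares:    {gini(broad):.2f}")         # 0.00
```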

All three authors are optimistic that society will find a way to live with AI, with not a killer robot in sight. Mr Domingos ponders a new set of Geneva protocols: banning humans from fighting. As for truly autonomous robots, Mr Markoff quotes a software designer: we will know they exist when, instead of going to work, they go to the beach.

This article appeared in the Culture section of the print edition of October 3rd 2015, under the headline “Machines for thinking”
