The World Ahead | The World in 2019

There are no killer robots yet—but regulators must respond to AI in 2019

AI does not need a whole new set of rules. Better to simply adapt and reinforce existing ones

MENTION ARTIFICIAL intelligence (AI), and the term may bring to mind visions of rampaging killer robots, like those seen in the “Terminator” films, or worries about widespread job losses as machines displace humans. The reality, heading into 2019, is more prosaic: AI lets people dictate text messages instead of typing them, or call up music from a smart speaker on the kitchen counter. That does not mean that policymakers can ignore AI, however. As it is applied in a growing number of areas, there are legitimate concerns about possible unintended consequences. How should regulators respond?

The immediate concern is that the scramble to amass the data needed to train AI systems is infringing on people’s privacy. Monitoring everything that people do online, from shopping to reading to posting on social media, lets internet giants build detailed personal profiles that can be used to target advertisements or recommend items of interest. The best response is not to regulate the use of AI directly, but instead to concentrate on the rules about how personal data can be gathered, processed and stored.

The General Data Protection Regulation, a set of rules on data protection and privacy introduced by the European Union in May 2018, was a step in the right direction, giving EU citizens, at least, more control over their data (and prompting some internet companies to extend similar rights to all users globally). The EU will further clarify and tighten the rules in 2019 with its ePrivacy Regulation. Critics will argue that such rules hamper innovation and strengthen the internet giants, which can afford the costs of regulatory compliance in a way that startups cannot. They have a point. But Europe’s approach seems preferable to America’s more hands-off stance. China, meanwhile, seems happy to allow its internet giants to gather as much personal data as they like, provided the government is granted access.

As AI systems start to be applied in areas like predictive policing, prison sentencing, job recruitment or credit scoring, a second area of concern is that of “algorithmic bias”—the worry that when systems are trained using historical data, they will learn and perpetuate the existing biases. Advocates of the use of AI in personnel departments (for example, to scan the résumés of job applicants) say using impartial machines could reduce bias. But a system trained on records of past, biased decisions may simply automate that unfairness, as the sketch below illustrates. To ensure fairness, AI systems need to be better at explaining how they reach decisions (an area of much research); and they should help humans make better decisions, rather than making decisions for them.
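How might a system inherit bias from its training data? Below is a minimal, hypothetical sketch, not drawn from this article: a toy “résumé-screening” classifier is trained on invented historical hiring decisions that favoured one group, and duly learns to favour that group between otherwise identical candidates. All feature names and numbers are made up for illustration.

```python
# Hypothetical demonstration: a model trained on biased historical
# decisions reproduces the bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0.0, 1.0, n)   # invented "ability" score
group = rng.integers(0, 2, n)     # invented binary group attribute

# Biased historical labels: past recruiters rewarded group membership
# as well as skill, so the training data encodes the unfairness.
hired = (skill + 1.0 * group + rng.normal(0.0, 0.5, n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only by group:
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])  # the group-1 candidate scores higher
```

Because group membership helped predict past hires, the model treats it as a signal of merit; nothing in the training process distinguishes historical prejudice from genuine ability.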

A third area where AI is causing concern is in self-driving cars. Many companies are now testing autonomous vehicles and running pilot “robotaxi” services on public roads. But such systems are not perfect, and in March 2018 a pedestrian was killed by an autonomous car in Tempe, Arizona—the first fatality of its kind. The right response is to require makers of autonomous vehicles to publish regular safety reports, put safety drivers in their cars to oversee them during testing and install “black box” data recorders so that investigators can work out what happened when something goes wrong.

In short, given how widely applicable AI is—like electricity or the internet, it can be applied in almost any field—the answer is not to create a specific set of laws for it, or a dedicated regulatory body akin to America’s Food and Drug Administration. Rather, existing rules on privacy, discrimination, vehicle safety and so on must be adapted to take AI into account. What about those killer robots? They are still science fiction, but the question of whether future autonomous weapons systems should be banned, like chemical weapons, is moving up the geopolitical agenda. Formal discussion of the issue at a UN conference in August 2018 was blocked by America and Russia, but efforts to start negotiations on an international treaty will persist in 2019.

Get real
As for jobs, the rate and extent of AI-related job losses remains one of the most debated, and uncertain, topics in the business world. In future, workers will surely need to learn new skills more often than they do now, whether to cope with changes in their existing jobs or to switch to new ones. As in the Industrial Revolution, automation will demand changes to education, to cope with shifts in the nature of work. Yet there is little sign that politicians are taking this seriously: instead many prefer to demonise immigrants or globalisation. In 2019, this is an area in which policymakers need to start applying real thought to artificial intelligence.

This article appears in “The World in 2019”, our annual edition that looks at the year ahead. See more at worldin2019.economist.com
