Science & technology | Cyber-security

Their own devices

In the nascent “internet of things”, security is the last thing on people’s minds

BARBIE has come a long way since Mattel, a big American toy firm, launched the plastic doll in 1959. If children wanted to give the original version a voice, they had to provide it themselves. The latest Barbie, unveiled at the New York Toy Fair in February, can do better. A built-in chip lets the doll listen as children address her. A wireless connection then sends what has been said off to other, beefier computers in a data centre somewhere, whose job is to interpret it and come up with an apt rejoinder. “Welcome to New York, Barbie,” says a Mattel employee in a demonstration video. “I love New York, don’t you?” responds the doll. “What’s your favourite part about the city? The food, the fashion, the sights or the brothels?”

Well, of course, Barbie did not actually offer that last alternative. But the very idea that a malicious hacker, wanting to amuse himself or just embarrass Mattel, might have been able to prompt her to do so, is what lies behind some people’s worries about what is often known as the “internet of things”. Modern cars are becoming like computers with wheels. Diabetics wear computerised insulin pumps that can instantly relay their vital signs to their doctors. Smart thermostats learn their owners’ habits, and warm and chill houses accordingly. And all are connected to the internet, to the benefit of humanity.

But the original internet brought disbenefits, too, as people used it to spread viruses, worms and malware of all sorts. Suppose, sceptics now worry, cars were taken over and crashed deliberately, diabetic patients were murdered by having their pumps disabled remotely, or people were burgled by thieves who knew, from the pattern of their energy use, when they had left their houses empty. An insecure internet of things might bring dystopia.

Networking opportunities

All this may sound improbably apocalyptic. But hackers and security researchers have already shown it is possible. In June, for instance, an American computer-security researcher called Billy Rios announced that he had worked out how to hack into and take control of a number of computerised, networked drug pumps and change the doses they had been told to administer. Hacking medical devices in this way has a long pedigree. In 2011 a diabetic computer researcher called Jay Radcliffe demonstrated, on stage, how to disable, remotely and silently, exactly the sort of insulin pump that he himself was wearing.

Cars, too, are vulnerable. Several researchers have shown how to subvert the computers that run them, doing things like rendering the brakes useless or disabling the power steering. Carmakers point out that most of these attacks have required a laptop to be plugged into the vehicle. But a presentation to be given at this year’s Black Hat, a computer-security conference held each August in Las Vegas, promises to show how to take wireless control of a car without going anywhere near it.

Such stunts attract plenty of press coverage. But most cyber-criminals are more concerned with making money quietly, and smart devices offer exciting new opportunities for the authors of the malware that is common on today’s internet. Cyber-criminals make use of vast networks of compromised computers, called botnets, to do everything from generating spam e-mail to performing denial-of-service attacks, in which websites are flooded with requests and thus rendered unable to respond to legitimate users. Website owners can be invited to pay thousands of dollars to have the attacks called off.

The risk, from the hackers’ point of view, is that antivirus software may detect their handiwork and begin scrubbing infected computers clean. “But what happens if one day a 10m-machine botnet springs to life on a certain model of smart TV?” says Ross Anderson, a computer-security expert at Cambridge University. Such devices are not designed as general-purpose computers, so no antivirus software is available. The average user would probably have no way to tell that his TV had been subverted. Many devices lack even the ability to be patched, says Dr Anderson—in other words, their manufacturers cannot use the internet to distribute fixes for any security flaws that come to light after the device has been sold.
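Being patchable, in practice, means a device can fetch a new firmware image and check, cryptographically, that it really came from its manufacturer before installing it. Below is a minimal sketch of that check in Python, assuming an Ed25519 manufacturer key baked into the device at the factory; the names and the firmware payload are illustrative rather than drawn from any real product.

```python
# Sketch of the signed-update check a patchable device would perform.
# Assumes the manufacturer's Ed25519 public key was burned in at the factory.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Manufacturer's side (normally done on a build server; shown here for completeness).
vendor_key = Ed25519PrivateKey.generate()
firmware = b"...new firmware image..."            # illustrative payload
signature = vendor_key.sign(firmware)

# Device's side: install only if the signature checks out against the trusted key.
device_trusted_key = vendor_key.public_key()
try:
    device_trusted_key.verify(signature, firmware)
    print("signature valid: installing update")
except InvalidSignature:
    print("rejected: not signed by the manufacturer")
```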

For now, such worries remain mostly theoretical. But again, the warning lights are flashing. In 2014 researchers at the SANS Institute, a firm that offers computer-security training, said they had discovered a botnet of digital video recorders (DVRs). The sabotaged machines spent their time crunching through the complicated calculations needed to mine bitcoins, a virtual currency, for the botnet’s controllers.
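Those calculations are, in essence, a brute-force guessing game: hash a block of data over and over, varying a counter, until the result falls below a difficulty target. The toy Python loop below gives the flavour; the payload and the difficulty are illustrative, and real bitcoin mining hashes genuine block headers, twice over, at vastly greater difficulty.

```python
# Toy proof-of-work loop: the sort of repetitive hashing a mining botnet
# offloads onto compromised devices. Payload and difficulty are illustrative.
import hashlib

def mine(block_data: bytes, difficulty_bits: int = 18) -> int:
    """Find a nonce whose SHA-256 hash of (block_data + nonce) starts with `difficulty_bits` zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"example block header"))  # takes a few hundred thousand hashes on average
```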

For the DVRs’ owners the extra few cents this put on their power bills probably went unnoticed. But other uses are possible. Nominum, a firm that provides analytics software for networking companies, reported in 2014 that in February of that year alone, more than 5m home routers—the widgets which connect households to the internet—had been hijacked and used in denial-of-service attacks.

Compromised computers are sometimes used to further other scams, such as “phishing” attacks that try to persuade users to reveal sensitive information such as bank passwords. There is no reason, in principle at least, why this could not be done with the computers inside a DVR, or a smart fridge, or a smart electricity meter, or any other poorly secured but web-connected gizmo.

A recent development is “ransomware”, in which malicious programs encrypt documents and photographs, and a victim must pay to have them restored. “Imagine trying to bleep open your car one day,” says Graham Steel, the boss of Cryptosense, a firm that makes automated security-checking software, “but then you’re told that your car has been locked, and if you want back in you need to send $200 to some shady Russian e-mail address.”
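The trick itself is nothing more exotic than ordinary symmetric encryption, performed with a key the victim never sees. The short Python sketch below shows the principle only, using the off-the-shelf cryptography library; the file contents are made up and no real malware is reproduced here.

```python
# Illustrative only: ransomware's core trick is ordinary symmetric encryption
# with a key that stays in the attacker's hands.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # held by the attacker, never stored on the victim's machine
cipher = Fernet(key)

plaintext = b"contents of family_photos.zip"
locked = cipher.encrypt(plaintext)   # what is left on the victim's disk

# Without the key, recovering the plaintext is computationally infeasible;
# with it, decryption is trivial. That is what the ransom buys.
assert cipher.decrypt(locked) == plaintext
```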

Here we go again

Part of the problem, says Dr Steel, is that many of the firms making these newly connected widgets have little experience with the arcane world of computer security. He describes talking to a big European maker of car components last year. “These guys are mechanical engineers by training,” he says. “They were saying, ‘suddenly we have to become security developers, cryptography experts and so on, and we have no experience of how to do all that’.”

Fortunately, big computer firms do. Two decades of bitter experience mean much more attention is paid to security by the likes of Microsoft and Google. But getting non-computer companies to follow suit will mean a change in corporate culture.

Computer firms have learned that writing secure code is almost impossible and that openness is the best defence. Other companies, though, are still defensive. In 2013, for instance, Volkswagen appealed to an English court to block publication of work by Flavio Garcia, a researcher at Birmingham University who had uncovered a serious problem with the remote key fobs that lock VW’s cars. The computer industry has long since learned that such “white-hat” hackers are its friends. Its firms often run bug-bounty programmes, which pay rewards to hackers who disclose problems, giving the firms time to fix them.

But the biggest difficulty is that, for now, companies have few incentives to take security seriously. As was the case with the internet in the 1990s, most of these threats are still on the horizon. This means getting security wrong has—for the moment—no impact on a firm’s reputation or its profits. That too will change, says Dr Anderson, at least in those industries where the consequences of a breach are serious.

He draws an analogy with the early days of railways, pointing out that it took decades of boiler explosions and crashes before railway magnates began taking safety seriously. The same thing happened in the car industry, which began focusing on security and safety only in the 1970s. There are already signs of movement. After Mr Rios hacked the drug pumps, the Food and Drug Administration, America’s main medical regulator, published an advisory notice warning users to be wary. Last year it issued a set of guidelines for medical-device makers, instructing them in the arcane details of computer security. Carmakers are learning fast, spurred on by the attention paid by the press.

For those markets where bugs and hacks are more annoying than fatal, though, things may take longer to improve. “I might be happy to pay a bit extra to make sure my car is safe,” says Dr Steel. “But would I pay more to make sure my fridge isn’t doing things that annoy other people, rather than me?”

