
Getting to grips with military robotics

Autonomous robots and swarms will change the nature of warfare

Effective—and expendable

Peter Singer, an expert on future warfare at the New America think-tank, is in no doubt. “What we have is a series of technologies that change the game. They’re not science fiction. They raise new questions. What’s possible? What’s proper?” Mr Singer is talking about artificial intelligence, machine learning, robotics and big-data analytics. Together they will produce systems and weapons with varying degrees of autonomy, from being able to work under human supervision to “thinking” for themselves. The most decisive factor on the battlefield of the future may be the quality of each side’s algorithms. Combat may speed up so much that humans can no longer keep up.

Frank Hoffman, a fellow of the National Defence University who coined the term “hybrid warfare”, believes that these new technologies have the potential to change not just the character of war but perhaps even its supposedly immutable nature as a contest of wills. For the first time, the human factors that have defined success in war, “will, fear, decision-making and even the human spark of genius”, may be less evident, he says.

Weapons with a limited degree of autonomy are not new. In 1943 Germany produced a torpedo with an acoustic homing device that helped it find its way to its target. Tomahawk cruise missiles, once fired, can adjust their course using a digital map of Earth’s contours. Anti-missile systems are pre-programmed to decide when to fire and engage an incoming target because the human brain cannot react fast enough.

But the kinds of autonomy on the horizon are different. A report by the Pentagon’s Defence Science Board in 2016 said that “to be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.” What distinguishes autonomous systems from what may more accurately be described as computerised automatic systems is that they work things out as they go, making informed guesses about the best way to achieve their goals based on input from their sensors. In a paper for the Royal Institute of International Affairs in London, Mary Cummings of Duke University says that an autonomous system perceives the world through its sensors and reconstructs it to give its computer “brain” a model of the world which it can use to make decisions. The key to effective autonomous systems is “the fidelity of the world model and the timeliness of its updates”.
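Ms Cummings’s sense-model-decide loop can be made concrete with a toy sketch. The Python below is purely illustrative: the sensor and actuator interfaces and the half-second staleness threshold are assumptions, not drawn from any real system. It shows why both halves of her formula matter: a decision is only as good as the model it is computed from, and a model that is not refreshed in time forces the system back to a safe default.

```python
import time
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """The system's internal reconstruction of its surroundings,
    rebuilt continuously from sensor data."""
    tracks: dict = field(default_factory=dict)   # object id -> estimated state
    last_update: float = 0.0

    def update(self, readings: dict, now: float) -> None:
        # Fidelity: the model is only as good as the fused sensor readings.
        self.tracks.update(readings)
        self.last_update = now

    def is_stale(self, now: float, max_age: float = 0.5) -> bool:
        # Timeliness: a model that has not been refreshed recently
        # is an unsafe basis for a decision.
        return now - self.last_update > max_age

def autonomy_loop(sensors, actuators, choose_action):
    """Hypothetical sense-model-decide-act loop. `sensors.read()`,
    `actuators.execute()` and `choose_action()` are placeholder
    interfaces, invented for illustration only."""
    model = WorldModel()
    while True:
        now = time.time()
        model.update(sensors.read(), now)
        if model.is_stale(now):
            actuators.execute("hold_position")   # degrade gracefully
        else:
            actuators.execute(choose_action(model))
```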

A distinction needs to be made between “narrow” AI, which allows a machine to carry out a specific task much better than a human could, and “general” AI, which has far broader applications. Narrow AI is already in wide use for civilian tasks such as search and translation, spam filtering, autonomous driving, high-frequency stock trading and chess playing.

Waiting for the singularity

General AI may still be at least 20 years off. A general AI machine should be able to carry out almost any intellectual task that a human is capable of. It will have the ability to reason, plan, solve problems, think abstractly and learn quickly from experience. AlphaGo Zero, the machine which last year taught itself to play Go, the ancient strategy board game, was hailed as a major step towards creating the kind of general-purpose algorithms that will power truly intelligent machines. By playing millions of games against itself over 40 days it discovered strategies that humans had developed over thousands of years, and added some of its own that showed creativity and intuition.
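The learning recipe can be gestured at in a few lines. The sketch below is a deliberately simplified self-play loop, built around a hypothetical `agent` object whose methods are assumptions of this illustration; the real system paired a deep neural network with Monte Carlo tree search, both elided here. The structural point survives the simplification: the machine’s only teacher is the outcome of games against itself.

```python
def self_play_training(agent, num_games: int = 1_000_000):
    """Schematic self-play loop in the spirit of AlphaGo Zero.
    `agent` is a hypothetical object exposing the methods used below;
    no human game records are consumed at any point."""
    for _ in range(num_games):
        history, state = [], agent.initial_state()
        while not agent.is_terminal(state):
            move = agent.select_move(state)   # guided by the current policy
            history.append((state, move))
            state = agent.apply(state, move)
        outcome = agent.score(state)          # who won this self-play game
        agent.learn(history, outcome)         # shift the policy toward
                                              # the moves that led to wins
```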

Mankind is still a long way from the “singularity”, the term coined by Vernor Vinge, a science-fiction writer, for the moment when machines become more intelligent than their creators. But the possibility of killer robots can no longer be dismissed. Stephen Hawking, Elon Musk, Bill Gates and many other experts believe that, handled badly, general AI could be an existential threat to the human race.

In the meantime, military applications of narrow AI are already close to bringing about another revolution. Robert Work, the architect of America’s third offset strategy, stresses that this is not all about autonomous drones, important though they will increasingly become. His main focus has been on human-machine collaboration to help humans make better decisions much faster, and “combat teaming”, using unmanned and manned systems together.

Autonomous systems will draw on deep learning to operate “at the speed of light” where humans cannot respond fast enough to events such as cyber-attacks, missiles flying at hypersonic speed or electronic warfare. AI will also become ever more important in big-data analytics. Military analysts are currently overwhelmed by the amount of data, especially video, being generated by surveillance drones and the monitoring of social-media posts by terrorist groups. Before leaving the Pentagon, Mr Work set up an algorithmic-warfare team to consider how AI can help hunt Islamic State fighters in Syria and mobile missile launchers in North Korea. Cyber warfare, in particular, is likely to become a contest between algorithms as AI systems look for network vulnerabilities to attack, and counter-autonomy software learns from attacks to design the best response.

In advanced human-machine combat teaming, unmanned aerial vehicles (UAVs) will fly ahead of and alongside piloted aircraft such as the F-35. The human pilot will give the UAV its general mission instructions and define the goal, such as striking a particular target, but the UAV will be able to determine how it meets that goal by selecting from a predefined set of actions, and will respond to any unexpected challenges or opportunities. Or unmanned ground vehicles might work alongside special forces equipped with wearable electronics and exoskeletons to provide machine strength and protection. As Mr Work puts it: “Ten years from now, if the first through a breach isn’t a fricking robot, shame on us.”
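That division of labour, in which a human fixes the goal and the machine chooses among pre-approved ways of pursuing it, can be sketched as a simple rule-based selector. Everything below is hypothetical: the action names, the situation fields and the 20% fuel threshold are invented for illustration. What the sketch does capture is the constraint that the machine may not act outside the menu a human has authorised.

```python
from dataclasses import dataclass

# A human-authorised menu of actions: the pilot sets the goal,
# the UAV picks from this list and nothing else.
ALLOWED_ACTIONS = ("proceed_on_route", "evade_threat", "strike_target", "abort")

@dataclass
class Situation:
    target_acquired: bool
    threat_detected: bool
    fuel_fraction: float      # remaining fuel, 0.0 to 1.0

def select_action(s: Situation) -> str:
    """Choose how to pursue the mission goal from the predefined set.
    The priorities and thresholds here are illustrative assumptions."""
    if s.fuel_fraction < 0.2:
        return "abort"            # safety overrides the mission goal
    if s.threat_detected:
        return "evade_threat"     # respond to unexpected challenges
    if s.target_acquired:
        return "strike_target"    # the human-defined goal
    return "proceed_on_route"

assert select_action(Situation(True, False, 0.8)) in ALLOWED_ACTIONS
```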

Autonomous “uninhabited” vehicles, whether in the air, on the ground or under the sea, offer many advantages over their manned equivalents. Apart from saving money on staff, they can often be bolder and more persistent than humans because they do not get tired, frightened, bored or angry. They are also likely to be cheaper and smaller than manned versions because they do not have to protect people from enemy attack, so they can be deployed in greater numbers and in more dangerous situations.

Increasingly autonomous drones will be able to perform a range of tasks that will soon make them indispensable. UAVs will carry out the whole range of reconnaissance or strike missions, and stealth variants will become the tip of the spear for penetrating sophisticated air defences. Some will be designed to loiter at altitude while waiting for a target to emerge. Israel already deploys the Harop, an autonomous anti-radiation drone which can fly for up to six hours, attacking only when an enemy air-defence radar lights up. Autonomous high-altitude UAVs will be used as back-up data links in case satellites are destroyed, or as platforms for anti-missile solid-state lasers. Larger UAVs will be deployed as tankers and transport aircraft that can operate close to the action.

Underwater warfare will become ever more important in the future because the sea offers a degree of sanctuary from which power can be projected within A2/AD (anti-access/area-denial) zones. Unmanned undersea vehicles (UUVs) will be able to carry out a wide range of difficult and dangerous missions, such as mine clearance or mine-laying near an adversary’s coast; distributing and collecting data from undersea anti-submarine sensor networks in contested waters; patrolling with active sonar; resupplying missiles to manned submarines; and even becoming missile platforms themselves, at a small fraction of the cost of nuclear-powered attack submarines. There are still technical difficulties to be overcome, but progress is accelerating.

Potentially the biggest change to the way wars are fought will come from deploying lots of robots simultaneously. Paul Scharre, an autonomous-weapons expert at the Center for a New American Security (CNAS) who has pioneered the concept of “swarming”, argues that “collectively, swarms of robotic systems have the potential for even more dramatic, disruptive change to military operations.” Swarms can bring greater mass, co-ordination, intelligence and speed.

The many, not the few

As Mr Scharre points out, swarming will solve a big problem for America. The country currently depends on an ever-decreasing number of extremely capable but eye-wateringly expensive multi-mission platforms which, if lost at the outset of a conflict, would be impossible to replace. A single F-35 aircraft can cost well over $100m, an attack submarine $2.7bn and a Ford-class carrier with all its aircraft approaching $20bn.

By contrast, low-cost, expendable distributed platforms can be built in large numbers and controlled by relatively few humans. Swarms can make life very difficult for adversaries. They will come in many shapes and sizes, each designed to carry out a particular mission, such as reconnaissance over a wide area, defending ships or troops on the ground and so on. They will be able to work out the best way to accomplish their mission as it unfolds, and might also be networked together into a single “swarmanoid”. Tiny 3D-printed drones, costing perhaps as little as a dollar each, says Mr Scharre, could be formed into “smart clouds” that might permeate a building or be air-dropped over a wide area to look for hidden enemy forces.
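One reason a swarm needs only a few human controllers is that co-ordination can be decentralised. The sketch below shows a single round of a greedy task-allocation rule for an area search; the drone positions and search grid are invented, and no real swarm protocol is being described. Because every drone applies the same simple rule rather than taking orders from a central controller, losing any one drone degrades the search only marginally.

```python
import math

def nearest_unsearched(pos, cells, claimed):
    """A drone claims the closest grid cell no peer has taken yet."""
    free = [c for c in cells if c not in claimed]
    return min(free, key=lambda c: math.dist(pos, c)) if free else None

def swarm_search_round(drone_positions, cells):
    """One round of greedy task allocation for an area search.
    This is a centralised simulation of a rule each drone would run
    locally, broadcasting its claim to its peers; a lost drone simply
    leaves one more cell for the others to pick up."""
    claimed, assignments = set(), {}
    for i, pos in enumerate(drone_positions):
        cell = nearest_unsearched(pos, cells, claimed)
        if cell is not None:
            claimed.add(cell)
            assignments[i] = cell
    return assignments

# Example: four cheap drones dividing a 4x4 grid of search cells.
grid = [(x, y) for x in range(4) for y in range(4)]
print(swarm_search_round([(0, 0), (3, 0), (0, 3), (3, 3)], grid))
```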

It is certain that autonomous weapons systems will appear on the battlefield in the years ahead. What is less clear is whether America will be the first to deploy them. In July 2017 China published its “Next-Generation Artificial-Intelligence Development Plan”, which designates AI as the transformative technology underpinning future economic and military power. It aims for China to become the pre-eminent force in AI by 2030, using a strategy of “military-civil fusion” that America would find hard to replicate. And in September Vladimir Putin told Russian children returning to school that “artificial intelligence is the future, not only for Russia but for all of mankind…whoever becomes the leader in this sphere will become the ruler of the world.” Elon Musk, of Tesla and SpaceX fame, responded by tweeting that “competition for AI superiority at national level [is the] most likely cause of WW3.”

Mr Singer is less apocalyptic than Mr Musk, but he agrees that the competition for AI dominance is fuelling an arms race that will itself generate insecurity. This arms race may be especially destabilising because the capabilities of robotic weapons systems will not become clear until someone is tempted to use them. The big question is whether this competition can be contained, and whether rules to ensure human control over autonomous systems are possible—let alone enforceable.

This article appeared in the Special report section of the print edition of January 27th 2018, under the headline "War at hyperspeed"
