Technology Quarterly | Lots of signal, lots of noise

Where to process data, and how to add them up

The dark art of data fusion

“YOU COULD put forward a thesis that Afghanistan was the most densely surveilled battlespace in the history of humankind,” says Mick Ryan, until recently the head of Australia’s defence college. “And that didn’t seem to help us.” For an information advantage to change the course of a war you need more than just a cornucopia of sensors; you need ways to combine their data into information that can be acted on at speed.


Take radar, which did more than any other new sensor in history to change the course of the war in which it made its debut. It had applications from the finding of submarines (via their snorkels) to the proximity fuses which made artillery and anti-aircraft rounds more lethal. As its developers used to grumble when nuclear physicists were lauded for their godlike power, “Radar won the war; nuclear weapons just ended it.”

But radar’s capabilities had to be built into systems that made use of them. The canonical example is the air-defence system used during the Battle of Britain. Its radars were linked to a network of radio receivers, barrage balloons, fighter planes and human spotters through a network of phone lines. The resulting reports were plotted on a map and used to guide fighter planes to their targets with spectacular success.

Over the past decade efforts to embody similar feats of collective intelligence in AI systems have made real progress. In a recent exercise in Poland, the British Army experimented with a command and control system built over eight weeks in collaboration with Anduril, a California-based company which provides both sensors and systems to fuse their data. The system did not just spot targets; it also worked out the closest suitable aircraft that could be used to attack them and presented its results to the force’s commanders in the form of clearly delineated options.

This far outperformed the old way of doing things; options for hitting targets were delivered 30 minutes quicker, according to an officer involved in the experiment. And it required a team of just five people, rather than the 25 it used to take. The officer compares the improvement to that offered by satellite navigation with real-time traffic updates. “It’s like going from an A-Z…to Waze. You’re operating at a ridiculously different speed.”

Joseph Votel, a recently retired head of the Pentagon’s Central Command, said last year he was struck by how Israeli forces mounting strikes on Gaza in May had been integrating AI into their operations and by “the impact that is having on their targeting cycles”. He says Israel is using AI to generate a large range of potential targets for surveillance to whittle down. This lets its forces “disrupt enemy attacks without the need for a lengthy development period or a longer campaign.”

America’s armed forces, helped by Palantir (an AI company which, like Anduril, takes its name from “The Lord of the Rings”) and other contractors, are trying to build such technology into a system which can narrow down a huge range of potential targets and pass information about them freely to where it is most needed. Given the finite capacity of communication systems, not to mention their vulnerability, this requires that an increasing amount of processing be done “on the edge”—that is, on the platform carrying the sensor.

In 2016 a Pentagon project called Maven started trying to address the “lots of surveillance but not much to show for it” problem identified by General Ryan. The idea was to automate the identification of people and objects in the petabytes of video footage sent back by surveillance drones. It ended up producing software efficient enough to run on the drones themselves. In Scarlet Dragon, a recent AI-focused American exercise in which a wide range of systems were used to comb a large area for a small target, things were greatly speeded up by allowing satellites to provide estimates of where a target might be in a compact format readable by another sensor or a targeting system, rather than transmitting high-definition pictures of the sort humans look at.

In a world where bandwidth is often the biggest constraint such parsimony is a boon. It speeds up kill chains while reducing vulnerability to jamming. At the same time, it puts a greater burden on the automated parts of the system to provide reliable synopses of what they see, which is a worry for people keen to ensure that fully informed and responsible human beings stay on top of all decisions about where and when to blow things up.
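The arithmetic behind that parsimony is stark. A rough back-of-the-envelope sketch (the numbers and field names below are illustrative assumptions, not details of any real military system) compares one uncompressed video frame with the kind of compact detection report edge processing might emit:

```python
import json

# Hypothetical illustration of the bandwidth saved when a sensor sends a
# compact target estimate instead of raw imagery. All names and figures
# here are assumptions for the sake of the comparison.

# A single uncompressed 4K video frame: 3840 x 2160 pixels, 3 bytes per pixel.
RAW_FRAME_BYTES = 3840 * 2160 * 3

# A compact detection report of the kind edge processing might emit.
detection = {
    "lat": 52.2297,      # estimated target latitude
    "lon": 21.0122,      # estimated target longitude
    "cls": "vehicle",    # classifier label
    "conf": 0.87,        # classifier confidence
    "t": 1643414400,     # timestamp of the observation
}
report_bytes = len(json.dumps(detection).encode("utf-8"))

ratio = RAW_FRAME_BYTES // report_bytes
print(f"raw frame: {RAW_FRAME_BYTES:,} bytes")
print(f"detection report: {report_bytes} bytes")
print(f"one frame costs as much as ~{ratio:,} reports")
```

A single frame, on these assumptions, costs as much to transmit as hundreds of thousands of detection reports—which is why summaries, not pictures, flow across contested links.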

Reforging the shards

However much edge processing may whittle down individual flows, though, the proliferation of sensors and the hunger for more knowledge elsewhere in the system will still mean that command systems need a greater capacity for handling data in bulk. That is why armed forces are spending heavily on cloud-computing services provided by big tech companies to increase their data-handling capacity. In 2019 the Pentagon awarded Microsoft a $10bn contract for its Joint Enterprise Defence Infrastructure (JEDI). Last year Amazon, which has been supplying the CIA with such services since 2013, got the contract annulled. A new tender issued in November will probably see the work shared among a number of firms. There will be more than enough to go around.

“You’ll be amazed at the patterns it picks up when you put bulk data together”

Clouds offer advantages in speed, scale and flexibility. They also help with “data fusion”—combining different pieces of information to reveal things that one source cannot capture, including things no human would think to look for. “You’ll be amazed at the patterns it picks up when you put bulk data together from different sources and run AI algorithms across them,” says an official familiar with Odyssey, a cloud-computing system being developed by the British armed forces.

Fusion is not just about adding things up; subtraction matters too. In a presentation last year, Brigadier-General Paul Murray put on screen the radar picture available to the North American Aerospace Defence Command (NORAD) on the afternoon of April 15th 2015. It looked like a canvas at which someone had hurled a bucket of blue paint. Somewhere within the mess was the flight path of Doug Hughes, a postman from Florida who had taken it on himself to deliver letters of protest to America’s Congress by flying his gyrocopter from Gettysburg to the lawn of the Capitol. Whatever impact this may have had on the legislature, his ability to cross highly restricted airspace unnoticed alarmed NORAD.

Mr Hughes’s approach was not entirely undetected. But it was captured only intermittently, and amid everything else going on a human looking at the data at the time concluded that it was innocuous. When a system called Pathfinder fused the relevant data from more than 300 sensors and used AI to remove the clutter, though, the errant aircraft’s path stood out clearly.
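The principle of subtraction can be shown in miniature: discard every radar track that a filed flight plan explains, and whatever remains is worth a second look. The toy sketch below invents its own data and matching rule; Pathfinder’s real logic, fusing more than 300 sensors with machine learning, is far more sophisticated.

```python
# A toy sketch of "fusion by subtraction": remove radar tracks that match
# known commercial flight plans so that anomalies stand out. The data,
# coordinates and matching rule are invented for illustration only.

def declutter(tracks, flight_plans, tolerance_km=5.0):
    """Keep only tracks not explained by any filed flight plan."""
    def near(a, b):
        # Crude flat-earth distance check, adequate for a toy example.
        dx = (a[0] - b[0]) * 111.0   # ~km per degree of latitude
        dy = (a[1] - b[1]) * 71.0    # ~km per degree of longitude at mid-latitudes
        return (dx * dx + dy * dy) ** 0.5 <= tolerance_km

    return [t for t in tracks
            if not any(near(t["pos"], wp) for fp in flight_plans for wp in fp)]

# Two airliners on filed routes, plus one slow, low-flying unknown.
plans = [
    [(39.0, -77.0), (39.5, -76.5)],      # filed route A (waypoints)
    [(38.8, -77.2), (38.9, -77.1)],      # filed route B
]
tracks = [
    {"id": "T1", "pos": (39.01, -77.01)},  # matches route A -> clutter
    {"id": "T2", "pos": (38.81, -77.19)},  # matches route B -> clutter
    {"id": "T3", "pos": (39.9, -76.9)},    # matches nothing -> anomaly
]

for t in declutter(tracks, plans):
    print("unexplained track:", t["id"])   # only T3 survives the subtraction
```

Scaled up to hundreds of sensors, thousands of flights and shifting weather, the same idea turns a bucket of blue paint back into a map.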

To rule them all

Pathfinder’s decluttering uses commercial flight plans and weather reports to help sort things out; the integration of such open-source data is crucial to a lot of intelligence and surveillance. Last year America’s National Security Commission on AI, chaired by Eric Schmidt, a former CEO of Google, said that the country’s intelligence agencies would need to build “a continuous pipeline of all-source intelligence analysis” into “continually learning analytic engines”. The results, it hoped, would be insights “beyond the current limits of unaided human cognition”. Call it Omniscient Neural-net Engineering for Reconnaissance, Intelligence and National Goal-achievement, or ONERING for short.

Some workers at tech companies do not like the idea of being involved in such things. In 2017 thousands of Google employees signed a letter outlining their unhappiness with the company’s role in the Maven project. Microsoft’s bid for the JEDI contract faced internal opposition on similar grounds. Many others will also have concerns about data fusion on such a scale, for military or any other purposes.

They might take some comfort, at least for the time being, from the fact that seamlessness is much more easily wished for and invested in than achieved. Different military services and agencies contracting with different companies to build their own clouds and AI systems just the way they want them are likely to produce the digital equivalent of Babel after God smote it. Military organisations, accustomed to laying out their requirements years in advance and in excruciating detail, are ill-equipped for a world in which computing power has become a subscription service and in which new software can transform the hardware it is running on.

The old-school defence contractors who tend to get tasked with integrating the data are “shockingly bad and wildly insecure”, according to Oliver Lewis of Rebellion Defence, an AI provider. “They often use an industrial-era approach designed for building tanks and aircraft that makes it impossible for them to write great software.” Interoperability often requires a level of commercial and technical finesse rarely seen in the management of government contracts. “Defence procurement,” says one AI executive, “is currently fundamentally incompatible with this new model.”

It is not just that the technology is changing, the business environment unfamiliar and large-scale systems integration always hard—particularly so, it often seems, for governments. The systems which fuse and interpret large amounts of data from disparate far-flung sources have to be robust not just in everyday operation but when adversaries are trying very hard to break them down. When it comes to the crunch, the enemy gets a say, too.

This article appeared in the Technology Quarterly section of the print edition under the headline "Heads in the clouds"


From the January 29th 2022 edition
