The World If | If people were paid for their data

Data workers of the world, unite

Advocates of “data as labour” think users should be paid for using online services


“DATA SLAVERY.” Jennifer Lyn Morone, an American artist, thinks this is the state in which most people now live. To get free online services, she laments, they hand over intimate information to technology firms. “Personal data are much more valuable than you think,” she says. To highlight this sorry state of affairs, Ms Morone has resorted to what she calls “extreme capitalism”: she registered herself as a company in Delaware in an effort to exploit her personal data for financial gain. She created dossiers containing different subsets of data, which she displayed in a London gallery in 2016 and offered for sale, starting at £100 ($135). The entire collection, including her health data and social-security number, can be had for £7,000.

Only a few buyers have taken her up on this offer and she finds “the whole thing really absurd”. Yet if the job of the artist is to anticipate the Zeitgeist, Ms Morone was dead on: this year the world has discovered that something is rotten in the data economy. Since it emerged in March that Cambridge Analytica, a political consultancy, had acquired data on 87m Facebook users in underhand ways, voices calling for a rethink of the handling of online personal data have only grown louder. Even Angela Merkel, Germany’s chancellor, recently called for a price to be put on personal data, asking researchers to come up with solutions.

Given the current state of digital affairs, in which the collection and exploitation of personal data is dominated by big tech firms, Ms Morone’s approach, in which individuals offer their data for sale, seems unlikely to catch on. But what if people really controlled their data—and the tech giants were required to pay for access? What would such a data economy look like?

It would not be the first time that an important economic resource had gone from simply being used to being owned and traded; the same has already happened with land and water, for example. But digital information seems an unlikely candidate to be allocated by markets. Unlike physical resources, personal data are an example of what economists call “non-rival” goods, meaning that one person’s use of them does not preclude another’s. In fact, the more they are used, the better for society. And frequent leaks show how difficult it can be to control data. But another historical precedent might provide a model, one that also chimes with contemporary concerns about “technofeudalism”, argue Jaron Lanier, a virtual-reality pioneer, and Glen Weyl, an economist at Yale University, who both work for Microsoft Research.

Labour, like data, is a resource that is hard to pin down. Workers were not properly compensated for labour for most of human history. Even once people were free to sell their labour, it took decades for wages to reach liveable levels on average. History won’t repeat itself, but chances are that it will rhyme, Mr Weyl predicts in “Radical Markets”, a provocative new book he has co-written with Eric Posner of the University of Chicago. He argues that in the age of artificial intelligence, it makes sense to treat data as a form of labour.

To understand why, it helps to keep in mind that “artificial intelligence” is something of a misnomer. Messrs Weyl and Posner call it “collective intelligence”: most AI algorithms need to be trained using reams of human-generated examples, in a process called machine learning. Unless they know what the right answers (provided by humans) are meant to be, algorithms cannot translate languages, understand speech or recognise objects in images. Data provided by humans can thus be seen as a form of labour which powers AI. As the data economy grows up, such data work will take many forms. Much of it will be passive, as people engage in all kinds of activities—liking social-media posts, listening to music, recommending restaurants—that generate the data needed to power new services. But some people’s data work will be more active, as they make decisions (such as labelling images or steering a car through a busy city) that can be used as the basis for training AI systems.
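
To make that dependence concrete, here is a minimal sketch of supervised learning in Python: a toy sentiment classifier that can only make predictions because humans have supplied the answers. The reviews, labels and use of scikit-learn are illustrative assumptions, not anything the firms discussed here actually do.

```python
# A minimal sketch of "data as labour": a toy classifier that only works
# because humans have supplied the labels. All examples are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human "data work": each review comes with a label a person provided.
reviews = [
    "great food and friendly staff",
    "the pasta was cold and bland",
    "lovely atmosphere, will come back",
    "terrible service, long wait",
]
labels = ["positive", "negative", "positive", "negative"]  # supplied by people

# Without those human answers, the algorithm has nothing to learn from.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["friendly staff and great pasta"]))  # e.g. ['positive']
```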

Yet whether such data are generated actively or passively, few people will have the time or inclination to keep track of all the information they generate, or to estimate its value. Even those who do will lack the bargaining power to get a good deal from AI firms. But the history of labour offers a hint about how things could evolve: historically, when wages rose to acceptable levels, it was mostly thanks to unions. Similarly, Mr Weyl expects to see the rise of what he calls “data-labour unions”, organisations that serve as gatekeepers of people’s data. Like their predecessors, they would negotiate rates, monitor members’ data work and ensure the quality of their digital output, for instance by keeping reputation scores. Unions could funnel specialist data work to their members and even organise strikes, for instance by blocking a firm’s access to their members’ data. They could also act as conduits for members’ data contributions, tracking those contributions and billing the AI firms that benefit from them.
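
What that bookkeeping might involve can be sketched in a few lines of Python. Everything below (the class names, members, tasks and rates) is hypothetical; it simply shows a union recording contributions, invoicing a firm and splitting the proceeds.

```python
# A hypothetical sketch of the bookkeeping a "data-labour union" might do:
# record each member's contributions and bill the AI firms that used them.
# All names, rates and records are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Contribution:
    member: str      # who did the data work
    firm: str        # which AI firm consumed it
    task: str        # e.g. "image labelling"
    units: int       # e.g. number of labelled images
    rate: float      # price per unit, negotiated by the union

class DataUnionLedger:
    def __init__(self):
        self.records = []

    def record(self, contribution: Contribution):
        self.records.append(contribution)

    def invoice(self, firm: str) -> float:
        """Total owed by one firm for its use of members' data work."""
        return sum(c.units * c.rate for c in self.records if c.firm == firm)

    def payouts(self) -> dict:
        """How the proceeds would be split among members."""
        totals = defaultdict(float)
        for c in self.records:
            totals[c.member] += c.units * c.rate
        return dict(totals)

ledger = DataUnionLedger()
ledger.record(Contribution("alice", "ExampleAI", "image labelling", 500, 0.02))
ledger.record(Contribution("bob", "ExampleAI", "speech transcription", 40, 0.50))
print(ledger.invoice("ExampleAI"))   # 30.0
print(ledger.payouts())              # {'alice': 10.0, 'bob': 20.0}
```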

All this may sound like science fiction. Why should Google and Facebook, for instance, ever give up their current business model of using free data to sell targeted online advertising? In 2017 the two raked in a combined $135bn in advertising revenue. If they had to compensate people for their data, they would be much less profitable. Meanwhile, startups such as CitizenMe and Datacoup, which can be seen as early forms of data unions, have so far failed to make much headway. Yet in other corners of the industry, tech giants already pay for data, although they are careful not to talk too much about it. Mostly through outsourcing firms, they employ armies of raters and moderators to check the quality of their algorithms and take down content that is illegal or offensive. Other firms use crowd-working platforms, such as Amazon’s Mechanical Turk, to farm out data work such as tagging pictures. Mighty AI, a startup based in Seattle, pays thousands of online workers to label images of street scenes, which are used to train the algorithms that power self-driving cars.

What is more, if AI lives up to the hype, it will lead to demand for more and better data. As AI services get more sophisticated, algorithms will need to be fed a higher-quality diet of digital information, which people may only provide if they get paid. Once one big tech firm starts paying for data, others may have to follow.

Treating data as labour means tech giants’ profit margins are likely to get squeezed, but their overall business may get bigger. And workers will, at least partially, be in the driving seat. Their mornings might start with checking a dashboard provided by their data-labour union, showing a personalised list of available jobs: from watching advertising (the computer’s camera collects facial reactions) to translating a text into a rare language, to exploring a virtual building to see how easy it is to navigate. The dashboard might also list past earnings, show ratings and suggest new skills.

But much still needs to happen for personal data to be widely considered as labour, and paid for as such. First, the right legal framework will be needed to encourage the emergence of a new data economy. The European Union’s new General Data Protection Regulation, which came into effect in May, already gives people extensive rights to check, download and even delete personal data held by companies. Second, the technology to keep track of data flows needs to become much more capable. Research into how to calculate the value of particular data to an AI service is still in its infancy.
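
One simple idea from that research is “leave-one-out” valuation: a training example is worth roughly the accuracy a model loses when that example is withheld. The sketch below illustrates the idea on synthetic data, assuming NumPy and scikit-learn are available; real valuation methods are considerably more elaborate.

```python
# A minimal sketch of "leave-one-out" data valuation: a point's worth is the
# drop in validation accuracy when it is removed from the training set.
# The data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "personal data": 2-D features with a binary label.
X_train = rng.normal(size=(40, 2)) + np.repeat([[0, 0], [2, 2]], 20, axis=0)
y_train = np.repeat([0, 1], 20)
X_val = rng.normal(size=(40, 2)) + np.repeat([[0, 0], [2, 2]], 20, axis=0)
y_val = np.repeat([0, 1], 20)

def accuracy(X, y):
    """Train on (X, y) and score on the held-out validation set."""
    return LogisticRegression().fit(X, y).score(X_val, y_val)

baseline = accuracy(X_train, y_train)

# Value of each training point = baseline accuracy minus accuracy without it.
values = []
for i in range(len(X_train)):
    X_loo = np.delete(X_train, i, axis=0)
    y_loo = np.delete(y_train, i, axis=0)
    values.append(baseline - accuracy(X_loo, y_loo))

print("most valuable point:", int(np.argmax(values)), "value:", max(values))
```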

Third, and most important, people will have to develop a “class consciousness” as data workers. Most people say they want their personal information to be protected, but then trade it away for nearly nothing, something known as the “privacy paradox”. Yet things may be changing: more than 90% of Americans think being in control of who can get data on them is important, according to the Pew Research Centre, a think-tank.

Even if people got money for their data, sceptics say, they wouldn’t get much. If Facebook shared out its profits across all its monthly users, for instance, each would get just $9 a year. But such calculations fail to recognise that the data age has only just begun. AI is often likened to electricity, and when electrification began in the late 19th century, entire cities used only as much power as a single household does today.

Wouldn’t this data economy be hugely unequal? Some people’s data will surely be worth much more than others’. But Mr Weyl argues that the skills needed to generate valuable data may be more widely spread than you might think, so data work could disrupt the standard hierarchy of human capital. One way or another, societies will have to find a mechanism to distribute the wealth created by AI. As things stand, most of it accrues to the big data distilleries. Unless this changes, social inequality could revert to medieval levels, Mr Weyl warns. If that happens, it is not unreasonable to assume that one day, the data workers of the world will unite.

This article appeared in the The World If section of the print edition of July 7th 2018 under the headline “Data workers of the world, unite”
