Open Future

Digital disinformation is destroying society but we can fight back

New laws can improve the integrity of information on the web, says Samuel Woolley, author of “The Reality Game”

By K.N.C.

WHENEVER DONALD TRUMP would boast about his social-media popularity in the 2016 presidential election, Samuel Woolley would shake his head knowingly. An expert on digital misinformation, he understood that although the numbers were fiction—counting bots, not voters—they dangerously influenced public perception.

Manipulation and deception have always been part of politics. But they are particularly abundant and influential on the web, since reaching people there is cheaper and easier, and artificial-intelligence techniques like “deepfakes” make it simpler to doctor video and audio.

As the 2020 election approaches, Mr Woolley, who teaches at the University of Texas at Austin, worries that “computational propaganda” will be even worse. Misinformation that splits one side’s support can be just as effective as promoting the other side, and it has worked before. As he describes the 2016 campaign: “The goal was to divide and conquer as much as it was to dupe and convince.”

Mr Woolley believes society is unprepared and needs to fight back—a point he hammers home in “The Reality Game: How the Next Wave of Technology Will Break the Truth” (PublicAffairs, 2020). We publish an excerpt from the book below, and a short interview with Mr Woolley after that.

***

The Evolving Global Problem

From “The Reality Game: How the Next Wave of Technology Will Break the Truth” by Samuel Woolley (PublicAffairs, 2020)

Computational propaganda campaigns, from Russian manipulation of the US election in 2016 to Syrian government attempts to quash online dissent during that country’s revolution, have used social media technologies to do exactly what they were designed to do: amplify information, communicate about social life, and generate trends. Those who launch them have simply used platforms like Facebook and Twitter to control rather than liberate—clearly to the shock of the social media companies, which should have had enough foresight to see that powerful political actors, and even regular people, would try to use their platforms to repress at the same time others were using them to democratize.

Social bots have played a role on Twitter since the site launched, instantly posting not only the latest news stories or banal advertisements but also deluges of conspiracy and propaganda. Warnings upon warnings were given to Silicon Valley companies that their technology was being used by the powerful to manipulate the weak: during the Arab Spring in 2011, the Mexican election in 2012, the Boston Marathon bombing in 2013, and the Turkish election in 2014, and in numerous other situations where people used social media to spread dangerous rumors, disinformation, and political attacks. Most of the attacks, whether driven by bots, humans, or cyborgs, were fairly simple. They didn’t use artificial intelligence, machine learning, or deep learning, nor did they involve deepfakes or humanlike technology.

But the era of smarter technology will be upon us soon. These deceptive campaigns will grow more powerful, just as email scams have graduated from free-associative spam messages and Nigerian prince scams to sophisticated phishing attacks. […]

With these innovations, and alongside the uptick in the general understanding of digital disinformation, the black-hat PR firms, crooked political consultants, and a slew of other groups that use computational propaganda are altering their tactics. They are changing how they launch their operations on the legacy social media platforms, making their people act more like bots, and their bots more like people, in efforts to confuse the algorithms built to track the inorganic spread of content.

These groups are seeding and fertilizing bogus news stories among groups on other platforms—such as WhatsApp and Telegram—in order to coerce and confuse voters. Targeting those they see as particularly vulnerable, the young and the old, in new places and in new ways, they are sowing junk science on TikTok and stoking fear on Instagram. […]

In the 2020 US election, it is very likely that the Russian government, for instance, will focus its attacks against the Democrats rather than the Republicans and do so by targeting existing divides—for example, the split between the party’s centrist wing and its democratic socialist bloc. Whereas in 2016 likely Republican voters were fed fake stories about Hillary Clinton being corrupt and dishonest, the 2020 electorate may get stories that poison them against centrist candidates like Joe Biden, along with fearmongering stories about candidates like Elizabeth Warren wanting to destroy the stock market. Such stories are particularly likely to target whoever emerges as the front-runner in early 2020, perhaps diverting votes to the second- or third-place candidate or an independent like Howard Schultz.

An alternative strategy would also splinter the left’s vote. If particular subsections of the US left are made to believe that the candidacy of a far-left contender was stolen by the mainstream Democratic Party, then they are as likely not to vote at all as they are to vote for the party nominee. During my work on the 2016 election, I saw a great deal of evidence for this type of activity on both the right and left.

The data that Facebook shared about Russian manipulation on that platform backs this up. The Russians built manipulative Black Lives Matter and Blue Lives Matter pages, created pro-Muslim and pro-Christian groups, and let them expand via growth from real users. The goal was to divide and conquer as much as it was to dupe and convince.

But it wasn’t just the Russians who successfully used social media to manipulate public opinion in 2016. And it will not just be foreign governments that use new technology for such purposes in years to come. Political campaigns will make use of it too. In the twelve months preceding the 2016 election, the Trump campaign spent more on social media than any other campaign, including Clinton’s. Trump pointed to metrics like follower counts and online surveys as proof that he was winning.

I would shake my head at these moments, knowing that these numbers were artificially inflated—millions of those followers were fake, after all. But he was right. Regardless of how bogus the traffic was, it did something more important. It created a bandwagon effect among actual voters and legitimized fringe views that turned out to be supported by a lot of people. Consequently, whatever was being shared had to be taken more seriously by journalists, and their coverage then broadcast those stories—some of them fake, it would turn out—even more widely. […]

The methods used by the Russian government and other groups to spread computational propaganda are, in part, already established tactics from information and propaganda operations of old. The long history of COINTELPRO, an abbreviation derived from COunter INTELligence PROgram, is relevant to the computational propaganda campaigns of today. COINTELPRO operatives worked to seed dissent within organizations such as the Black Panthers and anti-Vietnam War activist groups, as well as within the American Indian Movement and the feminist movement, in order to take them down from the inside. Now online groups have adopted these tactics. […]

In another push to innovate in the computational propaganda space, manipulative groups are now beginning to post advertisements containing false news and other divisive political content on peripheral sites. Now that Google and Facebook have begun to regulate political advertisements—those who buy them must adhere to certain standards, including clear notice of who paid for a particular ad—propagandists are moving to other social media sites and websites for large special interest communities that have little to no regulation of who can advertise and how.

To prevent targeted attacks and defamation campaigns against the most vulnerable, we must create regulations and policies that protect minority classes from online manipulation. New laws should make it more clearly illegal for social media firms to sell advertisements that target these groups with politically charged misinformation or disinformation. Mainstream social media platforms should provide safe spaces online for these groups and facilitate their day-to-day use by protecting and moderating them.

Public forums, whether they resemble Facebook group pages, Twitter feeds, or some not yet created digital space, should be more vigorously policed for both hate speech and information operations. It is not acceptable for social media firms or other technology companies to address computational propaganda on a case-by-case basis, responding seriously only to cases that attract significant media attention.

____________

Excerpted from “The Reality Game: How the Next Wave of Technology Will Break the Truth”. Copyright © 2020 Samuel Woolley. Used with permission of PublicAffairs. All rights reserved.

***

An interview with Samuel Woolley

The Economist: Is the vitriol on social media an ugly niche of activity that mainstream institutions should put little stock in, or is it an accurate reflection of broader public opinion?

Samuel Woolley: It is important to separate everyday arguments and viciousness (often driven by digital anonymity and the sense of separation that comes from interacting online rather than face-to-face) from systematic computational propaganda. The former is less about attempts at gaining political control than about sheer meanness and a lack of deliberation.

That stops being true when regular people resort to online hate speech or attack protected communities. With more systematic disinformation and organised trolling campaigns, we are seeing something far more widespread and troubling. Computational propaganda campaigns are now a core strategy for political campaigns the world over. The political polarisation we see online mirrors the state of things offline.

The Economist: What is the actual extent of the problem of computational propaganda?

Mr Woolley: Recent studies from the Computational Propaganda Project at Oxford suggest that computational propaganda campaigns have been waged during distinct events by governments and other groups in nearly 100 countries. My colleagues and I have been able to measure the effectiveness of political bot profiles at getting top public officials and popular pundits to unwittingly spread questionable and even completely untrue information over Twitter. Importantly, though, it’s difficult to measure the effect of these efforts at, say, the ballot box.

The Economist: How bad is the situation going to get—and what nascent technologies or trends trouble you?

Mr Woolley: The problem is evolving. Armies of bots and human supporters that massively drive up “likes” and other passive engagement in an attempt to manipulate are becoming less effective. Manipulated video and images are on the rise. Many clips go viral before people realise they’ve been doctored using basic tools like iMovie and Photoshop. And “deepfakes”—AI-edited videos that make it look like someone did or said something they didn’t—are increasing, and as they become cheaper they will spread further. Interactive bots powered by machine learning are starting to emerge in online discussions about contentious political events.

The Economist: What is the most promising technical or political solution that might overcome a substantial part of the problem that society should adopt?

Mr Woolley: Groups in Mexico and Ukraine have successfully used automated social-media bots to detect and “out” malicious political bots. As Mark Zuckerberg and others have suggested, AI will have a role to play in combating disinformation and even other manipulative forms of AI usage. This said, we don’t want to fight noise with more noise. We have to be careful how we use automation to fight automation. And there needs to be clarity and openness about how AI will be used in this fight—we must be very careful not to place too much hope in tools that are causing the problems in the first place.

First, we must build technological detection and mitigation systems with ethics at the forefront. Second, we must have fail-safes, monitors and systems in place to make sure tools built to help aren’t co-opted to hurt. We are in desperate need of social and political solutions too. We need more robust critical-thinking programmes in public schools and better educational platforms for media literacy designed not just to inform, but also to address root causes of polarisation and hate—trauma, the need to belong, etc.
