Trump's Twitter Ban Obscures The Real Problem: State-Backed Manipulation Is Rampant On Social Media (ozrimoz/Shutterstock)
Donald Trump’s controversial removal from social media platforms has reignited debate around the censorship of information published online. But the issue of disinformation and manipulation on social media goes far beyond one man’s Twitter account. And it is much more widespread than previously thought.
Since 2016, our team at the Oxford Internet Institute has monitored the rapid global proliferation of social media manipulation campaigns, which we define as the use of digital tools to influence online public behaviour. In the past four years, social media manipulation has evolved from a niche concern to a global threat to democracy and human rights.
Our latest report found that organised social media manipulation campaigns are now common across the world — identified in 81 countries in 2020, up from 70 countries in 2019. The map below shows the global distribution of these 81 countries, marked in dark blue.
The countries marked in dark blue experienced industrial disinformation campaigns in 2020. (OII, Author provided (No reuse))
Computational propaganda involves the use of programmed bots or humans to spread purposefully misleading information across the internet, often on an industrial scale.
To do this, computational propagandists draw on an extensive toolkit. Political bots amplify hate speech and create the impression of trending political messages on Twitter and Facebook. The illegal harvesting of data helps propagandists target messaging at specific, often vulnerable, individuals and groups. Troll armies, meanwhile, are regularly deployed to suppress political activism and the freedom of the press.
In 2020, we identified 62 countries in which state agencies themselves were using these tools to shape public opinion. In the other countries included in our study, these tools were being used by private organisations or foreign actors.
Disinformation for hire
Despite the Cambridge Analytica scandal exposing how private firms can meddle in democratic elections, our research also found an alarming increase in the use of “disinformation-for-hire” services across the world. Funded by governments and political parties, private-sector cyber troops are increasingly being hired to spread manipulated messages online, or to drown out other voices on social media.
Our research found state actors working with private computational propaganda companies in 48 countries in 2020, up from 21 identified between 2017 and 2018, and only nine such instances between 2016 and 2017. Since 2007, almost US$60 million (£49 million) has been spent globally on contracts with these firms.
Additionally, we’ve uncovered relationships between hired cyber troops and civil society groups that ideologically support a particular cause, such as youth groups and social media influencers. In the United States, for example, the pro-Trump youth group Turning Point Action was used to spread online disinformation and pro-Trump narratives about both COVID-19 and mail-in ballots.
Smear campaigns against political opponents are the most common strategy cyber troops employ to achieve their political ends, featuring in 94% of all the countries we investigated. In 90% of countries, we observed the spreading of pro-party or pro-government propaganda. Suppressing participation through trolling or harassment was a feature in 73% of countries, while in 48% cyber troops’ messaging sought to polarise citizens.
Social media moderation
Clearly, debates around the censoring of Trump and his supporters on social media cover only one facet of the industry’s disinformation crisis. As more countries invest in campaigns that seek to actively mislead their citizens, social media firms are likely to face increased calls for moderation and regulation — and not just of Trump, his followers and related conspiracy theories like QAnon.
Donald Trump was banned from Twitter in the aftermath of the Capitol riots. (pcruciatti/Shutterstock)
Already this year, the prevalence of computational propaganda campaigns throughout the COVID-19 pandemic and in the aftermath of the US election has prompted many social media firms to limit the misuse of their platforms by removing accounts they believe are managed by cyber troops.
For instance, our research found that between January 2019 and December 2020, Facebook removed 10,893 accounts, 12,588 pages and 603 groups from its platform. In the same period, Twitter removed 294,096 accounts, and continues to remove accounts linked to the far right.
Despite these account removals, our research found that between January 2019 and December 2020 cyber troops spent almost US$10 million on political advertisements. And a crucial part of the story is that social media companies continue to profit from the promotion of disinformation on their platforms. Calls for tighter regulation and firmer policing are likely to follow Facebook and Twitter until they truly get to grips with their platforms’ tendency to host, spread and multiply disinformation.
A strong, functional democracy relies upon the public’s access to high-quality information. This enables citizens to engage in informed deliberations and to seek consensus. It’s clear that social media platforms have become crucial in facilitating this information exchange.
These companies should therefore increase their efforts to flag and remove disinformation, along with all cyber troop accounts which are used to spread harmful content online. Otherwise, the continued escalation in computational propaganda campaigns that our research has revealed will only heighten political polarisation, diminish public trust in institutions, and further undermine democracy worldwide.
About Today's Contributor:
Hannah Bailey, PhD researcher in Social Data Science, University of Oxford
This article is republished from The Conversation under a Creative Commons license.