Why does this matter?

The findings in this report demonstrate how easily and inexpensively anyone could influence specific groups.

Inflating a tweet’s engagement through retweets or replies can [create the illusion](https://datajournalism.com/read/handbook/verification-3/investigating-actors-content/3-spotting-bots-cyborgs-and-inauthentic-activity#:~:text=The%20amplifier%20bot%20exists,controversial%20or%20under%20siege.) of popularity and help “launder” stories from the fringes of public conversation into the mainstream. As mentioned earlier, [past research](https://www.nature.com/articles/s41467-018-06930-7#:~:text=Bots%20amplify%20such,of%20online%20misinformation.) has shown that early engagement plays a significant role in whether something goes viral: “Bots amplify such content in the early spreading moments before an article goes viral.”

Bad actors can leverage that illusion to [direct](https://datajournalism.com/read/handbook/verification-3/investigating-actors-content/3-spotting-bots-cyborgs-and-inauthentic-activity#:~:text=By%20working%20together%20in%20large%20numbers%2C%20amplifier%20bots%20seem%20more%20legitimate%20and%20therefore%20help%20shape%20the%20online%20public%20opinion%20landscape.) public discussion [and influence](https://apnews.com/article/asia-pacific-china-europe-middle-east-government-and-politics-62b13895aa6665ae4d887dcc8d196dfc#:~:text=A%20seven-month,is%20government-sponsored.) opinion without disclosing financial backing, or they can [misinform](https://datajournalism.com/read/handbook/verification-3/investigating-actors-content/3-spotting-bots-cyborgs-and-inauthentic-activity#:~:text=Amplifier%20bots%20that%20spread%20disinformation%20do%20it%20mainly%20through%20hashtag%20campaigns%20or%20by%20sharing%20news%20in%20the%20form%20of%20links%2C%20videos%2C%20memes%2C%20photos%20or%20other%20content%20types.%20Hashtag%20campaigns%20involve%20bots%20constantly%20tweeting%20the%20same%20hashtag%2C%20or%20set%20of%20hashtags%2C%20in%20coordination.) audiences or [discourage](https://www.bbc.com/news/world-africa-58474936#:~:text=However%2C%20the%20hired%20influencers%20have%20managed%20to%20scare%20away%20critical%20voices%20from%20the%20debate%20on%20Twitter%2C%20with%20civil%20activists%20saying%20they%20now%20self-censor%20on%20the%20platform.) opponents through targeted harassment. Trend Micro, a global cybersecurity firm, wrote of the threat in 2017 (p. 74):

> Careful and extended use of propaganda can shift the Overton window. Prolonged opinion manipulation techniques can make the public receptive to ideas that would have previously been unwelcome and perhaps even offensive at worst. The concept of the slippery slope applies: once an opinion has been changed a bit, it becomes easier to change it even more.

In one such case, where coordinated Twitter campaigns targeted civil activists, the BBC [reported](https://www.bbc.com/news/world-africa-58474936#:~:text=the%20hired%20influencers%20have%20managed%20to%20scare%20away%20critical%20voices%20from%20the%20debate%20on%20Twitter%2C%20with%20civil%20activists%20saying%20they%20now%20self-censor%20on%20the%20platform.) that the targeted activists now “self-censor on the platform.” An in-depth investigation by Mozilla included interviews with influencers who had accepted payment to take part in the information operation:

> They were told to promote tags – trending on Twitter was the primary target by which most of them were judged. The aim was to trick people into thinking that the opinions trending were popular – the equivalent to ‘paying crowds to show up at political rallies,’ the research says.

Information disorder research leaves something to be desired.

In 2022, the Annual Threat Assessment of the US Intelligence Community stated that malign influence operations, particularly from Russia and China, would continue to threaten the United States. Adversaries can spread divisive and misleading content through acceptable messengers simply by amplifying individuals in Western society who are already aligned with their interests. As this threat grows, our ability to understand and analyze it has been limited by the platforms themselves.

Platforms routinely remove accounts operated by bad actors, but removal usually also erases their interactions and history. Researchers can piece together that history using archives and mentions, but we are often deprived of the full dataset needed to perform a comprehensive analysis. Even when reconstruction is possible, the cumbersome process of documenting deleted bad-actor accounts frequently prevents independent researchers from providing an invaluable check on social media platforms.
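As a minimal sketch of what that reconstruction work can look like, the snippet below queries the Internet Archive’s public Wayback Machine availability API for the closest archived snapshot of a suspended account’s profile page. The account handle is a hypothetical placeholder; only the documented archive.org endpoint and its JSON response format are assumed.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical handle of a suspended account whose history we want to recover.
HANDLE = "example_bad_actor"

# The Wayback Machine availability API returns the closest archived
# snapshot of a given URL, if one exists.
API = "https://archive.org/wayback/available?url="

profile_url = f"https://twitter.com/{HANDLE}"
with urllib.request.urlopen(API + urllib.parse.quote(profile_url, safe="")) as resp:
    data = json.load(resp)

closest = data.get("archived_snapshots", {}).get("closest")
if closest and closest.get("available"):
    print("Archived copy:", closest["url"], "captured at", closest["timestamp"])
else:
    print("No snapshot found; the account's history may be unrecoverable.")
```

Even then, such lookups recover only what crawlers happened to capture before removal, which is precisely the gap described above.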

Platforms frequently opt to share data with only a handful of groups, sometimes the same groups with which other platforms share data. The process lacks transparency. Consequently, both the arrangements and the resulting research findings are more vulnerable to malign influence and lack the checks afforded to other fields, where researchers can more freely replicate results.

<aside>

If platforms are failing to stop and remove far more accounts and operations than is currently recognized, they have little incentive to tell us.

</aside>

