What is 'the dead internet theory' and why is it so sinister?

An example of a 'shrimp Jesus' image on Facebook. Source: The Conversation / Facebook

If you search "shrimp Jesus" on Facebook, you might encounter dozens of images of artificial intelligence (AI)-generated crustaceans meshed in various forms with a stereotypical image of Jesus Christ.

Some of these hyper-realistic images have garnered more than 20,000 likes and comments.

So what exactly is going on here?

The "dead internet theory" has an explanation: AI- and bot-generated content has surpassed the human-generated internet.

But where did this idea come from, and does it have any basis in reality?

What is the dead internet theory?

The dead internet theory essentially claims that activity and content on the internet, including social media accounts, are predominantly being created and automated by artificial intelligence agents.

These agents can rapidly create posts alongside AI-generated images designed to farm engagement (clicks, likes, comments) on platforms such as Facebook, Instagram and TikTok.

As for shrimp Jesus, it appears AI has learned it's the latest mix of absurdity and religious iconography to go viral.

But the dead internet theory goes even further.

Many of the accounts that engage with such content also appear to be managed by artificial intelligence agents.

This creates a vicious cycle of artificial engagement, one that has no clear agenda and no longer involves humans at all.

Harmless engagement farming or sophisticated propaganda?

At first glance, the motivation for these accounts to generate interest may appear obvious — social media engagement leads to advertising revenue.

If a person sets up an account that receives inflated engagement, they may earn a share of advertising revenue from social media organisations such as Meta.

So, does the dead internet theory stop at harmless engagement farming?

Or perhaps beneath the surface lies a sophisticated, well-funded attempt to support autocratic regimes, attack opponents and spread propaganda?

While the shrimp Jesus phenomenon may seem harmless (albeit bizarre), there is potentially a longer-term ploy at hand.

As these AI-driven accounts grow in followers (many fake, some real), the high follower count legitimises the account to real users.

This means that, out there, an army of accounts is being created: accounts with high follower counts that could be deployed by the highest bidder.

This is critically important, as social media is now the primary news source for many users around the world.

In Australia, the share of people who nominated social media as their main source of news grew last year.

This is up from 28 per cent in 2022, overtaking traditional outlets such as radio and TV.

Bot-fuelled disinformation

Already, there is strong evidence social media is being manipulated by these inflated bot accounts to sway public opinion with disinformation, and it has been happening for years.

In 2018, a study analysed 14 million tweets over a ten-month period in 2016 and 2017.

It found that bots on social media played a significant role in disseminating articles from unreliable sources.

Accounts with high numbers of followers were legitimising misinformation and disinformation, leading real users to believe, engage and reshare bot-posted content.

This approach to social media manipulation has been found to occur after mass shooting events in the United States.

In 2019, a study found bot-generated posts on X (formerly Twitter) served to amplify or distort potential narratives associated with such extreme events.

More recently, several large-scale, pro-Russian disinformation campaigns have aimed to undermine support for Ukraine and promote pro-Russian sentiment.

Uncovered by activists and journalists, the coordinated efforts used bots and AI to create and spread fake information, reaching millions of social media users.

On X alone, a campaign used more than 10,000 bot accounts to rapidly post tens of thousands of pro-Kremlin messages attributed to US and European celebrities who seemingly supported the ongoing war against Ukraine.

On this scale, the influence is significant.

Some reports have even found that nearly half of all internet traffic in 2022 was generated by bots.

With recent advancements in generative AI, such as OpenAI's ChatGPT models and Google's Gemini, the quality of fake content will only improve.

Social media organisations are seeking to address the misuse of their platforms.

Notably, Elon Musk has explored requiring X users to pay for membership to stop bot farms.

Social media giants are capable of removing large amounts of detected bot activity if they so choose. (Bad news for our friendly shrimp Jesus.)

Keep the dead internet in mind

The dead internet theory is not really claiming that most of your personal interactions on the internet are fake.

It is, however, an interesting lens through which to view the internet.

The internet is no longer for humans, by humans; this is the sense in which the internet we knew and loved is "dead".

The freedom to create and share our thoughts on the internet and social media is what made it so powerful.

Naturally, it is this power that bad actors are seeking to control.

The dead internet theory is a reminder to be sceptical and navigate social media and other websites with a critical mind.

Any interaction, trend, and especially "overall sentiment" could very well be synthetic, designed to slightly change the way you perceive the world.

Jake Renzella is a lecturer and director of studies (computer science) at UNSW Sydney.

Vlada Rozova is a research fellow in applied machine learning at the University of Melbourne.


Published 21 May 2024 5:37am
Updated 21 May 2024 6:03am
By Jake Renzella, Vlada Rozova
Source: The Conversation


