The Human Internet Is Dying, And All We Can Do Is Watch

  • By
  • October 10, 2024
The Human Internet Is Dying, And All We Can Do Is Watch

Imagine a future where most of the content you encounter online is created by artificial intelligence. Now open your eyes, we’re already living in that reality.

Today, the vast majority of generated content is starting to exceed human made content. From social media posts to articles, images, and even music, AI-generated content is rapidly overtaking human efforts, and without needing any major breakthroughs, it can continue as rapidly in the following years.

Consider this: more than 15 billion images were created using text-to-image algorithms from 2022 to 2023 alone. To put this in perspective, approximately 3.5 trillion photos have been taken since Daguerre captured Boulevard du Temple 174 years ago.

With this in mind, while it took nearly two centuries to reach that number, we can only speculate on how much time left until human generated photography will become hidden or outnumbered by the vast sea of stable-diffusion models or images created with generative tools. We assume that this can happen when more than 99% of generated content is artificially made.

Going forward, there has been a lot of buzz around the dead internet theory.

Dead Internet Theory

Once dismissed as a fringe concept, today, not so far fetched. The theory suggests that much of the internet is ‘dead‘ in terms of genuine human interaction, with AI-generated content, bot interactions, and manipulated traffic creating an illusion of human activity.

This theory extends beyond social media. It suggests that a significant chunk of online content, from articles to images to videos, is churned out by AI, not humans. While we are still however in the ‘early‘ days.

Discussions about this theory happened also in academic circles, and in a 2022 article published in Information, Communication & Society, Golding and Schofield examined it. While the authors don’t claim this theory as fact, they discussed how the perception of AI’s growing role in content creation might affect user trust and engagement with the internet as a whole.

With more and more content invading every possible digital space, as content creators we’re starting to feel claustrophobic, and that we slowly being pushed out of our own digital spaces. We’re seining huge increase in AI driven content from google images, stock photos, to major social media sites (Pinterest, Instagram, Facebook, Twitter, Reddit, YouTube) unto music platforms and media outlets.

AI Driven Content Consequences

Images

AI content does feel less engaging than “real” content. As astonishing it first looks, the images do get bored pretty soon. Especially when it looks too real, having that uncanny valley effect, of something resembling reality but not exactly. Also hyper realism, does get boring faster than a tiktok trend.

Writing

As for writing, we are experiencing it everywhere. When is good, is unnoticeable, when is bad, the writing is full of repetitive structures, exaggeration, overusing of ‘crucial, delve’.

The real consequence is when it will be better, because we will hope it won’t just sound better, but fact check better, and until then, at this state is mainly error prone, fact prone, content prone. Still a great tool to use, but a faulty one nevertheless, and something that doesn’t even remotely is capable of what we hear in the news.

For coding

For coding, I do have to say at first it was something else, but after a while, I’ve learned that it doesn’t really help with productivity, it mainly takes a lot of time, because of a lot of code reviewing.

The same observations apply on each section, and possibly even in future with videos, but I will make it in the coding aspect, because here is where I tried it the most. At first I was honestly afraid of not missing a paradigm shift, afterwards it turned out to be a ‘better google‘ in some cases.

While it has it’s perks with reading and understanding minified code for example, or adding / improving static types, in the end is just that: a text predictor that assumes a lot, and more often than not is wrong.

One last thing, if you do manage to overall get great results with the tool, keep in mind: you did not make them, you just asked it for a result, and while it does feel rewarding to take the results and go in the world, at the end of the day that code, that text, that pixel from the image was resulted from various data that was used to train the models.

And the data could’ve been stolen. [1] [2] [3] [4]

So, until Terminator robots and AI singularity comes and wipes humanity, we must face a different threat: content saturation.

AI content saturation: A real cybersecurity concern

I’ve noticed a concerning parallel between a known vulnerability(Conversation Overflow) in Large Language Models (LLMs) and the AI generated content trend.

In the realm of LLMs, there’s an attack vector where overwhelming the model with excessive information can cause it to lower its defenses, making it more susceptible to manipulation.

Now, a similar phenomenon is seen unfold, affecting the human internet users.

This AI driven content consequences can pose in future as a security risk.

Now just trying to locate various security updates or search for an article, or other verified information amidst an ocean of AI-generated content can become a daunting and complex task. And it’s not easy to distinguish, because now, everything can have a touch of AI.

This could affect users in worst possible ways, from misinformation to mass manipulation. And security threats that we cannot even imagine yet.

When the majority of online content is AI-generated, how do we ensure we’re making decisions based on reliable information?

A conclusion

The near-future might not be conquered by AI singularity, but with maybe your grandchild asking, “How was everything before the Dead Internet happened?”.

Also what will happen when AI will get trained mostly on AI data and there are very few people that can fact check anymore? We’ll stop with questions, we’re going too far with assumptions.

Further reads:
People Are Creating an Average of 34 Million Images Per Day. Statistics for 2024

Post inspired from reddit: https://www.reddit.com/r/ChatGPT/comments/1fye6tb/the_human_internet_is_dying_ai_images_taking_over/

Photo by Igor Omilaev on Unsplash