Nothing Is Real Anymore: How AI Hoaxes Are Hijacking Reality
It’s 2025, and facts don’t stand a chance.
Your smartest friend is reposting an image of Devil’s Tower in Wyoming, claiming it’s the stump of a 4-mile petrified tree. Your mother thinks Katy Perry was at the Met Gala in a dress she never wore – again. A video of political violence goes viral, only for the watermark to reveal it was stitched together by an AI tool more powerful than Photoshop on steroids.
The problem? These aren’t harmless internet jokes. They’re believable lies, mass-produced by machine learning systems trained on the scraps of human attention. And even when the truth catches up, the damage is already done.
This is the era of AI slop – and we’re drowning in it.
The Devil’s Tower Myth: A Hoax That Refuses to Die
Back in 2017, a satirical Facebook post made the rounds, claiming that Wyoming’s iconic Devil’s Tower wasn’t a geological formation but the “stump” of an ancient tree, complete with “4-mile petrified roots.”
The post was clearly fake – filled with absurd claims and Photoshopped images. Scientists debunked it. Snopes labeled it satire. Everyone moved on.
Until they didn’t.
In 2025, the same myth is back, turbocharged by AI-generated visuals so convincing they’ve tricked even well-read professionals. “I thought it was a re-discovered site,” said one commenter on a now-viral TikTok video. “Didn’t realize it was just an old meme.”
Geologists have reissued statements clarifying that Devil’s Tower is the remnant of an ancient volcanic intrusion – not the root of some planet-sized redwood. But by then, the new version of the hoax had already reached millions.
The kicker? The AI-generated images had improved. The lighting matched. The scale looked believable. And it only took a few clicks to replicate.
AI Slop: Endless Misinformation at Scale
“AI slop” has entered the lexicon as a catch-all for low-effort, high-volume synthetic content: videos, text, images, even full articles generated by AI tools trained to mimic reality just well enough to fool the average scroll-happy consumer.
Examples are everywhere:
- Fake product listings for glow-in-the-dark plant seeds.
- Fake concert announcements using AI-generated posters and fake ticketing links.
- Deepfake AI pets sold by shady digital marketplaces.
- Entire TikTok accounts built to spread fake news via generated avatars.
What makes this so dangerous isn’t just the volume – it’s the emotional bait each piece of slop is built around. Whether it’s political outrage, nostalgic fantasy, or just weird clickbait, AI slop weaponizes the things we want to believe.
And it’s getting more personal by the day.
Deepfakes You’ve Already Seen and Probably Believed
Google’s Veo 3 Tool
In April, a video circulated of what appeared to be a riot during a U.S. election recount. It showed smashed windows, tear gas, and shouting police officers. It was entirely fake – generated with a beta version of Google’s Veo 3, a tool capable of crafting realistic video with synced audio and emotional cues.
Yes, it had a watermark. But viewers ignored it.
Katy Perry at the Met Gala
No, she didn’t show up in a neon-flamingo feathered suit. But the AI-generated photos went viral anyway – even fooling her own mother, according to Perry’s post on Threads. The dress, the lighting, even the expressions were fabricated by Midjourney paired with AI face-tuning tools.
And for a moment, millions believed.
Deepfake School Scandals
In a horrifying case in Maryland, an AI-generated voice recording of a school principal making racist and antisemitic comments was leaked online. It led to death threats and media uproar – only later was it revealed the clip had been synthesized with an AI voice-cloning tool.
The principal was placed on leave before the truth emerged.
Politics, Public Figures, and the Collapse of Trust
Taylor Swift and Explicit AI Imagery
A series of explicit images of Taylor Swift began circulating in January – all generated by AI tools that replicated her face with disturbing accuracy. The images triggered mass reporting, platform bans, and even a legislative proposal for federal protections against AI sexual deepfakes.
But the reputational damage was done. Some fans believed. Some still do.
Global Election Manipulations
Fake speeches, manipulated debate clips, and viral audio quotes have infiltrated elections around the globe:
- Biden accused of statements he never made.
- Macron falsely portrayed supporting fringe policies.
- Ferdinand Marcos Jr. smeared with fake video testimony ahead of elections.
In every case, the platforms moved too slowly. By the time moderation kicked in, the damage was already circulating in private group chats and encrypted platforms.
Why Smart People Fall for Dumb Fakes
This is where things get bleak.
AI-generated content preys not on ignorance, but on confirmation bias. The Katy Perry fan who believes the dress photo is real? They want it to be real. The conspiracy theorist reposting the Devil’s Tower myth? It validates a worldview. The voter who sees a fake clip of a candidate saying something outrageous? It “feels right,” so they hit share.
Even brilliant people fall for these hoaxes because they’re built to exploit how humans process information emotionally before logically.
We are, quite literally, designed to be fooled.
Can We Spot the Lies?
There are still tells. Glitchy fingers. Dead eyes. Smeared jewelry. Audio that cuts at weird places. Shadows that fall the wrong way.
But as tools improve, these clues become subtler. AI tools are now building hands, teeth, and lighting with such precision that only forensic specialists can tell the difference.
Platforms like Snopes, PolitiFact, and Blackbird.ai are building databases of known AI content and issuing real-time alerts. But they remain underfunded, under-read, and always a step behind the creators.
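One common building block of such databases is perceptual hashing: instead of comparing file bytes (which change with every crop, filter, or screenshot), the system compares a compact fingerprint of how the image *looks*. Below is a minimal sketch of the average-hash idea, using a toy 4x4 grayscale grid in place of a real decoded image; the function names and grid values are illustrative, not any fact-checking platform’s actual API.

```python
def average_hash(pixels):
    """Compute a simple average hash: one bit per pixel,
    set when the pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count differing bits between two hashes; a small
    distance suggests the same underlying image."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x4 grayscale grids standing in for downscaled images.
original = [
    [200, 190, 30, 20],
    [210, 185, 25, 15],
    [40, 35, 180, 195],
    [30, 25, 190, 200],
]
# A "filtered" re-upload: every pixel slightly brightened.
filtered = [[min(255, p + 10) for p in row] for row in original]

h1, h2 = average_hash(original), average_hash(filtered)
print(hamming(h1, h2))  # → 0: the filter didn't change the fingerprint
```

Real systems decode and downscale actual images and use more robust hashes, but the principle is the same: a known hoax can be recognized even after it has been cropped, recompressed, or screen-recorded.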
Even Google’s watermarking initiative is falling flat. Most hoax content is cropped, filtered, or screen-recorded to remove telltale signs before sharing.
And there’s one thing no watermark can fix: vigilance fatigue.
The Real-World Fallout
The costs of AI hoaxes go beyond embarrassment.
- Financial fraud through deepfake voice calls now accounts for billions in annual losses.
- Reputation destruction is instantaneous and often irreversible.
- Legal systems are scrambling to distinguish real from synthetic evidence.
Worst of all, even legitimate news is now met with skepticism. When everything could be fake, everything becomes suspect. And that’s a very dangerous place for a democracy to be.
The Trust Collapse: A Timeline in Progress
We’ve passed a tipping point.
Where once “fake news” was a punchline, it’s now a permanent condition of modern information life. Hoaxes have moved from fringe forums to mainstream feeds. And the average person – bombarded by content, exhausted by fact-checking, and pulled by algorithms toward outrage – can’t keep up.
The net result? We stop caring.
And in that vacuum, lies thrive.
What Can Be Done?
Individually:
- Pause before sharing
- Look for source links or video inconsistencies
- Verify with trusted outlets
Institutionally:
- Invest in real-time AI content detection
- Pressure platforms to delay viral spread of unverified content
- Fund independent fact-checking at scale
Politically:
- Legislate deepfake consent laws
- Establish penalties for malicious synthetic content
- Promote AI transparency mandates
Final Word: Believe Nothing at First Glance
Whether it’s a fossilized tree myth, a fake political scandal, or a celebrity image designed to go viral, we’ve entered a world where nothing is real by default.
Facts still matter. But believing blindly – even if it feels right – is the surest way to help misinformation win.
Take a breath. Ask questions. Verify. And most of all – stop feeding the machine with your shares.
Because the future isn’t just about truth vs. lies. It’s about whether we can remember the difference.
FAQs
What is “AI slop” exactly?
It’s low-quality, high-volume AI-generated content — from fake products to political lies — designed to manipulate and flood attention feeds.
Can deepfakes be reliably detected yet?
Not reliably. Detection is improving, but most fakes spread faster than they can be identified, especially in private groups or encrypted platforms.
Are there legal protections against AI hoaxes?
Some countries are beginning to draft laws on AI content labeling, consent for likeness use, and penalties for malicious deepfakes — but enforcement lags behind tech capabilities.
What’s the most effective thing I can do to combat this?
Verify before sharing. Be skeptical of sensational content. Follow trusted sources. And speak up when you see misinformation circulating.
