The Slop Loop: How AI and Short Video Make Truth Expensive

How low‑effort AI and high‑speed feeds could break research, politics—and our sense of what’s real

When models train on their own exhaust and voters doomscroll synthetic video, truth becomes expensive. We can still cheapen it—if platforms and policymakers act.

1) The supply shock: AI slop meets recursive training

The internet is filling with AI slop [low‑effort, low‑quality AI content]. That isn’t just an aesthetic problem. Generative models increasingly ingest the very outputs they produce. Theory and experiments show that when models feed on model‑made data, they “forget” the tails of human creativity—a dynamic researchers dub model collapse. Left unchecked, each training round narrows what systems can discover, worsening homogenisation in search, summarisation and even science.  
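The dynamic is easy to see in miniature. In this toy sketch (illustrative only, not the paper’s actual experiments), a “model” is just a Gaussian fitted to its data; each generation trains on samples drawn from the previous generation’s fit, and the spread of the distribution—its tails—decays:

```python
import random
import statistics

random.seed(0)

def train_generation(data, n_samples):
    """Fit a toy 'model' (a Gaussian) to the data, then emit synthetic samples."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0: "human" data with a healthy spread.
data = [random.gauss(0.0, 1.0) for _ in range(10)]
spreads = [statistics.stdev(data)]

# Each generation trains only on the previous generation's output.
for _ in range(50):
    data = train_generation(data, 10)
    spreads.append(statistics.stdev(data))

print(f"generation 0 spread:  {spreads[0]:.3f}")
print(f"generation 50 spread: {spreads[-1]:.3f}")
```

Each refit estimates the spread from a finite sample, so the estimate performs a random walk with downward drift; over generations the tails wash out. That is the statistical core of model collapse.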

Add adversaries and the picture darkens. Poisoning web‑scale datasets is cheap enough that curators will not spot it: researchers estimate that about $60 could have tainted popular image corpora, making future models brittle or biased in targeted ways.  

Zoom out and a familiar macro risk appears: algorithmic monoculture. When many decision‑makers converge on the same systems or ranking functions, average accuracy can fall and systemic fragility rises—even before shocks arrive. In information markets, a monoculture of recommendation and re‑generation magnifies slop, then feeds on it.  

By the numbers

• 40%: global trust in news—stuck at a low plateau.  

• 33% of people use TikTok; 17% use it for news—usage growing fastest among the young.  

• ~55%: pooled human accuracy at spotting deepfakes—barely above chance.  

• −46% reposts, −44% likes after an X note is attached to a misleading post.  

• $186 per worker/month: cost of AI “workslop” in US offices.  

2) The demand shock: short video compresses attention

Truth struggles not only because supply degrades; demand is being rewired by design. Short‑video feeds reward novelty, moral emotion and speed—exactly the traits that boost diffusion of polarised content and negative headlines. That is measurable: each extra “moral‑emotional” word raises the odds a political tweet spreads, and each extra negative word in a headline lifts click‑through by about 2.3%.  

Collective attention itself is shortening across media as more content competes for fixed cognitive bandwidth: churn accelerates and dwell time shrinks. Meanwhile, real usage data show that TikTok builds daily habits quickly; even initially light users escalate to well over an hour a day, pushing decision‑relevant information into ever tighter frames.  

Why this matters for research and decisions: when evidence must fight for milliseconds, methods devolve toward vibes. In such markets, slop that is emotive, familiar and frictionless beats slow, careful work.

3) Blurring reality: Sora, deepfakes and the liar’s dividend

As synthetic video matures, the boundary between reportage and make‑believe erodes. OpenAI’s Sora 2 launch marks both a leap in realism (now with synchronised audio) and a shift in distribution: the model ships inside a consumer app. Sensibly, the firm applies visible watermarks and C2PA [an open provenance standard] metadata by default. That matters, because provenance will become the “receipt” of authenticity even when the pixels look perfect.  

Alas, humans are mediocre deepfake detectors—near coin‑flip in aggregate—so realistic fabrications will often pass casual scrutiny. Worse, rising deepfake awareness turbocharges the liar’s dividend: the ability to dismiss inconvenient truths as “AI”. Together, these forces make ordinary users less able and less willing to sort real from fake.  

Platforms are edging toward better labelling. YouTube now requires disclosure of “realistic” synthetic content; TikTok has begun labelling AI media from other tools using C2PA signals. Good—but labels must travel with downloads and edits, not just live on one platform.  

4) Politics in the feed: propaganda, micro‑targeting and platform responses

State‑aligned actors already flood platforms with low‑quality influence content (China‑linked “DRAGONBRIDGE” is the canonical case), and while most such spam draws little organic engagement, the flood still clogs discovery and fuels confusion.  

Regulators are moving. Under the DSA [the EU’s platform‑content law], the European Commission is probing TikTok’s ad transparency; separately, the bloc’s new TTPA [rules on transparency and targeting of political ads] has prompted Meta, and now Google, to halt political ads in the EU rather than re‑tool for compliance. That choice closes one channel for micro‑targeted propaganda, but it pushes more contestation into organic feeds, creator “newsfluencers” and encrypted groups, where transparency is weaker.  

Can crowd‑fact‑checking help? Evidence is mixed but encouraging on post‑level impacts. Rigorous work finds X’s Community Notes (formerly Birdwatch) can materially reduce diffusion—roughly halving re‑shares and likes—once a note appears. But notes often arrive late; and labelling a subset of false posts risks the implied truth effect, where unlabelled falsehoods seem more credible. Design details—speed, coverage, note quality, and when labels appear—decide outcomes.  

5) When factuality becomes a luxury

If slop is cheap and attention is scarce, verification becomes a luxury good. Knowledge workers already pay a hidden tax for “workslop”—plausible‑looking but wrong memos, slides and emails that take hours to unwind. That same tax hits citizens sifting political clips and “AI‑news” in their off‑hours. Over time, the people and institutions willing (or able) to pay for provenance, corroboration and curated context will diverge from those who cannot. That split—more than any one election—would be the real democratic recession.  

6) Making truth cheap again: a policy‑and‑product playbook

A. Provenance by default. Mandate tamper‑evident credentials for realistic synthetic media across major platforms and devices; make labels travel with files and show prominently in embeds. The EU AI Act’s transparency duties and YouTube’s disclosure rules are a start; extend them to default, cross‑platform C2PA.  
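Why does a credential beat a caption? Because it can be bound to the exact bytes. The real C2PA standard uses signed manifests and certificate chains; the sketch below is a deliberately simplified stand‑in (hypothetical names, a shared HMAC key instead of public‑key certificates) that shows the core property: the label travels with the file and any silent edit breaks it.

```python
import hashlib
import hmac
import json

# Hypothetical signer; real C2PA uses X.509 certificate chains, not a shared key.
SIGNING_KEY = b"platform-secret"

def attach_credential(media: bytes, label: str) -> dict:
    """Bind a provenance label to the exact media bytes via a keyed signature."""
    claim = {"label": label, "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_credential(media: bytes, credential: dict) -> bool:
    """A downstream platform re-checks both the signature and the byte hash."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False
    return credential["claim"]["sha256"] == hashlib.sha256(media).hexdigest()

clip = b"\x00synthetic-video-bytes"
cred = attach_credential(clip, "AI-generated")
print(verify_credential(clip, cred))              # True: label survives a re-upload
print(verify_credential(clip + b"-edited", cred))  # False: any edit breaks the binding
```

The failure on edit is the point: edits must trigger re‑signing, which is exactly why labels need to travel with downloads and derivatives rather than live on one platform.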

B. Slow the first hop. Introduce “friction at virality”: when content spreads unusually fast (especially around politics or health), platforms should slow the boost until either (i) provenance is verified, or (ii) high‑quality notes appear. Evidence suggests crowd context can cut virality sharply if delivered before the cascade peaks.  
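A minimal sketch of such a circuit‑breaker, with hypothetical thresholds and field names (real platforms would tune these against their own diffusion data):

```python
from dataclasses import dataclass

@dataclass
class Post:
    shares_last_hour: int
    topic: str
    provenance_verified: bool
    has_quality_note: bool

# Illustrative values only; operational thresholds would be set empirically.
VIRALITY_THRESHOLD = 1000
SENSITIVE_TOPICS = {"politics", "health"}

def boost_multiplier(post: Post) -> float:
    """Throttle the algorithmic boost for fast-spreading sensitive posts
    until provenance is verified or high-quality crowd context arrives."""
    going_viral = post.shares_last_hour > VIRALITY_THRESHOLD
    sensitive = post.topic in SENSITIVE_TOPICS
    if going_viral and sensitive and not (post.provenance_verified or post.has_quality_note):
        return 0.2  # slow, don't remove: friction, not censorship
    return 1.0

print(boost_multiplier(Post(5000, "politics", False, False)))  # throttled
print(boost_multiplier(Post(5000, "politics", True, False)))   # full boost
```

The design choice is deliberate: nothing is taken down, and the throttle lifts automatically once either condition is met, which keeps the intervention content‑neutral.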

C. Evidence trays. For any item labelled synthetic or contested, show an expandable tray with original‑source candidates (archives, wire copies), model provenance, and fact‑checks. This offsets the implied truth effect by offering alternatives, not just warnings.  

D. Anti‑monoculture incentives. Search, social and LLM providers should publish overlap scores and diversify their training and ranking inputs, much as finance imposes concentration limits, to break the slop loop. Models of algorithmic monoculture show that such diversity carries real welfare gains.  
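Publishing an overlap score could be as simple as comparing top‑k results across providers. A sketch using a Jaccard measure (names and data hypothetical):

```python
def overlap_at_k(ranking_a, ranking_b, k=10):
    """Jaccard overlap of two systems' top-k results: 1.0 = perfect monoculture."""
    top_a, top_b = set(ranking_a[:k]), set(ranking_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

engine_a = ["doc1", "doc2", "doc3", "doc4"]
engine_b = ["doc2", "doc1", "doc5", "doc6"]
print(overlap_at_k(engine_a, engine_b, k=4))  # 2 shared of 6 distinct, ~0.33
```

Tracked over time and across providers, a rising score would flag convergence before it hardens into monoculture.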

E. Research hygiene. Journals, funders and firms should adopt “RAG‑first” practices (RAG = retrieval‑augmented generation [LLM linked to sources]) for summaries and drafts, with compulsory bibliographies and data deposits; training pipelines must include human‑origin “clean rooms” to resist recursive collapse.  
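A “RAG‑first” drafting step can be enforced mechanically: no retrieved source, no draft. This stub (toy corpus and keyword retriever, stand‑ins for a real retrieval pipeline and language model) shows the shape of the contract:

```python
# Toy corpus; a real pipeline would index archives, wire copy and data deposits.
CORPUS = {
    "shumailov2023": "Recursive training on model output causes model collapse",
    "lorenz2019": "Collective attention spans are shortening across media",
}

def retrieve(query: str):
    """Return ids of documents sharing any keyword with the query."""
    q = set(query.lower().split())
    return [doc_id for doc_id, text in CORPUS.items() if q & set(text.lower().split())]

def draft_with_bibliography(query: str):
    """Refuse to draft unsupported text; attach compulsory citations."""
    sources = retrieve(query)
    if not sources:
        return {"draft": None, "bibliography": [], "note": "no supporting sources found"}
    draft = " ".join(CORPUS[s] for s in sources)
    return {"draft": draft, "bibliography": sources}

out = draft_with_bibliography("model collapse from recursive training")
print(out["bibliography"])  # every claim is traceable to a source
```

The key property is the refusal path: a summary with an empty bibliography never ships, which is the pipeline‑level analogue of a compulsory data deposit.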

F. Ad transparency that survives platform exits. As major platforms retreat from EU political ads, regulators should require public, standardised reporting of political spend and targeting across all channels (search, influencer, CTV, creators), or risk the debate shifting to darker venues.  

7) The kicker

We do not need to choose between creativity and credibility. But we do need to price provenance and pause back into the feed. If platforms can make the wrong clip spread in seconds, they can make the right context spread just as fast.

Sources (selected)

  • Shumailov et al. (2023), The Curse of Recursion—model collapse.  
  • Lorenz‑Spreen et al. (2019), Accelerating dynamics of collective attention.  
  • Reuters Institute (2025), Digital News Report—trust and TikTok use.  
  • OpenAI (2025), Launching Sora responsibly—watermarks & C2PA.  
  • Slaughter et al. (2025), PNAS—Community Notes reduce virality.