Taming the infinite scroll: a public‑health mandate

Attention isn’t just personal grit; it’s a public good. Treat short‑video feeds like a safety‑critical system: labels, defaults, audits and data access across jurisdictions.

Modern economies leak attention like a faulty pipe. Average on‑screen focus now resets in ~47 seconds; daily reading‑for‑pleasure has fallen by ~40% over two decades. Meanwhile, nearly half of US teens say they are online almost constantly. None of this proves simple causation; it does establish a base rate that warrants action. In public‑health terms the markers are present: high prevalence, meaningful severity (especially for adolescents), clear externalities (lost learning, error‑prone work), modifiable risks (autoplay, infinite scroll, push alerts) and feasible interventions.

The market failure

Short‑video platforms optimise for watch‑time using designs that mimic variable‑ratio reinforcement [unpredictable rewards; the slot‑machine schedule]. Infinite scroll, autoplay and push alerts externalise interruption costs onto households, classrooms and workplaces. Crucially, when researchers removed only mobile internet for two weeks—texts and desktop intact—participants’ sustained attention and well‑being improved materially. That is a policy‑scale hint: change defaults, change outcomes.

The risk is dynamic, not static. A new real‑user study of 1,100 TikTok accounts found that even light users roughly doubled their daily watch time over six months. The curve is endogenous; usage begets usage. Regulation must therefore target system behaviour, not individual virtue.

From self‑help to systems: the attention diet as regulation

Think of an “attention diet” not as self‑denial but as market design: labels, budgets and defaults at the product layer, plus audits and data access at the system layer.

  1. Labels (calories for cognition).
    • Warning labels for minors, as urged by the US Surgeon General, frame the baseline risk. But add attention‑nutrition panels—notifications/day; autoplay status; average time/session—visible in app stores and settings. Like food labels, they don’t ban; they disclose. (A schema sketch follows this list.)
  2. Budgets (age‑appropriate ceilings).
    • For under‑16s, autoplay off and finite scroll by default; time‑of‑day curfews that respect sleep and school hours; counters that reset only after offline intervals. Australia’s new minimum‑age law for social media provides the legal handle; make the defaults align.
  3. Defaults (friction beats willpower).
    • Notification batching (e.g., three drops/day) reduces stress without FoMO spikes. Creator‑first defaults (draft before feed) and For‑Later queues convert roulette into reading lists. These are small levers with measured effects.
  4. Audits and access (accountability, not vibes).
    • The EU’s DSA (its online‑platforms law) already mandates systemic‑risk assessments for VLOPs (very large online platforms)—including risks to public health and minors’ well‑being—and opens Article 40 data access for vetted researchers. Export that architecture: audited risk logs, independent experiments, and public dashboards.
  5. Evidence at scale (policy RCTs).
    • Run city‑ or district‑wide trials that toggle autoplay, scroll limits, or labels with pre‑registered outcomes (sleep, anxiety screens, reading minutes, exam scores). The two‑week mobile‑internet RCT shows behavioural gains are available; governments should fund the follow‑through.
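
To make the label idea concrete, here is a minimal sketch of the data an attention‑nutrition panel might disclose, written in Python for legibility. Every field name is an illustrative assumption, not an existing standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AttentionNutritionPanel:
    """Illustrative disclosure panel; all field names are hypothetical."""
    app_name: str
    notifications_per_day: float      # average push alerts per active user
    autoplay_default: bool            # does the next video play without a tap?
    infinite_scroll: bool             # is there a built-in stopping point?
    avg_session_minutes: float        # mean length of a continuous session
    minor_defaults_published: bool    # are under-16 defaults documented?

panel = AttentionNutritionPanel(
    app_name="ExampleFeed",           # hypothetical app
    notifications_per_day=14.2,
    autoplay_default=True,
    infinite_scroll=True,
    avg_session_minutes=23.5,
    minor_defaults_published=False,
)
print(json.dumps(asdict(panel), indent=2))  # what an app store could render
```

Like a food label, the value is machine‑readable comparability: a store listing can render the same fields for every feed, so parents and auditors compare like with like.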

A cross‑national playbook

United States. Pair warning labels and public‑health messaging with a federal baseline for age‑appropriate defaults. Reintroduce KOSA (the US minors’ online‑safety bill) with precise design mandates (autoplay off, robust age checks, independent audits) and safe‑harbour protections for privacy‑preserving research access. Use federal procurement to require attention‑friendly defaults in official apps.

European Union. Enforce DSA risk assessments and mitigations with a named “attention harm” class and require VLOPs to offer a chronological feed and to publish shrink tests (impact of disabling engagement features on minors’ outcomes). Use the new researcher‑access delegated act to open multi‑country panels.

United Kingdom. Under the Online Safety Act, Ofcom’s draft children’s codes already push algorithmic taming and age checks. Add attention metrics to compliance (notifications/day, autoplay status) and align school‑day phone rules with clear guidance on storage, exceptions and enforcement.

Australia. With an under‑16s minimum‑age regime taking effect by December 2025, set pragmatic definitions of “reasonable steps” (high‑accuracy age assurance; family‑device attestation) and require public accuracy audits of age checks to avoid false positives.
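
Why audit accuracy? Base rates bite: when under‑16s are a minority of sign‑ups, even a good age‑assurance model mislabels many adults. A quick illustration, with every number assumed:

```python
# Base-rate arithmetic for age assurance. All inputs are illustrative.
under_16_share = 0.15   # assumed share of under-16s among sign-ups
sensitivity = 0.95      # assumed: correctly flags under-16s
specificity = 0.95      # assumed: correctly passes adults

true_flags = under_16_share * sensitivity                # minors correctly flagged
false_flags = (1 - under_16_share) * (1 - specificity)   # adults wrongly flagged
precision = true_flags / (true_flags + false_flags)
print(f"adults among 'under-16' flags: {1 - precision:.0%}")  # ~23%
```

Under these assumptions, roughly one in four “under‑16” flags is an adult, which is exactly why published accuracy audits, not vendor claims, should define “reasonable steps”.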

Schools as the high‑yield lever

School‑day smartphone prohibitions are becoming the norm (England’s guidance; France’s tightening practice). The best evidence suggests test‑score gains concentrate among lower achievers when phone bans are enforced, but bans alone don’t fix sleep or anxiety. Marry phone‑free days with print‑reading minutes and digital‑hygiene lessons.

Productivity is a public‑health externality

Interrupted workers “go faster” to catch up, raising stress and error risk. The mere presence of a smartphone can sap cognitive capacity—replication is mixed, but the prudent policy is out of sight in safety‑critical zones and batched alerts elsewhere. Governments can lead by setting attention standards for the public sector and contractors.

How to measure success

  • Exposure: minutes of autoplay exposure per minor/day; notifications/day; share of time in chronological feeds. (A computation sketch follows this list.)
  • Outcomes: reading‑for‑pleasure minutes (time‑use diaries), sleep duration, YRBS (CDC teen health survey) indicators, and school exam variance.
  • Process: share of VLOPs with published risk logs, independent audits, and researcher APIs.
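
These metrics only matter if every jurisdiction computes them the same way. A minimal sketch, assuming a simple per‑session event log; the log schema is our invention:

```python
# Computing two exposure metrics from a per-session log.
# The log fields (user_id, minutes, feed_type, autoplay) are hypothetical.
events = [
    {"user_id": "u1", "minutes": 34.0, "feed_type": "algorithmic",   "autoplay": True},
    {"user_id": "u1", "minutes": 12.0, "feed_type": "chronological", "autoplay": False},
    {"user_id": "u2", "minutes": 55.0, "feed_type": "algorithmic",   "autoplay": True},
]

total = sum(e["minutes"] for e in events)
autoplay_minutes = sum(e["minutes"] for e in events if e["autoplay"])
chrono_share = sum(e["minutes"] for e in events
                   if e["feed_type"] == "chronological") / total

print(f"autoplay exposure: {autoplay_minutes:.0f} min")  # 89 min
print(f"chronological share: {chrono_share:.0%}")        # 12%
```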

Objections—and replies

  • “This is parental, not public, responsibility.” Seatbelts and food labels did not replace personal responsibility; they raised the baseline. So will safer defaults.
  • “It will harm innovation.” Transparency, interoperable controls and audits are pro‑innovation; they make quality legible and reduce arms‑race externalities.
  • “Evidence is mixed.” True for causal links to all harms, less so for design exposures (autoplay, infinite scroll) and usage intensity. Base‑rate trends and RCT signals justify low‑regret guardrails.

The kicker

Treat attention like clean air for cognition: invisible, essential, taken for granted until it degrades. Set labels, budgets and defaults; demand audits and data. The feeds won’t police themselves.

By the numbers

  • ~47 seconds: average on‑screen focus before switching. (gloriamark.com)
  • −40%: decline in US reading‑for‑pleasure, 2003–2023. (Cell)
  • ~50%: US teens online “almost constantly.” (Pew Research Center)
  • RCT: 2 weeks without mobile internet → better attention & well‑being. (OUP Academic)
  • Escalation: even light TikTok users double daily watch time in 6 months. (The Washington Post)
  • Policy shift: Australia’s under‑16 minimum‑age law effective Dec 2025. (Infrastructure and Transport Dept)

Sources (selected)

  1. US Surgeon General advisory on social media & youth mental health. (HHS.gov)
  2. PNAS Nexus (2025) RCT: blocking mobile internet improves attention and well‑being. (OUP Academic)
  3. iScience (2025): 20‑year decline in US reading for pleasure (ATUS). (Cell)
  4. Pew Research (2025) Teens & social media fact sheet—“almost constantly” online. (Pew Research Center)
  5. EU DSA researcher‑access (Article 40) explainer. (algorithmic-transparency.ec.europa.eu)

The Slop Loop: How AI and Short Video Make Truth Expensive

How low‑effort AI and high‑speed feeds could break research, politics—and our sense of what’s real

When models train on their own exhaust and voters doomscroll synthetic video, truth becomes expensive. We can still cheapen it—if platforms and policymakers act.

1) The supply shock: AI slop meets recursive training

The internet is filling with AI slop [low‑effort, low‑quality AI content]. That isn’t just an aesthetic problem. Generative models increasingly ingest the very outputs they produce. Theory and experiments show that when models feed on model‑made data, they “forget” the tails of human creativity—a dynamic researchers dub model collapse. Left unchecked, each training round narrows what systems can discover, worsening homogenisation in search, summarisation and even science.  
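
The tail‑loss dynamic is easy to feel in a toy example. Below, each “generation” fits a Gaussian to samples drawn from the previous generation’s fit; finite samples bias the fitted spread low on average, and once tail mass is gone the next generation cannot rediscover it. This is a cartoon of the mechanism, not the paper’s experiment.

```python
# Toy model-collapse loop: generation t trains (fits a Gaussian) on data
# sampled from generation t-1's model. A cartoon of Shumailov et al.,
# not their setup; the sample size and generation count are arbitrary.
import random, statistics

random.seed(7)
mu, sigma = 0.0, 1.0     # generation 0: the "human" distribution
n = 25                   # small samples exaggerate the drift

for gen in range(1, 21):
    data = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)  # refit
    print(f"gen {gen:2d}: sigma = {sigma:.3f}")
# On average sigma shrinks each round (finite-sample bias), and the process
# can never restore diversity already missing from its own training data.
```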

Add adversaries and the picture darkens. It is cheap to poison web‑scale datasets in ways curators will not spot—$60 could have tainted popular image corpora—making future models brittle or biased in targeted ways.  

Zoom out and a familiar macro risk appears: algorithmic monoculture. When many decision‑makers converge on the same systems or ranking functions, average accuracy can fall and systemic fragility rises—even before shocks arrive. In information markets, a monoculture of recommendation and re‑generation magnifies slop, then feeds on it.  

By the numbers

• 40%: global trust in news—stuck at a low plateau.  

• 33% of people use TikTok; 17% use it for news—usage growing fastest among the young.  

• ~55%: pooled human accuracy at spotting deepfakes—barely above chance.  

• −46% reposts, −44% likes after an X note is attached to a misleading post.  

• $186 per worker/month: cost of AI “workslop” in US offices.  

2) The demand shock: short video compresses attention

Truth struggles not only because supply degrades; demand is being rewired by design. Short‑video feeds reward novelty, moral emotion and speed—exactly the traits that boost diffusion of polarised content and negative headlines. That is measurable: each extra “moral‑emotional” word raises the odds a political tweet spreads, and each extra negative word in a headline lifts click‑through by about 2.3% (three extra negative words compound to roughly a 7% lift).

Collective attention itself is shortening across media, as more content competes for fixed cognitive bandwidth. The churn is faster; dwell time shrinks.  Meanwhile, real usage data show TikTok can train daily habits quickly; even light users escalate to well over an hour a day, pushing decision‑relevant information into ever tighter frames.  

Why this matters for research and decisions: when evidence must fight for milliseconds, methods devolve toward vibes. In such markets, slop that is emotive, familiar and frictionless beats slow, careful work.

3) Blurring reality: Sora, deepfakes and the liar’s dividend

As synthetic video matures, the boundary between reportage and make‑believe erodes. OpenAI’s Sora 2 launch is a leap in realism (now with synchronised audio) and a distribution shift: a consumer app. Sensibly, the firm ships visible watermarks and C2PA [an open provenance standard] metadata by default—important, because provenance will become the “receipt” of authenticity even when pixels look perfect.  

Alas, humans are mediocre deepfake detectors—near coin‑flip in aggregate—so realistic fabrications will often pass casual scrutiny. Worse, rising deepfake awareness turbocharges the liar’s dividend: the ability to dismiss inconvenient truths as “AI”. Together, these forces make ordinary users less able and less willing to sort real from fake.  

Platforms are edging toward better labelling. YouTube now requires disclosure of “realistic” synthetic content; TikTok has begun labelling AI media from other tools using C2PA signals. Good—but labels must travel with downloads and edits, not just live on one platform.  

4) Politics in the feed: propaganda, micro‑targeting and platform responses

State‑aligned actors already flood platforms with low‑quality influence content (China‑linked “DRAGONBRIDGE” is the canonical case), and while most such spam draws little organic engagement, the flood still clogs discovery and fuels confusion.  

Regulators are moving. Under the DSA [the EU’s content law], the Commission is probing TikTok’s ad transparency; separately, the bloc’s new TTPA [rules on transparency/targeting of political ads] has prompted Meta—and now Google—to halt political ads in the EU, rather than re‑tool for compliance. That choice reduces one channel for micro‑targeted propaganda but pushes more contestation into organic feeds, creator “newsfluencers” and encrypted groups where transparency is weaker.

Can crowd‑fact‑checking help? Evidence is mixed but encouraging on post‑level impacts. Rigorous work finds X’s Community Notes (formerly Birdwatch) can materially reduce diffusion—roughly halving re‑shares and likes—once a note appears. But notes often arrive late; and labelling a subset of false posts risks the implied truth effect, where unlabelled falsehoods seem more credible. Design details—speed, coverage, note quality, and when labels appear—decide outcomes.  

5) When factuality becomes a luxury

If slop is cheap and attention is scarce, verification becomes a luxury good. Knowledge workers already pay a hidden tax for “workslop”—plausible‑looking but wrong memos, slides and emails that take hours to unwind. That same tax hits citizens sifting political clips and “AI‑news” in their off‑hours. Over time, the people and institutions willing (or able) to pay for provenance, corroboration and curated context will diverge from those who cannot. That split—more than any one election—would be the real democratic recession.  

6) Making truth cheap again: a policy‑and‑product playbook

A. Provenance by default. Mandate tamper‑evident credentials for realistic synthetic media across major platforms and devices; make labels travel with files and show prominently in embeds. The EU AI Act’s transparency duties and YouTube’s disclosure rules are a start; extend them to default, cross‑platform C2PA.  

B. Slow the first hop. Introduce “friction at virality”: when content spreads unusually fast (especially around politics or health), platforms should slow the boost until either (i) provenance is verified, or (ii) high‑quality notes appear. Evidence suggests crowd context can cut virality sharply if delivered before the cascade peaks.  
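
What that brake could look like in ranking code, as a minimal sketch; the surge threshold and damping factor are invented for illustration:

```python
# Damp a post's ranking boost while it spreads unusually fast and carries
# neither verified provenance nor a high-quality note. All thresholds are
# illustrative assumptions, not any platform's actual policy.
def distribution_multiplier(shares_last_hour: int,
                            baseline_per_hour: float,
                            provenance_verified: bool,
                            has_quality_note: bool,
                            surge_threshold: float = 10.0,
                            damping: float = 0.2) -> float:
    """Factor in (0, 1] applied to the post's ranking score."""
    surge = shares_last_hour / max(baseline_per_hour, 1.0)
    if surge < surge_threshold:
        return 1.0            # normal spread: no intervention
    if provenance_verified or has_quality_note:
        return 1.0            # context attached: release the brake
    return damping            # viral, unverified, unannotated: slow it

print(distribution_multiplier(5000, 100, False, False))  # 0.2 -> throttled
print(distribution_multiplier(5000, 100, True,  False))  # 1.0 -> released
```

The key property is that the brake is conditional and temporary: attaching provenance or a note restores full distribution, rewarding context rather than suppressing content.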

C. Evidence trays. For any item labelled synthetic or contested, show an expandable tray with original‑source candidates (archives, wire copies), model provenance, and fact‑checks. This offsets the implied truth effect by offering alternatives, not just warnings.  

D. Anti‑monoculture incentives. Search, social and LLM providers should publish overlap scores and diversify training and ranking inputs—akin to financial concentration limits—to avoid the slop loop. Formal models of monoculture show real welfare gains from diversity.
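
An “overlap score” need not be exotic: Jaccard similarity of two providers’ top‑k results for the same query is one candidate. The metric choice is our assumption, not a regulatory standard.

```python
# One candidate overlap score: Jaccard similarity of two providers'
# top-k results for the same query. High average overlap across many
# queries would signal monoculture.
def jaccard_at_k(ranking_a: list[str], ranking_b: list[str], k: int = 10) -> float:
    top_a, top_b = set(ranking_a[:k]), set(ranking_b[:k])
    return len(top_a & top_b) / len(top_a | top_b)

provider_a = ["doc1", "doc2", "doc3", "doc4", "doc5"]
provider_b = ["doc2", "doc1", "doc9", "doc3", "doc7"]
print(f"overlap@5: {jaccard_at_k(provider_a, provider_b, k=5):.2f}")  # 0.43
```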

E. Research hygiene. Journals, funders and firms should adopt “RAG‑first” practices (RAG = retrieval‑augmented generation [LLM linked to sources]) for summaries and drafts, with compulsory bibliographies and data deposits; training pipelines must include human‑origin “clean rooms” to resist recursive collapse.  
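
A “RAG‑first” norm can be enforced mechanically: refuse to emit a draft that lacks retrieved sources. A minimal sketch; retrieve() and summarise() are placeholders for whatever pipeline a team actually runs, not a specific library.

```python
# Gate for "RAG-first" drafting: no sources, no summary. The retrieve()
# and summarise() callables are placeholders, not a particular framework.
def rag_first_summary(question: str, retrieve, summarise) -> dict:
    sources = retrieve(question)          # human-origin documents only
    if not sources:
        raise ValueError("no sources retrieved; refusing to draft from model memory")
    draft = summarise(question, sources)  # model output grounded in sources
    return {
        "summary": draft,
        "bibliography": [s["url"] for s in sources],  # compulsory citations
    }
```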

F. Ad transparency that survives platform exits. As major platforms retreat from EU political ads, regulators should require public, standardised reporting of political spend and targeting across all channels (search, influencer, CTV, creators), or risk the debate shifting to darker venues.  

7) The kicker

We do not need to choose between creativity and credibility. But we do need to price provenance and pause back into the feed. If platforms can make the wrong clip spread in seconds, they can make the right context spread just as fast.

Sources (selected)

  • Shumailov et al. (2023), The Curse of Recursion—model collapse.  
  • Lorenz‑Spreen et al. (2019), Accelerating dynamics of collective attention.  
  • Reuters Institute (2025), Digital News Report—trust and TikTok use.  
  • OpenAI (2025), Launching Sora responsibly—watermarks & C2PA.  
  • Slaughter et al. (2025), PNAS—Community Notes reduce virality.