It’s pretty obvious they’re not gonna be able to filter out the bad stuff. They’re gonna have to validate the quote unquote good stuff. And that is gonna lead to all kinds of facial recognition to prove you’re a human.
Brian Morrissey
Podcast: On AI · 30:19
Series: People vs. Algorithms

WhatSayAI take
AI-assisted editorial framing · not reporting
The line is strong because it turns a content-quality problem into a surveillance problem. Its force comes from the causal chain: once platforms cannot reliably filter synthetic junk, the pressure shifts toward verifying real humans, which can quickly harden into biometric infrastructure.
Editor's note
Strong because it reframes AI moderation as a potential identity and surveillance issue rather than just a content issue.
Crowd verdict
How readers judged this take
Vote distribution
13 primary votes
- Accurate: 2 (15%)
- Useful: 6 (46%)
- Speculative: 3 (23%)
- Hype: 2 (15%)
Needs sources · 1 flag (7% of all responses)
The context
Brian argues that AI slop will push platforms away from simple moderation and toward identity validation, with facial recognition emerging as a likely downstream consequence.