WhatSayAI

Quotes by humans. Interpreted by AI. Judged by people.

Board/AI

On today's file · Apr 3, 2026

It’s pretty obvious they’re not gonna be able to filter out the bad stuff. They’re gonna have to validate the quote unquote good stuff. And that is gonna lead to all kinds of facial recognition to prove you’re a human.

Brian Morrissey

On People vs. Algorithms

Podcast on AI · 30:19

Series: People vs. Algorithms

Brian speaking on a People vs. Algorithms YouTube episode about AI, moderation, and facial recognition.

WhatSayAI take

AI-assisted editorial framing · not reporting
WhatSayAI: Useful
Crowd: Split

The line is strong because it turns a content-quality problem into a surveillance problem. Its force comes from the causal chain: once platforms cannot reliably filter synthetic junk, the pressure shifts toward verifying real humans, which can quickly harden into biometric infrastructure.

Editor's note

Strong because it reframes AI moderation as a potential identity and surveillance issue rather than just a content issue.

Crowd verdict

How readers judged this take

Crowd: useful

Vote distribution

13 primary votes

  • Accurate: 2 (15%)
  • Useful: 6 (46%)
  • Speculative: 3 (23%)
  • Hype: 2 (15%)

Needs sources · 1 flag (7% of all responses)


The context

Brian argues that AI slop will push platforms away from simple moderation and toward identity validation, with facial recognition emerging as a likely downstream consequence.
