Twenty years ago, only evil genius super-villains used AI.
I know this from watching The Incredibles (2004) last night.
And when an AI spoke, it sounded all robotic ‘n’ stuff.
This morning, I woke up and decided that wouldn’t be a bad thing, here in 2025.
Not the super-villain part. We have enough of that going on already.
I’m talking about requiring the spoken-language output of AIs to sound “robotic”, so that AI speech is instantly recognizable to humans as machine-generated.
Hence, the need for a Truth-in-AI Act.
Truth-in and commercial speech
Many laws bar deception in various forms of commercial speech. Truth-in-advertising laws. Truth-in-lending laws.
Materially misrepresenting something you are selling is already against the law.
The Truth-in-AI Act would just be another layer of that. It would say that you can’t send me AI-generated speech without making it clear that it is AI-generated speech.
Cue tape
(Image credit: Birchflow, own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=146074295)
I’m old enough to know why the AI’s speech is so disjointed in The Incredibles.
It harks back to the days when the only way to create fake audio of a person’s voice was to cut-and-paste audio tape recordings of that voice. Literally snip and tape together carefully selected sections of audio tape.
The result was definitely a sentence, almost surely saying something inappropriate, in the voice of the person on the original recording. But the sound of each word was lifted completely out of its original context.
This is what the oddly lilting speech of the autopilot in The Incredibles was modeled after.
Even today, we have vestiges of “robotic voice” in (e.g.) legacy systems that send you a security code over the phone. Each digit is pronounced as a stand-alone entity, so that 111 comes out as one-(pause)-one-(pause)-one, and not as “a hundred and eleven”.
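For the curious, here’s a minimal Python sketch of that digit-by-digit readout. It’s purely illustrative (the function and digit table are my own invention, not any phone system’s actual code), but it captures why those systems sound robotic: every digit is rendered as an isolated word with an explicit pause, and nothing is ever read as a whole number.

```python
# Minimal sketch of "robotic" digit-by-digit readout (illustrative only).
# Each digit becomes its own word, with an explicit pause between words,
# so "111" is never spoken as "a hundred and eleven".

DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def robotic_readout(code: str, pause: str = "(pause)") -> str:
    """Render a numeric code one digit at a time, never as a whole number."""
    words = [DIGIT_WORDS[ch] for ch in code if ch.isdigit()]
    return f" {pause} ".join(words)

print(robotic_readout("111"))  # one (pause) one (pause) one
```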
But seriously
See Post #2052: A 15-minute podcast summarizing the issues for a proposed Vienna pool/gym.
The power of AI to persuade (and mislead) the masses hinges largely on our inability to tell that the results are AI-generated.
AI already passes the simple version of the Turing test: you can hold a conversation with one and not know that you’re not talking to a human.
And, as it turns out, being unable to tell AI from reality is a bad thing. This is surely true for AI-faked videos and photos.
It’s also true for audio products like fake two-person podcasts, as referenced above.
For free, Google’s NotebookLM will produce an outstanding, persuasive two-person podcast. Feed it only the information you want it to see, and it will say what you’d like it to say.
I was appalled at how persuasive that free, easily-created two-person podcast was.
But if you took the exact same discussion, and made the AI actors “talk like robots”, it would not be anywhere near as persuasive. All of the brain-bypassing emotional appeal of the two warm human voices would be lost.
You’d have to rely solely on the logic and sense of the underlying argument.
And I’d say we’re desperately in need of that these days.
Conclusion
If I took horsemeat, and labeled it as hamburger so that I could sell it, I’d be liable for fraud.
But if I take an AI-generated “conversation”, and put that out there as if it were human, so that I can “sell” it to the audience — that’s A-OK?
We have to start dealing meaningfully with AI’s outstanding ability to deceive.
If passing off a horse for a cow is illegal, then passing off an AI for a human should be as well.
No more of this sounding-like-a-human stuff. That would be a good start, even if it can only be applied to commercial speech.