When AI voices sound perfect to human ears, spectrograms tell a different story

🔊 We’ve been testing voices created with Resemble AI and ElevenLabs – both sound completely human to me. I can’t hear the difference. But Aleksei Aleshin ran a spectrogram analysis, and the patterns jumped out immediately. 🎶 I’m sharing three spectrograms below: ElevenLabs, Resemble AI, and a real human voice. See for yourself! What the spectrograms reveal: 👉 … Read more

I realized I don’t trust anything stamped “Created by Human.”

“Created by Human” doesn’t automatically mean it’s true. In his recent article “Scams at Scale”, Seth Godin wrote about how AI-enabled scams are destroying our ability to trust digital interactions. But this erosion of trust might be exactly what we need. His prediction: our circles of trust will shrink, networks will fracture, and it’s going … Read more

Erosion of trust

I love the Cybersecurity Tip challenge happening right now – people sharing quick insights about deepfakes and how easily they can trick us when we’re not paying attention. The core message: awareness is our strongest defence. Before clicking or doing something – pause. Technology can’t think for us, but we can. Know yourself to protect … Read more

YouTube’s likeness-detection technology has officially launched. I’m not sure it’s entirely a good thing.

YouTube revealed that its likeness-detection technology has officially rolled out to eligible creators in the YouTube Partner Program, following a pilot phase. The technology allows creators to request the removal of AI-generated content that uses their likeness—identifying and managing AI-generated content featuring their face and voice. 👋 Kinda related to my narrative about deepfake detection, … Read more

My friend wants a filter to turn off all AI content.

“I just want to see the real deal,” she said. And I agree – I’d like to see more of the imperfect perfect. I’ve been working on deepfake detection and cybersecurity. But I’ve been digging into detection, not into preserving authenticity when everything can be faked. AI creates text that sounds human, voices that sound real, videos … Read more

Turns out, warning people about AI scams doesn’t work. But telling them what AI can actually do? Game changer.

🎙️ Can you tell if that voice on the phone is real or AI? The speaker’s dialect might be clouding your judgment. Fascinating new research reveals a hidden vulnerability: we’re significantly more likely to assume AI voices are human when they speak in regional, minority, or non-standard dialects. ✨ Why? Because we’ve been conditioned to believe … Read more

People are poorly equipped to detect AI-powered voice clones

Think you could spot an AI voice clone? Spoiler: you probably can’t. And neither can I. I’ve read a paper by Sarah Barrington, Emily A. Cooper & Hany Farid from UC Berkeley, and… Well, in a nutshell, we are surprisingly bad at detecting AI-generated voice clones, both in terms of recognizing whether it’s the same person … Read more

Experiment to detect voice deepfakes

So, I was wondering how we can spot AI-created voices in practice. It’s awesome to have friends like Aleksei Aleshin, a sound engineer, who set up a quick experiment to detect voice deepfakes. Here’s what he did: 🔊 He took several real and synthetic voices, used Python and Librosa to extract audio features (MFCC, spectrum, pitch, etc.), and compared … Read more

Anatomy of a Deepfake Social Engineering Attack 🕵️

What makes a voice social engineering attack actually work? Is it the perfect content? The conversational flow? The voice clone quality? 🤔 I just read about an experiment run by Reality Defender, written up by Dharva Khambholia. They conducted a case study demonstrating how attackers can use AI voice cloning and conversational AI to execute social engineering attacks. The scenario: … Read more

Spotting the Fake: A Dangerous Overconfidence Gap

🔍 My research on deepfake threats reveals a dangerous overconfidence gap. A lot of people I interview say: “Sure, it’s dangerous, but I’ll know.” They’re worried about protecting elderly relatives or decision-impaired individuals, but believe they can easily spot fakes themselves. Bad actors use audio attacks more frequently, while malicious video deepfakes are mostly used for … Read more
