Vishing. You heard it here first. Well, second, if you read Channel News Asia.
Can people tell a real voice from an AI-generated one? We put it to the test
Channel News Asia

AI-related fraud attempts surged by nearly 200 per cent in 2024, according to a cybersecurity firm. CNA takes a closer look at deepfake voice phishing, an increasingly common method used by scammers.
“Hi, how have you been? So I heard about this restaurant that just recently opened up. Want to go check it out the next time we meet?” These were the innocuous sentences used in a simple experiment to find out if people in my social circle could distinguish my real voice from an AI-cloned version.
The result was some confusion – but more importantly, the ease with which the imitation was generated suggests that more attention should be paid to the phenomenon of deepfake voice phishing, or vishing.
♪ “When you vish a porno star . . .” ♪
Millions of dollars have been lost to scammers using cheap yet increasingly sophisticated artificial intelligence tools to impersonate the voices of real people.
… AI-related fraud attempts surged by 194 per cent in 2024 compared to 2023, with deepfake vishing emerging as one of the most commonly used methods, said the company’s senior fraud analyst for the region, Yuan Huang.
“Vanity, V.D., Vishing.”
— Bilious Caesar