
BEN JACOB: Deepfakes - The Next Human Vulnerability For Businesses?

The danger lies not in code, but in people. Familiar voices and faces can no longer be trusted as proof of authenticity.

by BEN JACOB

Opinion | 31 October 2025 - 10:00

In Summary


  • In one case from February 2024, a Hong Kong company employee was tricked into transferring €24 million after joining what appeared to be a legitimate video call.
  • Everything seemed real: the accent, tone, and mannerisms.
Synthetic audio and video—better known as deepfakes—have moved beyond entertainment or political misuse to become a key tool in cybercrime. What was once a tech novelty is now a serious business risk.

The danger lies not in code, but in people. Familiar voices and faces can no longer be trusted as proof of authenticity. Attackers now use cloned voices and fake videos to deceive employees into making costly mistakes.

In one case from February 2024, a Hong Kong company employee was tricked into transferring €24 million after joining what appeared to be a legitimate video call. Everything seemed real: the accent, tone, and mannerisms.

Deepfake attacks are spreading fast. A 2024 report by Anozr Way estimated their number could rise from 500,000 in 2023 to over 8 million by 2025. These attacks exploit one of our most basic instincts—trust in human interaction.

Today, cloning a voice takes only a few seconds of public audio from platforms like YouTube or TikTok. Within minutes, attackers can create convincing voices and use them in large-scale scams, including automated phone calls that sound real.

This marks a major shift: hackers no longer “break in”; they “log in.” By stealing credentials or impersonating trusted figures, they bypass traditional security systems entirely. Deepfakes make it even easier by allowing criminals to mimic identity itself—voice, face, and behavior.

Recent breaches show that identity has become the new battleground. As companies strengthen firewalls and passwords, attackers are focusing on manipulating human trust instead.

A fake call from a “CEO” or a realistic video meeting can easily trick staff into sharing confidential information or approving payments. The line between genuine and fake interaction is fading fast.

Most companies still train employees to spot phishing emails—but not fake calls or videos. That is a dangerous blind spot. Deepfakes exploit urgency and emotional pressure, leaving employees little time to notice subtle errors in timing or speech.

Organizations must strengthen their defenses by teaching employees to verify any unusual voice or video requests, even when they appear to come from familiar sources.

This includes using secondary checks such as follow-up messages or questions that only legitimate colleagues would know.

Awareness training should also go beyond email phishing to cover voice and video scams, which are becoming increasingly sophisticated.

In this new threat landscape, the old cybersecurity principle—trust but verify—has never been more relevant.

Deepfakes are not just a technical issue; they’re a test of organizational awareness and culture. Companies need new habits of skepticism and cross-checking, backed by crisis drills and communication protocols.

In an era where voices and faces can be faked, trust must be earned through verification—not assumption. The next cyberattack may sound like your CEO—but it won’t be.

The writer is the Europe, Middle East and Africa tech lead for the Sophos red team

