AI-powered fraud could cost firms US$11.5 billion by 2027

November 20, 2025

Artificial intelligence is rapidly transforming digital fraud, with experts warning that generative AI tools can now create convincing fake personas and deepfakes at scale.

Financial analysts project that email fraud supported by generative AI could result in losses of up to US$11.5 billion globally by 2027, as cybercriminals increasingly adopt sophisticated techniques that outpace traditional security measures.

AI-driven deception

Generative AI systems are being used to produce highly convincing fake audio and video content, allowing attackers to easily mimic trusted individuals within an organisation.

These deepfakes pose a significant challenge to established digital trust models, undermining the effectiveness of standard verification methods such as passwords and multifactor authentication.

"Fraud has entered a new phase fueled by AI where deception moves faster than detection, and human judgment has become the target to exploit. Attackers no longer need to steal credentials or breach firewalls; they simply mimic a trusted voice or familiar face. What once took days of preparation can now be done in seconds with a few lines of synthetic audio or a convincing video clip," said Sandy Kronenberg, Founder and CEO, Netarx.

Trust as vulnerability

Digital trust is emerging as a critical vulnerability, particularly as organisations grapple with the proliferation of AI-generated disinformation. Cybercriminals no longer require specialised technical skills or malware to gain access to sensitive information: an authentic-sounding voice or a familiar face can be enough to persuade employees to hand over confidential data or approve fraudulent transactions.

Kronenberg commented: "During International Fraud Awareness Week, we must recognise that trust has become the biggest vulnerability inside organisations, and AI is making it easier to exploit. Deepfakes, synthetic identities and AI-generated disinformation are rewriting the cybercrime playbook. These attacks don't rely on malware or code. They work because we tend to believe what looks and sounds real."

Limits of traditional defence

Standard cyber defences, including employee awareness training and two-factor authentication, are no longer sufficient in isolation. Cybercriminals can bypass these controls by directly attacking users' perceptions and exploiting the trust established between colleagues and business partners.

"Traditional defences like employee training and multifactor authentication still matter, but they can't stand alone. A six-digit code won't protect you from a voice that sounds like your CEO or a video that looks exactly like your finance director. The problem is no longer access; it's authenticity," said Kronenberg.

Technological response

Organisations are increasingly searching for solutions that can verify identity and intent in real time across multiple communication channels. AI-driven detection tools aim to keep pace with the speed and sophistication of new fraud tactics, linking and interpreting digital signals to identify potentially deceptive content before damage occurs.

"Defending against this new wave of fraud requires technology that learns and adapts as quickly as the threats themselves. It demands AI that can distinguish human reality from synthetic illusion in real time, linking digital signals across every channel, interpreting them in context and verifying authenticity without disrupting how people work," said Kronenberg.

Adapting to new risks

"In this new environment, trust can't be assumed. It must be tested, verified and earned every time. The faster we accept that, the better prepared organisations will be for the next generation of deception," said Kronenberg.
