Artificial intelligence is revolutionizing business operations, but it’s also arming cybercriminals with tools that make social engineering and impersonation attacks far more convincing. One of the fastest-growing risks is deepfake-enabled cybercrime, where attackers use AI to convincingly mimic voices, video, or even entire identities to gain access to critical systems.
Unlike traditional malware, deepfake-driven attacks exploit trust in people and processes. They bypass firewalls and endpoint tools by targeting the human layer, and they often succeed before anyone realizes something is wrong.
Most organizations still rely on Identity Verification (IDV), multi-factor authentication (MFA), or endpoint tools to defend themselves, but none of these were designed for AI-generated threats. The result is clear: deepfake-driven attacks succeed because they exploit human trust, not technical controls.
At Netarx, we believe it is no longer enough to “verify” identity. The only way to stay protected is to adopt an environment where no trust is required — and where every interaction is validated against advanced detection signals.
Recent high-profile incidents underscore how sophisticated attackers are becoming — and why enterprises need to prepare now.
High-Profile Examples of Modern AI-Driven Attacks
Allianz Life, Google, Pearson, Coinbase & More – August 2025
A Salesforce-focused campaign leveraged social engineering across major enterprises, with data exfiltration and extortion at scale. CRM systems are now primary targets, and deepfakes raise the success rate of such attacks.
- Vector: Salesforce compromise via social-engineering-driven campaign
- Groups: ShinyHunters, Scattered Spider, with Lapsus$ tradecraft
- Impact: Data exfiltration and extortion across multiple major enterprises.
As CRM and SaaS platforms house customer and sales data, they are ripe targets. Deepfake impersonation of executives or IT staff makes these compromises increasingly likely.
MGM Resorts – September 2023
Help-desk impersonation led to MFA resets and ransomware, shutting down hotels and casinos. A convincing voice clone makes this attack almost unstoppable with legacy tools.
- Vector: Help-desk social engineering → MFA reset → ransomware
- Group: Scattered Spider with ALPHV/BlackCat
- Impact: Major operational disruptions across Las Vegas hotels and casinos.
Attackers impersonated employees and tricked help-desk staff into resetting credentials, bypassing MFA. A deepfake voice could make this even harder to detect.
Caesars Entertainment – September 2023
Attackers social-engineered an IT vendor and exfiltrated customer data, reportedly forcing a ransom payout. Outsourced IT is especially vulnerable to voice and video deepfakes.
- Vector: Social engineering of an outsourced IT vendor
- Group: Scattered Spider
- Impact: Customer data exfiltration and a reported multimillion-dollar ransom payment.
With third-party IT staff often fielding calls, a deepfake-enhanced social engineering attempt could easily escalate into system-wide compromise.
Ticketmaster & Santander – May–June 2024
Stolen credentials led to access into Snowflake tenants, exposing hundreds of millions of records. Deepfake-powered phishing makes credential compromise faster and easier.
- Vector: Stolen credentials enabling access to Snowflake cloud environments
- Group: ShinyHunters
- Impact: Up to 560 million Ticketmaster records and 30 million Santander customer accounts reportedly exposed.
These attacks highlight how stolen credentials fuel mass breaches. Now imagine deepfake-powered phishing calls convincing employees to “verify” credentials — a multiplier for damage.
Twilio, Cloudflare, DoorDash – August 2022
A wave of SMS phishing lured employees at Twilio, Cloudflare, DoorDash, and many other companies to fake single-sign-on login pages, harvesting credentials at scale. Stolen access at one vendor cascaded into downstream services, showing how a single convincing lure can ripple across a supply chain.
- Vector: SMS phishing (“0ktapus”) campaign
- Impact: 130+ organizations impacted, including downstream disruption for apps like Signal.
While these relied on text lures, adding a deepfake phone call pretending to be IT support could drastically increase success rates.
Uber – September 2022
Attackers used MFA fatigue and IT impersonation over WhatsApp. A deepfake video call would have made this nearly impossible to detect.
- Vector: MFA fatigue + impersonation of IT staff via WhatsApp/vishing
- Group: Lapsus$-style attackers
- Impact: Widespread compromise of internal systems.
Voice deepfakes make IT impersonation even more convincing, turning a nuisance attack into an enterprise-scale breach.
Why Deepfakes Raise the Stakes
These cases show that social engineering is the weak link in enterprise defenses. Attackers don’t need to break into networks directly — they can trick people into opening the gates. Deepfakes amplify this risk by:
- Bypassing identity verification: AI-generated voices and videos can fool help desks, HR, finance, and even biometric systems.
- Exploiting remote work culture: With fewer in-person verifications, audio/video impersonation succeeds more often.
- Accelerating extortion campaigns: Once inside, attackers use data theft and ransomware together for maximum leverage.
How Organizations Can Prepare
- Educate employees to be skeptical of urgent, unusual requests — even if they appear to come from executives.
- Deploy advanced detection tools, such as the Netarx Platform, that go beyond Identity Verification (IDV) to analyze metadata, behavioral anomalies, and cross-channel signals.
- Harden help-desk and IT processes to prevent resets and approvals based solely on voice or video.
- Adopt a “no trust required” model — moving beyond zero trust to environments where deepfake impersonation cannot succeed without additional cryptographic proof.
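The last point above can be made concrete with a minimal sketch. Assume each employee enrolls a secret on a trusted device during onboarding; before approving any reset, the help desk issues a one-time challenge that only the enrolled device can answer, so a cloned voice or video alone is never sufficient. The function names, the enrollment store, and the HMAC challenge-response scheme here are illustrative assumptions, not a description of any specific product:

```python
import hmac
import hashlib
import secrets

# Hypothetical enrollment store: employee ID -> secret provisioned on a
# trusted device during onboarding (illustrative only).
ENROLLED_SECRETS = {"alice": secrets.token_bytes(32)}

def issue_challenge() -> bytes:
    """Help desk generates a fresh random challenge for each request."""
    return secrets.token_bytes(16)

def sign_challenge(secret: bytes, challenge: bytes) -> bytes:
    """Runs on the employee's enrolled device, never relayed by voice."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify_reset_request(employee_id: str, challenge: bytes, response: bytes) -> bool:
    """Approve only if the response proves possession of the enrolled secret."""
    secret = ENROLLED_SECRETS.get(employee_id)
    if secret is None:
        return False
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)

# Example flow: the enrolled device passes; an impersonator without the
# secret fails no matter how convincing their voice or video is.
challenge = issue_challenge()
legit = verify_reset_request(
    "alice", challenge, sign_challenge(ENROLLED_SECRETS["alice"], challenge)
)
impostor = verify_reset_request("alice", challenge, b"\x00" * 32)
print(legit, impostor)  # True False
```

The design point is that approval hinges on possession of an enrolled secret, not on how a caller sounds or looks; in production this role is typically filled by hardware security keys or platform authenticators rather than a hand-rolled scheme.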
The Bottom Line
Deepfake-enabled threats are no longer hypothetical. As the incidents above demonstrate, attackers are scaling social engineering across industries, and AI is their force multiplier. Enterprises that fail to prepare for this shift risk being the next headline.
Netarx gives you the confidence that your business interactions are genuine. By eliminating the need for trust and validating every digital interaction, we ensure your organization stays ahead of AI-enabled attackers.
It is only a matter of time before your organization is targeted. Netarx makes sure the attack fails.