Deepfake IDs and AI-Powered Phishing: New Kimsuky Campaign Hits in July 2025

In mid-July 2025, security teams detected a sophisticated spear-phishing campaign that combined generative-AI deepfakes with old-school obfuscation to bypass traditional antivirus controls. Attackers attributed to the Kimsuky group embedded AI-generated images of government ID cards in phishing lures to trick recipients into downloading malicious archives.


The attacks impersonated military and security organizations and asked targets to “review” draft ID cards. When victims opened the attached archive, a multi-stage infection chain unfolded: a shortcut file invoked cmd.exe to define a long environment variable, then sliced characters out of that variable to reconstruct an obfuscated command. That command fetched two payloads from South Korean command-and-control servers — a deepfake PNG created with generative AI and a batch script that executed immediately.
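The slicing step works like cmd.exe's `%VAR:~start,len%` substring expansion: a long decoy variable is defined once, then short slices of it are concatenated to rebuild the real command. The sketch below emulates that mechanic in Python; the decoy string and offsets are hypothetical stand-ins, not values recovered from the actual sample.

```python
# Emulate the cmd.exe environment-variable slicing trick: a long decoy
# variable is defined, then %VAR:~start,len% substring expansions are
# concatenated to rebuild the hidden command.
DECOY = "xpqcmd lorem /c ipsum start dolor http amet"  # hypothetical decoy

def cmd_slice(var: str, start: int, length: int) -> str:
    """Emulate cmd.exe's %VAR:~start,length% substring expansion."""
    return var[start:start + length]

# Equivalent of expanding %V:~3,3%%V:~12,3% in a batch line:
rebuilt = cmd_slice(DECOY, 3, 3) + cmd_slice(DECOY, 12, 3)
print(rebuilt)  # cmd /c
```

Because each individual slice looks meaningless, signature engines scanning the shortcut never see the final command as a contiguous string.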

Analysts found the campaign used a mix of AutoIt and PowerShell payloads. The batch script, reconstructed via environment-variable slicing, pulled down and executed additional code, then established persistence by creating a scheduled task (HncAutoUpdateTaskMachine) that masqueraded as a legitimate Hancom Office updater (HncUpdateTray.exe) and ran every seven minutes. Deeper inspection showed the AutoIt component encrypted its configuration strings with a Vigenère-style variant, complicating static analysis.
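A Vigenère-style cipher shifts each letter by an amount taken from a repeating key, so the same plaintext character encrypts differently at different positions — enough to defeat simple string matching. The decryptor below is a generic sketch of that scheme; the alphabet, key, and ciphertext are illustrative, not the values used by the AutoIt component.

```python
# Generic Vigenère-style decryptor: subtract the repeating key's shift from
# each alphabet character; non-alphabet characters pass through unchanged.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def vigenere_decrypt(ciphertext: str, key: str) -> str:
    out, ki = [], 0
    for ch in ciphertext:
        if ch in ALPHABET:
            shift = ALPHABET.index(key[ki % len(key)])
            out.append(ALPHABET[(ALPHABET.index(ch) - shift) % len(ALPHABET)])
            ki += 1  # advance the key only on alphabet characters
        else:
            out.append(ch)
    return "".join(out)

# Made-up demo ciphertext: "rxrz" under key "key" decodes to "http"
print(vigenere_decrypt("rxrz", "key"))  # http
```

Because the cipher is symmetric around the key, an analyst who recovers the key from the sample can decrypt every embedded string the same way.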

Notably, metadata analysis flagged the downloaded image as AI-generated (deepfake) with high confidence, underscoring how generative models were weaponized to create visually convincing social-engineering assets. Despite the use of advanced AI artifacts, the campaign relied on well-known evasion and persistence techniques — a hybrid approach that made detection harder for signature-based antivirus engines.
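One place such generator fingerprints can survive is in a PNG's text chunks, which many image tools write alongside the pixel data. The sketch below walks PNG chunks and extracts `tEXt` key/value pairs; the synthetic sample and the "HypotheticalImageGen" tool name are invented for illustration — the article does not name the tool or field the analysts relied on.

```python
import struct
import zlib

def png_text_chunks(data: bytes) -> dict:
    """Walk PNG chunks and return tEXt entries as a keyword -> text dict."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos, out = 8, {}
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":  # keyword and text are separated by a NUL byte
            key, _, val = body.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return out

def _chunk(ctype: bytes, body: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC over type + data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Synthetic PNG carrying a hypothetical generator tag in its metadata
sample = (b"\x89PNG\r\n\x1a\n"
          + _chunk(b"tEXt", b"Software\x00HypotheticalImageGen 1.0")
          + _chunk(b"IEND", b""))
print(png_text_chunks(sample))  # {'Software': 'HypotheticalImageGen 1.0'}
```

Metadata is trivial for an attacker to strip, so this check is a triage signal to combine with model-based deepfake detection, not a standalone verdict.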

Security researchers warn this trend is significant: attackers are increasingly blending generative-AI content with automated scripting and obfuscation to craft credible lures and to hide malicious intent until late in the execution chain. The result is a hybrid threat that traditional defenses struggle to catch.

What defenders should do:

  • Deploy behavioral monitoring and Endpoint Detection & Response (EDR) to catch suspicious script activity, scheduled-task creation, and unusual use of cmd.exe/PowerShell.

  • Inspect and block long, reconstructed command strings and environment-variable abuse patterns.

  • Use AI-aware deepfake detectors and image-metadata analysis as part of phishing triage.

  • Educate users to treat unsolicited “draft” attachments and review requests with suspicion, especially when they urge immediate action.

  • Harden mail gateways to flag archives containing shortcut files (.lnk) and unusual executable artifacts.
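The second bullet can be turned into a concrete triage heuristic: count `%VAR:~start,len%` substring expansions in observed command lines and flag those with many slices or unusual length. The thresholds below are illustrative assumptions, not tuned values, and this sketch complements rather than replaces EDR telemetry.

```python
import re

# Match cmd.exe substring expansion: %VAR:~start% or %VAR:~start,len%
SLICE = re.compile(r"%[A-Za-z_][A-Za-z0-9_]*:~\s*-?\d+(?:\s*,\s*-?\d+)?%")

def looks_reassembled(cmdline: str, min_slices: int = 4,
                      min_len: int = 200) -> bool:
    """Flag command lines that slice environment variables repeatedly or
    are unusually long — both thresholds are illustrative, not tuned."""
    return len(SLICE.findall(cmdline)) >= min_slices or len(cmdline) >= min_len

suspicious = ("cmd /c set X=... & call "
              "%X:~3,1%%X:~7,1%%X:~2,1%%X:~11,1%%X:~5,1%")
print(looks_reassembled(suspicious))        # True
print(looks_reassembled("cmd /c echo hi"))  # False
```

In production this logic would run over process-creation events (e.g. command lines from EDR or Sysmon), where a handful of legitimate batch scripts may also trip it, so alerts should feed review rather than automatic blocking.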
