The Algorithmic Adversary: Tracking the Shift to Novel AI-Enabled Malware
Threat actors have entered a new operational phase, deploying novel, autonomous malware, including PROMPTFLUX and PROMPTSTEAL, that leverages Large Language Models mid-execution to dynamically alter its behavior and evade detection.
🎧 Listen to this Episode
Show Notes
The Google Threat Intelligence Group (GTIG) has identified a significant shift: adversaries are now deploying novel AI-enabled malware in active operations, moving beyond the simple productivity gains observed in 2024. This new operational phase includes "Just-in-Time" AI malware, such as PROMPTFLUX and PROMPTSTEAL, that uses Large Language Models (LLMs) during execution to dynamically obfuscate code, regenerate itself, or generate malicious commands, a significant step toward more autonomous and adaptive malware. Furthermore, state-sponsored actors are using social engineering pretexts, like posing as students or "capture-the-flag" participants, to persuade AI systems like Gemini to bypass safety guardrails, even as Google disrupts accounts and strengthens its models and the Secure AI Framework (SAIF).
Sponsors:
Enjoying CISO Insights?
Subscribe to get new episodes delivered directly to your podcast app.