AI Unlocked: The Prompt Hacking Threat Landscape
This podcast explores the emerging threats of prompt hacking and the adversarial misuse of AI by analyzing real-world examples and security guidelines for Large Language Models.
🎧 Listen to this Episode
Show Notes
Delve into the critical security vulnerabilities of Artificial Intelligence, exploring the dangerous world of prompt injection, prompt leaking, and jailbreaking as highlighted in SANS' Critical AI Security Controls and in documented adversarial misuse of generative AI such as Gemini by government-backed actors. Understand how malicious actors attempt to bypass safety controls, extract sensitive information, and manipulate LLMs for nefarious purposes, drawing on documented cases involving Iranian, PRC, North Korean, and Russian threat actors. Learn about the offensive techniques in use and the ongoing challenge of securing AI systems.
Enjoying CISO Insights?
Subscribe to get new episodes delivered directly to your podcast app.