Securing the AI Frontier: Unmasking LLM and RAG Vulnerabilities
Join us to understand the security risks in LLMs and RAG architectures, learn about current attack methods, and discover how red teaming helps build more robust AI systems.
🎧 Listen to this Episode
Show Notes
Large language models present new security challenges, especially when they leverage external data sources through Retrieval Augmented Generation (RAG) architectures . This podcast explores the unique attack techniques that exploit these systems, including indirect prompt injection and RAG poisoning. We delve into how offensive testing methods like AI red teaming are crucial for identifying and addressing these critical vulnerabilities in the evolving AI landscape.
www.securitycareers.help/navigating-the-ai-frontier-a-cisos-perspective-on-securing-generative-ai/
www.hackernoob.tips/the-new-frontier-how-were-bending-generative-ai-to-our-will
Enjoying CISO Insights?
Subscribe to get new episodes delivered directly to your podcast app.