Episode 316 November 4, 2025 🎧 15:45

Guardrails and Attack Vectors: Securing the Generative AI Frontier

This installment gives security executives actionable strategies for embedding security across the entire AI lifecycle, mitigating novel adversarial threats, and responsibly managing the complex compliance risks of enterprise Generative AI deployments.



Show Notes

This episode dissects critical risks specific to Large Language Models (LLMs), focusing on vulnerabilities such as Prompt Injection and Sensitive Information Disclosure. It explores how CISOs can establish internal AI security standards and adopt a programmatic, offensive security approach using established governance frameworks such as the NIST AI RMF and MITRE ATLAS. We also discuss the essential role of robust governance, including mechanisms for establishing content provenance and maintaining information integrity against threats like Confabulation (Hallucinations) and data poisoning.


Sponsor:

www.cisomarketplace.services 

Enjoying CISO Insights?

Subscribe to get new episodes delivered directly to your podcast app.
