About

Giskard helps you secure your AI agents through a comprehensive testing system that combines hallucination detection, security scanning, and cybersecurity watch. Our platform ensures continuous protection by adapting to emerging threats and alerting you instantly when new AI vulnerabilities arise. We enable collaboration between technical and business teams and provide independent, expert validation for confident AI deployment.
Meet The Speakers
11:00 - 11:45
Securing AI Agents Through Continuous Red Teaming: Prevent Hallucinations and Vulnerabilities in LLM Agents
Learn how continuous Red Teaming can protect your LLM agents from emerging threats like hallucinations and data leakage. We'll present how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents.
April 1, 2025