AI Defense Inspection API

Cisco AI Defense runtime protection and the AI Defense Inspection API secure your production AI applications with guardrails that protect them from evolving threats such as prompt injection attempts, denial-of-service attacks, and data leakage. The Inspection API is intended for AI application developers who want to build AI content inspection capabilities into their AI applications.

When enforced by Inspection API calls, runtime protection does not rely on a gateway that intercepts traffic. Instead, your application evaluates prompts and responses by sending them to the Inspection API endpoint. This means that enforcement and decision-making remain within your application, enabling your application to allow or block each user prompt and model response based on the AI Defense runime evaluation results.

The Inspection API is not the only way to apply AI Defense runtime protection. Alternatively, you can protect LLM traffic using an AI Defense Gateway or Cisco Multicloud Defense. Each of these approaches is explained in the runtime protection section of the AI Defense documentation.

API-based policy enforcement

By using the AI Defense Inspection API, you can build runtime protection into your AI applications in the form of Inspection API calls. This allows you, the AI application developer, to specify how your AI application will handle violations detected in AI Defense.

API-based policy enforcement relies on applications and guardrails policies that your team sets up in AI Defense. There are two places you can manage these:

Policies and rules

Regardless of enforcement approach, AI Defense runtime protection relies on AI runtime rules, policies, and threat taxonomies. AI Defense runtime protection and the AI Defense Inspection API enforce your policies and rules to ensure that only safe and permissible content is shared to and from AI applications. Rules are managed in the Policies section of AI Defense. Enforcement actions (blocked content or an alert about content) are reported in the AI Events screen when a policy or rule is triggered.

What can you do with the AI Defense Inspection API?

  1. Prevent data leaks - Enforce security policies to stop sensitive data exposure.
  2. Secure AI applications - Protect AI applications and their data from threats.
  3. Inspect AI-generated content - Analyze chat messages and HTTP traffic for risks.

Track API version changes

See the API Changelog.