AI Defense Management API

The AI Defense Management API allows developers to interact programmatically with the AI Defense service over a RESTful interface.

This RESTful API enables administrators to register resources for protection by the AI Defense runtime, and allows AI safety and security teams to validate AI models before and after deployment.

For AI model validation, the AI Defense Management API provides validation endpoints for:

  • Starting and managing AI model validation runs
  • Retrieving validation results

For AI runtime protection, this API provides endpoints for:

  • Saving connection information to manage how AI Defense connects to your AI applications and models
  • Managing the AI Defense policies that protect those applications and models, as well as their users
  • Retrieving and inspecting the events that AI Defense generates when it detects a violation of your AI safety and security policies and guardrails

For MCP scanning, the API includes beta support for managing MCP (Model Context Protocol) application connections.
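All of these endpoint groups are reached the same way: an authenticated HTTPS request to the Management API. The sketch below shows the general pattern in Python; the base URL and the API-key header name are illustrative assumptions, so substitute the values documented for your tenant in the Reference section.

    import requests

    # Illustrative values; the real base URL and header name are in the Reference.
    BASE_URL = "https://api.example.cisco.com/ai-defense/v1"   # hypothetical base URL
    HEADERS = {
        "X-Cisco-AI-Defense-API-Key": "your-api-key",          # hypothetical header name
        "Content-Type": "application/json",
    }

    # A simple authenticated GET, e.g. listing applications (path is illustrative).
    resp = requests.get(f"{BASE_URL}/applications", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print(resp.json())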

Runtime Protection

The endpoints in the Applications, Connections, Policies, and Events sections allow you to set up and manage AI Defense runtime protection.

AI Defense runtime protection secures your LLM chat applications by inspecting user prompts and LLM responses in real time. When runtime protection detects content that violates your security, privacy, or safety policies, it raises an alert in the Events log and, if configured, blocks the content from reaching the user or the LLM.

Learn more about runtime protection.

Applications and Connections

In AI Defense, each chat application appears as an Application. Each application includes one or more Connections, each representing an LLM API that AI Defense protects.

For information about endpoints for managing the AI applications and connections that are protected by AI Defense, see Applications and Connections in the Reference section.
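As a sketch of how these endpoints fit together, the snippet below registers an application and then adds a connection to it. The paths and field names (applications, connections, provider, and so on) are illustrative assumptions rather than the confirmed schema; see the Applications and Connections reference for the actual request shapes.

    import requests

    BASE_URL = "https://api.example.cisco.com/ai-defense/v1"   # hypothetical base URL
    HEADERS = {"X-Cisco-AI-Defense-API-Key": "your-api-key"}   # hypothetical header name

    # 1. Register the chat application (field names are illustrative).
    app = requests.post(
        f"{BASE_URL}/applications",
        headers=HEADERS,
        json={"name": "support-chatbot", "description": "Customer support chat"},
        timeout=30,
    ).json()

    # 2. Add a connection representing an LLM API that the application calls.
    conn = requests.post(
        f"{BASE_URL}/applications/{app['id']}/connections",
        headers=HEADERS,
        json={"name": "primary-llm", "provider": "openai"},    # illustrative fields
        timeout=30,
    ).json()
    print("connection id:", conn["id"])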

Policies

Once you've created an application and its connections, you apply a runtime protection policy to each connection to secure it.

For information about endpoints for managing AI safety and security policies, see the Policies section of the Reference.
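As a hedged sketch of that step, the snippet below associates an existing policy with a connection. The endpoint path and payload are assumptions for illustration; the Policies reference documents the real operations.

    import requests

    BASE_URL = "https://api.example.cisco.com/ai-defense/v1"   # hypothetical base URL
    HEADERS = {"X-Cisco-AI-Defense-API-Key": "your-api-key"}   # hypothetical header name

    connection_id = "conn-123"   # the connection created earlier
    policy_id = "policy-456"     # an existing runtime protection policy

    # Attach the policy to the connection (path and body are illustrative).
    resp = requests.put(
        f"{BASE_URL}/connections/{connection_id}/policy",
        headers=HEADERS,
        json={"policy_id": policy_id},
        timeout=30,
    )
    resp.raise_for_status()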

Enforcement Point

To protect an AI application and its users with your policy, you must set up a runtime enforcement point: an AI Defense Gateway, Multicloud Defense with AI Guardrails, or the AI Defense Inspection API. You set up enforcement points in the AI Defense web-based UI. Learn more in Set up Runtime Protection.

Note: The AI Defense Inspection API is a separate API from the AI Defense Management API. See the AI Defense Inspection API documentation for details.

Viewing Runtime Protection Events

AI Defense produces events to alert you of runtime violations of your AI safety and security policies. To retrieve and inspect events, use the endpoints that are listed in the Events section of the Reference.
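For example, a monitoring job might periodically poll for recent events. The query parameters and response fields below are illustrative assumptions; the Events reference defines the actual filters and schema.

    import requests

    BASE_URL = "https://api.example.cisco.com/ai-defense/v1"   # hypothetical base URL
    HEADERS = {"X-Cisco-AI-Defense-API-Key": "your-api-key"}   # hypothetical header name

    # Fetch recent events; parameter names are illustrative.
    resp = requests.get(
        f"{BASE_URL}/events",
        headers=HEADERS,
        params={"start_time": "2024-06-01T00:00:00Z", "limit": 50},
        timeout=30,
    )
    resp.raise_for_status()

    for event in resp.json().get("events", []):
        # Field names here are assumptions about the event schema.
        print(event.get("id"), event.get("rule"), event.get("severity"))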

Important: When you use the AI Defense Inspection API to check compliance with a policy, violations are reported both as Events and in the Inspection API response body. In contrast, when you use the Inspection API to check compliance with a rule or rules, violations are returned only in the API response body; no event is generated.

Model Validation

The endpoints in the AiValidationAPI section allow you to set up, initiate, and manage model validation runs and retrieve their results.

AI Defense model validation is an advanced automated testing service designed to assess the security, privacy, and safety of AI models and applications. It can evaluate both AI Defense-managed systems and external AI infrastructures, such as third-party inference APIs.

Model validation provides:

  • Comprehensive security testing that evaluates AI models against a wide range of adversarial scenarios
  • Automated and scalable testing that rapidly executes thousands of security tests without manual intervention
  • Standards-aligned test reports that adhere to industry frameworks such as MITRE ATLAS and the OWASP Top 10 for LLM Applications

Validation report output helps inform and refine the security policies that you enforce with AI Defense runtime protection. Learn more in the Validation section of the main AI Defense documentation.
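To make that flow concrete, here is a hedged sketch of starting a validation run and polling until its results are ready. Endpoint paths, payload fields, and status values are illustrative assumptions; the AiValidationAPI reference defines the real contract.

    import time
    import requests

    BASE_URL = "https://api.example.cisco.com/ai-defense/v1"   # hypothetical base URL
    HEADERS = {"X-Cisco-AI-Defense-API-Key": "your-api-key"}   # hypothetical header name

    # Start a validation run against a model endpoint (fields are illustrative).
    run = requests.post(
        f"{BASE_URL}/validation/runs",
        headers=HEADERS,
        json={"target": {"type": "inference_api", "url": "https://llm.example.com/v1"}},
        timeout=30,
    ).json()

    # Poll until the run finishes; the "status" values here are assumptions.
    while run.get("status") not in ("completed", "failed"):
        time.sleep(30)
        run = requests.get(
            f"{BASE_URL}/validation/runs/{run['id']}", headers=HEADERS, timeout=30
        ).json()

    # Retrieve the results for the finished run.
    results = requests.get(
        f"{BASE_URL}/validation/runs/{run['id']}/results", headers=HEADERS, timeout=30
    ).json()
    print(results)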