LLM guardrails & prompt injection detection for Python. Auto-instruments LangChain, CrewAI, OpenAI, LiteLLM + 8 more frameworks. PII masking, toxicity detection, policy CI/CD. One line, zero code changes.
mcp compliance ai-safety policy-engine ai-agents audit-trail ai-security policy-as-code guardrails pii-detection policy-testing ai-governance langchain prompt-injection llm-security model-context-protocol agent-security mcp-security ai-agent-security selection-governance
-
Updated
Apr 8, 2026 - Python