The runtime enforcement layer for agentic AI systems.
Policy-driven · Fail-closed · Tamper-proof audit trails
Quick Start · Architecture · Roadmap · API Reference · Contributing
⚠️ Disclaimer: EnforceCore is provided "as is", without warranty of any kind, express or implied. It is a technical enforcement tool, not a compliance certification. Using EnforceCore does not guarantee regulatory compliance with any standard or law. See DISCLAIMER.md and LICENSE for full legal terms.
Most agent safety solutions operate at the prompt level: they ask the LLM to be safe. This approach is fundamentally broken, because prompts can be bypassed, jailbroken, or ignored.
EnforceCore operates at the runtime boundary: the moment before a tool or API is actually called. At this layer, enforcement is mandatory, not advisory. If a call violates policy, it never executes. Period.
```python
from enforcecore import enforce

@enforce(policy="policies/strict.yaml")
async def search_web(query: str) -> str:
    """This call is policy-enforced before execution."""
    return await api.search(query)
```

| | Prompt Guardrails | EnforceCore |
|---|---|---|
| Layer | Inside the LLM | Runtime call boundary |
| Bypassable? | Yes (jailbreaks, prompt injection) | No (code-level enforcement) |
| Auditable? | No | Yes (Merkle-chained trails) |
| Property-tested? | No | Yes (22 Hypothesis properties) |
| EU AI Act aligned? | ❌ | ✅ (see disclaimer) |
**EnforceCore vs. OS-level security:** EnforceCore operates at the application semantic layer, where it understands tool calls, PII, and cost budgets. It does not replace SELinux, AppArmor, seccomp, or container sandboxing; these layers are complementary, so use both for defense-in-depth.
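The difference between advisory and mandatory enforcement can be shown in a few lines of plain Python. This is a toy sketch of the call-boundary idea, not EnforceCore's implementation; the decorator name and policy shape are invented for illustration:

```python
import functools

def toy_enforce(allowed_tools: set):
    """Toy call-boundary check: decide BEFORE the tool runs, fail closed."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if fn.__name__ not in allowed_tools:
                # The call never executes; there is nothing to "jailbreak".
                raise PermissionError(f"tool '{fn.__name__}' denied by policy")
            return fn(*args, **kwargs)
        return inner
    return wrap

@toy_enforce(allowed_tools={"calculator"})
def calculator(expr: str) -> int:
    return len(expr)  # stand-in for real work

@toy_enforce(allowed_tools={"calculator"})
def execute_shell(cmd: str) -> str:
    return "this body is unreachable under the policy above"
```

Because the check runs in ordinary Python before the wrapped function body, no prompt-level trick can make a denied branch execute.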
```
┌─────────────────────────────────────────────────────────────┐
│  Agent (LangChain · LangGraph · CrewAI · AutoGen · Python)  │
└──────────────────────────────┬──────────────────────────────┘
                               │ tool_call(args)
                               ▼
                 ┌───────────────────────────┐
                 │    @enforce(policy=…)     │  ← decorator / adapter
                 └─────────────┬─────────────┘
                               │
┌──────────────────────────────┼──────────────────────────────┐
│                          Enforcer                           │
│                                                             │
│  ┌─────────────┐      ┌─────────────┐      ┌─────────────┐  │
│  │Policy Engine│ ───▶ │  Redactor   │ ───▶ │    Guard    │  │
│  │ YAML rules  │      │ PII detect  │      │ time · mem  │  │
│  │ allow / deny│      │  & redact   │      │ cost · kill │  │
│  └─────────────┘      └─────────────┘      └──────┬──────┘  │
│                                                   │         │
│  ┌────────────────────────────────────────────────▼──────┐  │
│  │                      Audit Trail                      │  │
│  │       Merkle chain · tamper-proof · always logs       │  │
│  └───────────────────────────────────────────────────────┘  │
└──────────────────────────────┬──────────────────────────────┘
                               │
              ┌────────────────┴────────────────┐
              ▼                                 ▼
          ✅ allowed                        ❌ blocked
          → execute tool                    → raise PolicyViolation
```
| Component | Responsibility |
|---|---|
| Policy Engine | Declarative YAML policies: allowed tools, denied tools, violation handling |
| Enforcer | Intercepts every call, evaluates policy, blocks or allows |
| Redactor | Real-time PII detection and redaction on inputs and outputs |
| Auditor | Tamper-proof Merkle-tree audit trail for every enforced call |
| Guard | Resource limits (time, memory, cost) with a hard kill switch |
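The tamper-evidence idea behind the Auditor can be sketched in pure Python. This is a minimal hash chain, not EnforceCore's Merkle-tree format; the function names here are illustrative only:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(chain: list, entry: dict) -> None:
    """Each record's hash covers the previous hash, linking the log."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = prev + json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"entry": entry, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every link; editing any past entry breaks all later hashes."""
    prev = GENESIS
    for rec in chain:
        payload = prev + json.dumps(rec["entry"], sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because each hash depends on its predecessor, an auditor who holds only the latest hash can detect any retroactive edit to the log.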
```shell
pip install enforcecore
```

```yaml
# policy.yaml
name: "my-agent-policy"
version: "1.0"
rules:
  allowed_tools:
    - "search_web"
    - "calculator"
    - "get_weather"
  denied_tools:
    - "execute_shell"
max_output_size_bytes: 524288  # 512 KB
on_violation: "block"
```

```python
from enforcecore import enforce

# Decorator works with both sync and async functions
@enforce(policy="policy.yaml")
async def search_web(query: str) -> str:
    return await api.search(query)

@enforce(policy="policy.yaml")
def calculator(expr: str) -> float:
    return eval(expr)  # policy controls whether this tool can be called
```

**How tool names work:** `@enforce` uses the function name (e.g. `search_web`) as the tool name matched against `allowed_tools` / `denied_tools`. To override, pass `tool_name=`:

```python
@enforce(policy="policy.yaml", tool_name="web_search")
async def search(query: str) -> str: ...
```
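For intuition, a policy like the one above can be evaluated with a simple precedence rule: explicit deny wins, then explicit allow, and anything unknown fails closed. This is a toy sketch of that rule, not EnforceCore's actual evaluation logic, which is defined by its Policy Engine:

```python
def toy_decide(tool_name: str, rules: dict) -> str:
    """Toy precedence: deny > allow > fail closed for unknown tools."""
    if tool_name in rules.get("denied_tools", []):
        return "block"          # explicit deny always wins
    if tool_name in rules.get("allowed_tools", []):
        return "allow"
    return "block"              # fail closed: unknown tools never execute

rules = {
    "allowed_tools": ["search_web", "calculator", "get_weather"],
    "denied_tools": ["execute_shell"],
}
```

Note the fail-closed default: a tool that appears in neither list is blocked, matching the project's "never fails open" guarantee.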
```python
# ✅ Allowed: tool is in the allowed list
result = await search_web("latest AI papers")

# ❌ Blocked: tool is not allowed, raises ToolDeniedError
@enforce(policy="policy.yaml")
async def execute_shell(cmd: str) -> str:
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
```

```python
from enforcecore import Enforcer, Policy

policy = Policy.from_file("policy.yaml")
enforcer = Enforcer(policy)

# Direct invocation (sync)
result = enforcer.enforce_sync(my_tool, arg1, arg2, tool_name="my_tool")

# Direct invocation (async)
result = await enforcer.enforce_async(my_tool, arg1, tool_name="my_tool")
```

See examples/quickstart.py for a complete runnable demo.
EnforceCore works with any Python-based agent system, with no lock-in:
| Framework | Status | Example |
|---|---|---|
| Plain Python | ✅ Available | `@enforce()` decorator |
| LangChain | ✅ Available | `callbacks=[handler]` |
| LangGraph | ✅ Available | `@enforced_tool(policy="...")` |
| CrewAI | ✅ Available | `@enforced_tool(policy="...")` |
| AutoGen | ✅ Available | `@enforced_tool(policy="...")` |
```python
# LangGraph: one-line enforcement
from enforcecore.integrations.langgraph import enforced_tool

@enforced_tool(policy="policy.yaml")
def search(query: str) -> str:
    """Search the web."""
    return web_search(query)
```

```python
# CrewAI
from enforcecore.integrations.crewai import enforced_tool

@enforced_tool(policy="policy.yaml")
def calculator(expr: str) -> str:
    """Calculate."""
    return str(eval(expr))
```

```python
# AutoGen
from enforcecore.integrations.autogen import enforced_tool

@enforced_tool(policy="policy.yaml", description="Search the web")
async def search(query: str) -> str:
    return await web_search(query)
```

```python
# LangChain: passive callback handler (works with any LangChain LLM)
from enforcecore.integrations.langchain import EnforceCoreCallbackHandler

handler = EnforceCoreCallbackHandler(policy="policy.yaml")
llm = ChatOpenAI(callbacks=[handler])
result = llm.invoke("My SSN is 123-45-6789")
# SSN is redacted before the LLM sees it; an audit entry is created automatically
```

There are no hard dependencies on any framework; the adapters use optional imports.
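The SSN redaction shown in the LangChain example can be approximated with a single regex pass. This toy covers one pattern only; EnforceCore's Redactor handles 11 PII categories:

```python
import re

# US SSN shape: ddd-dd-dddd (toy; real detection needs validation and context)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def toy_redact(text: str) -> str:
    """Replace anything shaped like an SSN before it reaches the model."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)
```

Redacting on the way in (inputs) and on the way out (outputs) is what keeps PII out of both the model context and the tool results.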
- **Fail-closed**: if enforcement fails, the call is blocked; it never fails open.
- **Async-native**: first-class support for both sync and async from day one.
- **Cross-platform**: the core works on Linux, macOS, and Windows; advanced Linux hardening is optional.
- **Zero lock-in**: no hard dependency on any agent framework.
- **Honest benchmarks**: real overhead numbers, not marketing claims.
Measured with 1,000 iterations and 100 warmup iterations on Apple Silicon (arm64), Python 3.13. Run `python -m benchmarks.run` on your own hardware. See docs/benchmarks.md for methodology.
| Component | P50 (ms) | P99 (ms) |
|---|---|---|
| Policy evaluation | 0.012 | 0.228 |
| PII redaction (short) | 0.028 | 0.275 |
| PII redaction (~2KB) | 0.129 | 0.220 |
| Audit entry (write) | 0.068 | 0.232 |
| Audit chain verify (100 entries) | 1.114 | 1.457 |
| Resource guard | < 0.001 | < 0.001 |
| Rate limiter | < 0.001 | 0.002 |
| Secret detection | 0.012 | 0.017 |
| Full enforcement (E2E) | 0.056 | 0.892 |
| E2E + PII redaction | 0.093 | 0.807 |
These overheads are negligible compared to typical tool-call latency (100 ms–10 s for API calls).
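P50 and P99 in the table are latency percentiles over the collected samples. A nearest-rank percentile is one common convention for reporting them (docs/benchmarks.md defines the exact methodology this project uses):

```python
def percentile(samples, q: float):
    """Nearest-rank percentile over a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = round(q / 100 * (len(ordered) - 1))
    return ordered[rank]
```

P99 is the more honest number for enforcement overhead, since it captures the occasional slow path (GC pauses, cold caches) that an average would hide.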
| Release | Focus | Status |
|---|---|---|
| v1.0.0 | Core Enforcer + Policy Engine | ✅ Shipped |
| v1.0.1 | PII Redactor + Bug Fixes | ✅ Shipped |
| v1.0.2 | CI Hardening + Release Process | ✅ Shipped |
| v1.1.0 | Evaluation Expansion (26 scenarios, 11 threat categories, HTML reports) | ✅ Shipped |
| v1.1.1 | Eval Polish + Community Prep | ✅ Shipped |
| v1.1.2 | Beta Feedback Fixes (CLI `--version`, doc links, extras detection) | ✅ Shipped |
| v1.2.0 | Audit Storage System + Compliance (JSONL / SQLite / PostgreSQL, EU AI Act) | ✅ Shipped |
| v1.3.0 | Subprocess Sandbox (post-execution isolation, resource limits) | ✅ Shipped |
| v1.4.0 | NER PII + Sensitivity Labels (`enforcecore[ner]`) | ✅ Shipped |
| v1.5.0 | OpenTelemetry + Observability (Prometheus, OTLP traces, Grafana dashboard) | ✅ Shipped |
| v1.6.0 | Multi-Tenant + Policy Inheritance (`extends:` keyword, tenant audit trails) | ✅ Shipped |
| v1.7.0 | Remote Policy Server (signed policies, pull-only, `Enforcer.from_server`) | ✅ Shipped |
| v1.8.0 | Compliance Reporting (EU AI Act, SOC2, GDPR via `enforcecore audit export`) | ✅ Shipped |
| v1.9.0 | Plugin Ecosystem (custom guards/redactors from PyPI via `enforcecore plugin list`) | ✅ Shipped |
| v1.10.0 | Quality Hardening + Async Streaming Enforcement (`stream_enforce`) | ✅ Shipped |
| v1.11.0 | AsyncIO Streaming Enforcement (GA), 2,324 tests, 97% coverage | ✅ Shipped |
| v1.11.1 | Patch: fix NER example crash, corrected stale docs | ✅ Shipped |
| v1.12.0 | Merkle Bridge: external hash injection + linkage-only chain verification | ✅ Shipped |
| v1.13.0 | LangChain `EnforceCoreCallbackHandler`: passive PII redaction + audit on every LLM call | ✅ Shipped |
| v1.14.0 | Upstream PR to langchain-community: `EnforceCoreCallbackHandler` available via `pip install langchain-community` | ✅ Latest |
| v1.15.0 | Developer Experience: README rewrite, HuggingFace Space demo, `enforcecore init` CLI | 🔜 Next |
| v2.0.0 | Distributed Enforcement (multi-node, global Merkle root) | 📝 Planned |
See docs/roadmap.md for the full roadmap including component details and future directions.
| Document | Description |
|---|---|
| Architecture | Technical design and component overview |
| Roadmap | v1.0.x incremental release plan |
| API Design | Public API surface and patterns |
| API Reference | API documentation |
| Developer Guide | Setup, standards, and workflow |
| Tech Stack | Technology choices and rationale |
| Evaluation | Adversarial scenarios, benchmarks, and reports |
| Related Work | Survey and academic positioning |
| Defense-in-Depth | Security layer architecture and deployment stacks |
| Tool Selection | When to use EnforceCore vs. OS-level security |
| FAQ | Frequently asked questions |
| Troubleshooting | Common errors and debugging tips |
| Vision | Why EnforceCore exists |
| Contributing | How to contribute |
| Code of Conduct | Community standards |
| Security | Vulnerability reporting policy |
EnforceCore applies established computer science principles (runtime verification, reference monitors, information-flow control) to the novel problem of AI agent safety. We welcome academic collaboration.
- **Related Work**: survey of runtime verification for AI agents, positioning vs. NeMo Guardrails, LlamaGuard, and others
- **CITATION.cff**: machine-readable citation metadata (how to cite)
- **Open Research Questions**: policy composition, temporal properties, adversarial robustness
- **Evaluation Suite**: reproducible adversarial benchmarks with 26 scenarios across 11 threat categories
- **Architecture**: formal design with Mermaid diagrams
```bibtex
@software{enforcecore2026,
  title   = {EnforceCore: Runtime Enforcement Layer for Agentic AI Systems},
  author  = {{AKIOUD AI}},
  year    = {2026},
  url     = {https://github.com/akios-ai/EnforceCore},
  license = {Apache-2.0}
}
```

EnforceCore is designed for production deployment in regulated environments.
| Concern | EnforceCore Feature |
|---|---|
| Audit compliance | Merkle-chained, tamper-evident audit trails with OS-enforced append-only and hash-only remote witnesses |
| Data protection | Real-time PII redaction (11 categories) |
| Cost control | Per-call and cumulative cost budgets |
| Access governance | Declarative tool allow/deny policies |
| Network control | Domain allowlisting with wildcard support |
| Rate limiting | Per-tool, per-window, global rate caps |
| Incident response | Structured violation events + webhook alerts |
| EU AI Act | Designed for alignment with Articles 9, 13, 14, and 15 |
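The per-window rate caps in the table can be sketched with a sliding-window counter in plain Python. This is a toy limiter; the class and method names are invented for illustration and are not EnforceCore's API:

```python
import time
from collections import deque

class ToyRateLimiter:
    """Allow at most `limit` calls within any `window`-second span."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.calls = deque()  # monotonic timestamps of accepted calls

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False  # over the cap: the call is blocked, not queued
```

Blocking rather than queuing keeps the limiter fail-closed: an agent that exceeds its budget gets a policy violation it must handle, not a silently delayed call.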
- **Fail-closed by default**: if enforcement fails, the call is blocked
- **No vendor lock-in**: Apache 2.0, works with any agent framework
- **Cross-platform**: Linux, macOS, Windows (advanced Linux hardening optional)
- **Observability**: OpenTelemetry traces, Prometheus-compatible metrics
```shell
# Clone
git clone https://github.com/akios-ai/EnforceCore.git
cd EnforceCore

# Set up
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"

# Test
pytest --cov=enforcecore

# Lint
ruff check . && ruff format --check .
```

Current stats: 2,366 tests · 97% coverage · 0 lint errors
EnforceCore builds on a foundation of prior work in computer science and AI safety:
- **Runtime Verification**: Leucker & Schallhart (2009), Havelund & Goldberg (2005)
- **Reference Monitors**: Anderson (1972) for the tamperproof, always-invoked enforcement model
- **Information Flow Control**: Sabelfeld & Myers (2003) for the PII boundary model
- **Audit Integrity**: Merkle (1987), Crosby & Wallach (2009) for hash-chained tamper evidence
- **Agent Containment**: Armstrong et al. (2012), Babcock et al. (2016) for the containment framing
- **Evaluation Methodology**: Prof. Valérie Viet Triem Tong (CentraleSupélec, IRISA/PIRAT) for feedback on adversarial evaluation strategies and containment testing
- **Microsoft Presidio**: design inspiration for PII detection patterns
- **EU AI Act (2024)**: Articles 9, 13, 14, and 15 directly shaped the design
See CONTRIBUTORS.md and docs/related-work.md for full citations.
EnforceCore is provided "as is", without warranty of any kind. See DISCLAIMER.md for full legal terms.
EnforceCore is a technical tool, not a compliance certification. Using EnforceCore does not guarantee regulatory compliance. Always consult qualified legal counsel for compliance requirements.
Apache 2.0 β free for open-source and commercial use.
Copyright 2025–2026 AKIOUD AI, SAS. See LICENSE for details.