Phylax is a Security & Compliance layer for Python-based AI agents. It provides both automatic monitoring and explicit analysis capabilities to ensure your AI applications comply with security policies and don't accidentally leak sensitive information.
| Feature | Description |
|---|---|
| Plug and Play Design | Automatically monitor all activity within a `with Phylax(...):` block |
| Explicit Analysis | Use `phylax.analyze()` for targeted compliance checks on specific data |
| Built-in Presets | Ready-made compliance presets for HIPAA, SOC 2, PCI DSS, GDPR, and Financial Services |
| Flexible Configuration | YAML-based policy configuration supporting regex, SPDX, and custom policies |
| Multiple Trigger Types | Choose from raise, log, human_review, or custom violation handling |
| Comprehensive Monitoring | Console output, function calls, network requests, and file operations |
| Event Hooks | Custom callbacks for input, output, and violation events |
| Thread-Safe | Safe for concurrent use (see the sketch after this table) |
| Custom Extractors | Define how to extract meaningful data from complex objects |
| Selective Ignore | Temporarily disable compliance checking with the `phylax.ignore()` context manager |
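Because instances are thread-safe, a single `Phylax` can be shared across worker threads. A minimal sketch (not from the original examples), assuming `analyze_output` may be called concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

from phylax import Phylax, PhylaxConfig, Policy

# A single shared Phylax instance used from several threads
config = PhylaxConfig(
    version=1,
    policies=[
        Policy(
            id="sensitive_keywords",
            type="regex",
            pattern=r"(?i)(password|secret|token)",
            severity="medium",
            trigger="log",
            scope=["output", "analysis"],
        )
    ],
)
phylax = Phylax(config)

def check(text: str) -> str:
    # Each worker runs its own explicit analysis
    return phylax.analyze_output(text, context="Concurrent check")

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(check, ["ok text", "my password is hunter2"]))
```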
Install Phylax with uv or pip:

```bash
# Using uv (recommended)
uv add phylax

# Using pip
pip install phylax
```

Define your policies and monitor your agent:

```python
from phylax import Phylax, PhylaxConfig, Policy
# Define security policies
config = PhylaxConfig(
    version=1,
    policies=[
        Policy(
            id="pii_ssn",
            type="regex",
            pattern=r"\d{3}-\d{2}-\d{4}",
            severity="high",
            trigger="raise",
            scope=["output", "analysis"],
        ),
        Policy(
            id="sensitive_keywords",
            type="regex",
            pattern=r"(?i)(password|secret|token)",
            severity="medium",
            trigger="log",
            scope=["input", "output", "analysis"],
        ),
    ],
)

def my_ai_agent(prompt: str) -> str:
    # Your AI agent logic here
    return f"Response to '{prompt}': Here's some data that might contain PII"

# Method 1: Explicit Analysis (Recommended)
phylax = Phylax(config)

# Analyze specific data
user_input = "Tell me something"
safe_input = phylax.analyze_input(user_input, context="User query validation")
response = my_ai_agent(safe_input)
safe_response = phylax.analyze_output(response, context="AI response validation")

# Method 2: Automatic Monitoring
with Phylax(config) as phylax:
    # All function calls within this block are automatically monitored
    response = my_ai_agent("Hello world")
    print(f"Response: {response}")
```

Phylax provides built-in presets for common compliance standards:
```python
from phylax import PhylaxConfig, Policy, list_presets

# See available presets
print(list_presets())  # ['hipaa', 'soc2', 'pci_dss', 'gdpr', 'financial']

# Use a single preset
config = PhylaxConfig.from_preset("hipaa")

# Combine multiple presets
config = PhylaxConfig.from_presets(["hipaa", "soc2"])

# Extend presets with custom policies
custom_policies = [
    Policy(
        id="custom_employee_id",
        type="regex",
        pattern=r"EMP-\d{6}",
        severity="medium",
        trigger="log",
    )
]
config = PhylaxConfig.from_presets(["hipaa"], custom_policies)

# Use presets in YAML
yaml_config = r"""
version: 1
presets:
  - hipaa
  - soc2
policies:
  - id: custom_rule
    type: regex
    pattern: "CUSTOM-\\d{6}"
    severity: medium
    trigger: log
"""
config = PhylaxConfig.from_yaml(yaml_config)
```

Create a `policies.yaml` file:

```yaml
version: 1
policies:
  - id: pii_ssn
    type: regex
    pattern: "\\d{3}-\\d{2}-\\d{4}"
    severity: high
    trigger: raise
    scope: [output, analysis, network]
  - id: sensitive_keywords
    type: regex
    pattern: "(?i)(password|secret|token|api_key)"
    severity: medium
    trigger: log
    scope: [input, output, analysis]
  - id: license_compliance
    type: spdx
    allowed: [MIT, Apache-2.0, BSD-3-Clause]
    severity: medium
    trigger: log
    scope: [file, analysis]
```

Then use it in your code:

```python
from phylax import Phylax
# Load configuration from YAML
phylax = Phylax("policies.yaml")
# Use as before...
result = phylax.analyze("Some data to check", context="Data validation")
```

Register custom callbacks for input, output, and violation events:

```python
phylax = Phylax(config)
@phylax.on_violation
def handle_security_violation(policy, sample, context):
    # security_logger, dashboard, and notify_security_team are
    # placeholders for your own integrations
    security_logger.alert(
        policy_id=policy.id,
        severity=policy.severity,
        sample=sample[:100],  # Truncate for logging
        context=context,
    )

    # Send to monitoring dashboard
    dashboard.report_violation(policy, context)

    # Notify security team for high-severity violations
    if policy.severity == "high":
        notify_security_team(policy, sample, context)

# Your AI agent calls...
safe_output = phylax.analyze_output(ai_response, context="Final output check")
```

Custom extractors define how Phylax pulls meaningful text out of complex objects:

```python
def extract_message_content(data):
"""Extract text from complex message objects."""
if isinstance(data, dict):
return data.get('content', str(data))
elif hasattr(data, 'content'):
return data.content
return str(data)
def extract_response_text(data):
"""Extract text from AI response objects."""
if isinstance(data, dict):
return data.get('text', data.get('response', str(data)))
elif hasattr(data, 'text'):
return data.text
return str(data)
phylax = Phylax(
config,
input_extractor=extract_message_content,
output_extractor=extract_response_text
)
# Now Phylax will use your custom extractors
complex_input = {"content": "User message", "metadata": {...}}
complex_output = {"text": "AI response", "confidence": 0.95}
phylax.analyze_input(complex_input)
phylax.analyze_output(complex_output)# Monitor only specific activities
phylax = Phylax(
    config,
    monitor_network=True,         # Monitor HTTP requests/responses
    monitor_console=False,        # Don't monitor print statements (default)
    monitor_files=True,           # Monitor file operations
    monitor_function_calls=True,  # Monitor function calls (default)
)

with phylax:
    # Network requests are monitored
    response = requests.get("https://api.example.com/data")

    # File operations are monitored
    with open("sensitive_data.txt", "r") as f:
        content = f.read()

    # Function calls are monitored
    result = my_ai_function(content)
```

Sometimes you may want to temporarily disable compliance checking, for example for internal operations or in contexts where you know the data is safe:
```python
phylax = Phylax(config)

with phylax:
    # This will be monitored
    response = ai_agent("Process this user input")

    # Temporarily disable monitoring for internal operations
    with phylax.ignore():
        # No compliance checking happens here
        internal_debug_data = extract_debug_info(response)
        log_internal_metrics(internal_debug_data)
        cleanup_temp_files()

    # Monitoring resumes here
    final_response = post_process(response)

# Or use ignore with explicit analysis
user_input = "Tell me about security"
safe_input = phylax.analyze_input(user_input)

with phylax.ignore():
    # Internal processing without compliance checks
    internal_context = build_internal_context(safe_input)
    debug_tokens = tokenize_for_debugging(internal_context)

# Back to normal monitoring
final_output = phylax.analyze_output(generate_response(safe_input))
```

Phylax also composes with agent frameworks. With CrewAI:

```python
from crewai import Agent, Task, Crew
from phylax import Phylax, PhylaxConfig

# Wrap CrewAI agents with Phylax monitoring
config = PhylaxConfig.from_yaml("security_policies.yaml")

with Phylax(config) as phylax:
    # Define your agents
    researcher = Agent(
        role='Researcher',
        goal='Research the given topic',
        backstory='Expert researcher with access to various sources'
    )

    # Define tasks
    research_task = Task(
        description='Research AI safety best practices',
        agent=researcher
    )

    # Run crew with automatic monitoring
    crew = Crew(agents=[researcher], tasks=[research_task])
    result = crew.kickoff()  # All agent interactions monitored
```

With LangChain, use explicit analysis around the chain:

```python
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from phylax import Phylax, PhylaxConfig

config = PhylaxConfig.from_yaml("security_policies.yaml")
phylax = Phylax(config)

# Explicit monitoring approach
llm = OpenAI(temperature=0.7)
prompt_template = PromptTemplate.from_template("{query}")
chain = LLMChain(llm=llm, prompt=prompt_template)

# Monitor input and output explicitly
user_query = "Tell me about user authentication"
safe_query = phylax.analyze_input(user_query, context="User query")
response = chain.run(safe_query)
safe_response = phylax.analyze_output(response, context="LLM response")
print(f"Safe response: {safe_response}")
```

Regex policies match text against a pattern:

```yaml
- id: credit_card_detector
  type: regex
  pattern: "\\d{4}[-\\s]?\\d{4}[-\\s]?\\d{4}[-\\s]?\\d{4}"
  severity: high
  trigger: raise
  scope: [output, analysis]
```

SPDX policies check license identifiers against an allow-list:

```yaml
- id: license_compliance
  type: spdx
  allowed: [MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause]
  severity: medium
  trigger: log
  scope: [file, analysis]
```

Custom policies delegate to your own validation function:

```python
# Define custom validation function
def check_custom_policy(data: str) -> bool:
    # Your custom logic here
    return "forbidden_pattern" in data.lower()

# Add to policy (programmatically)
policy = Policy(
    id="custom_check",
    type="custom",
    severity="medium",
    trigger="log",
)
policy.custom_func = check_custom_policy
```

Available trigger types:

- `raise`: Raise a `PhylaxViolation` exception (see the sketch after this list)
- `log`: Log the violation (default)
- `human_review`: Queue for human review (implement via the `on_violation` callback)
- `mitigate`: Custom mitigation (implement via the `on_violation` callback)
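A minimal sketch of handling `trigger: raise`, assuming `PhylaxViolation` is exported from the top-level `phylax` package:

```python
from phylax import Phylax, PhylaxConfig, Policy, PhylaxViolation

config = PhylaxConfig(
    version=1,
    policies=[
        Policy(
            id="pii_ssn",
            type="regex",
            pattern=r"\d{3}-\d{2}-\d{4}",
            severity="high",
            trigger="raise",
            scope=["output", "analysis"],
        )
    ],
)
phylax = Phylax(config)

try:
    safe = phylax.analyze_output("SSN: 123-45-6789", context="Final output check")
except PhylaxViolation:
    # Replace the leaking response with a safe fallback
    safe = "[REDACTED: output blocked by policy pii_ssn]"
```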
Available scopes:

- `input`: Monitor data going into functions/agents
- `output`: Monitor data coming from functions/agents
- `network`: Monitor HTTP requests and responses
- `file`: Monitor file read operations
- `console`: Monitor stdout/stderr output (see the sketch after this list)
- `analysis`: Monitor explicit `analyze()` calls
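The `console` scope only takes effect when console monitoring is enabled. A minimal sketch, assuming the `monitor_console` flag shown earlier covers `print` output:

```python
from phylax import Phylax, PhylaxConfig, Policy

# Policy that only applies to console output
config = PhylaxConfig(
    version=1,
    policies=[
        Policy(
            id="console_secret",
            type="regex",
            pattern=r"(?i)(api_key|token)",
            severity="medium",
            trigger="log",
            scope=["console"],
        )
    ],
)

# Console monitoring is off by default, so enable it explicitly
with Phylax(config, monitor_console=True):
    print("api_key=sk-...")  # stdout is checked against the console-scoped policy
```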
Phylax includes a CLI for validation and testing:
```bash
# Validate a policy configuration file
phylax validate policies.yaml

# Scan text against policies
phylax scan "Check this text for violations"

# Scan with custom config
phylax scan "Text to check" --config my_policies.yaml

# Show version
phylax --version
```
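One possible CI integration. This sketch assumes the CLI exits non-zero on an invalid config or a detected violation (verify against your installed version); `model_output.txt` is a hypothetical artifact from an earlier pipeline step:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Fail the build on an invalid config or a policy violation
phylax validate policies.yaml
phylax scan "$(cat model_output.txt)" --config policies.yaml
```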
To set up a development environment:

```bash
# Clone the repository
git clone https://github.com/dowhiledev/phylax.git
cd phylax
# Install with development dependencies using uv
uv sync --dev
# Or install development extras with pip
pip install -e ".[dev]"
```

To run the tests:

```bash
# Run all tests
uv run pytest
# Run with coverage
uv run pytest --cov=phylax
# Run specific test file
uv run pytest tests/test_core.py
```

To format, lint, and type-check the code:

```bash
# Format and lint code
uv run ruff format .
uv run ruff check . --fix
# Type checking
uv run mypy src/phylax
```

Check out the `examples/` directory for comprehensive examples:
- `basic_usage.py` - Basic Phylax usage patterns
- `yaml_config_example.py` - Using YAML configuration files
- `security_policies.yaml` - Example security policy configuration
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the MIT License - see the LICENSE file for details.
If you discover a security vulnerability, please send an e-mail to security@phylax.dev. All security vulnerabilities will be promptly addressed.
- Documentation: https://phylax.readthedocs.io
- Issues: GitHub Issues
- Discussions: GitHub Discussions
See CHANGELOG.md for a list of changes and version history.