A DAML-Safe by Construction development platform that generates provably safe DAML code using verified canonical resources, DAML compiler integration, and Haskell's formal verification capabilities.
The approach: leverage the DAML compiler's existing mathematical proofs and extend them with safety annotations and formal verification.
- DAML Compiler Integration: All patterns validated through DAML compilation
- Safety Annotations: Add safety metadata to DAML templates
- Safe Code Generation: Generate provably safe DAML code
- Safety Certificates: Mathematical proof of code safety
- DAML Code Validation: Validate against canonical patterns with DAML compiler safety
- Authorization Debugging: Debug with DAML's built-in authorization model
- Pattern Suggestions: Get recommendations from DAML-compiler-validated patterns
- Compilation Validation: Validate DAML compilation safety
- 3,667+ Documentation Files: From official DAML, Canton, and DAML Finance repositories
- Git-Verified Content: All resources verified via GitHub API
- Structured Ingestion: Categorized by use case, security level, and complexity
- Intelligent Recommendations: AI-powered resource suggestions
- DCAP Performance Tracking: Real-time performance monitoring via DCAP v2 protocol
- x402 Payment Infrastructure: Built-in payment support (disabled by default)
- HTTP+SSE Transport: Streaming support with Server-Sent Events
- Type-Safe Tools: Fully typed parameters and results using Pydantic models
- Python 3.10 or higher
- uv (recommended) or pip
# Clone and install
git clone <repository-url>
cd canton-mcp-server
uv sync
# Run the server
uv run canton-mcp-server
# Install from source
pip install -e .
# Run the server
canton-mcp-server
# Clone the repository
git clone <repository-url>
cd canton-mcp-server
# Copy environment template (optional)
cp .env.canton.example .env.canton
# Edit .env.canton with your configuration
# Start with docker-compose
docker-compose up -d
# View logs
docker-compose logs -f
# Stop the server
docker-compose down
The Docker setup includes:
- Hot-reload: Source code changes automatically reload (Python code only)
- Health checks: Automatic container health monitoring
- Port mapping: Server accessible at http://localhost:7284
- Resource files: Baked into image (rebuild to update)
Useful commands:
# Rebuild after resource changes
docker-compose up -d --build
# Access container shell
docker-compose exec canton-mcp-server bash
# View server logs
docker-compose logs -f canton-mcp-server
# Restart the server
docker-compose restart canton-mcp-server
Note: The Docker setup mounts source code for development but bakes resources into the image. If you modify files in resources/, rebuild the image with docker-compose up -d --build.
The Docker setup includes an automated test container (mcp-tester) that continuously validates the MCP server by making random tool calls at irregular intervals.
Purpose:
- Continuous integration testing
- Server availability monitoring
- Load testing and stress testing
- Tool functionality verification
- Real-world usage simulation
Architecture:
- Independent Python container
- Communicates with server via internal Docker network
- No external dependencies beyond requests library
- Automatic restart on failure
- Minimal resource footprint
Files:
test-container/
├── Dockerfile # Python 3.12-slim image
├── requirements.txt # requests>=2.32.0
└── mcp_tester.py # Test orchestration script
The container randomly calls all 5 MCP tools with realistic test data:
- validate_daml_business_logic
  - Tests: DAML code validation engine
  - Sample: Simple asset transfer template
- debug_authorization_failure
  - Tests: Authorization error analysis
  - Sample: Missing signatory error scenario
- suggest_authorization_pattern
  - Tests: Pattern recommendation engine
  - Sample: Multi-party approval workflow
- get_canonical_resource_overview
  - Tests: Resource registry and metadata
  - Sample: Full overview request
- recommend_canonical_resources
  - Tests: Resource recommendation system
  - Sample: Asset transfer use case
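The loop below is a simplified sketch of the test orchestration in mcp_tester.py, assuming only the requests library and the interval/URL environment variables documented below; the actual script ships its own test payloads, so the tool arguments shown here are illustrative.

import os
import random
import time

import requests

SERVER_URL = os.getenv("MCP_SERVER_URL", "http://canton-mcp-server:7284/mcp")
MIN_INTERVAL = int(os.getenv("MIN_INTERVAL", "30"))
MAX_INTERVAL = int(os.getenv("MAX_INTERVAL", "300"))

# One representative call per tool; the real script uses richer sample data.
TOOL_CALLS = [
    ("get_canonical_resource_overview", {}),
    ("recommend_canonical_resources",
     {"useCase": "asset_transfer", "description": "Transfer an asset between two parties"}),
]

call_number = 0
while True:
    call_number += 1
    name, arguments = random.choice(TOOL_CALLS)
    payload = {
        "jsonrpc": "2.0",
        "id": call_number,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    try:
        resp = requests.post(SERVER_URL, json=payload,
                             headers={"Accept": "application/json"}, timeout=30)
        print(f"[#{call_number}] {name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        print(f"[#{call_number}] {name}: request failed: {exc}")
    # Sleep a random, irregular interval before the next call
    time.sleep(random.randint(MIN_INTERVAL, MAX_INTERVAL))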
Basic Commands:
# Start both server and tester
docker-compose up -d
# View live tester logs
docker-compose logs -f mcp-tester
# View last 50 log lines
docker-compose logs --tail 50 mcp-tester
# Stop only the tester (keep server running)
docker-compose stop mcp-tester
# Start only the tester
docker-compose start mcp-tester
# Restart tester
docker-compose restart mcp-tester
# Check tester status
docker-compose ps mcp-tester
# Remove tester completely
docker-compose rm -s -f mcp-tester
Running Without Tester:
To disable the test container permanently:
- Edit docker-compose.yml
- Comment out or remove the mcp-tester service section
- Restart: docker-compose up -d
Or run only the server:
docker-compose up -d canton-mcp-server
Environment Variables:
# Interval configuration
MIN_INTERVAL=30 # Minimum seconds between calls (default: 30)
MAX_INTERVAL=300 # Maximum seconds between calls (default: 300)
# Server URL (usually auto-configured)
MCP_SERVER_URL=http://canton-mcp-server:7284/mcp
Set via docker-compose.yml:
services:
  mcp-tester:
    environment:
      - MIN_INTERVAL=60
      - MAX_INTERVAL=600
Set via .env file:
# .env file
MIN_INTERVAL=60
MAX_INTERVAL=600
Set via command line:
MIN_INTERVAL=60 MAX_INTERVAL=600 docker-compose up -d
Success Example:
mcp-tester | [2025-10-24 17:30:15] [INFO] 🚀 Starting MCP Test Container
mcp-tester | [2025-10-24 17:30:15] [INFO] Server: http://canton-mcp-server:7284/mcp
mcp-tester | [2025-10-24 17:30:15] [INFO] Interval: 30-300 seconds
mcp-tester | [2025-10-24 17:30:15] [INFO] Tools: 5 available
mcp-tester | [2025-10-24 17:30:15] [INFO]
mcp-tester | [2025-10-24 17:30:15] [INFO] [#1] Calling tool: validate_daml_business_logic
mcp-tester | [2025-10-24 17:30:16] [INFO] ✅ Tool executed successfully
mcp-tester | [2025-10-24 17:30:16] [INFO] 💤 Sleeping for 127 seconds...
mcp-tester | [2025-10-24 17:30:16] [INFO]
mcp-tester | [2025-10-24 17:33:23] [INFO] [#2] Calling tool: debug_authorization_failure
mcp-tester | [2025-10-24 17:33:23] [INFO] ✅ Tool executed successfully
mcp-tester | [2025-10-24 17:33:23] [INFO] 💤 Sleeping for 245 seconds...
Error Handling:
mcp-tester | [2025-10-24 17:30:51] [ERROR] Request failed: Connection refused
mcp-tester | [2025-10-24 17:30:51] [ERROR] ❌ Call failed: Connection refused
mcp-tester | [2025-10-24 17:30:51] [INFO] 💤 Sleeping for 33 seconds...
Tool Error Example:
mcp-tester | [2025-10-24 17:31:24] [INFO] [#3] Calling tool: recommend_canonical_resources
mcp-tester | [2025-10-24 17:31:24] [WARN] ⚠️ Tool returned error
mcp-tester | [2025-10-24 17:31:24] [INFO] 💤 Sleeping for 182 seconds...
Check if tester is running:
docker-compose ps mcp-tester
# Expected: "Up" status
View real-time logs:
docker-compose logs -f mcp-tester
# Press Ctrl+C to exit
Count successful calls:
docker-compose logs mcp-tester | grep "✅" | wc -l
Count failed calls:
docker-compose logs mcp-tester | grep "❌" | wc -l
Check last call:
docker-compose logs --tail 10 mcp-tester | grep "Calling tool"
Common Issues:
- Connection Refused - Server not yet ready
  - Wait 10-15 seconds for server startup
  - Tester will automatically retry
- Tool Returns Error - Expected for some tools during testing
  - Check server logs: docker-compose logs canton-mcp-server
  - Some errors are intentional test scenarios
- Tester Keeps Restarting - Configuration issue
  - Check environment variables
  - Verify MIN_INTERVAL < MAX_INTERVAL
Development:
# Fast testing during development
MIN_INTERVAL=5 MAX_INTERVAL=30 docker-compose up -d
docker-compose logs -f mcp-tester
Production Monitoring:
# Less frequent for production
MIN_INTERVAL=300 MAX_INTERVAL=900 docker-compose up -d
Load Testing:
# Scale up multiple testers
docker-compose up -d --scale mcp-tester=5
CI/CD Integration:
# Start services
docker-compose up -d
# Wait for stability
sleep 30
# Check for errors
if docker-compose logs mcp-tester | grep -q "❌"; then
echo "Test failures detected"
exit 1
fi
View aggregated test statistics:
# Total calls made
docker-compose logs mcp-tester | grep -c "Calling tool"
# Success rate
echo "Successful: $(docker-compose logs mcp-tester | grep -c '✅')"
echo "Failed: $(docker-compose logs mcp-tester | grep -c '❌')"
echo "Errors: $(docker-compose logs mcp-tester | grep -c '⚠️')"The server uses environment variables for configuration. Create a .env.canton file (or set system environment variables):
# MCP Server Configuration
MCP_SERVER_URL=http://localhost:7284
# DCAP (Performance Tracking) - ENABLED by default
DCAP_ENABLED=true
DCAP_MULTICAST_IP=159.89.110.236 # UDP relay address (or use multicast like 239.255.0.1)
DCAP_PORT=10191
# x402 Payment Configuration - DISABLED by default
X402_ENABLED=false
X402_WALLET_ADDRESS=
X402_WALLET_PRIVATE_KEY=
X402_NETWORK=base-sepolia
X402_TOKEN=USDC
To enable x402 payments for tool usage:
- Set X402_ENABLED=true in .env.canton
- Configure your wallet address and private key
- Set pricing in tool definitions (default: FREE)
The server uses HTTP+SSE (Server-Sent Events) transport on port 7284:
- Base URL: http://localhost:7284
- MCP Endpoint: http://localhost:7284/mcp
- Health Check: http://localhost:7284/health (see the example probe below)
- Streaming: Supported via SSE for progress updates
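A quick way to confirm the server is up before wiring in a client is to probe the health endpoint; a minimal check in Python, assuming the requests library:

import requests

# Liveness probe against the health endpoint
resp = requests.get("http://localhost:7284/health", timeout=5)
print(resp.status_code, resp.text)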
The server provides comprehensive tools for safe DAML development:
Generate provably safe DAML code using DAML-compiler-validated patterns.
Parameters:
- business_intent (string): What you want to achieve
- use_case (string): Primary use case (e.g., "asset_management", "financial_instruments")
- security_level (string): "basic", "enhanced", or "enterprise"
- constraints (array, optional): Business/technical constraints
Returns: Generated DAML code with safety certificate and compilation validation
Certify DAML patterns with safety annotations and mathematical proof.
Parameters:
- daml_code (string): DAML template to certify
- safety_properties (array): Safety properties to verify
- business_context (string): Business context for validation
Returns: Safety certificate with DAML compilation validation
validate_daml_business_logic: Validate DAML code against canonical patterns with DAML compiler safety.
Parameters:
- business_intent (string): What you want to achieve
- daml_code (string): DAML code to validate
- security_requirements (array, optional): Security requirements
Returns: Validation results with DAML compilation safety checks
Validate DAML compilation safety and authorization model compliance.
Parameters:
- daml_code (string): DAML code to validate
- validation_level (string): "basic", "enhanced", or "enterprise"
Returns: Compilation validation results with safety guarantees
debug_authorization_failure: Debug DAML authorization errors using DAML's built-in authorization model.
Parameters:
- error_message (string): The authorization error message
- daml_code (string, optional): DAML code that caused the error
- context (string, optional): Additional context
Returns: Analysis and suggested fixes using DAML compiler insights
suggest_authorization_pattern: Get DAML authorization pattern recommendations from DAML-compiler-validated patterns.
Parameters:
- workflow_description (string): Workflow to implement
- security_level (string): "basic", "enhanced", or "enterprise"
- constraints (array, optional): Business/technical constraints
Returns: Suggested patterns with DAML compilation validation
recommend_canonical_resources: Get intelligent recommendations for canonical DAML resources.
Parameters:
- use_case (string): Primary use case
- description (string): Detailed description of what you're building
- security_level (string, optional): Required security level
- complexity_level (string, optional): Required complexity level
- constraints (array, optional): Specific constraints or requirements
Returns: Curated list of relevant canonical resources with safety certificates
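For example, a request to this tool over the MCP endpoint might look like the following; the argument casing follows the camelCase convention used in the other examples, and the argument values are illustrative.

import requests

response = requests.post(
    "http://localhost:7284/mcp",
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "recommend_canonical_resources",
            "arguments": {
                "useCase": "asset_management",
                "description": "Transfer ownership of tokenized assets between two parties",
                "securityLevel": "enhanced"
            }
        }
    }
)
print(response.json())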
get_canonical_resource_overview: Get an overview of available canonical resources organized by use case and safety level.
Parameters: None
Returns: Structured overview of all available canonical resources
Retrieve safety certificates for DAML patterns and templates.
Parameters:
- pattern_id (string): Pattern identifier
- certificate_type (string): Type of certificate requested
Returns: Safety certificate with mathematical proof and DAML compilation validation
# Install development dependencies
uv sync --dev
# Run the server in development mode
uv run python -m canton_mcp_server.server
# Test with MCP Inspector
npx @modelcontextprotocol/inspector http://localhost:7284/mcp
The Canton MCP Server provides multiple ways to test tool calls:
Run the provided test script to verify all tools:
./test-mcp-tools.sh
This tests:
- Listing available tools
- Getting canonical resource overview
- Recommending resources for asset transfer
- Validating DAML code
The official MCP Inspector provides a web-based interface to test tools:
npx @modelcontextprotocol/inspector http://localhost:7284/mcp
Open your browser to the displayed URL to interact with all tools visually.
Test individual tools with curl:
# List all tools
curl -X POST http://localhost:7284/mcp \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'
# Validate DAML code
curl -X POST http://localhost:7284/mcp \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 2,
"method": "tools/call",
"params": {
"name": "validate_daml_business_logic",
"arguments": {
"businessIntent": "Create a simple IOU",
"damlCode": "template IOU\n with\n issuer: Party\n owner: Party\n where\n signatory issuer"
}
}
}'
# Debug authorization failure
curl -X POST http://localhost:7284/mcp \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 3,
"method": "tools/call",
"params": {
"name": "debug_authorization_failure",
"arguments": {
"errorMessage": "Authorization failed: missing signatory"
}
}
}'
# Suggest authorization pattern
curl -X POST http://localhost:7284/mcp \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 4,
"method": "tools/call",
"params": {
"name": "suggest_authorization_pattern",
"arguments": {
"workflowDescription": "Multi-party approval for asset transfer",
"securityLevel": "enhanced"
}
}
}'
Test with Python requests:
import requests
response = requests.post(
"http://localhost:7284/mcp",
headers={
"Content-Type": "application/json",
"Accept": "application/json"
},
json={
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "validate_daml_business_logic",
"arguments": {
"businessIntent": "Create a simple IOU",
"damlCode": """
template IOU
with
issuer: Party
owner: Party
where
signatory issuer
"""
}
}
}
)
print(response.json())
To use with Claude Desktop, add to your configuration file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
"mcpServers": {
"canton": {
"command": "docker",
"args": [
"exec",
"canton-mcp-server",
"uv",
"run",
"canton-mcp-server",
"serve"
]
}
}
}
Or for non-Docker installations:
{
"mcpServers": {
"canton": {
"command": "uv",
"args": ["run", "canton-mcp-server", "serve"],
"cwd": "/path/to/canton-mcp-server"
}
}
}
All tools are FREE by default:
- validate_daml_business_logic - Validate DAML code against canonical patterns
- debug_authorization_failure - Debug authorization errors with detailed analysis
- suggest_authorization_pattern - Get pattern recommendations for workflows
- recommend_canonical_resources - Get intelligent resource recommendations
- get_canonical_resource_overview - Overview of available canonical resources
The test-daml/ directory contains comprehensive DAML contract examples for testing:
- BasicIou.daml - Simple debt tracking with transfer and settlement
- MultiPartyContract.daml - Complex multi-party approval workflows
- SupplyChain.daml - Product tracking, shipping, and quality control
- AssetManagement.daml - Asset transfer and management patterns
- TradingExample.daml - Financial trading and order matching
- ProblematicExamples.daml - Authorization anti-patterns for testing validators
Use these contracts to test the MCP tools:
# Test validation with BasicIou
# Use MCP client to call validate_daml_business_logic with BasicIou.daml content
# Debug authorization issues with ProblematicExamples
# Use MCP client to call debug_authorization_failure with error scenarios
# Get pattern suggestions for SupplyChain workflows
# Use MCP client to call suggest_authorization_pattern with supply chain requirements
The Canton MCP Server uses JSON schemas to validate canonical resource files. All resource files must conform to their respective schemas:
Pattern resources:
- Required fields: name, version, description, tags, author, created_at, pattern_type, daml_template, authorization_requirements, when_to_use, when_not_to_use, security_considerations, test_cases
- Version format: Semantic versioning (e.g., "1.0.0")
- Date format: ISO 8601 timestamps (e.g., "2024-01-15T10:00:00Z")
Anti-pattern resources:
- Required fields: name, version, description, tags, author, created_at, anti_pattern_type, severity, problematic_code, why_problematic, detection_pattern, correct_alternative, impact, remediation
- Severity levels: low, medium, high, critical
Rule resources:
- Required fields: name, version, description, tags, author, created_at, rule_type, severity, enforcement, rules
- Enforcement levels: advisory, recommended, mandatory
Documentation resources:
- Required fields: name, version, description, tags, author, created_at, doc_type, audience, difficulty, overview, sections
- Document types: guide, tutorial, reference, best-practices, troubleshooting, api-docs
- Audience levels: beginners, developers, architects, operators, all
Resources are automatically validated against their schemas when loaded. Invalid resources are rejected with detailed error messages. The schema validation ensures:
- Data integrity: All required fields are present
- Type safety: Fields have correct data types
- Format compliance: Dates, versions, and other fields follow proper formats
- Content quality: Minimum length requirements for descriptions and other text fields
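As an illustration of this check, the snippet below validates a resource file against a JSON schema using the jsonschema and PyYAML packages; the file paths are hypothetical and the server's actual loader may differ.

import json

import yaml  # PyYAML, assumed available
from jsonschema import Draft202012Validator  # jsonschema package, assumed available

# Hypothetical paths; adjust to the actual schema and resource locations
with open("schemas/pattern.schema.json") as f:
    schema = json.load(f)
with open("resources/patterns/simple-transfer.yaml") as f:
    resource = yaml.safe_load(f)

validator = Draft202012Validator(schema)
errors = list(validator.iter_errors(resource))
if errors:
    for err in errors:
        print(f"Invalid resource at {'/'.join(map(str, err.path)) or '<root>'}: {err.message}")
else:
    print("Resource conforms to its schema")

A complete pattern resource that passes this kind of validation looks like the following: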
name: simple-transfer
version: "1.0.0"
description: Basic pattern for transferring ownership of an asset
tags: [transfer, authorization, basic]
author: Canton Team
created_at: "2024-01-15T10:00:00Z"
pattern_type: asset_transfer
daml_template: |
  template Transfer
    with
      owner: Party
      asset: Asset
    where
      signatory owner
      choice TransferOwnership : ContractId Transfer
        with
          newOwner: Party
        controller owner
        do
          create this with owner = newOwner
authorization_requirements:
  - id: REQ-AUTH-001
    rule: "Controller must be signatory or have explicit authorization"
    satisfied: true
    explanation: "owner is signatory and controller"
when_to_use:
  - "Simple ownership transfers"
  - "Unilateral actions by asset owner"
when_not_to_use:
  - "Multi-party approval needed"
  - "Complex state transitions"
security_considerations:
  - "Ensure owner is signatory"
  - "Validate asset state before transfer"
test_cases:
  - description: "Valid transfer"
    passes: true
    code: "alice transfers to bob"
This server follows the Model Context Protocol (MCP) specification using HTTP+SSE transport.
Test the server interactively:
npx @modelcontextprotocol/inspector http://localhost:7284/mcp
Add to your MCP client configuration:
{
"mcpServers": {
"canton": {
"command": "uv",
"args": ["run", "canton-mcp-server", "serve"],
"cwd": "/path/to/canton-mcp-server",
"env": {
"DCAP_ENABLED": "true",
"DCAP_MULTICAST_IP": "159.89.110.236",
"DCAP_PORT": "10191"
}
}
}
}
The server automatically broadcasts performance metrics using the DCAP v2 protocol:
- Protocol Version: 2
- Transport: UDP (direct or multicast)
- Default Port: 10191
- Metrics Tracked: Tool name, execution time, success/failure, anonymized parameters
Configure DCAP in .env.canton or via environment variables. Performance data is sent to dashboards/monitoring systems without impacting tool execution.
Note: Set DCAP_MULTICAST_IP to either:
- A direct UDP relay address (e.g., 159.89.110.236)
- A multicast address (e.g., 239.255.0.1)
The server automatically detects multicast addresses (239.x.x.x) and configures the socket appropriately.
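As a rough sketch of what that socket setup implies (this is not the server's actual DCAP code, and the payload shown is illustrative rather than the DCAP v2 wire format):

import socket

DCAP_IP = "239.255.0.1"   # or a direct relay such as 159.89.110.236
DCAP_PORT = 10191

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# The server keys on 239.x.x.x addresses as multicast (per the note above);
# multicast sends need a TTL, while a direct relay is plain unicast UDP.
if DCAP_IP.startswith("239."):
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

sock.sendto(b'{"tool": "validate_daml_business_logic", "duration_ms": 42}', (DCAP_IP, DCAP_PORT))
sock.close()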
The Canton MCP Server uses a powerful framework that makes adding new tools straightforward and type-safe. Follow these guidelines to implement new tools that integrate seamlessly with DCAP tracking, x402 payments, and MCP protocol compliance.
Create a new file in src/canton_mcp_server/tools/ (e.g., my_new_tool.py):
"""
My New Tool
Brief description of what this tool does.
"""
from typing import List, Optional
from pydantic import Field
from ..core import Tool, ToolContext, register_tool
from ..core.pricing import PricingType, ToolPricing
from ..core.types.models import MCPModel
# IMPORTANT: Use MCPModel, not BaseModel!
# MCPModel automatically handles camelCase/snake_case conversion for MCP protocol
class MyToolParams(MCPModel):
    """Parameters for my tool"""

    user_input: str = Field(description="User's input data")
    optional_config: Optional[str] = Field(
        default=None,
        description="Optional configuration"
    )


class MyToolResult(MCPModel):
    """Result from my tool"""

    success: bool = Field(description="Whether operation succeeded")
    output_data: str = Field(description="The result data")
    details: List[str] = Field(description="Additional details")


@register_tool  # This decorator auto-registers the tool
class MyNewTool(Tool[MyToolParams, MyToolResult]):
    """Tool for doing something awesome"""

    # Tool metadata (required)
    name = "my_new_tool"
    description = "Does something awesome with user input"
    params_model = MyToolParams
    result_model = MyToolResult

    # Pricing configuration (optional, defaults to FREE)
    pricing = ToolPricing(
        type=PricingType.FREE  # or FIXED, DYNAMIC
        # base_price_usd=0.01  # For FIXED pricing
    )

    async def execute(self, ctx: ToolContext[MyToolParams, MyToolResult]):
        """Execute the tool logic"""
        # Access validated, typed parameters
        user_input = ctx.params.user_input
        config = ctx.params.optional_config

        # Send progress updates (optional)
        yield ctx.progress(0, 100, "Starting processing...")

        # Send log messages (optional)
        yield ctx.log("info", f"Processing: {user_input}")

        # Do your work here
        output = f"Processed: {user_input}"

        # Update progress
        yield ctx.progress(100, 100, "Complete!")

        # Create typed result
        result = MyToolResult(
            success=True,
            output_data=output,
            details=["Step 1 completed", "Step 2 completed"]
        )

        # Return structured result
        # DCAP tracking happens automatically!
        # x402 payment settlement happens automatically!
        yield ctx.structured(result)
✅ DO:
- Inherit from MCPModel for all parameter and result classes
- Use type hints and Pydantic Field() descriptions
- Use the @register_tool decorator
- Define name, description, params_model, result_model
- Use ctx.params to access validated parameters
- Use ctx.structured(result) to return typed results
- Use yield for all responses (progress, logs, results)
❌ DON'T:
- Use plain Pydantic BaseModel (breaks MCP protocol camelCase)
- Forget the @register_tool decorator
- Return results directly (use yield ctx.structured(...))
- Use blocking I/O (use async/await)
- Access raw request data (use ctx.params instead)
The MCPModel base class provides automatic camelCase conversion:
from ..core.types.models import MCPModel
class MyParams(MCPModel):
    user_name: str     # ← Python: snake_case
    age_in_years: int

# JSON schema will have: userName, ageInYears (camelCase)
# Python access: params.user_name (snake_case)
# MCP protocol: {"userName": "...", "ageInYears": 25} (camelCase)
This ensures:
- Your Python code uses pythonic snake_case
- MCP protocol uses standard camelCase
- Schemas and responses match automatically
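For reference, the camelCase aliasing that MCPModel provides can be reproduced with Pydantic v2's alias generators; the snippet below is an illustration of that mechanism, not the project's actual MCPModel source.

from pydantic import BaseModel, ConfigDict, Field
from pydantic.alias_generators import to_camel


class CamelCaseModel(BaseModel):
    """Illustrative stand-in for MCPModel."""
    model_config = ConfigDict(alias_generator=to_camel, populate_by_name=True)


class ExampleParams(CamelCaseModel):
    user_name: str = Field(description="User name")
    age_in_years: int = Field(description="Age in years")


params = ExampleParams.model_validate({"userName": "alice", "ageInYears": 25})
print(params.user_name)                  # snake_case access in Python
print(params.model_dump(by_alias=True))  # {'userName': 'alice', 'ageInYears': 25}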
The ToolContext provides helpful methods:
# Access parameters
ctx.params.field_name
# Progress updates (optional)
yield ctx.progress(current, total, "Status message")
# Log messages (optional)
yield ctx.log("info", "Processing...")
yield ctx.log("warning", "Non-critical issue")
yield ctx.log("error", "Something failed")
# Return structured result
yield ctx.structured(result_object)
# Return with summary text
yield ctx.structured(result_object, summary_text="Operation completed successfully")
# Check payment status
if ctx.payment.verified:
    # Payment was verified
    amount = ctx.payment.amount_usd
Tools can be FREE, FIXED price, or DYNAMIC:
# Free tool (default)
pricing = ToolPricing(type=PricingType.FREE)
# Fixed price per execution
pricing = ToolPricing(
type=PricingType.FIXED,
base_price_usd=0.01 # 1 cent per execution
)
# Dynamic pricing (calculated at runtime)
pricing = ToolPricing(
type=PricingType.DYNAMIC,
min_price_usd=0.001,
max_price_usd=1.0
)
When you follow this pattern, you automatically get:
- ✅ DCAP Performance Tracking - All executions are tracked
- ✅ x402 Payment Handling - Payment verification and settlement
- ✅ MCP Protocol Compliance - Proper schema generation and responses
- ✅ Type Safety - Full IDE autocomplete and type checking
- ✅ Progress Streaming - Real-time updates via SSE
- ✅ Error Handling - Standardized error responses
- ✅ Request Management - Cancellation support and lifecycle tracking
# The tool is automatically discovered and registered!
# Just restart the server:
uv run canton-mcp-server serve
# Test with MCP Inspector:
npx @modelcontextprotocol/inspector http://localhost:7284/mcp
# Or test with curl:
curl -X POST http://localhost:7284/mcp \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "my_new_tool",
"arguments": {
"userInput": "test data"
}
}
}'
Error handling:
async def execute(self, ctx: ToolContext[MyToolParams, MyToolResult]):
    try:
        # Your logic here
        result = MyToolResult(...)
        yield ctx.structured(result)
    except ValueError as e:
        # Return error response
        yield ctx.error(f"Invalid input: {e}")
    except Exception as e:
        # Log and return error
        yield ctx.log("error", f"Unexpected error: {e}")
        yield ctx.error("Internal error occurred")
Cancellation support:
async def execute(self, ctx: ToolContext[MyToolParams, MyToolResult]):
    for i in range(100):
        # Check if cancelled
        if ctx.request.is_cancelled():
            yield ctx.log("warning", "Operation cancelled by user")
            return
        # Do work
        await process_chunk(i)
        yield ctx.progress(i, 100, f"Processing chunk {i}")
See src/canton_mcp_server/tools/validate_daml_business_logic.py for a complete, production-ready example that demonstrates all these patterns.
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
The Canton MCP Server implements a comprehensive safety-first architecture:
- Gate 1: DAML Compiler Safety - All patterns must compile successfully
- Gate 2: Safety Annotations - Patterns must have safety metadata
- Gate 3: Formal Verification - Safety properties must be verified
- Gate 4: Production Readiness - Must be production-tested
- DAML Compiler Integration: Validates all patterns through DAML compilation
- Safety Annotation System: Adds safety metadata to DAML templates
- Safe Code Generation Engine: Generates provably safe DAML code
- Authorization Safety Engine: Leverages DAML's built-in authorization model
- Business Logic Safety Engine: Uses DAML's consistency guarantees
- Safe Code Generation Tools: Generate and certify DAML code
- Enhanced Validation Tools: Validate with DAML compiler safety
- Resource Management Tools: Access canonical resources with safety certificates
- Tool Base Class: Type-safe tool development with Pydantic models
- Pricing System: Flexible pricing (FREE, FIXED, DYNAMIC) with x402 integration
- DCAP Integration: Automatic performance tracking for all tool executions
- Payment Handler: x402 payment verification and settlement
- Request Manager: Lifecycle management with cancellation support
- FastAPI Server: HTTP+SSE transport with streaming capabilities
- DAML-Safe by Construction: Leverage DAML compiler's existing safety guarantees
- Compiler-First Safety: All validation goes through DAML compilation
- Safety Annotations: Add safety metadata to DAML templates
- Complete Audit Trails: Every DAML compilation must be logged
- Model Context Protocol - The MCP specification
- x402 - Payment protocol for AI services
- Canton - The Canton blockchain platform
- DAML - The DAML smart contract language
- DCAP - Performance tracking protocol (if available)