AI-powered security auditing platform for Internet Computer canisters, providing comprehensive WASM analysis, vulnerability detection, and automated security recommendations.
Canispect is a developer-first auditing platform for canisters on the Internet Computer (ICP). It combines static analysis, AI-powered reasoning, and on-chain certification to transparently assess security, performance, and correctness.
Canispect is designed to fill a critical need in the ICP ecosystem by providing automated, transparent canister audits that merge AI assistance with formal analysis methods.
Core Features:
- AI-Powered WASM Analysis - Upload and analyze canister WASM files with AI interpretation
- Security-Focused AI Assistant - LLM integration with comprehensive fallback analysis for reliable results
- Static Analysis Integration - Mock integration with security tools like Owi and SeeWasm for WASM analysis
- Comprehensive Security Assessment - Multi-layered analysis combining static tools and AI reasoning
- Modern React UI - Clean, responsive interface for audit workflows and history management
- Internet Identity Integration - Secure authentication and audit record signing
Future Roadmap:
- Rust CLI Tool - Command-line interface for automated WASM analysis in CI/CD pipelines and local development workflows
Architecture:
WASM Upload → AI Analysis Engine → Audit Registry → Frontend Dashboard
     ↓                ↓                  ↓                  ↓
Static Tools →   AI Assistant  →  Certified Data  →  Internet Identity
The following flowchart illustrates the complete analysis process from WASM upload to final results:
Canispect implements a comprehensive static analysis system that examines WASM binaries for security vulnerabilities and code quality issues. The analysis is performed by two primary engines:
Purpose: Memory safety and performance analysis
Implementation: mock_owi_analysis() in /src/backend/src/lib.rs
Detection Logic:
// Large binary detection (Performance concern)
if file_size > 1_000_000 {
// Flag: Large WASM binary - optimization needed
// Severity: Medium
// Rationale: Large binaries increase attack surface and deployment costs
}
// Memory complexity analysis
if file_size > 100_000 {
// Flag: Complex memory patterns detected
// Severity: Low
// Rationale: Complex memory usage increases risk of memory safety issues
}
Analysis Categories:
- Performance: Binary size optimization
- Memory Management: Memory safety verification
- Attack Surface: Code complexity assessment
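Both mock engines follow the same shape: apply size-based rules and emit a finding for each rule that fires. The sketch below illustrates that pattern for the Owi heuristics; the StaticFinding struct and its field names are illustrative placeholders, not the exact types defined in lib.rs.

```rust
// Illustrative placeholder type; the real finding struct in lib.rs may differ.
#[derive(Clone, Debug)]
struct StaticFinding {
    tool: &'static str,
    category: &'static str,
    severity: &'static str,
    description: &'static str,
}

// Sketch of the Owi-style size heuristics described above.
fn owi_heuristics(wasm_bytes: &[u8]) -> Vec<StaticFinding> {
    let mut findings = Vec::new();
    let file_size = wasm_bytes.len();

    // Large binary detection (performance concern, Medium severity).
    if file_size > 1_000_000 {
        findings.push(StaticFinding {
            tool: "Owi (mock)",
            category: "Performance",
            severity: "Medium",
            description: "Large WASM binary - consider optimization",
        });
    }

    // Memory complexity heuristic (Low severity).
    if file_size > 100_000 {
        findings.push(StaticFinding {
            tool: "Owi (mock)",
            category: "Memory Management",
            severity: "Low",
            description: "Complex memory patterns detected",
        });
    }

    findings
}
```

The SeeWasm mock follows the same rule-set pattern with its own thresholds and categories.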
Purpose: Symbolic execution and arithmetic safety
Implementation: mock_seewasm_analysis() in /src/backend/src/lib.rs
Detection Logic:
// Symbolic execution completion
// Always generates: "Symbolic execution completed - no critical paths identified"
// Severity: Info
// Purpose: Confirms analysis tool execution
// Arithmetic safety check
if wasm_bytes.len() > 50_000 {
// Flag: Complex arithmetic operations detected
// Severity: Medium
// Rationale: Complex arithmetic increases overflow/underflow risk
}
Analysis Categories:
- Symbolic Execution: Code path analysis
- Arithmetic Safety: Overflow/underflow protection
- Function Analysis: Method complexity assessment
Implementation: calculate_code_metrics() in /src/backend/src/lib.rs
fn calculate_code_metrics(wasm_bytes: &[u8]) -> CodeMetrics {
let file_size = wasm_bytes.len() as u32;
// Estimation algorithms:
let estimated_lines = (file_size / 10).max(100); // ~10 bytes per line
let function_count = (file_size / 1000).max(1); // ~1KB per function
let complexity_score = (file_size / 5000).max(1); // ~5KB per complexity unit
}
Metrics Provided:
- File Size: Exact WASM binary size in bytes
- Estimated LOC: Approximated lines of code (file_size / 10)
- Function Count: Estimated number of functions (file_size / 1000)
- Complexity Score: Code complexity rating (file_size / 5000)
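Put together, the excerpt above amounts to the following self-contained sketch; the CodeMetrics field names are assumed for illustration and may not match the exact definition in lib.rs.

```rust
// Field names assumed for illustration; see the real CodeMetrics in lib.rs.
#[derive(Clone, Debug)]
struct CodeMetrics {
    file_size: u32,
    estimated_lines: u32,
    function_count: u32,
    complexity_score: u32,
}

fn calculate_code_metrics(wasm_bytes: &[u8]) -> CodeMetrics {
    let file_size = wasm_bytes.len() as u32;
    CodeMetrics {
        file_size,
        estimated_lines: (file_size / 10).max(100),  // ~10 bytes per line
        function_count: (file_size / 1000).max(1),   // ~1 KB per function
        complexity_score: (file_size / 5000).max(1), // ~5 KB per complexity unit
    }
}
```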
The AI analysis system provides intelligent security assessment using either LLM integration or comprehensive fallback analysis.
Current Implementation Status:
- LLM Integration: Available but currently disabled due to timeout issues with the ic-llm service
- Fallback Analysis: Fully functional comprehensive analysis system that provides reliable results
- Hybrid Approach: Automatically falls back to comprehensive analysis when LLM is unavailable
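A minimal sketch of that hybrid flow is shown below. Only create_fallback_ai_analysis corresponds to a function named in this README; the other types and helpers are placeholders to illustrate the fall-back-on-error shape.

```rust
// Placeholder result type for illustration.
struct AiAnalysis {
    summary: String,
    confidence: f64,
}

// Placeholder for the ic-llm path, which is currently disabled due to timeouts.
async fn try_llm_analysis(_wasm: &[u8], _findings: &[String]) -> Result<AiAnalysis, String> {
    Err("LLM path disabled".to_string())
}

// Stand-in for the real create_fallback_ai_analysis in lib.rs.
fn create_fallback_ai_analysis(_wasm: &[u8], _findings: &[String]) -> AiAnalysis {
    AiAnalysis {
        summary: "comprehensive fallback analysis".to_string(),
        confidence: 0.8,
    }
}

// Hybrid approach: prefer the LLM, fall back automatically on any error.
async fn analyze_with_fallback(wasm: &[u8], findings: &[String]) -> AiAnalysis {
    match try_llm_analysis(wasm, findings).await {
        Ok(analysis) => analysis,
        Err(_) => create_fallback_ai_analysis(wasm, findings),
    }
}
```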
Primary AI Path (Available but temporarily disabled):
- Prompt Generation: create_security_audit_prompt()
- LLM Integration: Via the ic-llm crate with the Llama 3.1 8B model
- Response Processing: Structured analysis parsing
Fallback AI Analysis (Active implementation):
Implementation: create_fallback_ai_analysis() in /src/backend/src/lib.rs
1. Summary Generation:
// File size analysis
if file_size > 1_000_000 {
"Large WASM binary detected - consider optimization"
} else if file_size < 10_000 {
"Small WASM binary suggests minimal functionality"
}
// Complexity assessment
if complexity > 50 { "High complexity - increased vulnerability risk" }
else if complexity > 20 { "Moderate complexity - ensure proper testing" }
else { "Low complexity - reduced risk" }2. Pattern Identification:
// Function complexity patterns
if function_count > 20 { "Complex multi-function canister" }
else if function_count > 5 { "Moderate function complexity" }
else { "Simple function structure" }
// Memory pattern detection
if static_findings.contains("Memory") {
"Memory management patterns detected"
}
3. Security Concerns Generation:
// Size-based concerns
if file_size > 500_000 {
"Large binary may contain vulnerable dependencies"
}
// Complexity-based concerns
if complexity > 30 {
"High complexity increases security review difficulty"
}
// Always included baseline concerns:
- "Verify proper access controls for all public methods"
- "Ensure comprehensive input validation"
- "Monitor cycle consumption to prevent DoS attacks"4. Recommendation Engine:
// Base recommendations (always included):
- "Implement comprehensive logging for security monitoring"
- "Add input sanitization for all user-provided data"
- "Use Internet Computer's certified data for critical state"
- "Implement proper error handling without revealing internals"
// Conditional recommendations:
if file_size > 1_000_000 {
"Consider code splitting or removing unused dependencies"
}
if complexity > 20 {
"Add comprehensive unit tests for all code paths"
"Consider refactoring complex functions"
}
5. Confidence Score Calculation:
let confidence = if static_findings.is_empty() {
0.7 // Good confidence with no static issues
} else if has_critical_findings {
0.9 // High confidence when critical issues found
} else {
0.8 // High confidence with some issues found
};
Overall Severity Determination:
let overall_severity = if has_critical_findings {
SecuritySeverity::Critical
} else if has_high_findings {
SecuritySeverity::High
} else if has_medium_findings {
SecuritySeverity::Medium
} else {
SecuritySeverity::Low
};
Severity Levels:
- Critical: Immediate security threats requiring urgent action
- High: Serious security concerns needing prompt attention
- Medium: Moderate issues that should be addressed
- Low: Minor concerns or best practice recommendations
- Info: Informational findings for awareness
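The if/else chain above is equivalent to taking the maximum severity across findings, with a floor of Low so that Info-only results are not reported as the overall level. A compact sketch, assuming the enum variants are ordered from Info to Critical:

```rust
// Variant order gives the Ord ranking Info < Low < Medium < High < Critical.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
enum SecuritySeverity {
    Info,
    Low,
    Medium,
    High,
    Critical,
}

fn overall_severity(findings: &[SecuritySeverity]) -> SecuritySeverity {
    // Most severe finding wins; Info-only (or empty) results map to Low,
    // matching the if/else chain shown above.
    findings
        .iter()
        .copied()
        .max()
        .unwrap_or(SecuritySeverity::Low)
        .max(SecuritySeverity::Low)
}
```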
Analysis Request Flow (/src/frontend/src/services/audit.ts):
- File Processing:
  // Convert File to Uint8Array
  const arrayBuffer = await file.arrayBuffer();
  const wasmBytes = new Uint8Array(arrayBuffer);
- Request Creation:
  const request: WasmAnalysisRequest = {
    wasm_bytes: wasmBytes,
    canister_id: canisterId ? Principal.fromText(canisterId) : undefined,
    metadata: { name, description, version },
  };
- Backend Communication:
  const actor = await this.getBackendActor();
  const result = await actor.analyze_wasm_security(request);
- Result Processing: Parse and display comprehensive analysis results including static findings, AI analysis, and actionable recommendations.
Input Validation:
- WASM file format verification
- Size limits to prevent DoS attacks
- Metadata sanitization
Analysis Safety:
- Sandboxed WASM execution
- Resource limits on analysis operations
- Error handling for malformed binaries
Result Integrity:
- SHA256 hashing for WASM verification
- Timestamped analysis records
- Immutable audit trail storage
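As an example of how the validation and integrity checks fit together, the sketch below verifies the WASM preamble, enforces a size cap, and computes the SHA-256 digest recorded with the audit. The sha2/hex crates, the 10 MiB limit, and the error messages are assumptions for illustration, not the backend's exact implementation.

```rust
use sha2::{Digest, Sha256};

// Assumed upload cap; the real limit may differ.
const MAX_WASM_SIZE: usize = 10 * 1024 * 1024;

fn validate_wasm_upload(wasm_bytes: &[u8]) -> Result<String, String> {
    // Format check: the WASM preamble is the magic "\0asm" followed by version 1.
    if wasm_bytes.len() < 8 || &wasm_bytes[0..4] != b"\0asm" {
        return Err("Not a valid WASM binary".to_string());
    }
    // Size limit bounds analysis cost and guards against DoS via huge uploads.
    if wasm_bytes.len() > MAX_WASM_SIZE {
        return Err("WASM binary exceeds the upload size limit".to_string());
    }
    // SHA-256 digest ties the audit record to the exact binary that was analyzed.
    Ok(hex::encode(Sha256::digest(wasm_bytes)))
}
```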
- Getting Started
- Using Canispect
- Project Structure
- Testing
- Development
- Deployment
- Resources
- Node.js (v18 or later)
- Rust (latest stable)
- dfx (DFINITY SDK)
- Internet Identity for authentication
- Click "Use this Template" β "Create a new repository"
- Click "Code β Open with Codespaces"
- Select machine type: 4-core 16GB RAM β’ 32GB
- Everything is pre-configured and ready!
# Clone the repository
git clone <your-repo-url>
cd Canispect
# Install dependencies
npm install
For enhanced AI-powered analysis, set up Ollama:
# Start Ollama server
ollama serve
# Expected to start listening on port 11434
# In a separate terminal, download the LLM model
ollama run llama3.1:8b
# Type /bye to exit after model is downloaded
# Start local Internet Computer replica
dfx start --clean
# In another terminal, deploy dependencies
dfx deps pull
dfx deps deploy # Deploys the LLM canister
# Deploy Canispect canisters
dfx deploy
- Click "Connect with Internet Identity" to authenticate
- Your Internet Identity will be used to sign audit records
- Navigate to the "Analyze WASM" tab
- Drag & drop a
.wasmfile or click to browse - Fill in metadata (optional):
- Canister name and description
- Version information
- Canister ID (if analyzing deployed canister)
- Click "Analyze Security"
Canispect provides comprehensive analysis including:
- Static Analysis: Mock integration with tools like Owi and SeeWasm
- AI Analysis: AI-powered security assessment and recommendations
- Security Findings: Categorized vulnerabilities with severity levels
- Code Metrics: File size, complexity, and estimated lines of code
- Recommendations: Actionable security improvements
- Navigate to "Audit History" tab
- View past audits and their results
- Filter by severity and status
- Access detailed audit records
For testing purposes, use the built-in generators:
# Generate all test WASM files
npm run generate-test-wasm
# Generate specific type
npm run quick-wasm minimal
npm run quick-wasm suspicious
Test files are created in /test-data/ and include:
- minimal.wasm: Basic WASM validation
- simple.wasm: Function analysis testing
- complex.wasm: Multi-function analysis
- suspicious.wasm: Security vulnerability detection
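For reference, the minimal test file is essentially the 8-byte WASM preamble (the "\0asm" magic bytes plus version 1). If you prefer to produce an equivalent file without the npm generators, a small sketch:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // WASM preamble: magic "\0asm" followed by version 1 (little-endian).
    let minimal_wasm: [u8; 8] = [0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00];
    fs::write("test-data/minimal.wasm", minimal_wasm)
}
```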
Canispect/
├── .devcontainer/devcontainer.json   # Development container configuration
├── .github/instructions/             # AI assistant instructions for development
├── src/
│   ├── backend/                      # Rust backend canister for WASM analysis
│   │   ├── src/
│   │   │   └── lib.rs                # AI-powered analysis engine
│   │   ├── backend.did               # Candid interface definition
│   │   └── Cargo.toml                # Rust dependencies
│   ├── audit_registry/               # Rust canister for audit record storage
│   │   ├── src/
│   │   │   └── lib.rs                # On-chain audit registry
│   │   ├── audit_registry.did        # Candid interface definition
│   │   └── Cargo.toml                # Rust dependencies
│   ├── frontend/                     # React + Tailwind frontend
│   │   ├── src/
│   │   │   ├── App.tsx               # Main Canispect application
│   │   │   ├── components/           # UI components
│   │   │   │   ├── WasmUpload.tsx    # File upload component
│   │   │   │   ├── AnalysisResults.tsx  # Results display
│   │   │   │   ├── AuditHistory.tsx  # Audit history viewer
│   │   │   │   └── AuthButton.tsx    # Internet Identity auth
│   │   │   ├── services/             # Canister service layers
│   │   │   │   ├── auth.ts           # Authentication service
│   │   │   │   ├── audit.ts          # Audit operations
│   │   │   │   └── canispect.ts      # Main service
│   │   │   └── views/                # Page-level components
│   │   ├── package.json              # Frontend dependencies
│   │   └── vite.config.ts            # Vite build configuration
│   └── declarations/                 # Auto-generated canister interfaces
├── scripts/
│   ├── generate-test-wasm.js         # WASM test file generator
│   ├── quick-wasm-gen.js             # Quick WASM generator CLI
│   └── generate-candid.sh            # Candid generation script
├── test-data/                        # Generated test WASM files (not tracked)
├── tests/
│   ├── src/                          # Backend test files
│   └── vitest.config.ts              # Test configuration
├── dfx.json                          # Internet Computer configuration
├── Cargo.toml                        # Root Rust workspace
└── README.md                         # This file
npm test
npm test tests/src/backend.test.ts
npm test --workspace=frontend
# Generate all test WASM files with metadata
npm run generate-test-wasm
# Generate specific WASM type
npm run quick-wasm minimal # Minimal WASM (8 bytes)
npm run quick-wasm simple # Simple function WASM
npm run quick-wasm complex # Multi-function WASM
npm run quick-wasm suspicious # WASM with security patterns
# Format and lint all code
npm run format
# Check TypeScript errors
npx tsc -p src/frontend/tsconfig.json
# Check Rust code
cargo check
# Regenerate Candid files after interface changes
npm run generate-candid
# Start local replica
dfx start --clean
# Deploy all canisters
dfx deploy
# Start frontend dev server
npm start
- Configure for Mainnet:
  # Set up mainnet environment
  dfx deploy --network ic
- Update Frontend URLs:
  - Update canister URLs in frontend services
  - Configure Internet Identity for production
- Deploy Steps:
  # Deploy to Internet Computer mainnet
  dfx deploy --network ic --with-cycles 1000000000000
The project is optimized for GitHub Codespaces with:
- Pre-configured development container
- Automatic dependency installation
- Ready-to-use development environment
We welcome contributions to Canispect! To contribute:
- Fork the repository
- Create a feature branch: git checkout -b feature/amazing-feature
- Make your changes and add tests
- Run tests: npm test
- Format code: npm run format
- Commit changes: git commit -m 'Add amazing feature'
- Push to branch: git push origin feature/amazing-feature
- Open a Pull Request
- Report bugs or request features via GitHub Issues
- Include steps to reproduce for bugs
- Describe the expected behavior for feature requests
Secure your canisters with AI-powered analysis!