A Claude Code plugin that catches bugs in AI-generated code by verifying against requirements, not implementation.
When you ask an AI to review its own code, it is biased toward its own solution. Meta's Chain-of-Verification research showed that independent verification improves factual accuracy by 28% - because the verifier checks against requirements, not the (possibly flawed) implementation.
SE-CoVe applies this to code: the verifier never sees the draft solution. It answers verification questions by checking docs, searching your codebase, and reasoning about requirements - then a synthesizer compares findings and corrects errors.
Unlike general-purpose verification, SE-CoVe is designed specifically for code. Each agent understands 35+ software engineering claim categories across three domains:
- **Correctness**: behaviors, logic flow, boundary conditions, error handling, API usage patterns
- **Security**: input validation, authentication, authorization, injection prevention, data exposure
- **Performance**: time/space complexity, caching, memory management, async patterns, N+1 queries
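As a concrete illustration of the last performance category above, the classic N+1 query pattern can be sketched like this (a hypothetical in-memory example; the data and the `query_posts_for` helper are invented for illustration, not part of the plugin):

```python
# Hypothetical in-memory "database" to illustrate the N+1 query pattern.
AUTHORS = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
POSTS = [{"author_id": 1, "title": "A"}, {"author_id": 1, "title": "B"},
         {"author_id": 2, "title": "C"}]

query_count = 0

def query_posts_for(author_id):
    global query_count
    query_count += 1          # each call stands in for one database round trip
    return [p for p in POSTS if p["author_id"] == author_id]

# N+1: one query for the author list, then one extra query per author.
for author in AUTHORS:
    author["posts"] = query_posts_for(author["id"])
```

A verifier flagging this would suggest a single batched fetch (one JOIN or one `IN (...)` query) instead of `query_count` separate round trips.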
The verification pipeline includes:
- Confidence scoring on all claims (High/Medium/Low)
- Severity ranking of issues (Critical/High/Medium/Low)
- Parallel verification tracks for thorough analysis
- Domain-specific checklists for security, performance, and error handling
- Evidence quality scoring with source authority ranking
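The scoring scheme above can be sketched as a small data model (a minimal illustration, assuming these field names; the plugin's actual internal representation may differ):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Confidence(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

class Severity(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

@dataclass
class VerifiedClaim:
    text: str                      # claim extracted from the draft answer
    confidence: Confidence         # verifier's confidence in its finding
    severity: Optional[Severity]   # set only when the claim turns out to be an issue
    evidence_score: float          # 0.0-1.0, higher = more authoritative source

claim = VerifiedClaim(
    text="useEffect cleanup cancels the pending timer",
    confidence=Confidence.HIGH,
    severity=None,
    evidence_score=0.9,
)
```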
```
Question → Baseline (generate + extract claims)
    ↓
Planner (create verification tasks)
    ↓
Executor (verify independently - never sees draft)
    ↓
Synthesizer (compare + correct)
    ↓
Verified Solution
```
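The stages above can be sketched in code (an illustrative outline only; the function names and string plumbing are assumptions, not the plugin's real internals - the key property is that `execute` never receives the draft):

```python
def baseline(question: str) -> tuple[str, list[str]]:
    """Generate a draft answer and extract its factual claims."""
    draft = f"Draft answer to: {question}"
    claims = [f"the draft's main claim about {question}"]
    return draft, claims

def plan(claims: list[str]) -> list[str]:
    """Turn each extracted claim into an independent verification task."""
    return [f"Is it true that {c}?" for c in claims]

def execute(task: str) -> str:
    """Answer a verification task WITHOUT access to the draft solution."""
    return f"Independent finding for: {task}"

def synthesize(draft: str, findings: list[str]) -> str:
    """Compare findings against the draft and emit a corrected answer."""
    if not findings:
        return draft
    return f"{draft} (corrected against {len(findings)} independent findings)"

question = "How do I implement debounced search in React?"
draft, claims = baseline(question)
findings = [execute(task) for task in plan(claims)]
answer = synthesize(draft, findings)
```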
First, add the marketplace:

```
/plugin marketplace add vertti/se-cove-claude-plugin
```

Then install the plugin:

```
/plugin install chain-of-verification
```

Run a verification with:

```
/chain-of-verification:verify <your question>
```

Tip: type `/ver` and press Tab to autocomplete the command.
| Flag | Short | Description |
|---|---|---|
| `--quick` | `-q` | Fast verification (haiku models, 1 executor) |
| `--thorough` | `-t` | Comprehensive verification (opus models, 4+ parallel executors) |
| `--focus=X` | `-f X` | Prioritize a specific area (see below) |
| Focus | What It Checks |
|---|---|
| `security` | Auth, input validation, injection, data exposure |
| `performance` | Complexity, caching, memory, N+1 queries |
| `api` | Contracts, versioning, error responses, compatibility |
| `testing` | Coverage, edge cases, integration, mocks |
| `error-handling` | Exceptions, recovery, logging, user feedback |
| `style` | Naming, organization, documentation, types |
| `scalability` | Bottlenecks, resource limits, horizontal scaling |
```
# Standard verification (default)
/chain-of-verification:verify How do I implement debounced search in React?

# Quick sanity check
/chain-of-verification:verify --quick Is this null check correct?

# Thorough security review (4 parallel executors)
/chain-of-verification:verify --thorough --focus=security Review the authentication flow

# API-focused verification
/chain-of-verification:verify -f api Does this endpoint handle errors correctly?

# Test coverage analysis
/chain-of-verification:verify --focus=testing Are these unit tests comprehensive?

# Error handling review
/chain-of-verification:verify -f error-handling How should this handle failures?
```

Good for: Complex code generation, architectural decisions, bug investigations, library/API questions
Skip for: Trivial changes, simple questions, exploratory coding
Based on Meta's Chain-of-Verification research, which demonstrated a 28% improvement in factual accuracy through independent verification.
MIT