QWED Logo - AI Verification Engine

QWED Protocol

Model Agnostic Verification Layer for AI

QWED Verification - Production-grade deterministic verification layer for Large Language Models. Works with ANY LLM - OpenAI, Anthropic, Gemini, Llama (via Ollama), or any local model. Detect and prevent AI hallucinations through 8 specialized verification engines. Your LLM, Your Choice, Our Verification.

Don't fix the liar. Verify the lie.
QWED does not reduce hallucinations. It makes them irrelevant.

If an AI output cannot be proven, QWED will not allow it into production.

🌐 Model Agnostic: Local ($0) • Budget ($5/mo) • Premium ($100/mo) - You choose!



💖 Support QWED Development:

Sponsor on GitHub


Twitter LinkedIn Blog


Quick Start · ⚡ QWEDLocal · The Problem · The 8 Engines · 🔌 Integration · 🖥️ CLI · 🆓 Ollama (FREE!) · 📖 Full Documentation

⚠️ What QWED Is (and Isn't)

QWED is: An open-source engineering tool that combines existing verification libraries (SymPy, Z3, SQLGlot, AST) into a unified API for LLM output validation.

QWED is NOT: Novel research. We don't claim algorithmic innovation. We claim practical integration for production use cases.

Works when: the developer provides ground truth (expected values, schemas, contracts) and the LLM generates structured output.

Doesn't work when: specs come from natural language, outputs are freeform text, or the verification domain is unsupported.


🚀 Quick Start: Install & Verify in 30 Seconds

# Install from PyPI (Recommended)
pip install qwed

# Or install from source
git clone https://github.com/QWED-AI/qwed-verification.git
cd qwed-verification
pip install -e .

🎓 NEW: Free Course on AI Verification

Learning Path: From Zero to Production-Ready AI Verification


🚀 Start the Free Course →

What You'll Learn:

  • 💡 Artist vs. Accountant: Why LLMs are creative but terrible at math
  • 🧮 Neurosymbolic AI: How deterministic verification catches 100% of errors*
  • 🏗️ Production Patterns: Build guardrails that actually work
  • 🔒 HIPAA/GDPR Compliance: PII masking for regulated industries
  • 🦜 Framework Integration: LangChain, LlamaIndex, and more

Total Time: ~3 hours | Modules: 4 | Examples: Production-ready code

Perfect for: developers integrating LLMs, ML engineers, and tech leads evaluating AI safety


from qwed_sdk import QWEDClient

client = QWEDClient(api_key="your_key")

# The LLM says: "Derivative of x^2 is 3x" (Hallucination!)
response = client.verify_math(
    query="What is the derivative of x^2?",
    llm_output="3x" 
)

print(response)
# -> ❌ CORRECTED: The derivative is 2x. (Verified by SymPy)
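
A correct answer follows the same call pattern; the response text below is illustrative, mirroring the corrected-output format above:

```python
# Happy path: a true claim passes verification (response text illustrative).
response = client.verify_math(
    query="What is the derivative of x^2?",
    llm_output="2x"
)

print(response)
# -> ✅ VERIFIED: 2x is correct. (Verified by SymPy)
```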

💡 Want to use QWED locally without our backend? Check out QWEDLocal - works with Ollama (FREE), OpenAI, Anthropic, or any LLM provider.


🚨 The LLM Hallucination Problem: Why AI Can't Be Trusted

Everyone is trying to fix AI hallucinations with fine-tuning (teaching the model more data).

This is like forcing a student to memorize 1,000,000 math problems.

What happens when they see the 1,000,001st problem? They guess.


📊 The Proof: Why Enterprise AI Needs QWED Verification

We benchmarked Claude Opus 4.5 (one of the world's best LLMs) on 215 critical tasks.

QWED Benchmark Results - LLM Accuracy Testing

| Finding | Implication |
|---|---|
| Finance: 73% accuracy | Banks can't use a raw LLM for calculations |
| Adversarial: 85% accuracy | LLMs fall for authority-bias tricks |
| QWED: 100% error detection | All 22 errors caught before production |

QWED doesn't compete with LLMs. We ENABLE them for production use.

📄 Full Benchmark Report →


🎯 Use Cases & Applications

QWED is designed for industries where AI errors have real consequences:

| Industry | Use Case | Risk Without QWED |
|---|---|---|
| 🏦 Financial Services | Transaction validation, fraud detection | $12,889 error per miscalculation |
| 🏥 Healthcare AI | Drug interaction checking, diagnosis verification | Patient safety risks |
| ⚖️ Legal Tech | Contract analysis, compliance checking | Regulatory violations |
| 📚 Educational AI | AI tutoring, assessment systems | Misinformation to students |
| 🏭 Manufacturing | Process control, quality assurance | Production defects |

✅ The Solution: Verification Layer

QWED is the first open-source Neurosymbolic AI Verification Layer.

We combine:

  • Neural Networks (LLMs) for natural language understanding
  • Symbolic Reasoning (SymPy, Z3, AST) for deterministic verification

The Core Philosophy: "The Untrusted Translator"

QWED operates on a strict principle: Don't trust the LLM to compute or judge; trust it only to translate.

Example Flow:

User Query: "If all A are B, and x is A, is x B?"

↓ (LLM translates)

Z3 DSL: ForAll(y, Implies(A(y), B(y))), A(x) ⊢ B(x)

↓ (Z3 proves)

Result: TRUE (Proven by formal logic)

The LLM is an Untrusted Translator. The Symbolic Engine is the Trusted Verifier.
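
To make the flow concrete, here is a minimal hand-written sketch of the verifier side using Z3's Python API directly. It illustrates the principle; the actual DSL emitted by QWED's translator may differ:

```python
# Sketch: proving "if all A are B, and x is A, then x is B" with Z3.
# The LLM only translates the sentence into this formula; Z3 supplies the proof.
from z3 import (BoolSort, Const, DeclareSort, ForAll, Function, Implies,
                Not, Solver, unsat)

Thing = DeclareSort("Thing")
A = Function("A", Thing, BoolSort())
B = Function("B", Thing, BoolSort())
x, y = Const("x", Thing), Const("y", Thing)

s = Solver()
s.add(ForAll([y], Implies(A(y), B(y))))  # premise: all A are B
s.add(A(x))                              # premise: x is A
s.add(Not(B(x)))                         # assert the NEGATION of the claim

# If the negation is unsatisfiable, the claim is formally proven.
print("Proven" if s.check() == unsat else "Not proven")  # -> Proven
```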


💡 How QWED Compares

The "Judge" Problem

Most AI safety tools use "LLM-as-a-Judge" (asking GPT-4 to grade GPT-3.5). This is fundamentally unsafe:

  • Recursive Hallucination: If the judge has the same bias as the generator, errors go undetected
  • Probabilistic Evaluation: LLMs give probability, not proof
  • Subjectivity: Different judges = different answers

QWED introduces "Solver-as-a-Judge": Replace neural network opinions with compiler execution and mathematical proof.

Comparison Table

| Feature | QWED Protocol | NeMo Guardrails | LangChain Evaluators |
|---|---|---|---|
| The "Judge" | Deterministic Solver (Z3/SymPy) | Semantic Matcher | Another LLM (GPT-4) |
| Mechanism | Translation to DSL | Vector Similarity | Prompt Engineering |
| Verification Type | Mathematical Proof | Policy Adherence | Consensus/Opinion |
| Primary Goal | Correctness (Truth) | Safety (Appropriateness) | Quality Score |
| False Positives | Near Zero (Logic-based) | Medium (Semantic drift) | High (Subjectivity) |
| Works Offline | ✅ Yes (QWEDLocal) | ❌ No | ❌ No |
| Privacy | ✅ 100% Local | ❌ Cloud-based | ❌ Cloud-based |

QWED's Advantage: When you need proof, not opinion.


🔬 The Verification Engines

QWED routes queries to specialized engines that act as DSL interpreters:

┌──────────────┐
│  User Query  │
└──────┬───────┘
       │
       ▼
┌──────────────────┐
│ LLM (The Guesser)│
│ GPT-4 / Claude   │
└──────┬───────────┘
       │ Unverified Output
       ▼
┌────────────────────┐
│  QWED Protocol     │
│  (Verification)    │
└──────┬─────────────┘
       │
   ┌───┴────┐
   ▼        ▼
❌ Reject  ✅ Verified
            │
            ▼
   ┌──────────────────┐
   │ Your Application │
   └──────────────────┘
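
In application code, this diagram collapses to a simple gate. A minimal sketch of the pattern follows; note that the `verified` and `reason` fields are assumed names for illustration, so check the SDK reference for the actual response shape:

```python
# Hypothetical gate pattern: nothing unverified reaches production.
# NOTE: `result.verified` / `result.reason` are assumed field names.
from qwed_sdk import QWEDClient

client = QWEDClient(api_key="your_key")

def verified_answer(query: str, llm_answer: str) -> str:
    result = client.verify_math(query=query, llm_output=llm_answer)
    if result.verified:                              # ✅ proven -> ship it
        return llm_answer
    raise ValueError(f"Blocked: {result.reason}")    # ❌ unproven -> reject

answer = verified_answer("What is the derivative of x^2?", "2x")
```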

QWED 🆚 Traditional AI Safety Approaches

| Approach | Accuracy | Deterministic | Explainable | Best For |
|---|---|---|---|---|
| QWED Verification | ✅ 99%+ | ✅ Yes | ✅ Full trace | Production AI |
| Fine-tuning / RLHF | ⚠️ ~85% | ❌ No | ❌ Black box | General improvement |
| RAG (Retrieval) | ⚠️ ~80% | ❌ No | ⚠️ Limited | Knowledge grounding |
| Prompt Engineering | ⚠️ ~70% | ❌ No | ⚠️ Limited | Quick fixes |
| Guardrails | ⚠️ Variable | ❌ No | ⚠️ Reactive | Content filtering |

QWED doesn't replace these - it complements them with mathematical certainty.


🔧 The 8 Verification Engines: How QWED Validates LLM Outputs

We don't use another LLM to check your LLM. That's circular logic.

We use Hard Engineering:

| Engine | Tech Stack | What It Solves |
|---|---|---|
| 🧮 Math Verifier | SymPy + NumPy | Calculus, linear algebra, finance. No more $1 + $1 = $3. |
| ⚖️ Logic Verifier | Z3 Prover | Formal verification. Checks for logical contradictions. |
| 🛡️ Code Security | AST + Semgrep | Catches eval(), secrets, and vulnerabilities before code runs. |
| 📊 Stats Engine | Pandas + Wasm | Sandboxed execution for trusted data analysis. |
| 🗄️ SQL Validator | SQLGlot | Prevents injection and validates schema. |
| 🔍 Fact Checker | TF-IDF + NLI | Checks grounding against source docs. |
| 👁️ Image Verifier | OpenCV + Metadata | Verifies image dimensions, format, and pixel data. |
| 🤝 Consensus Engine | Multi-Provider | Cross-checks GPT-4 vs. Claude vs. Gemini. |
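
Conceptually, each engine reduces a claim to a deterministic check against computed ground truth. Here is a minimal sketch of the idea behind the Math Verifier, written directly against SymPy; it illustrates the principle, not QWED's internal code:

```python
# Sketch: the deterministic check behind a math verification.
# The LLM's claim is parsed symbolically and compared to ground truth.
import sympy as sp

x = sp.symbols("x")
claim = sp.sympify("3*x")    # LLM output: "the derivative of x^2 is 3x"
truth = sp.diff(x**2, x)     # SymPy computes the ground truth: 2*x

if sp.simplify(claim - truth) == 0:
    print("✅ Verified")
else:
    print(f"❌ Hallucination caught: expected {truth}, got {claim}")
```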

🧠 The QWED Philosophy: Verification Over Correction

| ❌ Wrong Approach | ✅ QWED Approach |
|---|---|
| "Let's fine-tune the model to be more accurate" | "Let's verify the output with math" |
| "Trust the AI's confidence score" | "Trust the symbolic proof" |
| "Add more training data" | "Add a verification layer" |
| "Hope it doesn't hallucinate" | "Catch hallucinations deterministically" |

QWED = Query with Evidence and Determinism

Probabilistic systems should not be trusted with deterministic tasks. If it can't be verified, it doesn't ship.


🔌 LLM Framework Integrations

Already using an Agent framework? QWED drops right in.

🦜 LangChain (Native Integration)

Install: pip install 'qwed[langchain]'

from qwed_sdk.integrations.langchain import QWEDTool
from langchain.agents import initialize_agent
from langchain_openai import ChatOpenAI

# Initialize QWED verification tool
tool = QWEDTool(provider="openai", model="gpt-4o-mini")

# Add to your agent
llm = ChatOpenAI()
agent = initialize_agent(tools=[tool], llm=llm)

# Agent automatically uses QWED for verification
agent.run("Verify: what is the derivative of x^2?")

🤖 CrewAI

from qwed_sdk.crewai import QWEDVerifiedAgent

agent = QWEDVerifiedAgent(role="Analyst", allow_dangerous_code=False)

🌍 Multi-Language SDK Support

| Language | Package | Status |
|---|---|---|
| 🐍 Python | qwed | ✅ Available on PyPI |
| 🟦 TypeScript | @qwed-ai/sdk | ✅ Available on npm |
| 🐹 Go | qwed-go | 🟡 Coming soon |
| 🦀 Rust | qwed | 🟡 Coming soon |

git clone https://github.com/QWED-AI/qwed-verification.git
cd qwed-verification
pip install -r requirements.txt


---

## 🎯 Real Example: The $12,889 Bug

**User asks AI:** "Calculate compound interest: $100K at 5% for 10 years"

**GPT-4 responds:** "$150,000"  
*(Used simple interest by mistake)*

**With QWED:**
```python
response = client.verify_math(
    query="Compound interest: $100K, 5%, 10 years",
    llm_output="$150,000"
)
# -> ❌ INCORRECT: Expected $162,889.46
#    Error: Used simple interest formula instead of compound
```

**Cost of not verifying:** $12,889 error per transaction 💸
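
For reference, the deterministic ground truth here is two lines of arithmetic (a standalone sketch, independent of the SDK):

```python
# Compound vs. simple interest on $100K at 5% for 10 years.
principal, rate, years = 100_000, 0.05, 10

compound = principal * (1 + rate) ** years  # 162,889.46 (correct)
simple = principal * (1 + rate * years)     # 150,000.00 (the LLM's answer)

print(f"Compound:  ${compound:,.2f}")           # Compound:  $162,889.46
print(f"Shortfall: ${compound - simple:,.2f}")  # Shortfall: $12,889.46
```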


❓ Frequently Asked Questions

Q: How does QWED differ from RAG (Retrieval Augmented Generation)?

A: RAG improves the input to the LLM by grounding it in documents. QWED verifies the output deterministically. RAG adds knowledge; QWED adds certainty.

Q: Can QWED work with any LLM?

A: Yes! QWED is model-agnostic and works with GPT-4, Claude, Gemini, Llama, Mistral, and any other LLM. We verify outputs, not models.

Q: Does QWED replace fine-tuning?

A: No. Fine-tuning makes models better at tasks. QWED verifies they got it right. Use both.

Q: Is QWED open source?

A: Yes! Apache 2.0 license. Enterprise features (audit logs, multi-tenancy) are in a separate repo.

Q: What's the latency overhead?

A: Typically <100ms for most verifications. Math and logic proofs are instant. Consensus checks take longer (multiple API calls).


📚 Documentation & Resources

Main Documentation:

| Resource | Description |
|---|---|
| 📖 Full Documentation | Complete API reference and guides |
| 🔧 API Reference | Endpoints and schemas |
| ⚡ QWEDLocal Guide | Client-side verification setup |
| 🖥️ CLI Reference | Command-line interface |
| 🔒 PII Masking Guide | HIPAA/GDPR compliance |
| 🆓 Ollama Integration | Free local LLM setup |

Project Documentation:

| Resource | Description |
|---|---|
| 📊 Benchmarks | LLM accuracy testing results |
| 🗺️ Project Roadmap | Future features and timeline |
| 📋 Changelog | Version history summary |
| 📜 Release Notes | Detailed version release notes |
| 🎬 GitHub Action Guide | CI/CD integration |
| 🏗️ Architecture | System design and engine internals |

Community:

| Resource | Description |
|---|---|
| 🤝 Contributing Guide | How to contribute to QWED |
| 📜 Code of Conduct | Community guidelines |
| 🔒 Security Policy | Reporting vulnerabilities |
| 📖 Citation | Academic citation format |

🏢 Enterprise Features

Need observability, multi-tenancy, audit logs, or compliance exports?

📧 Contact: rahul@qwedai.com


📄 License

Apache 2.0 - See LICENSE


⭐ Star History

Star History Chart

If chart doesn't load, click here for alternatives


View trend: Star History Page


👥 Contributors

QWED Contributors

📄 Citation

If you use QWED in your research or project, please cite our archived paper:

@software{dass2025qwed,
  author = {Dass, Rahul},
  title = {QWED Protocol: Deterministic Verification for Large Language Models},
  year = {2025},
  publisher = {Zenodo},
  version = {v1.0.0},
  doi = {10.5281/zenodo.18110785},
  url = {https://doi.org/10.5281/zenodo.18110785}
}

Plain text:

Dass, R. (2025). QWED Protocol: Deterministic Verification for Large Language Models (Version v1.0.0). Zenodo. https://doi.org/10.5281/zenodo.18110785


✅ Using QWED in Your Project?

Add this badge to your README to show you're using verified AI:

[![Verified by QWED](https://img.shields.io/badge/Verified_by-QWED-00C853?style=flat&logo=checkmarx)](https://github.com/QWED-AI/qwed-verification)

Preview:
Verified by QWED

This badge tells users that your LLM outputs are deterministically verified, not just "hallucination-prone guesses."


⭐ Star us if you believe AI needs verification




Ready to trust your AI?

"Safe AI is the only AI that scales."


Contribute · Architecture · Security · Documentation