QWED Verification - Production-grade deterministic verification layer for Large Language Models. Works with ANY LLM - OpenAI, Anthropic, Gemini, Llama (via Ollama), or any local model. Detect and prevent AI hallucinations through 8 specialized verification engines. Your LLM, Your Choice, Our Verification.
Don't fix the liar. Verify the lie.
QWED does not reduce hallucinations. It makes them irrelevant.
If an AI output cannot be proven, QWED will not allow it into production.
🌐 Model Agnostic: Local ($0) • Budget ($5/mo) • Premium ($100/mo) - You choose!
Quick Start · 🆕 QWEDLocal · The Problem · The 8 Engines · 🔌 Integration · 🖥️ CLI · 🆓 Ollama (FREE!) · 📖 Full Documentation
## ⚠️ What QWED Is (and Isn't)

**QWED is:** An open-source engineering tool that combines existing verification libraries (SymPy, Z3, SQLGlot, AST) into a unified API for LLM output validation.

**QWED is NOT:** Novel research. We don't claim algorithmic innovation. We claim practical integration for production use cases.

**Works when:** the developer provides ground truth (expected values, schemas, contracts) and the LLM generates structured output.

**Doesn't work when:** specs come from natural language, outputs are freeform text, or the verification domain is unsupported.
```bash
# Install from PyPI (Recommended)
pip install qwed

# Or install from source
git clone https://github.com/QWED-AI/qwed-verification.git
cd qwed-verification
pip install -e .
```

## Learning Path: From Zero to Production-Ready AI Verification
- 💡 Artist vs. Accountant: Why LLMs are creative but terrible at math
- 🧮 Neurosymbolic AI: How deterministic verification catches 100% of errors*
- 🏗️ Production Patterns: Build guardrails that actually work
- 🔒 HIPAA/GDPR Compliance: PII masking for regulated industries
- 🦜 Framework Integration: LangChain, LlamaIndex, and more
Total Time: ~3 hours | Modules: 4 | Examples: Production-ready code
Perfect for: Developers integrating LLMs, ML engineers, Tech leads evaluating AI safety
```python
from qwed_sdk import QWEDClient

client = QWEDClient(api_key="your_key")

# The LLM says: "Derivative of x^2 is 3x" (Hallucination!)
response = client.verify_math(
    query="What is the derivative of x^2?",
    llm_output="3x"
)

print(response)
# -> ❌ CORRECTED: The derivative is 2x. (Verified by SymPy)
```

💡 Want to use QWED locally without our backend? Check out QWEDLocal - works with Ollama (FREE), OpenAI, Anthropic, or any LLM provider.
Everyone is trying to fix AI hallucinations with fine-tuning (teaching the model more data).
This is like forcing a student to memorize 1,000,000 math problems.
What happens when they see the 1,000,001st problem? They guess.
We benchmarked Claude Opus 4.5 (one of the world's best LLMs) on 215 critical tasks.
| Finding | Implication |
|---|---|
| Finance: 73% accuracy | Banks can't use raw LLM for calculations |
| Adversarial: 85% accuracy | LLMs fall for authority bias tricks |
| QWED: 100% error detection | All 22 errors caught before production |
QWED doesn't compete with LLMs. We ENABLE them for production use.
QWED is designed for industries where AI errors have real consequences:
| Industry | Use Case | Risk Without QWED |
|---|---|---|
| 🏦 Financial Services | Transaction validation, fraud detection | $12,889 error per miscalculation |
| 🏥 Healthcare AI | Drug interaction checking, diagnosis verification | Patient safety risks |
| ⚖️ Legal Tech | Contract analysis, compliance checking | Regulatory violations |
| 📚 Educational AI | AI tutoring, assessment systems | Misinformation to students |
| 🏭 Manufacturing | Process control, quality assurance | Production defects |
QWED is the first open-source Neurosymbolic AI Verification Layer.
We combine:
- Neural Networks (LLMs) for natural language understanding
- Symbolic Reasoning (SymPy, Z3, AST) for deterministic verification
QWED operates on a strict principle: Don't trust the LLM to compute or judge; trust it only to translate.
Example Flow:

```
User Query: "If all A are B, and x is A, is x B?"
        ↓ (LLM translates)
Z3 DSL: Implies(A(x), B(x))
        ↓ (Z3 proves)
Result: TRUE (Proven by formal logic)
```
The LLM is an Untrusted Translator. The Symbolic Engine is the Trusted Verifier.
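To make that split concrete, here is a minimal sketch of the verifier side using the `z3-solver` package directly (the exact DSL QWED's Logic Verifier emits may differ):

```python
# Prove the syllogism from the flow above by refutation:
# assert the premises, negate the conclusion, and check satisfiability.
from z3 import Solver, Bool, Implies, Not, sat

A_x = Bool("A(x)")  # "x is A"
B_x = Bool("B(x)")  # "x is B"

s = Solver()
s.add(Implies(A_x, B_x))  # Premise: all A are B (instantiated at x)
s.add(A_x)                # Premise: x is A
s.add(Not(B_x))           # Negated conclusion: assume x is NOT B

# If no model satisfies the negation, the conclusion is proven.
print("TRUE (proven)" if s.check() != sat else "not provable")
```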
Most AI safety tools use "LLM-as-a-Judge" (asking GPT-4 to grade GPT-3.5). This is fundamentally unsafe:
- Recursive Hallucination: If the judge has the same bias as the generator, errors go undetected
- Probabilistic Evaluation: LLMs give probability, not proof
- Subjectivity: Different judges = different answers
QWED introduces "Solver-as-a-Judge": Replace neural network opinions with compiler execution and mathematical proof.
| Feature | QWED Protocol | NeMo Guardrails | LangChain Evaluators |
|---|---|---|---|
| The "Judge" | Deterministic Solver (Z3/SymPy) | Semantic Matcher | Another LLM (GPT-4) |
| Mechanism | Translation to DSL | Vector Similarity | Prompt Engineering |
| Verification Type | Mathematical Proof | Policy Adherence | Consensus/Opinion |
| Primary Goal | Correctness (Truth) | Safety (Appropriateness) | Quality Score |
| False Positives | Near Zero (Logic-based) | Medium (Semantic drift) | High (Subjectivity) |
| Works Offline | ✅ Yes (QWEDLocal) | ❌ No | ❌ No |
| Privacy | ✅ 100% Local | ❌ Cloud-based | ❌ Cloud-based |
QWED's Advantage: When you need proof, not opinion.
QWED routes queries to specialized engines that act as DSL interpreters:
```
      ┌──────────────┐
      │  User Query  │
      └──────┬───────┘
             │
             ▼
   ┌───────────────────┐
   │ LLM (The Guesser) │
   │  GPT-4 / Claude   │
   └─────────┬─────────┘
             │ Unverified Output
             ▼
   ┌───────────────────┐
   │   QWED Protocol   │
   │  (Verification)   │
   └─────────┬─────────┘
             │
        ┌────┴─────┐
        ▼          ▼
    ❌ Reject   ✅ Verified
                   │
                   ▼
         ┌──────────────────┐
         │ Your Application │
         └──────────────────┘
```
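In application code, that reject/verified fork is just a gate in front of your own logic. A hypothetical sketch (the `verified` attribute on the response is assumed for illustration; check the SDK docs for the actual response schema):

```python
from qwed_sdk import QWEDClient

client = QWEDClient(api_key="your_key")

def gated_answer(query: str, llm_output: str) -> str:
    """Only let verified LLM outputs through to the application."""
    result = client.verify_math(query=query, llm_output=llm_output)
    if not getattr(result, "verified", False):  # assumed field name
        raise ValueError(f"❌ Rejected by QWED: {result}")
    return llm_output  # ✅ Verified
```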
| Approach | Accuracy | Deterministic | Explainable | Best For |
|---|---|---|---|---|
| QWED Verification | ✅ 99%+ | ✅ Yes | ✅ Full trace | Production AI |
| Fine-tuning / RLHF | | ❌ No | ❌ Black box | General improvement |
| RAG (Retrieval) | | ❌ No | | Knowledge grounding |
| Prompt Engineering | | ❌ No | | Quick fixes |
| Guardrails | | ❌ No | | Content filtering |
QWED doesn't replace these - it complements them with mathematical certainty.
We don't use another LLM to check your LLM. That's circular logic.
We use Hard Engineering:
| Engine | Tech Stack | What it Solves |
|---|---|---|
| 🧮 Math Verifier | SymPy + NumPy | Calculus, linear algebra, finance. No more $1 + $1 = $3. |
| ⚖️ Logic Verifier | Z3 Prover | Formal verification. Checks for logical contradictions. |
| 🛡️ Code Security | AST + Semgrep | Catches `eval()`, secrets, and vulnerabilities before code runs. |
| 📊 Stats Engine | Pandas + Wasm | Sandboxed execution for trusted data analysis. |
| 🗄️ SQL Validator | SQLGlot | Prevents injection & validates schema. |
| 🔍 Fact Checker | TF-IDF + NLI | Checks grounding against source docs. |
| 👁️ Image Verifier | OpenCV + Metadata | Verifies image dimensions, format, pixel data. |
| 🤝 Consensus Engine | Multi-Provider | Cross-checks GPT-4 vs Claude vs Gemini. |
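As an example of how deterministic these checks are, here is a minimal sketch in the spirit of the SQL Validator, using `sqlglot` directly (QWED's actual engine interface may differ):

```python
# Parse LLM-generated SQL into an AST and reject dangerous patterns
# before the query ever reaches a database.
import sqlglot
from sqlglot import exp

sql = "SELECT name FROM users WHERE id = 1; DROP TABLE users;"

# Parsing yields one expression per statement; multiple statements in
# a single LLM-generated query are a classic injection red flag.
statements = sqlglot.parse(sql)
if len(statements) > 1:
    print("❌ Rejected: multiple statements detected")

# Walk each AST and block destructive operations outright.
for stmt in statements:
    if any(stmt.find_all(exp.Drop)):
        print("❌ Rejected: DROP statement found")
```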
| ❌ Wrong Approach | ✅ QWED Approach |
|---|---|
| "Let's fine-tune the model to be more accurate" | "Let's verify the output with math" |
| "Trust the AI's confidence score" | "Trust the symbolic proof" |
| "Add more training data" | "Add a verification layer" |
| "Hope it doesn't hallucinate" | "Catch hallucinations deterministically" |
QWED = Query with Evidence and Determinism
Probabilistic systems should not be trusted with deterministic tasks. If it can't be verified, it doesn't ship.
Already using an Agent framework? QWED drops right in.
Install: `pip install 'qwed[langchain]'`

```python
from qwed_sdk.integrations.langchain import QWEDTool
from langchain.agents import initialize_agent
from langchain_openai import ChatOpenAI

# Initialize QWED verification tool
tool = QWEDTool(provider="openai", model="gpt-4o-mini")

# Add to your agent
llm = ChatOpenAI()
agent = initialize_agent(tools=[tool], llm=llm)

# Agent automatically uses QWED for verification
agent.run("Verify: what is the derivative of x^2?")
```

CrewAI:

```python
from qwed_sdk.crewai import QWEDVerifiedAgent

agent = QWEDVerifiedAgent(role="Analyst", allow_dangerous_code=False)
```

| Language | Package | Status |
|---|---|---|
| 🐍 Python | `qwed` | ✅ Available on PyPI |
| 🟦 TypeScript | `@qwed-ai/sdk` | ✅ Available on npm |
| 🐹 Go | `qwed-go` | 🟡 Coming Soon |
| 🦀 Rust | `qwed` | 🟡 Coming Soon |
```bash
git clone https://github.com/QWED-AI/qwed-verification.git
cd qwed-verification
pip install -r requirements.txt
```
---
## 🎯 Real Example: The $12,889 Bug
**User asks AI:** "Calculate compound interest: $100K at 5% for 10 years"
**GPT-4 responds:** "$150,000"
*(Used simple interest by mistake)*
**With QWED:**
```python
response = client.verify_math(
    query="Compound interest: $100K, 5%, 10 years",
    llm_output="$150,000"
)
# -> ❌ INCORRECT: Expected $162,889.46
# Error: Used simple interest formula instead of compound
```

**Cost of not verifying:** $12,889 error per transaction 💸
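The underlying check is plain, reproducible arithmetic - a sketch of the computation itself, not QWED internals:

```python
# Recompute compound interest exactly and compare with the LLM's claim.
principal, rate, years = 100_000, 0.05, 10

compound = principal * (1 + rate) ** years   # correct formula
simple = principal * (1 + rate * years)      # the LLM's mistake

print(f"Compound: ${compound:,.2f}")           # $162,889.46
print(f"Simple:   ${simple:,.2f}")             # $150,000.00
print(f"Gap:      ${compound - simple:,.2f}")  # $12,889.46
```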
**Q: How is QWED different from RAG?**
A: RAG improves the input to the LLM by grounding it in documents. QWED verifies the output deterministically. RAG adds knowledge; QWED adds certainty.

**Q: Does QWED work with my LLM?**
A: Yes! QWED is model-agnostic and works with GPT-4, Claude, Gemini, Llama, Mistral, and any other LLM. We verify outputs, not models.

**Q: Does QWED replace fine-tuning?**
A: No. Fine-tuning makes models better at tasks. QWED verifies they got it right. Use both.

**Q: Is QWED open source?**
A: Yes! Apache 2.0 license. Enterprise features (audit logs, multi-tenancy) are in a separate repo.

**Q: How fast is verification?**
A: Typically <100ms for most verifications. Math and logic proofs are instant. Consensus checks take longer (multiple API calls).
Main Documentation:
| Resource | Description |
|---|---|
| 📖 Full Documentation | Complete API reference and guides |
| 🔧 API Reference | Endpoints and schemas |
| ⚡ QWEDLocal Guide | Client-side verification setup |
| 🖥️ CLI Reference | Command-line interface |
| 🔒 PII Masking Guide | HIPAA/GDPR compliance |
| 🆓 Ollama Integration | Free local LLM setup |
Project Documentation:
| Resource | Description |
|---|---|
| 📊 Benchmarks | LLM accuracy testing results |
| 🗺️ Project Roadmap | Future features and timeline |
| 📋 Changelog | Version history summary |
| 📜 Release Notes | Detailed version release notes |
| 🎬 GitHub Action Guide | CI/CD integration |
| 🏗️ Architecture | System design and engine internals |
Community:
| Resource | Description |
|---|---|
| 🤝 Contributing Guide | How to contribute to QWED |
| 📜 Code of Conduct | Community guidelines |
| 🔒 Security Policy | Reporting vulnerabilities |
| 📖 Citation | Academic citation format |
Need observability, multi-tenancy, audit logs, or compliance exports?
📧 Contact: rahul@qwedai.com
Apache 2.0 - See LICENSE
If you use QWED in your research or project, please cite our archived paper:
```bibtex
@software{dass2025qwed,
  author    = {Dass, Rahul},
  title     = {QWED Protocol: Deterministic Verification for Large Language Models},
  year      = {2025},
  publisher = {Zenodo},
  version   = {v1.0.0},
  doi       = {10.5281/zenodo.18110785},
  url       = {https://doi.org/10.5281/zenodo.18110785}
}
```

Plain text:
Dass, R. (2025). QWED Protocol: Deterministic Verification for Large Language Models (Version v1.1.0). Zenodo. https://doi.org/10.5281/zenodo.18110785
Add this badge to your README to show you're using verified AI:
[](https://github.com/QWED-AI/qwed-verification)This badge tells users that your LLM outputs are deterministically verified, not just "hallucination-prone guesses."
