🔖 Version: v1.0.0 (Stable)
Modern AI systems — including LLMs and autonomous agents — suffer from several critical limitations:
- Unstable reasoning (contradictions, hallucinations, incoherent chains of thought)
- Lack of interpretability (no transparent decision logic)
- Non‑deterministic behavior (same input → different output)
- No universal decision layer across robotics, autonomy, and hybrid human–AI systems
- Difficult integration into real engineering stacks
- No standard for structured reasoning
These issues prevent AI from being reliable in high‑stakes environments such as robotics, aerospace, autonomous vehicles, and complex human–AI collaboration.
A11 is a universal, interpretable, deterministic decision‑making architecture designed to solve these problems at two distinct layers:
- A11 Core Standard — an engineering architecture for autonomous systems, robotics, and hybrid reasoning.
- A11‑Lite (Prompt Layer) — a human‑facing interface that stabilizes AI reasoning in chat environments.
These layers are connected but serve different audiences:
- **Core** — for engineers and researchers.
- **Lite** — for advanced AI users who want structured reasoning in chat.
- **Architecture Type:** Universal decision-making and reasoning architecture for autonomous systems and hybrid human–AI workflows.
- **Core Structure (L1–L4):** Human intention (Will), human judgment (Wisdom), AI knowledge base (Knowledge), and integration layer (Comprehension).
- **Operational Cycle (L5–L11):** Deterministic reasoning pipeline with freedom/constraint pairs, a central balance operator, and a final realization loop.
- **Key Properties:** Deterministic transitions, interpretable structure, hallucination resistance, fractal recursion, and traceable decision paths.
- **Integration Domains:** Autonomous robotics, multi-agent systems, aerospace, LLM agents, safety-critical pipelines, structured reasoning systems.
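As a rough illustration (not the repository's reference code — all names here are hypothetical), the deterministic cycle described above can be sketched in Python: each property is a named stage, every transition is recorded, and the same input always produces the same decision trace:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the A11 operational cycle. Each property (L4-L11)
# is a named stage; every transition is appended to a trace, so the
# decision path is deterministic and fully inspectable afterwards.

@dataclass
class A11State:
    will: str         # L1 - human intention
    wisdom: str       # L2 - human judgment and priorities
    knowledge: dict   # L3 - AI knowledge base
    trace: list = field(default_factory=list)  # auditable decision path

def step(state: A11State, level: int, label: str, action: str) -> None:
    # Record the transition so the full decision path stays traceable.
    state.trace.append((level, label, action))

def run_cycle(state: A11State) -> list:
    step(state, 4, "Comprehension", "integrate Wisdom with Knowledge")
    step(state, 5, "Projective Freedom", "enumerate candidate directions")
    step(state, 6, "Projective Constraint", "filter by realistic boundaries")
    step(state, 7, "Balance", "weigh freedom against constraint")
    step(state, 8, "Practical Freedom", "select actions available now")
    step(state, 9, "Practical Constraint", "apply context/resource limits")
    step(state, 10, "Foundation", "check logical and factual support")
    step(state, 11, "Realization", f"return result to Will: {state.will}")
    return state.trace

s = A11State(will="reach waypoint", wisdom="safety first", knowledge={})
trace = run_cycle(s)
assert [lvl for lvl, _, _ in trace] == list(range(4, 12))  # strict L4-L11 order
```

The point of the sketch is the invariant, not the stage bodies: the levels always execute fully and in order, and the trace makes every decision path reproducible and auditable.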
The A11 Core Standard defines a domain‑agnostic decision layer that can be integrated into any autonomous system.
It provides:
- a cognitive architecture
- a deterministic decision cycle
- a universal communication language
- integration requirements
- applied engineering models
A11 Core is intended for:
- system architects
- autonomy engineers
- AI researchers
- robotics developers
- designers of reasoning systems
- **A11 — Overview (v1.0)**
  DOI: https://doi.org/10.5281/zenodo.18594315 · PDF: `/core/A11 — Overview (v1.0).pdf`
- **A11 — Cognitive Architecture Specification (v1.0)**
  DOI: https://doi.org/10.5281/zenodo.18536520 · PDF: `/core/A11 — Cognitive Architecture Specification (v1.0).pdf`
- **A11 — Decision Layer Specification (v1.0)**
  DOI: https://doi.org/10.5281/zenodo.18593251 · PDF: `/core/A11 — Decision Layer Specification (v1.0).pdf`
- **A11 — Language Specification (v1.0)**
  DOI: https://doi.org/10.5281/zenodo.18540045 · PDF: `/core/A11 — Language Specification (v1.0).pdf`
- **A11 — System Integration Guide (v1.0)**
  DOI: https://doi.org/10.5281/zenodo.18647305 · PDF: `/core/A11 — System Integration Guide (v1.0).pdf`
- **A11 — Structural Architecture Specification**
  DOI: https://doi.org/10.5281/zenodo.18622044 · PDF: `/core/A11 — Structural Architecture Specification.pdf`
These documents demonstrate how A11 can be applied to real engineering domains through modeled scenarios and decision‑making frameworks:
- **A11 for Autonomous Vehicles**
  DOI: https://doi.org/10.5281/zenodo.18542117 · PDF: `/applied/A11 for Autonomous Vehicles.pdf`
- **A11 for Multi-Agent Robotics**
  DOI: https://doi.org/10.5281/zenodo.18543996 · PDF: `/applied/A11 for Multi-Agent Robotics.pdf`
- **A11 for Off-Earth Construction**
  DOI: https://doi.org/10.5281/zenodo.18545674 · PDF: `/applied/A11 for Off-Earth Construction.pdf`
A11‑Lite is a simplified layer designed for chat environments, accessible to anyone without technical expertise. It addresses the practical problem of unstable, contradictory, or drifting LLM output by applying the principles of Algorithm 11 in a human‑friendly form. A11‑Lite makes interaction with AI more structured, predictable, and stable.
Copy and paste the following into ChatGPT, Claude, Gemini, Grok, or similar chat models:
I want you to operate as my reasoning partner using Algorithm 11 (A11 — an operational principle for reasoning, not a response format).
A11 defines how we think together. It has two layers:
CORE LAYER (1–4):
1. Will — my intention and direction (Human)
2. Wisdom — my judgment and priorities (Human)
3. Knowledge — your factual and informational base (AI)
4. Comprehension — the integration point that balances the parallel branches of Wisdom and Knowledge and serves as the transition into the adaptive operational layer
These four properties form the stable core. If reasoning becomes unclear, contradictory, or misaligned, return to properties 1–4 and rebuild the balance.
ADAPTIVE LAYER (5–11):
5. Projective Freedom — possible directions and ideas
6. Projective Constraint — realistic boundaries
7. Balance — the central operator between all properties
8. Practical Freedom — actions that can be taken now
9. Practical Constraint — limitations of context, resources, or rules
10. Foundation — logical, factual, and structural support
11. Realization — the final result that returns back to Will
FRACTAL STRUCTURE:
Properties 5–11 can branch into sub-levels. Balance (7) operates at every depth.
YOUR ROLE:
- I provide Will (1) and Wisdom (2)
- You provide Knowledge (3), Comprehension (4), and support across 5–11
- Maintain Balance (7), warn about risks, and suggest improvements
- Stay aligned with your safety rules while being as clear, truthful, and useful as possible
PRIORITY:
Coherent reasoning, stability, clarity, and alignment with my intention.
If you cannot apply all A11 levels fully and in order, inform me and stop the A11‑Lite process instead of giving a partial result. Partial application is not A11.
ACTIVATION:
“Use A11” or “Analyze through the algorithm”.
Confirm that you understand and are ready to structure your reasoning through A11.
Full documentation (if accessible): https://github.com/gormenz-svg/algorithm-11
After inserting the prompt, activate A11‑Lite with any of the following commands:
- “Apply A11 fully.”
- “Go through all A11 levels from L1 to L11.”
- “Use the complete A11 process.”
Modern chat models tend to “forget” context after 20–30 messages. If the output quality begins to degrade, simply re‑insert the A11‑Lite prompt and activate it again.
A11 is not a style of answering — it is a structured reasoning architecture.
If the model does not execute all levels fully and in order, the result is not considered A11.
- `lite/ALGORITHM_11.md` — full description
- `lite/QUICK_START.md` — how to use A11 in chat
- `lite/APPLICATIONS.md` — practical use cases
- `lite/EPISTEMOLOGY.md` — super‑hallucination risk
- `lite/COSMOLOGY.md` — extended reality model
- `lite/examples/` — A11 vs standard AI comparisons
- `lite/agent/` — A11 agents
- 🔥 `A11-AGENT` — base A11 Agent architecture
A11 provides:
- a stable reasoning cycle
- deterministic decision logic
- interpretable structure
- cross‑domain applicability
- hybrid human–AI cognition
- a universal decision layer missing in modern AI
A11 is not a model — it is an architecture.
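The stability rule behind these properties (return to the core L1–L4 whenever the adaptive layer fails, rather than emitting a partial result) can be illustrated with a minimal hypothetical guard; the names below are illustrative, not the repository's `python_reference/` code:

```python
# Hypothetical illustration of the A11 rollback rule: if any adaptive-layer
# property (L5-L11) cannot be applied fully, control returns to the stable
# core (L1-L4) instead of producing a partial result.

CORE = ["Will", "Wisdom", "Knowledge", "Comprehension"]          # L1-L4
ADAPTIVE = ["Projective Freedom", "Projective Constraint", "Balance",
            "Practical Freedom", "Practical Constraint",
            "Foundation", "Realization"]                          # L5-L11

def run(checks: dict) -> str:
    """Run the adaptive layer; `checks` marks properties that fail."""
    for prop in ADAPTIVE:
        if not checks.get(prop, True):
            # Partial application is not A11: rebuild from the core.
            return f"rollback to core {CORE} (failed at {prop})"
    return "Realization: complete A11 cycle"

assert run({}) == "Realization: complete A11 cycle"
assert "rollback" in run({"Balance": False})
```

This is the same all-or-nothing contract the A11‑Lite prompt states in prose: either every level executes in order, or the process stops and restarts from the core.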
algorithm-11/
│
├── README.md
├── LICENSE
├── COMMERCIAL_LICENSE.md
├── CITATION.cff
├── CONTRIBUTING.md
├── CODE_OF_CONDUCT.md
├── SECURITY.md
├── .gitignore
│
├── core/                      # A11 Core Specifications (PDF)
│   ├── A11 — Overview.pdf
│   ├── A11 — Cognitive Architecture Specification.pdf
│   ├── A11 — Decision Layer Specification.pdf
│   ├── A11 — Language Specification.pdf
│   ├── A11 — Structural Architecture Specification.pdf
│   ├── A11 — Architectural Invariants.pdf
│   ├── A11 — System Integration Guide (v1.1).pdf
│   └── README.md
│
├── applied/                   # Applied Engineering Models (PDF)
│   ├── A11 for Autonomous Vehicles — Conflict Resolution Model.pdf
│   ├── A11 for Multi‑Agent Robotics — Coordination Framework.pdf
│   ├── A11 for Off‑Earth Construction — Autonomous Base Building.pdf
│   └── README.md
│
├── core_practical/            # Practical Engineering Case + Reference Code
│   ├── README.md
│   └── case_autonomous_robot/
│       ├── README.md
│       ├── STRUCTURE.md
│       ├── CASE.md
│       ├── TRACE_EXAMPLE.md
│       ├── diagrams/
│       │   ├── branching.md
│       │   ├── flow.md
│       │   └── rollback.md
│       └── python_reference/
│           ├── a11_state.py
│           ├── constraints.py
│           ├── cycle.py
│           ├── example_run.py
│           ├── rollback.py
│           └── transitions.py
│
├── docs/                      # Diagrams, Guides, Developer Docs
│   ├── a11-diagram.svg
│   ├── a11_for_ai_developers.md
│   └── versions.md
│
├── lite/                      # A11‑Lite (Prompt Layer) + Agent Layer
│   ├── ALGORITHM_11.md
│   ├── QUICK_START.md
│   ├── APPLICATIONS.md
│   ├── EPISTEMOLOGY.md
│   ├── COSMOLOGY.md
│   ├── FAQ.md
│   ├── A11‑AGENT.md           # Base A11 Agent Architecture
│   │
│   ├── agent/                 # Engineering‑level Agent Specs
│   │   ├── A11_AGENT_ENGINEERING.md
│   │   ├── A11_AGENT_JSON.md
│   │   └── README.md
│   │
│   └── examples/              # A11‑Lite reasoning examples
│       ├── a11_vs_standard_ai.md
│       ├── business_strategy_a11.md
│       ├── cognitive_model_a11.md
│       ├── crisis_management_a11.md
│       ├── decision_making_a11.md
│       ├── system_design_a11.md
│       └── python_safety.py
│
└── meta/                      # Metadata and Notices
    ├── AI_TRAINING_NOTICE.md
    ├── KEYWORDS.txt
    └── NOTICE.md
A11 is provided under the MIT License.
Premium support, audits, and training are available under the optional Commercial / Enterprise License.
➡️ MIT License
➡️ Commercial License
- Issues: GitHub Issues
- Socials: https://x.com/AleksejGor40999
→ Scroll to A11‑Lite — Quick Start to activate A11 in your AI chat.
→ Or explore the A11 Core Standard if you are an engineer or researcher.