A 6-Layer Protocol for Strategic AI Workflows & Career Architecture.
The LLM Capability Framework (LCF) is an open-source architectural pattern designed to address the "Contextual Drift" and "Buzzword Trap" problems of modern AI workflows. By decoupling complex tasks into distinct functional layers, the LCF transforms raw technical data into high-altitude, Principal-grade professional artifacts.
The LCF assigns every AI task to a specific layer, matching each task's required depth to the optimal model class.
| Layer | Role | Purpose | Ideal Model |
|---|---|---|---|
| L1 | The Scout | Deterministic data extraction (JSON). | 4B-14B Distilled |
| L2 | The Engineer | Semantic transformation & the "Mirror Test." | 8B-32B General |
| L3 | The Strategist | Gap analysis & Judgment Alignment (JA). | 14B-70B Logic-heavy |
| L4 | The Researcher | ROI Benchmarking & RAG-based expansion. | Search-Augmented Agents |
| L5 | The Director | Artifact assembly & ATS Optimization. | Frontier (Claude/GPT-5) |
| L6 | The Arbiter | Recursive logic audit & hallucination guarding. | o1/o3 Reasoning models |
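The routing in the table above can be sketched as a simple dispatch map. This is an illustrative sketch only — the layer roles come from the table, but the `LayerSpec` structure and `route` function are hypothetical, not part of the framework's code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LayerSpec:
    """One LCF layer: its role, purpose, and the model class it targets."""
    role: str
    purpose: str
    model_class: str

# Illustrative mapping of the six LCF layers to model classes (from the table).
LCF_LAYERS: dict[int, LayerSpec] = {
    1: LayerSpec("Scout", "deterministic data extraction (JSON)", "4B-14B distilled"),
    2: LayerSpec("Engineer", "semantic transformation / Mirror Test", "8B-32B general"),
    3: LayerSpec("Strategist", "gap analysis & judgment alignment", "14B-70B logic-heavy"),
    4: LayerSpec("Researcher", "ROI benchmarking & RAG expansion", "search-augmented agent"),
    5: LayerSpec("Director", "artifact assembly & ATS optimization", "frontier model"),
    6: LayerSpec("Arbiter", "recursive logic audit & hallucination guarding", "reasoning model"),
}

def route(layer: int) -> LayerSpec:
    """Return the spec for a layer, failing loudly on unknown layers."""
    try:
        return LCF_LAYERS[layer]
    except KeyError:
        raise ValueError(f"LCF defines layers 1-6, got {layer}") from None
```

Routing by an explicit map (rather than branching logic) keeps the layer-to-model policy in one place, which is the point of the LCF's layer separation.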
Modern professionals often undersell their impact because they lack access to high-level strategic benchmarks. The LCF "heals" professional narratives by:
- Quantifying Impact: Bridging technical acts with market-verified ROI.
- Eliminating Hallucinations: Using the L6 Arbiter to cross-reference synthesized narratives against raw source truth.
- Strategic Positioning: Shifting profiles from "Task-Oriented" to "Value-Oriented."
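The L6 Arbiter's cross-referencing step can be approximated with a naive grounding check. This is a sketch only, under a loud assumption: the real Arbiter uses a reasoning model for semantic entailment, not the keyword matching shown here:

```python
def ungrounded_claims(claims: list[str], source: str) -> list[str]:
    """Return claims whose key terms never appear in the raw source text.

    Naive proxy for the L6 Arbiter: a claim is flagged if none of its
    significant words (4+ characters) occur in the source. A reasoning
    model would judge semantic support instead of literal overlap.
    """
    source_lower = source.lower()
    flagged = []
    for claim in claims:
        terms = [w for w in claim.lower().split() if len(w) >= 4]
        if terms and not any(t in source_lower for t in terms):
            flagged.append(claim)
    return flagged

source = "Built a Python ETL pipeline processing 2M records daily."
claims = ["Led a Python ETL pipeline", "Managed a 50-person sales team"]
# The second claim has no support anywhere in the source text.
```

Even this crude filter illustrates the contract: every synthesized claim must trace back to the raw source, or it is rejected.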
Input your raw CV or LinkedIn profile. The L1 Scout extracts data into a clean JSON schema, while the L2 Engineer standardizes the narrative.
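A minimal sketch of what the L1 Scout's output contract might look like. The field names here are hypothetical — the repo's `Layer_1_extraction.md` defines the actual schema:

```python
import json

# Hypothetical L1 output contract: strict, flat, no synthesized fields.
REQUIRED_FIELDS = {"name", "roles", "skills"}

def validate_scout_output(raw: str) -> dict:
    """Parse L1 Scout output and enforce the extraction contract."""
    data = json.loads(raw)  # deterministic extraction must emit valid JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"Scout output missing fields: {sorted(missing)}")
    return data

example = '{"name": "A. Candidate", "roles": [{"title": "Engineer", "years": 3}], "skills": ["Python"]}'
profile = validate_scout_output(example)
```

Validating at the L1/L2 boundary is what makes the Scout "deterministic": downstream layers can trust the structure and focus on the narrative.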
Run the Layer 3 Strategic Audit to identify "Principal Gaps." Use the Layer 4 Researcher to inject real-world ROI metrics and industry benchmarks.
Generate your final Executive CV at Layer 5. Finally, invoke the Layer 6 Arbiter to ensure every claim is 100% defensible and grounded in your original data.
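The three steps above chain into a single pass over the six layers, each layer consuming the previous layer's output. A skeletal orchestrator (function names and stubs are hypothetical) might look like:

```python
from typing import Callable

def run_pipeline(raw_profile: str, stages: dict[int, Callable[[str], str]]) -> str:
    """Feed the raw CV through layers 1-6 in order, chaining each output forward."""
    artifact = raw_profile
    for layer in sorted(stages):
        artifact = stages[layer](artifact)
    return artifact

# Stub stages for illustration; real stages would call the L1 Scout
# through L6 Arbiter models. Each stub just records its layer number.
stages = {n: (lambda s, n=n: f"{s} -> L{n}") for n in range(1, 7)}
result = run_pipeline("raw_cv", stages)
# result traces the layer order: "raw_cv -> L1 -> L2 -> L3 -> L4 -> L5 -> L6"
```

Keeping the orchestrator ignorant of model choices (they live behind the stage callables) mirrors the LCF's decoupling of task depth from model class.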
```
LLM-Capability-Framework-LCF/
├── 1_Capability_Layers/
│   ├── Layer_1_extraction.md
│   ├── Layer_2_translator.md
│   ├── Layer_3_interpretation.md
│   ├── Layer_4_expansion.md
│   ├── Layer_5_composition.md
│   └── Layer_6_agency.md
├── 2_Architectural_Patterns/
│   ├── scout_auditor_pattern/
│   └── sub_question_retrieval/
├── 3_Evaluation_Benchmarks/
│   ├── Layer_1_extraction_accuracy_tests/
│   │   ├── prompt.md
│   │   ├── analysis.md
│   │   ├── claude-haiku-4-5-results.json
│   │   └── gemini-3-pro-results.json
│   ├── Layer_2_the_translator_tests/
│   │   ├── 1_prompt.md
│   │   ├── 2_mirror-test-prompt.md
│   │   ├── gemini-3-flash-output.md
│   │   ├── gemini3_flash_results.md
│   │   ├── haiku_mirror_audit.md
│   │   ├── qwen3-max--gemini3-flash-analysis.md
│   │   ├── qwen3-max-2025-09-23-output.md
│   │   └── qwen3-max-2025-09-23-results.md
│   ├── Layer_3_interpretation_tests/
│   │   ├── 1_main_prompt.md
│   │   ├── 2_Qwen3-32B_response_1_main_prompt.md
│   │   ├── 3_mistral-large-3_response_1_main_prompt.md
│   │   ├── 4_analysis_Qwen3-32B_mistral-large-3_response.md
│   │   ├── 5_peer_review_prompt_contract.md
│   │   ├── 6_deepseek_r1_response_peer_review_prompt_contract.md
│   │   └── 7_analysis_L3.md
│   ├── L4_expansion_tests/
│   │   ├── 1_main_prompt.md
│   │   ├── 2_main_prompt_GPT_52_response.md
│   │   ├── 3_alex_rivera_transformation.md
│   │   └── 4_analysis.md
│   ├── Layer_5_composition_tests/
│   │   ├── 1_prompt.md
│   │   ├── 1_prompt_GPT_52_response.md
│   │   └── 3_prompt_GeminiPro_response.md
│   └── Layer_6_agency_6.md/
│       ├── 1_prompt_positive.md
│       ├── 1_prompt_negative.md
│       ├── Layer_6_DeepSeekR1_positive_case_response.md
│       └── Layer_6_DeepSeekR1_negative_case_response.md
├── CHANGELOG.md
├── LICENSE
├── README.md
├── llms.txt
└── llms-full.txt
```
M Suhail Tahir