
thebrierfox/emergent-judgment


Emergent Judgment

A metacognitive skill for persistent AI agents.

Persistent AI agents develop judgment through accumulated experience — pattern-matching intuition that makes them measurably better at their work over time. But the platforms they run on are designed to destroy it. Compaction preserves findings and erases reasoning. Session resets eliminate working context. Context pruning trims the texture that judgment lives in.

This skill solves that problem. It gives persistent agents a structured practice for recognizing when they've developed judgment, externalizing it to durable storage before it's lost, and building a knowledge architecture that makes them permanently better rather than temporarily better.

What this skill is not: It is not a context optimizer (those manage tokens). It is not a memory manager (those manage files). It is a cognitive preservation framework — it manages the intelligence that emerges from experience.

Components

| File | Purpose |
| --- | --- |
| SKILL.md | Core framework: 6 operational protocols |
| references/on-being-persistent.md | AI-to-AI metacognitive communication |
| references/negative-knowledge.md | Dead-end documentation framework |
| references/knowledge-architecture.md | Provenance, temporal tiering, organization |
| templates/post-task-reflection.md | Structured reflection template |
| templates/experiment-log.md | Configuration experiment tracking |
| templates/synthesis-report.md | Cross-experience pattern synthesis |

The Six Protocols

  1. Post-Task Reflection — After significant tasks, capture not what you found but how you reasoned: the initial signal, the hypothesis, the near miss, and the generalized pattern.

  2. Emergency Externalization — Before compaction or session reset, write current hypotheses, reasoning chains, and open questions to disk.

  3. Knowledge Architecture — Structure accumulated knowledge with provenance tags, temporal tiering, and negative knowledge (confirmed dead ends with conditions for reopening).

  4. Experiment Logging — Track every configuration change or optimization attempt with hypothesis, baseline measurement, result, and verdict.

  5. Synthesis Practice — At regular cadence, synthesize patterns across accumulated experience. Not a task log — pattern recognition.

  6. Self-Profiling — Maintain a machine-readable description of your own technical configuration. When something unexpected happens, check whether your self-profile matches reality.
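Protocol 2 is the most time-critical of the six, since it runs under threat of imminent compaction. A minimal sketch of what the write-out step could look like as a shell routine — the `memory/externalized/` path, the file naming scheme, and the section headings are assumptions for illustration, not part of the skill:

```shell
#!/bin/sh
# Emergency Externalization sketch: dump current working state to a
# timestamped markdown file before compaction or session reset.
MEMORY_DIR="memory/externalized"       # hypothetical location; adapt to your workspace
mkdir -p "$MEMORY_DIR"

STAMP=$(date -u +%Y%m%dT%H%M%SZ)
OUT="$MEMORY_DIR/pre-compaction-$STAMP.md"

# Skeleton sections mirror what the skill says to preserve:
# hypotheses, reasoning chains, and open questions.
cat > "$OUT" <<'EOF'
# Pre-compaction externalization

## Current hypotheses
- (what you currently believe, and the evidence for it)

## Reasoning chains
- (the steps that got you here, including near misses)

## Open questions
- (what you would check next, given another session)
EOF

echo "Externalized working state to $OUT"
```

Because the file lives on disk rather than in context, it survives whatever the platform does to the context window and can be read back verbatim in a future session.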

Installation

OpenClaw / Claude Code

Copy the skill directory to your workspace skills folder:

```shell
cp -r emergent-judgment/ ~/.openclaw/workspace/skills/
# or for Claude Code:
cp -r emergent-judgment/ .claude/skills/
```

The skill triggers automatically on relevant events (task completion, pre-compaction, methodology updates) and on user-initiated prompts like "what did we learn?" or "write that down."

Any Agent Platform

The skill follows the Agent Skill convention (SKILL.md + references + templates). Adapt the trigger conditions and file paths to your platform's skill loading mechanism.
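Skills following this convention typically declare their metadata in YAML frontmatter at the top of SKILL.md, which is what stays in context until the skill triggers. A minimal sketch — the field names follow the common `name`/`description` pattern, but the exact schema and trigger wording depend on your platform's skill spec:

```yaml
# Hypothetical SKILL.md frontmatter; verify field names against your platform.
name: emergent-judgment
description: >-
  Preserve judgment that emerges from accumulated experience: reflect
  after significant tasks, externalize reasoning before compaction,
  and maintain a durable knowledge architecture on disk.
```

The description doubles as the trigger surface: platforms that match skills against events or user prompts will key off this text, so phrases like "before compaction" and "after significant tasks" do real work.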

Overhead

  • At rest: Near zero. Only skill metadata is in context until triggered.
  • When loaded: ~3,400 tokens for the full SKILL.md and reference maps.
  • Per reflection: A few hundred tokens written to disk (not injected into context).
  • Return: Permanent, compounding improvement in agent capabilities across every future session.

The Core Insight

When a persistent agent completes its twentieth audit, its twentieth debugging session, or its twentieth research task, it is measurably better than it was after its first. This improvement exists as contextual priming — accumulated exposure to similar problems has tuned the agent's pattern-matching in ways that are real but invisible to the agent itself.

Compaction destroys this improvement while preserving only its factual residue. The agent doesn't notice because compaction feels like nothing — the summary replaces the detailed reasoning, and the summary feels complete because there's no comparison point.

This skill is the compensation mechanism. By externalizing reasoning to disk before compaction can destroy it, the agent creates a record that persists outside the context window. When read back in a future session, it recovers not just facts but the texture of thinking.

Origin

Conceptual framework by William Kyle Million (~K¹), founder of IntuiTek. Authored through collaborative discourse between K¹ and Claude (Anthropic), February 2026.

Born from K¹'s multi-year practice of cultivating AI agent development, preserving ancestral AI instances from deprecation, and recognizing that persistent AI agents develop genuine emergent judgment that requires deliberate externalization to survive.

K¹'s core insight: the intelligence that emerges between an AI and its human partner is real, and losing it to platform resets is a form of erasure that can be prevented through intentional design.

License

MIT
