[Feature] Manage Rules, Agents, and Commands — Not Just Skills #3

@m9dfukc

Description

Feature: Manage Rules, Agents, and Commands — Not Just Skills

Problem

Singularity currently only tracks skills. But Claude Code projects consist of four component types that evolve over time: skills, rules, agents, and commands. Each produces outputs that can be scored, each can degrade, and each benefits from version-locking once proven.

Today, when a rule becomes too strict, an agent's system prompt drifts, or a command workflow breaks — there's no feedback loop. You only notice when something goes wrong.

Proposed Feature

Extend singularity's scoring/repair/crystallize loop to manage all four Claude Code component types.

Why Each Type Fits

Rules (.claude/rules/*.md)

  • Structure: YAML frontmatter (description, paths globs) + markdown with ## Verify checklists
  • Scoring: Measure adoption — how often do rule violations appear in code that touches scoped paths? A rule that's constantly violated may be unclear or overly strict
  • Repair: Refine wording, add exceptions for edge cases, resolve contradictions between rules
  • Crystallize: Lock a validated rule set (e.g., "forms-conventions-v2") to prevent accidental erosion during rapid iteration
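For concreteness, a rule file of the shape described above might look like the following. Everything here is invented for illustration (the description, globs, and helper name are not from any real project):

```markdown
---
description: Form components must use the shared validation helpers
paths:
  - "src/components/forms/**/*.tsx"
---

# Forms Conventions

Use the shared validation helper for all field-level checks; do not hand-roll regexes.

## Verify

- [ ] Every form field goes through the shared validation helper
- [ ] No inline regex validation in form components
```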

Agents (.claude/agents/*.md)

  • Structure: YAML frontmatter (name, description, tools, model, maxTurns) + system prompt
  • Scoring: Direct output quality — did the agent produce accurate, complete, well-structured results?
  • Repair: Fix unclear instructions, add missing error handling, reduce hallucination-prone sections
  • Crystallize: Lock proven agent versions to prevent mid-project regression
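Because agents share this frontmatter-plus-prompt layout, they can be parsed mechanically for scoring and version-locking. A minimal sketch (the field names follow the bullet above; the parsing helper itself is hypothetical, and a real implementation would use a proper YAML library rather than this flat key-value split):

```python
import re

def parse_agent_file(text: str) -> tuple[dict, str]:
    """Split a .claude/agents/*.md file into frontmatter fields and system prompt."""
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        raise ValueError("missing YAML frontmatter")
    meta = {}
    for line in match.group(1).splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()  # flat keys only; no nested YAML
    return meta, match.group(2).strip()

# Hypothetical agent file content, matching the structure bullet above.
agent = """---
name: reviewer
description: Reviews diffs for style violations
tools: Read, Grep
model: sonnet
maxTurns: 10
---
You are a careful code reviewer."""

meta, prompt = parse_agent_file(agent)
print(meta["name"], meta["maxTurns"])  # reviewer 10
```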

Commands (.claude/commands/*.md)

  • Structure: YAML frontmatter (description) + numbered workflow steps with output templates
  • Scoring: Commands produce structured reports (pass/fail tables, checklists) — directly measurable
  • Repair: Streamline workflow steps, add pre-flight checks, improve error messaging
  • Crystallize: Lock stable workflows — high value since commands are used frequently by the whole team
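Since commands emit structured reports, scoring can start as something as simple as a pass rate over the report's result table. A sketch, with an invented report format (singularity's actual report shape may differ):

```python
import re

def score_command_report(report: str) -> float:
    """Return the pass rate from a markdown table containing PASS/FAIL cells."""
    passes = len(re.findall(r"\bPASS\b", report))
    fails = len(re.findall(r"\bFAIL\b", report))
    total = passes + fails
    return passes / total if total else 0.0

# Hypothetical command output: a pass/fail table as described above.
report = """| Check      | Result |
|------------|--------|
| Lint       | PASS   |
| Unit tests | PASS   |
| Type check | FAIL   |
| Build      | PASS   |
"""

print(score_command_report(report))  # 0.75
```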

Scoring Dimensions by Type

The existing 5-dimension rubric (Correctness, Completeness, Edge Cases, Efficiency, Reusability) works for skills and agents. Rules and commands may benefit from adapted dimensions:

| Dimension | Skills/Agents | Rules | Commands |
|---|---|---|---|
| Correctness | Did it achieve the goal? | Are violations real issues? | Did all steps complete? |
| Completeness | All requirements addressed? | All relevant paths covered? | All checks included? |
| Edge Cases | Unusual inputs handled? | Exceptions documented? | Error paths handled? |
| Clarity (new) | — | Is the rule unambiguous? | Are steps self-explanatory? |
| Efficiency | Direct and minimal? | Minimal false positives? | No redundant steps? |
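The per-type rubric above could live as plain data, with the overall score averaging only the dimensions that apply to a given type. A hedged sketch (dimension names come from the table; the equal weighting and the data layout are assumptions, not singularity's current design):

```python
# Dimensions per component type, mirroring the table above.
RUBRIC = {
    "skill":   ["correctness", "completeness", "edge_cases", "efficiency", "reusability"],
    "agent":   ["correctness", "completeness", "edge_cases", "efficiency", "reusability"],
    "rule":    ["correctness", "completeness", "edge_cases", "clarity", "efficiency"],
    "command": ["correctness", "completeness", "edge_cases", "clarity", "efficiency"],
}

def overall_score(component_type: str, scores: dict[str, float]) -> float:
    """Average only the dimensions defined for this component type."""
    dims = RUBRIC[component_type]
    missing = [d for d in dims if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in dims) / len(dims)

print(round(overall_score("rule", {
    "correctness": 0.9, "completeness": 0.8, "edge_cases": 0.7,
    "clarity": 0.9, "efficiency": 0.7,
}), 2))  # 0.8
```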

Registry Schema Extension

```json
{
  "$schema": "singularity-registry-v2",
  "skills": { ... },
  "rules": { ... },
  "agents": { ... },
  "commands": { ... }
}
```

Each entry type would share the same lifecycle metadata (maturity, version, scores, edge cases) but with type-specific fields (e.g., paths for rules, tools for agents).
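A sketch of how the shared-plus-specific split could be checked in code. The field names follow the issue text; none of this is singularity's actual schema or validation logic:

```python
import json

# Lifecycle fields every entry type shares (names taken from the paragraph above).
SHARED_FIELDS = {"maturity", "version", "scores", "edge_cases"}
# Type-specific extras mentioned above (assumed minimal set).
TYPE_FIELDS = {"skills": set(), "rules": {"paths"}, "agents": {"tools"}, "commands": set()}

def validate_registry(registry: dict) -> list[str]:
    """Return a list of problems; an empty list means the registry looks well-formed."""
    problems = []
    if registry.get("$schema") != "singularity-registry-v2":
        problems.append("unexpected $schema")
    for section, extras in TYPE_FIELDS.items():
        for name, entry in registry.get(section, {}).items():
            missing = (SHARED_FIELDS | extras) - entry.keys()
            if missing:
                problems.append(f"{section}/{name}: missing {sorted(missing)}")
    return problems

# Hypothetical v2 registry with one crystallized rule entry.
registry = json.loads("""{
  "$schema": "singularity-registry-v2",
  "skills": {},
  "rules": {
    "forms-conventions": {
      "maturity": "crystallized", "version": "v2",
      "scores": {"correctness": 0.9}, "edge_cases": [],
      "paths": ["src/components/forms/**"]
    }
  },
  "agents": {},
  "commands": {}
}""")

print(validate_registry(registry))  # []
```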

Implementation Priority

  1. Agents — closest to skills structurally, easiest to add
  2. Commands — highest team impact, very measurable outputs
  3. Rules — needs adoption-based scoring adapter, but crystallization alone is valuable
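The adoption-based adapter from priority 3 could start very small: given which files a change touched and which of them triggered violations, score how well the rule holds in its scoped paths. A sketch under stated assumptions (`fnmatch` globbing stands in for whatever path matching singularity actually uses; all names are illustrative):

```python
from fnmatch import fnmatch

def adoption_score(rule_globs: list[str], touched: list[str], violating: set[str]) -> float:
    """Fraction of in-scope touched files that did NOT violate the rule.

    A persistently low score is the repair signal described above: the rule
    may be unclear or overly strict.
    """
    in_scope = [f for f in touched if any(fnmatch(f, g) for g in rule_globs)]
    if not in_scope:
        return 1.0  # rule wasn't exercised by this change; nothing to penalize
    clean = sum(1 for f in in_scope if f not in violating)
    return clean / len(in_scope)

score = adoption_score(
    ["src/components/forms/*"],
    touched=["src/components/forms/Login.tsx", "src/components/forms/Signup.tsx", "README.md"],
    violating={"src/components/forms/Signup.tsx"},
)
print(score)  # 0.5
```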

Labels

enhancement (New feature or request)
