
ALM Plugin — Installation Guide

Prerequisites

  • Claude Code installed and working (claude --version)
  • Python 3.8+ on your PATH (python3 --version)

That's it. ALM has zero external dependencies — no pip packages, no npm modules, no system tools beyond Python's standard library.

Installation

Step 1: Add the marketplace

Inside Claude Code:

/plugin marketplace add sfw/ALM

Step 2: Install the plugin

/plugin install alm@sfw-ALM

Restart Claude Code after installation.

Step 3: Verify

/alm:status

You should see ALM's status dashboard showing 0 sessions tracked.

Development Mode

To load ALM for a single session without permanent installation (useful for development or testing):

git clone https://github.com/sfw/ALM.git alm-plugin
claude --plugin-dir ./alm-plugin

--plugin-dir only lasts for that session. For daily use, install via the marketplace (above).

What Gets Created

Plugin files (read-only, managed by Claude Code)

The plugin ships with everything it needs. No build step. Claude Code caches plugin files in ~/.claude/plugins/cache/.

Runtime data (created on first session)

ALM creates ~/.claude/alm/ on first run:

~/.claude/alm/
├── state.json               # Install date, eval count, pause state
├── confidence.json          # Per-task-type confidence scores
├── evaluations/             # Session outcome records (JSONL, one file per day)
├── playbooks/               # Learned playbooks (empty until /alm:reflect)
├── classifier/              # TF-IDF model (empty until /alm:reflect)
├── config.json              # User settings (only if you create it)
└── reflect-queue.json       # Auto-reflect suggestions

Nothing is written to your project directory unless you create project-level playbook overrides.
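Because each evaluation is one JSON line and each day gets one file, the total record count can be checked with nothing beyond standard shell tools (path taken from the layout above):

```shell
# One record per line, one file per day: total records = total lines.
# 2>/dev/null keeps this quiet before the first session has run.
cat ~/.claude/alm/evaluations/*.jsonl 2>/dev/null | wc -l
```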

First Session Walkthrough

  1. Session starts — ALM creates ~/.claude/alm/ and injects seed playbooks with best practices for common coding tasks. A brief welcome message appears.

  2. You work normally — Code, debug, refactor. ALM is invisible during the session.

  3. Session ends — The Stop hook silently evaluates: what type of task, whether you corrected Claude, the outcome. A record is appended to ~/.claude/alm/evaluations/{date}.jsonl.

  4. Sessions 2-14 — Progress messages show how many sessions ALM has tracked. After session 5, the most common task type is identified.

  5. Session 15+ — ALM suggests running /alm:reflect. This spawns a Sonnet subagent that analyzes your correction patterns and writes personalized playbooks.

  6. After reflection — The TF-IDF classifier activates. Each prompt is matched to the right playbook automatically. High-confidence task types are skipped (Claude already knows).
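The evaluation record written in step 3 is a single JSON line appended to that day's file. The field names below are illustrative only, not the actual schema, which this guide does not document; the sketch just shows the one-record-per-line shape:

```json
{"date": "2025-01-15", "taskType": "refactor", "corrected": false, "outcome": "success"}
```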

Commands

| Command | Description |
| --- | --- |
| /alm:status | Dashboard: sessions tracked, confidence scores, classifier status, stale playbook warnings |
| /alm:reflect | Generate/update personalized playbooks from correction history |
| /alm:review | View recent evaluations, override misclassifications |
| /alm:pause | Disable ALM evaluation and injection |
| /alm:resume | Re-enable ALM after pausing |
| /alm:forget <type> | Hard reset a specific task type |
| /alm:export | Export playbooks as standalone markdown |
| /alm:reset | Delete all ALM data and start fresh (requires confirmation) |

Configuration

ALM works with zero configuration. To customize behavior, create ~/.claude/alm/config.json with any of these keys:

| Key | Default | Description |
| --- | --- | --- |
| minSessionsBeforeReflect | 15 | Minimum total sessions before suggesting reflection |
| minEvaluationsPerType | 5 | Minimum evals per task type before it's eligible for reflection |
| confidenceThresholdForSkip | 0.90 | Skip playbook injection when confidence exceeds this |
| tfidfMinTrainingExamples | 10 | Minimum training examples to build the TF-IDF classifier |
| tfidfConfidenceThreshold | 0.30 | Minimum TF-IDF similarity score to inject a classified playbook |
| maxPlaybookTokens | 800 | Target max tokens per generated playbook |
| maxPlaybooksInjected | 3 | Maximum playbooks injected at session start |
| dataRetentionDays | 180 | Days before evaluation files are archived |
| showProgressMessages | true | Show learning progress at session start |
| enableProjectOverrides | true | Allow project-level playbooks to override user-level ones |
| autoSuggestReflection | true | Auto-queue task types for reflection when thresholds are met |

All keys are optional. Missing keys use defaults.
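For example, a config.json that triggers reflection earlier and trims session-start injection (keys and defaults from the table above; every omitted key keeps its default):

```json
{
  "minSessionsBeforeReflect": 10,
  "maxPlaybooksInjected": 2,
  "showProgressMessages": false
}
```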

Project-Level Playbook Overrides

Teams can share playbooks that apply to a specific project. Create markdown files in your project's .claude/alm/playbooks/ directory:

mkdir -p .claude/alm/playbooks

Then create .claude/alm/playbooks/api-design.md:

## Approach
- All endpoints return JSON with `code` and `message` fields on error
- Use kebab-case for URL paths
- Version APIs with /v1/ prefix
When ALM classifies a session as api-design and finds a project-level playbook, it uses that instead of the user-level one. The output notes [project override].

Disable with "enableProjectOverrides": false in config.

Uninstalling

# Inside Claude Code:
/plugin uninstall alm@sfw-ALM

# Optionally remove all learning data:
rm -rf ~/.claude/alm

Troubleshooting

ALM welcome message doesn't appear:

  • Verify Python is on PATH: which python3
  • Check hooks are registered: run /hooks inside Claude Code
  • Try development mode to isolate: claude --plugin-dir /path/to/alm

Evaluations not being recorded:

  • Check ~/.claude/alm/evaluations/ for .jsonl files
  • Very short sessions (< 2 user messages, 0 tool calls) are skipped as trivial
  • ALM might be paused: run /alm:status to check, /alm:resume to unpause
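A quick way to run that first check from a shell (the fallback message is just a convenience for fresh installs):

```shell
# List evaluation files if any exist; otherwise print a friendly note.
ls ~/.claude/alm/evaluations/*.jsonl 2>/dev/null || echo "no evaluation files yet"
```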

TF-IDF classifier not activating:

  • The classifier is created by /alm:reflect, not automatically
  • Check ~/.claude/alm/classifier/model.json exists after reflecting
  • Needs 10+ training examples across task types

Permission errors:

  • ALM creates ~/.claude/alm/ on first run — ensure ~/.claude/ is writable
  • Check: ls -la ~/.claude/

Hooks timing out:

  • All hooks are designed to exit 0 on error — they should never block your session
  • SessionStart timeout: 5s, UserPromptSubmit: 10s, PreCompact: 10s, Stop: 30s
  • If a hook consistently times out, check that python3 starts quickly on your system
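If timeouts persist, a rough interpreter startup check can tell you whether Python itself is eating into the timeout budget (this measures stock python3 startup, nothing ALM-specific):

```shell
# Hooks shell out to python3; slow interpreter startup counts against each timeout.
time python3 -c "print('interpreter ok')"
```

On a healthy system the real time should be well under a second; if it isn't, look at shell profiles or version managers that wrap python3.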