- Claude Code installed and working (`claude --version`)
- Python 3.8+ on your PATH (`python3 --version`)
That's it. ALM has zero external dependencies — no pip packages, no npm modules, no system tools beyond Python's standard library.
Inside Claude Code:

```
/plugin marketplace add sfw/ALM
/plugin install alm@sfw-ALM
```
Restart Claude Code after installation.
```
/alm:status
```

You should see ALM's status dashboard showing 0 sessions tracked.
To load ALM for a single session without permanent installation (useful for development or testing):
```
git clone https://github.com/sfw/ALM.git alm-plugin
claude --plugin-dir ./alm-plugin
```

`--plugin-dir` only lasts for that session. For daily use, install via the marketplace (above).
The plugin ships with everything it needs. No build step. Claude Code caches plugin files in ~/.claude/plugins/cache/.
ALM creates `~/.claude/alm/` on first run:

```
~/.claude/alm/
├── state.json          # Install date, eval count, pause state
├── confidence.json     # Per-task-type confidence scores
├── evaluations/        # Session outcome records (JSONL, one file per day)
├── playbooks/          # Learned playbooks (empty until /alm:reflect)
├── classifier/         # TF-IDF model (empty until /alm:reflect)
├── config.json         # User settings (only if you create it)
└── reflect-queue.json  # Auto-reflect suggestions
```
Nothing is written to your project directory unless you create project-level playbook overrides.
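Because everything on disk is plain JSON and JSONL, you can inspect the learning data with stdlib Python alone. A minimal sketch of counting evaluation records per task type (the `taskType` field name is an illustrative assumption, not ALM's documented schema):

```python
import json
from pathlib import Path

def summarize_evaluations(alm_dir=Path.home() / ".claude" / "alm"):
    """Count evaluation records per task type across all daily JSONL files."""
    counts = {}
    for day_file in sorted((alm_dir / "evaluations").glob("*.jsonl")):
        for line in day_file.read_text().splitlines():
            if not line.strip():
                continue
            record = json.loads(line)
            # "taskType" is an assumed field name for illustration
            task = record.get("taskType", "unknown")
            counts[task] = counts.get(task, 0) + 1
    return counts
```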
- **Session starts** — ALM creates `~/.claude/alm/` and injects seed playbooks with best practices for common coding tasks. A brief welcome message appears.
- **You work normally** — Code, debug, refactor. ALM is invisible during the session.
- **Session ends** — The Stop hook silently evaluates: what type of task, whether you corrected Claude, the outcome. A record is appended to `~/.claude/alm/evaluations/{date}.jsonl`.
- **Sessions 2-14** — Progress messages show how many sessions ALM has tracked. After session 5, the most common task type is identified.
- **Session 15+** — ALM suggests running `/alm:reflect`. This spawns a Sonnet subagent that analyzes your correction patterns and writes personalized playbooks.
- **After reflection** — The TF-IDF classifier activates. Each prompt is matched to the right playbook automatically. High-confidence task types are skipped (Claude already knows).
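ALM's classifier internals aren't shown here, but the general technique (TF-IDF weighting plus cosine similarity over labeled example prompts) can be sketched in stdlib-only Python, in keeping with the zero-dependency constraint. Function names and the token-list input format are assumptions for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a TF-IDF vector (term -> weight) for each list of tokens."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c / len(doc) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(prompt_tokens, labeled_docs, threshold=0.30):
    """Match a prompt against labeled examples; None if below threshold."""
    docs = [toks for toks, _ in labeled_docs] + [prompt_tokens]
    vecs = tfidf_vectors(docs)
    prompt_vec, example_vecs = vecs[-1], vecs[:-1]
    best = max(
        ((cosine(prompt_vec, v), label)
         for v, (_, label) in zip(example_vecs, labeled_docs)),
        default=(0.0, None),
    )
    return best[1] if best[0] >= threshold else None
```

The 0.30 threshold here mirrors the `tfidfConfidenceThreshold` default: a classified playbook is only injected when the best match is similar enough.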
| Command | Description |
|---|---|
| `/alm:status` | Dashboard: sessions tracked, confidence scores, classifier status, stale playbook warnings |
| `/alm:reflect` | Generate/update personalized playbooks from correction history |
| `/alm:review` | View recent evaluations, override misclassifications |
| `/alm:pause` | Disable ALM evaluation and injection |
| `/alm:resume` | Re-enable ALM after pausing |
| `/alm:forget <type>` | Hard reset a specific task type |
| `/alm:export` | Export playbooks as standalone markdown |
| `/alm:reset` | Delete all ALM data and start fresh (requires confirmation) |
ALM works with zero configuration. To customize behavior, create ~/.claude/alm/config.json with any of these keys:
| Key | Default | Description |
|---|---|---|
| `minSessionsBeforeReflect` | 15 | Minimum total sessions before suggesting reflection |
| `minEvaluationsPerType` | 5 | Minimum evals per task type before it's eligible for reflection |
| `confidenceThresholdForSkip` | 0.90 | Skip playbook injection when confidence exceeds this |
| `tfidfMinTrainingExamples` | 10 | Minimum training examples to build the TF-IDF classifier |
| `tfidfConfidenceThreshold` | 0.30 | Minimum TF-IDF similarity score to inject a classified playbook |
| `maxPlaybookTokens` | 800 | Target max tokens per generated playbook |
| `maxPlaybooksInjected` | 3 | Maximum playbooks injected at session start |
| `dataRetentionDays` | 180 | Days before evaluation files are archived |
| `showProgressMessages` | true | Show learning progress at session start |
| `enableProjectOverrides` | true | Allow project-level playbooks to override user-level ones |
| `autoSuggestReflection` | true | Auto-queue task types for reflection when thresholds are met |
All keys are optional. Missing keys use defaults.
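A defaults merge like this is straightforward: user keys overlay the defaults, and a missing or malformed file falls back cleanly. The sketch below is illustrative, not ALM's actual code, though `DEFAULTS` mirrors the table above:

```python
import json
from pathlib import Path

DEFAULTS = {
    "minSessionsBeforeReflect": 15,
    "minEvaluationsPerType": 5,
    "confidenceThresholdForSkip": 0.90,
    "tfidfMinTrainingExamples": 10,
    "tfidfConfidenceThreshold": 0.30,
    "maxPlaybookTokens": 800,
    "maxPlaybooksInjected": 3,
    "dataRetentionDays": 180,
    "showProgressMessages": True,
    "enableProjectOverrides": True,
    "autoSuggestReflection": True,
}

def load_config(path=Path.home() / ".claude" / "alm" / "config.json"):
    """Overlay user settings on defaults; missing file or keys fall back."""
    try:
        user = json.loads(Path(path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        user = {}
    return {**DEFAULTS, **user}
```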
Teams can share playbooks that apply to a specific project. Create markdown files in your project's .claude/alm/playbooks/ directory:
```
mkdir -p .claude/alm/playbooks
```

```markdown
# .claude/alm/playbooks/api-design.md
## Approach
- All endpoints return JSON with `code` and `message` fields on error
- Use kebab-case for URL paths
- Version APIs with /v1/ prefix
```

When ALM classifies a session as api-design and finds a project-level playbook, it uses that instead of the user-level one. The output notes [project override].
Disable with "enableProjectOverrides": false in config.
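The override resolution can be sketched as a simple lookup order: project-level playbook first (when enabled), then user-level. Paths follow the layout documented earlier; the function name and return shape are hypothetical:

```python
from pathlib import Path

def resolve_playbook(task_type, project_root=".", enable_overrides=True):
    """Prefer a project-level playbook over the user-level one, if enabled."""
    project = Path(project_root) / ".claude" / "alm" / "playbooks" / f"{task_type}.md"
    user = Path.home() / ".claude" / "alm" / "playbooks" / f"{task_type}.md"
    if enable_overrides and project.exists():
        return project, "[project override]"
    return (user, "") if user.exists() else (None, "")
```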
```
# Inside Claude Code:
/plugin uninstall alm@sfw-ALM

# Optionally remove all learning data:
rm -rf ~/.claude/alm
```
ALM welcome message doesn't appear:
- Verify Python is on PATH: `which python3`
- Check hooks are registered: run `/hooks` inside Claude Code
- Try development mode to isolate: `claude --plugin-dir /path/to/alm`
Evaluations not being recorded:
- Check `~/.claude/alm/evaluations/` for `.jsonl` files
- Very short sessions (< 2 user messages, 0 tool calls) are skipped as trivial
- ALM might be paused: run `/alm:status` to check, `/alm:resume` to unpause
TF-IDF classifier not activating:
- The classifier is created by `/alm:reflect`, not automatically
- Check that `~/.claude/alm/classifier/model.json` exists after reflecting
- Needs 10+ training examples across task types
Permission errors:
- ALM creates `~/.claude/alm/` on first run — ensure `~/.claude/` is writable
- Check: `ls -la ~/.claude/`
Hooks timing out:
- All hooks are designed to exit 0 on error — they should never block your session
- SessionStart timeout: 5s, UserPromptSubmit: 10s, PreCompact: 10s, Stop: 30s
- If a hook consistently times out, check that `python3` starts quickly on your system
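The fail-open behavior described above can be sketched as a wrapper: whatever the hook body does, errors are swallowed and the process exits 0, so a broken hook degrades to a no-op instead of blocking the session. This is an illustrative pattern, not ALM's actual hook code:

```python
import sys

def run_hook(main):
    """Fail-open wrapper: any error still exits 0, never blocking the session."""
    try:
        main()
    except Exception as exc:
        # Log to stderr for debugging, but never propagate a failure
        print(f"hook error (ignored): {exc}", file=sys.stderr)
    sys.exit(0)
```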