
Conversation

@mreynolds389 (Contributor) commented Jan 8, 2026

Description:

Add the GitHub spec-kit and Cursor support to the source code.

We might want to improve the constitution at some point (it's basically copied from the wiki landing page), but it should be sufficient for now.

Relates: #7174

Summary by Sourcery

Add Spec-Kit support for GitHub/Cursor workflows, including scripts and templates for spec-driven development and multi-agent context management.

Enhancements:

  • Introduce shared bash utilities and prerequisite checks for Spec-Kit feature workflows.
  • Add scripts to create new feature specs and implementation plans, integrating with git and non-git repositories.
  • Provide Cursor command definitions for specifying, planning, clarifying, tasking, analyzing, implementing, checklist creation, constitution management, and GitHub issue generation.
  • Add templates for specifications, plans, tasks, checklists, agent context files, and a project constitution to standardize spec-driven development.
  • Implement automated agent context updating across multiple AI tools based on feature plan metadata.

Chores:

  • Establish a project constitution placeholder to guide future governance and development principles.

@sourcery-ai bot left a comment

Hey - I've found 5 issues and left some high-level feedback:

  • The create-new-feature.sh argument parser assumes all options precede the feature description, but several /speckit.specify examples show options coming after the description; either update the parser to tolerate interleaved options/args or fix the examples so they match the actual CLI contract.
  • The agent context update logic repeats the same file paths and update calls across update_specific_agent and update_all_existing_agents; consider centralizing the agent metadata (type, label, file path) in a single associative structure and iterating over it to reduce duplication and the risk of inconsistent behavior.
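
Regarding the second bullet, a minimal sketch of what a centralized agent table could look like; the agent names, file paths, and the `update_agent_file` helper below are assumptions for illustration, not the script's actual identifiers:

```bash
# Hypothetical sketch: a single table mapping agent type -> context file path.
declare -A AGENT_CONTEXT_FILES=(
    [claude]="CLAUDE.md"
    [cursor]=".cursor/rules/specify-rules.mdc"
    [copilot]=".github/copilot-instructions.md"
)

# Both update paths iterate the same table via an assumed update_agent_file helper.
update_all_existing_agents() {
    local agent file
    for agent in "${!AGENT_CONTEXT_FILES[@]}"; do
        file="${AGENT_CONTEXT_FILES[$agent]}"
        if [[ -f "$file" ]]; then
            update_agent_file "$agent" "$file"
        fi
    done
}

update_specific_agent() {
    local agent="$1"
    local file="${AGENT_CONTEXT_FILES[$agent]:-}"
    if [[ -z "$file" ]]; then
        echo "ERROR: unknown agent type '$agent'" >&2
        return 1
    fi
    update_agent_file "$agent" "$file"
}
```

One caveat: `declare -A` itself requires Bash 4+, which overlaps with the portability concern raised in Comment 3 below.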
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The `create-new-feature.sh` argument parser assumes all options precede the feature description, but several `/speckit.specify` examples show options coming after the description; either update the parser to tolerate interleaved options/args or fix the examples so they match the actual CLI contract.
- The agent context update logic repeats the same file paths and update calls across `update_specific_agent` and `update_all_existing_agents`; consider centralizing the agent metadata (type, label, file path) in a single associative structure and iterating over it to reduce duplication and the risk of inconsistent behavior.

## Individual Comments

### Comment 1
<location> `.specify/scripts/bash/update-agent-context.sh:332-339` </location>
<code_context>
+        recent_change="- $escaped_branch: Added"
+    fi
+
+    local substitutions=(
+        "s|\[PROJECT NAME\]|$project_name|"
+        "s|\[DATE\]|$current_date|"
+        "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
+        "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
+        "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
+        "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
+        "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
+    )
+    
</code_context>

<issue_to_address>
**issue (bug_risk):** Sed substitutions don’t escape `&` or other special chars, which can corrupt output when plan values contain them.

These interpolated values (`$project_name`, `$tech_stack`, `$commands`, etc.) are inserted directly into `sed -e 's|...|$value|'`. If any contain `&`, sed will substitute the entire match there; `|` or backslashes can also break the command. Only some characters are (partially) escaped for a couple of variables, and most values are unescaped.

It would be safer to add a helper that escapes all sed replacement metacharacters (at minimum `&` and the chosen delimiter) and apply it to every value before adding to `substitutions[]`. If that’s too brittle, consider replacing sed-based templating with a simpler placeholder expansion mechanism instead.
</issue_to_address>
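
As a rough illustration of that helper idea (the function name is hypothetical, it assumes `|` stays the sed delimiter, and multi-line values would still need separate handling):

```bash
#!/usr/bin/env bash
# Hypothetical helper: escape backslash, the '|' delimiter, and '&'
# so a value is safe on the replacement side of s|...|value|.
escape_sed_replacement() {
    printf '%s' "$1" | sed -e 's/[\\|&]/\\&/g'
}

# Demo: a value containing '&' no longer expands to the whole match.
project_name='389 DS & lib389'
echo 'template: [PROJECT NAME]' \
    | sed -e "s|\[PROJECT NAME\]|$(escape_sed_replacement "$project_name")|"
# -> template: 389 DS & lib389
```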

### Comment 2
<location> `.specify/scripts/bash/update-agent-context.sh:109-117` </location>
<code_context>
+}
+
+# Cleanup function for temporary files
+cleanup() {
+    local exit_code=$?
+    rm -f /tmp/agent_update_*_$$
+    rm -f /tmp/manual_additions_$$
+    exit $exit_code
+}
+
+# Set up cleanup trap
+trap cleanup EXIT INT TERM
+
+#==============================================================================
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Cleanup trap removes temp files with a pattern that no longer matches the mktemp usage.

The trap removes `/tmp/agent_update_*_$$` and `/tmp/manual_additions_$$`, but the current `mktemp` calls no longer create files with those names (and no `manual_additions_$$` file is created at all). As a result, the trap doesn’t clean up the actual temp files and these patterns are now dead code. Consider either tracking the `mktemp` paths in variables and deleting those explicitly, or removing these unused patterns to avoid confusion.

Suggested implementation:

```
=======
log_warning() {
    echo "WARNING: $1" >&2
}

# Cleanup function for temporary files
# Track temp files explicitly so they can be cleaned up reliably
TEMP_FILES=()

register_temp_file() {
    # Usage: register_temp_file "$path"
    # Call this right after creating a temp file (e.g. from mktemp)
    TEMP_FILES+=("$1")
}

cleanup() {
    local exit_code=$?
    local tmp

    # Remove any registered temp files
    for tmp in "${TEMP_FILES[@]}"; do
        if [[ -n "$tmp" && -e "$tmp" ]]; then
            rm -f -- "$tmp"
        fi
    done

    exit "$exit_code"
}

# Set up cleanup trap
trap cleanup EXIT INT TERM

# Configuration and Global Variables
#==============================================================================

```

To fully implement the suggestion, update every place in this script where you call `mktemp` (or otherwise create a temporary file) so that the resulting path is registered, for example:

```bash
tmp_file="$(mktemp)"
register_temp_file "$tmp_file"
```

This will ensure that all actual temp files are removed by `cleanup`, and you can safely remove any old pattern-based `rm` calls (like `rm -f /tmp/agent_update_*_$$` or `rm -f /tmp/manual_additions_$$`) that may exist elsewhere in the script.
</issue_to_address>

### Comment 3
<location> `.specify/scripts/bash/create-new-feature.sh:200-199` </location>
<code_context>
+        if ! echo "$word" | grep -qiE "$stop_words"; then
+            if [ ${#word} -ge 3 ]; then
+                meaningful_words+=("$word")
+            elif echo "$description" | grep -q "\b${word^^}\b"; then
+                # Keep short words if they appear as uppercase in original (likely acronyms)
+                meaningful_words+=("$word")
+            fi
+        fi
</code_context>

<issue_to_address>
**issue (bug_risk):** Use of `${word^^}` may break on environments with older Bash (e.g., macOS default).

`${word^^}` requires Bash 4+, so this will fail on systems still using Bash 3 (e.g., macOS’ `/bin/bash`), despite the `/usr/bin/env bash` shebang.

If older Bash support is needed, use a POSIX-compatible uppercase conversion instead, for example:

```bash
upper_word=$(printf '%s' "$word" | tr '[:lower:]' '[:upper:]')
```

and reference `$upper_word` in the `grep`.
</issue_to_address>
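
A sketch of how the surrounding check might read with that change, reusing the variable names from the excerpt above:

```bash
upper_word=$(printf '%s' "$word" | tr '[:lower:]' '[:upper:]')
if echo "$description" | grep -q "\b${upper_word}\b"; then
    # Keep short words that appear uppercased in the original (likely acronyms)
    meaningful_words+=("$word")
fi
```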

### Comment 4
<location> `.specify/scripts/bash/create-new-feature.sh:30-39` </location>
<code_context>
+            fi
+            SHORT_NAME="$next_arg"
+            ;;
+        --number)
+            if [ $((i + 1)) -gt $# ]; then
+                echo 'Error: --number requires a value' >&2
+                exit 1
+            fi
+            i=$((i + 1))
+            next_arg="${!i}"
+            if [[ "$next_arg" == --* ]]; then
+                echo 'Error: --number requires a value' >&2
+                exit 1
+            fi
+            BRANCH_NUMBER="$next_arg"
+            ;;
+        --help|-h) 
</code_context>

<issue_to_address>
**issue (bug_risk):** `--number` is not validated as numeric, which can cause arithmetic errors later under `set -e`.

Here `--number` is accepted verbatim and later used in `FEATURE_NUM=$(printf "%03d" "$( (10#$BRANCH_NUMBER))")`. With a non-numeric value (e.g. `--number foo`), the arithmetic expansion fails and the script exits due to `set -e`.

Add an explicit numeric check (e.g. `[[ $BRANCH_NUMBER =~ ^[0-9]+$ ]]`), and emit a clear error before using it in arithmetic.
</issue_to_address>
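
A minimal sketch of that check inside the `--number` branch, reusing the parser's `next_arg` variable (the numeric test also subsumes the existing `--*` guard):

```bash
if ! [[ "$next_arg" =~ ^[0-9]+$ ]]; then
    echo "Error: --number requires a numeric value (got '$next_arg')" >&2
    exit 1
fi
BRANCH_NUMBER="$next_arg"
```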

### Comment 5
<location> `.specify/scripts/bash/check-prerequisites.sh:87-96` </location>
<code_context>
+fi
+
+# Output results
+if $JSON_MODE; then
+    # Build JSON array of documents
+    if [[ ${#docs[@]} -eq 0 ]]; then
+        json_docs="[]"
+    else
+        json_docs=$(printf '"%s",' "${docs[@]}")
+        json_docs="[${json_docs%,}]"
+    fi
+    
+    printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
+else
+    # Text output
</code_context>

<issue_to_address>
**suggestion (bug_risk):** JSON output is manually constructed without escaping, which could break if paths contain special characters.

Given the path variables are interpolated directly into JSON, any quotes, backslashes, or control characters in `$FEATURE_DIR` or `docs[@]` will produce invalid output. Consider adding minimal escaping for these characters, or delegating JSON construction to a helper (e.g., a small script using `python -c 'import json; ...'`) so proper escaping is handled centrally.
</issue_to_address>
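
A minimal bash-only sketch reusing the variables from the excerpt above; it assumes escaping backslashes and double quotes is enough for these paths (control characters would still argue for delegating to `python3 -c 'import json; ...'` as the comment suggests):

```bash
# Hypothetical helper: escape a value for embedding inside a JSON string.
json_escape() {
    local s=$1
    s=${s//\\/\\\\}   # backslashes first
    s=${s//\"/\\\"}   # then double quotes
    printf '%s' "$s"
}

# Escape each document entry before joining, and the path itself before printing.
json_docs="[]"
if [[ ${#docs[@]} -gt 0 ]]; then
    escaped=()
    for d in "${docs[@]}"; do escaped+=("$(json_escape "$d")"); done
    json_docs=$(printf '"%s",' "${escaped[@]}")
    json_docs="[${json_docs%,}]"
fi
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$(json_escape "$FEATURE_DIR")" "$json_docs"
```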

@droideck (Member) left a comment

I like the idea but I think it needs some work before it's useful for us...

The constitution is just an empty template right now - without our actual principles (RFC compliance, 389-ds architecture specifics, lib389 usage/testing, etc.) it doesn't really help. The templates also assume web/mobile project structures that don't match our LDAP/C/Rust/Python setup.

I'm also a bit wary that spec-kit is still evolving, so we'd need to keep things in sync or risk stale guidance (but I might be wrong here and it's already solid and defined).

I think we either fill in the constitution with 389-ds specifics and customize or drop the generic templates, or skip spec-kit for now and just add a simple CLAUDE.md and/or .cursor/rules/ with project guidance. Either works for me - happy to help with some parts (as there's a lot to cover) if we want something like that in the repo.

@mreynolds389 (Contributor, Author):

> I like the idea but I think it needs some work before it's useful for us...
>
> The constitution is just an empty template right now - without our actual principles (RFC compliance, 389-ds architecture specifics, lib389 usage/testing, etc.) it doesn't really help. The templates also assume web/mobile project structures that don't match our LDAP/C/Rust/Python setup.

Hmmm, the constitution file was filled out, but it must have been lost when I tried moving things into a separate PR. Errr.

As for the templates, I'd have to look into that more. I used this as-is to work on a complicated C problem and it worked just fine.

> I'm also a bit wary that spec-kit is still evolving, so we'd need to keep things in sync or risk stale guidance (but I might be wrong here and it's already solid and defined).

Well, everything is always evolving :-) Doesn't mean we can't update as it improves. I thought it would be nice to get it into place so others could easily use it if they wanted to, but it's not a big deal as it's still really easy to just do it yourself in your own env.

> I think we either fill in the constitution with 389-ds specifics and customize or drop the generic templates, or skip spec-kit for now and just add a simple CLAUDE.md and/or .cursor/rules/ with project guidance. Either works for me - happy to help with some parts (as there's a lot to cover) if we want something like that in the repo.

Let me look into this later (when I'm feeling better). But we don't need to add this at all, I just found it really impressive and thought others would benefit from it. I also "think" spec-driven development is going to become more popular and strongly encouraged at some point. It would be nice to be ahead of the curve instead of catching up, but who knows :-) I created the PR because it's quite minimal in size and it doesn't force you to use it.

Description:

Add the github spec-kit and cursor support to the source code

We might want to improve the constitution at some point (it's basically copied from the wiki landing page), but it should be sufficient for now.

Relates: 389ds#7174

Reviewed by: spichugi (Thanks!)
@mreynolds389 force-pushed the github_speckit_cursor branch from de13e13 to d9ffa6c on January 27, 2026 18:04
@mreynolds389 (Contributor, Author):

Added missing constitution.md update. Will discuss other topics with @droideck soon...

@droideck (Member):

The constitution concept is valuable - capturing our principles (reliability, performance, RFC compliance, ASAN testing) in a machine-readable way helps AI assistants understand our standards. I'd keep that and improve it over time.

The rest of the workflow feels too heavyweight for what we typically do (bug fixes, performance improvements, relatively small RFEs, etc.). The specify → clarify → plan → tasks → analyze → checklist → implement pipeline is too much, especially with the *.sh scripts that collect extensive project information (there's no LLM that will be able to effectively hold that context).

If we decide to keep the constitution, I'd move it out of the .cursor directory since not everyone uses that IDE. Something like CONSTITUTION.md or AGENTS.md at the repo root would make it accessible to all AI tools.

That's my 5 cents :) But I'm open to other points of view/discussions. Let's see what the rest of the team thinks.

@mreynolds389 (Contributor, Author):

> The constitution concept is valuable - capturing our principles (reliability, performance, RFC compliance, ASAN testing) in a machine-readable way helps AI assistants understand our standards. I'd keep that and improve it over time.
>
> The rest of the workflow feels too heavyweight for what we typically do (bug fixes, performance improvements, relatively small RFEs, etc.). The specify → clarify → plan → tasks → analyze → checklist → implement pipeline is too much, especially with the *.sh scripts that collect extensive project information (there's no LLM that will be able to effectively hold that context).

It's kind of all or nothing. You need all those scripts for it to work. And I'm not sure I understand your concern about LLMs not handling it. I just did a demo showing that it works :-)

> If we decide to keep the constitution, I'd move it out of the .cursor directory since not everyone uses that IDE. Something like CONSTITUTION.md or AGENTS.md at the repo root would make it accessible to all AI tools.
>
> That's my 5 cents :) But I'm open to other points of view/discussions. Let's see what the rest of the team thinks.

I'm now leaning towards holding off on this unless a majority of the team wants to start using it. Let's see what others say, but like you said it might be premature to add it.

@droideck (Member):

> > The constitution concept is valuable - capturing our principles (reliability, performance, RFC compliance, ASAN testing) in a machine-readable way helps AI assistants understand our standards. I'd keep that and improve it over time.
> >
> > The rest of the workflow feels too heavyweight for what we typically do (bug fixes, performance improvements, relatively small RFEs, etc.). The specify → clarify → plan → tasks → analyze → checklist → implement pipeline is too much, especially with the *.sh scripts that collect extensive project information (there's no LLM that will be able to effectively hold that context).
>
> It's kind of all or nothing. You need all those scripts for it to work. And I'm not sure I understand your concern about LLMs not handling it. I just did a demo showing that it works :-)

Yeah, it's still useful and it works. I'm just not sure it'll be as efficient for our typical workflow.

My concern is about context window usage. When update-agent-context.sh and the other scripts collect project metadata, file structures, and cross-reference multiple documents (spec.md, plan.md, tasks.md, research.md, data-model.md, contracts/), that eats up a lot of context space. With a codebase the size of 389 DS, I think it'll get pretty large, and for our common use cases the benefit might not be worth it. When context overflows, output quality tends to drop as the model loses track of earlier parts of the conversation.

On the "all or nothing" point: hmm, I still see the constitution file as independently valuable. Having our core principles (reliability, RFC compliance, ASAN testing) in a machine-readable format is useful even without the full specify → clarify → plan → tasks → analyze → checklist → implement pipeline, IMO.

That pipeline makes a lot of sense for "greenfield development", but our day-to-day tends to be bug fixes, performance work, etc. For those, the full workflow might add more ceremony than value.

So my 5 cents would be to start with just the constitution and see how it goes. But happy to hear what the rest of the team thinks. If it's something others are willing to try, I'm okay with that. :) We can always remove it later.
