Add LM Studio integration and update config handling #1207
base: next
Conversation
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Ralph Khreish <35776126+Crunchyman-ralph@users.noreply.github.com>
Walkthrough

Adds LM Studio provider support and an automatic PRD auto-expansion workflow: new LM Studio provider, config defaults/getter, CLI flags and model-selection updates, supported-models and example configs, documentation and tests, cursor-agent integration, and orchestration for automatic complexity analysis + expansion.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CLI as CLI / Commands
    participant TM as Task Manager
    participant USvc as Unified AI Service
    participant Cfg as Config Manager
    participant Prov as LMStudioAIProvider
    participant LM as LM Studio API
    CLI->>TM: setModel(role, providerHint=lmstudio, modelId)
    TM->>TM: getAvailableModels(projectRoot)
    TM->>TM: resolve provider/model for role
    TM-->>CLI: confirmation
    CLI->>USvc: generateText(params with provider=lmstudio)
    USvc->>Cfg: getLMStudioBaseURL(projectRoot)
    Cfg-->>USvc: lmstudioBaseURL
    USvc->>Prov: create client(modelId, baseURL, params)
    Prov->>LM: POST /v1/chat/completions
    LM-->>Prov: response + usage
    Prov-->>USvc: text + usage
    USvc-->>CLI: result
```
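For concreteness, the flow in the first diagram corresponds roughly to the caller sketch below. The parameter names (role, session, projectRoot, commandName, outputType) follow the ai_services guidelines quoted later in this review; the prompt fields and the exact return shape are assumptions, not the project's actual signature.

```js
// Hypothetical caller: LM Studio is selected indirectly, because the 'main' role in
// .taskmaster/config.json points at provider 'lmstudio' — the caller never names it.
import { generateTextService } from './scripts/modules/ai-services-unified.js';

const aiServiceResponse = await generateTextService({
	role: 'main', // resolved by config-manager to provider 'lmstudio' + modelId + lmstudioBaseURL
	session, // MCP session when called from a direct-function wrapper, otherwise undefined
	projectRoot,
	commandName: 'parse-prd',
	outputType: 'cli',
	systemPrompt: 'You are a task-planning assistant.', // assumed field name
	prompt: 'Generate tasks from the attached PRD.' // assumed field name
});

// Per the telemetry guidelines, the response is expected to include telemetryData
// (provider, model, token counts, cost) alongside the generated text.
```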
```mermaid
sequenceDiagram
    autonumber
    participant CLI as CLI / Commands
    participant ParsePRD as parse-prd
    participant Auto as Auto-Complexity Expansion
    participant FS as Filesystem
    CLI->>ParsePRD: parse-prd(file, --auto, --auto-threshold)
    ParsePRD->>FS: write tasks.json
    alt auto enabled
        ParsePRD->>Auto: runAutoComplexityExpansion(tasksPath, threshold, research)
        Auto->>Auto: analyze complexity -> produce report
        Auto->>Auto: expand high-complexity tasks
        Auto-->>ParsePRD: expansion result (expanded/skipped/failed)
        ParsePRD->>FS: update tasks.json / write complexity report
    else auto disabled
        ParsePRD-->>CLI: parse result (no autoExpansion)
    end
    ParsePRD-->>CLI: final result (includes autoExpansion when run)
```
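The auto-expansion branch in the second diagram amounts to orchestration of roughly the following shape. Only runAutoComplexityExpansion, the --auto/--auto-threshold flags, and the expanded/skipped/failed summary come from this PR's description; the wrapper function and option names are illustrative.

```js
// Illustrative orchestration inside parse-prd (not the actual implementation)
async function parsePrdWithAutoExpansion(prdPath, tasksPath, options) {
	const parseResult = await parsePRD(prdPath, tasksPath, options); // writes tasks.json

	if (!options.auto) {
		return parseResult; // no autoExpansion field when --auto is not set
	}

	const autoExpansion = await runAutoComplexityExpansion(
		tasksPath,
		options.autoThreshold, // tasks scoring at/above this complexity get expanded
		options.research // optionally run the analysis with the research role
	);

	// autoExpansion summarizes { expanded, skipped, failed } per the diagram above
	return { ...parseResult, autoExpansion };
}
```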
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
✨ Finishing touches
🧪 Generate unit tests
Tip

👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```
Actionable comments posted: 21
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
tests/unit/config-manager.test.js (1)
75-88: Avoid process.exit in unit tests; remove hard dependency on "real" supported-models.json

Calling process.exit(1) will kill the test runner. Use thrown errors or fixtures instead.
Apply:
```diff
-} catch (err) {
-	console.error(
-		'FATAL TEST SETUP ERROR: Could not read or parse real supported-models.json',
-		err
-	);
-	REAL_SUPPORTED_MODELS_CONTENT = '{}'; // Default to empty object on error
-	REAL_SUPPORTED_MODELS_DATA = {};
-	process.exit(1); // Exit if essential test data can't be loaded
-}
+} catch (err) {
+	console.error('Test setup: falling back to empty supported-models data', err);
+	REAL_SUPPORTED_MODELS_CONTENT = '{}';
+	REAL_SUPPORTED_MODELS_DATA = {};
+}
```

scripts/modules/config-manager.js (4)
711-721: LM Studio missing from internal no-key list in isApiKeySet()

Guideline: providers not requiring keys must short-circuit. Add LM Studio here to prevent "Unknown provider" warnings and false negatives in other call sites.
```diff
 const providersWithoutApiKeys = [
 	CUSTOM_PROVIDERS.OLLAMA,
 	CUSTOM_PROVIDERS.BEDROCK,
-	CUSTOM_PROVIDERS.MCP,
-	CUSTOM_PROVIDERS.GEMINI_CLI
+	CUSTOM_PROVIDERS.MCP,
+	CUSTOM_PROVIDERS.GEMINI_CLI,
+	CUSTOM_PROVIDERS.LMSTUDIO
 ];
```
747-761: keyMap does not include 'lmstudio'

Even with the early return, adding an entry avoids future "Unknown provider name" warnings if logic changes.
```diff
 const keyMap = {
 	openai: 'OPENAI_API_KEY',
 	anthropic: 'ANTHROPIC_API_KEY',
@@
-	bedrock: 'AWS_ACCESS_KEY_ID' // Bedrock uses AWS credentials
+	bedrock: 'AWS_ACCESS_KEY_ID', // Bedrock uses AWS credentials
+	lmstudio: 'LMSTUDIO_API_KEY' // Optional; typically unused
 };
```
772-852: getMcpApiKeyStatus() lacks LM Studio handling

Return true early for lmstudio to reflect that no key is required.
```diff
 switch (providerName) {
+	case 'lmstudio':
+		return true; // No key needed
 	case 'anthropic':
 		apiKeyToCheck = mcpEnv.ANTHROPIC_API_KEY;
```
1021-1027: Duplicate sources of truth for providersWithoutApiKeys

You maintain one list inside isApiKeySet and another exported. Consolidate to a single exported constant and reference it in isApiKeySet to prevent drift.
```diff
-const providersWithoutApiKeys = [
-	CUSTOM_PROVIDERS.OLLAMA,
-	CUSTOM_PROVIDERS.BEDROCK,
-	CUSTOM_PROVIDERS.MCP,
-	CUSTOM_PROVIDERS.GEMINI_CLI,
-	CUSTOM_PROVIDERS.LMSTUDIO
-];
+// Reuse exported list
+const providersWithoutApiKeys = exports.providersWithoutApiKeys;
```

scripts/modules/ai-services-unified.js (1)
556-571: Add 'lmstudio' to providersWithoutApiKeys and fix duplicate/inconsistent definitions in scripts/modules/config-manager.js
- scripts/modules/config-manager.js — providersWithoutApiKeys is declared at ~line 715 and exported at ~line 1021; neither includes 'lmstudio' and the two lists are inconsistent (MCP present in the first, missing in the export). Add 'lmstudio' to the exported array and consolidate/remove the duplicate declaration so there's a single source of truth.
- Also ensure CUSTOM_PROVIDERS contains an LMSTUDIO entry (if absent) and that isApiKeySet returns true for providersWithoutApiKeys entries so ai-services-unified's early-skip behaves correctly.
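A minimal sketch of the consolidation these two comments ask for, assuming config-manager.js uses ES module exports (the exports.providersWithoutApiKeys reference in the earlier diff would not resolve under ESM); the constant name is illustrative.

```js
// scripts/modules/config-manager.js — single source of truth (sketch, not current code)
export const PROVIDERS_WITHOUT_API_KEYS = [
	CUSTOM_PROVIDERS.OLLAMA,
	CUSTOM_PROVIDERS.BEDROCK,
	CUSTOM_PROVIDERS.MCP,
	CUSTOM_PROVIDERS.GEMINI_CLI,
	CUSTOM_PROVIDERS.LMSTUDIO
];

export function isApiKeySet(providerName, session = null) {
	if (PROVIDERS_WITHOUT_API_KEYS.includes(providerName?.toLowerCase())) {
		return true; // local / credential-less providers never require a key
	}
	// ...existing keyMap lookup for providers that do require keys
}
```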
scripts/modules/task-manager/models.js (1)
671-676: Exclude LM Studio from API key status checks.

Like Ollama, LM Studio doesn't require API keys, so it should be filtered out to avoid false "missing key" warnings.
Apply:
```diff
-const providersToCheck = providers.filter(
-	(p) => p.toLowerCase() !== 'ollama'
-);
+const providersToCheck = providers.filter((p) => {
+	const name = p.toLowerCase();
+	return name !== 'ollama' && name !== 'lmstudio';
+});
```

scripts/modules/commands.js (1)
3785-3791: Provider-flag validation regressed: missing --azure and --vertex.

Multiple-provider guard no longer counts Azure/Vertex, allowing conflicting flags to slip through.
Apply:
```diff
-const providerFlags = [
-	options.openrouter,
-	options.ollama,
-	options.bedrock,
-	options.claudeCode,
-	options.geminiCli,
-	options.lmstudio
-].filter(Boolean).length;
+const providerFlags = [
+	options.openrouter,
+	options.ollama,
+	options.bedrock,
+	options.claudeCode,
+	options.azure,
+	options.vertex,
+	options.geminiCli,
+	options.lmstudio
+].filter(Boolean).length;
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (19)
- .taskmaster/LM_STUDIO_CONFIG_GUIDE.md (1 hunks)
- .taskmaster/config-backup.json (1 hunks)
- .taskmaster/config-lmstudio-comprehensive-example.json (1 hunks)
- .taskmaster/config-lmstudio-example.json (1 hunks)
- .taskmaster/config.json (1 hunks)
- debug-models.js (1 hunks)
- docs/lmstudio-integration.md (1 hunks)
- examples/lmstudio-config-example.json (1 hunks)
- scripts/modules/ai-services-unified.js (4 hunks)
- scripts/modules/commands.js (7 hunks)
- scripts/modules/config-manager.js (4 hunks)
- scripts/modules/supported-models.json (1 hunks)
- scripts/modules/task-manager/models.js (4 hunks)
- src/ai-providers/index.js (1 hunks)
- src/ai-providers/lmstudio.js (1 hunks)
- src/constants/providers.js (1 hunks)
- tests/unit/ai-providers/lmstudio.test.js (1 hunks)
- tests/unit/ai-services-unified.test.js (2 hunks)
- tests/unit/config-manager.test.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (24)
src/ai-providers/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named .js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.
Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Files:
- src/ai-providers/index.js
- src/ai-providers/lmstudio.js
**/*.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.
Files:
- src/ai-providers/index.js
- tests/unit/config-manager.test.js
- src/constants/providers.js
- debug-models.js
- tests/unit/ai-providers/lmstudio.test.js
- src/ai-providers/lmstudio.js
- scripts/modules/task-manager/models.js
- scripts/modules/ai-services-unified.js
- scripts/modules/config-manager.js
- tests/unit/ai-services-unified.test.js
- scripts/modules/commands.js
tests/{unit,integration,e2e,fixtures}/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Test files must be organized as follows: unit tests in tests/unit/, integration tests in tests/integration/, end-to-end tests in tests/e2e/, and test fixtures in tests/fixtures/.
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
tests/unit/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module should have a corresponding unit test file in tests/unit/ that reflects the module structure (one test file per module).
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-services-unified.test.js
**/*.{test,spec}.{js,ts,jsx,tsx}
📄 CodeRabbit inference engine (.cursor/rules/git_workflow.mdc)
**/*.{test,spec}.{js,ts,jsx,tsx}: Create a test file and ensure all tests pass when all subtasks are complete; commit tests if added or modified
When all subtasks are complete, run final testing using the appropriate test runner (e.g., npm test, jest, or manual testing)
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
**/*.test.js: Never use asynchronous operations in tests. Make all mocks return synchronous values when possible.
Always mock tests properly based on the way the tested functions are defined and used.
Follow the test file organization: mocks must be set up before importing modules under test, and spies on mocked modules should be set up after imports.
Use fixtures from tests/fixtures/ for consistent sample data across tests.
Always declare mocks before importing the modules being tested in Jest test files.
Use jest.spyOn() after imports to create spies on mock functions and reference these spies in test assertions.
When testing functions with callbacks, get the callback from your mock's call arguments, execute it directly with test inputs, and verify the results.
For ES modules, use jest.mock() before static imports and jest.unstable_mockModule() before dynamic imports to mock dependencies.
Reset mock functions (mockFn.mockReset()) before dynamic imports if they might have been called previously.
When verifying console assertions, assert against the actual arguments passed (single formatted string), not multiple arguments.
Use mock-fs to mock file system operations in tests, and restore the file system after each test.
Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Set mock environment variables in test setup and restore them after each test.
Maintain test fixtures separate from test logic.
Follow the mock-first-then-import pattern for all Jest mocks.
Do not define mock variables before jest.mock() calls (they won't be accessible due to hoisting).
Use test-specific file paths (e.g., 'test-tasks.json') for all file operations in tests.
Mock readJSON and writeJSON to avoid real file system interactions in tests.
Verify file operations use the correct paths in expect statements.
Use different file paths for each test to avoid test interdependence.
Verify modifications on the in-memory task objects passed to w...
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
tests/unit/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/unit/**/*.test.js: Unit tests must be located in tests/unit/, test individual functions and utilities in isolation, mock all external dependencies, and keep tests small, focused, and fast.
Do not include actual command execution in unit tests.
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)
tests/{unit,integration,e2e}/**/*.test.js: When testing CLI commands built with Commander.js, test the command action handlers directly rather than trying to mock the entire Commander.js chain.
When mocking the Commander.js chain, mock ALL chainable methods (option, argument, action, on, etc.) and return this (or the mock object) from all chainable method mocks.
Explicitly handle all options, including defaults and shorthand flags (e.g., -p for --prompt), and include null/undefined checks in test implementations for parameters that might be optional.
Do not try to use the real action implementation without proper mocking, and do not mock Commander partially—either mock it completely or test the action directly.
Mock the action handlers for CLI commands and verify they're called with correct arguments.
Use sample task fixtures for consistent test data, mock file system operations, and test both success and error paths for task operations.
Mock console output and verify correct formatting in UI function tests. Use flexible assertions like toContain() or toMatch() for formatted output.
Mock chalk functions to return the input text to make testing easier while still verifying correct function calls.
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
**/*.{test,spec}.*
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Test files should follow naming conventions: .test., .spec., or _test. depending on the language
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
tests/{unit,integration,e2e}/**
📄 CodeRabbit inference engine (.cursor/rules/test_workflow.mdc)
Organize test directories by test type (unit, integration, e2e) and mirror source structure where possible
Files:
- tests/unit/config-manager.test.js
- tests/unit/ai-providers/lmstudio.test.js
- tests/unit/ai-services-unified.test.js
.taskmaster/config.json
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
.taskmaster/config.json: Store Taskmaster configuration settings (AI model selections, parameters, logging level, default subtasks/priority, project name, etc.) in the.taskmaster/config.jsonfile located in the project root directory. Do not configure non-API key settings via environment variables.
Do not manually edit.taskmaster/config.jsonunless you are certain of the changes; use thetask-master modelscommand ormodelsMCP tool for configuration.
.taskmaster/config.json: Do not manually edit the .taskmaster/config.json file. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
All other Taskmaster settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in .taskmaster/config.json via the task-master models command or models MCP tool.
Do not manually edit the .taskmaster/config.json file; always use the provided CLI or MCP tools for configuration changes.
Files:
.taskmaster/config.json
docs/**/*
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Add feature documentation to '/docs' folder, include tagged system usage examples, update command reference documentation, and provide migration notes if relevant.
Files:
docs/lmstudio-integration.md
tests/unit/ai-providers/*.test.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Create unit tests for the new provider in tests/unit/ai-providers/.test.js, mocking @ai-sdk/ and core ai module functions, and testing all exported functions for correct behavior and error handling.
Files:
tests/unit/ai-providers/lmstudio.test.js
scripts/modules/task-manager/*.js
📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)
scripts/modules/task-manager/*.js: Centralize all LLM calls throughgenerateTextServiceorgenerateObjectService.
Do not import or call anything from the oldai-services.js,ai-client-factory.js, orai-client-utils.jsfiles.
Do not initialize AI clients (Anthropic, Perplexity, etc.) directly within core logic (task-manager/) or MCP direct functions.
Do not fetch AI-specific parameters (model ID, max tokens, temp) usingconfig-manager.jsgetters for the AI call. Pass theroleinstead.
Do not implement fallback or retry logic outsideai-services-unified.js.
Do not handle API key resolution outside the service layer (it usesutils.jsinternally).
Determine the appropriaterole(main,research,fallback) in your core logic and pass it to the service.
Pass thesessionobject (received in thecontextparameter, especially from direct function wrappers) to the service call when in MCP context.
UsegenerateTextServiceand implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, asgenerateObjectServicehas shown unreliability with some providers/schemas.
Be aware of potential reliability issues withgenerateObjectServiceacross different providers and complex schemas. PrefergenerateTextService+ manual parsing as a more robust alternative for structured data needs.Files in scripts/modules/task-manager/ should each handle a specific action related to task management (e.g., add-task.js, expand-task.js), supporting the tagged task lists system and backward compatibility.
Files:
scripts/modules/task-manager/models.js
scripts/modules/**
📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)
When using the MCP server, restart it if core logic in
scripts/modulesor MCP tool/direct function definitions change.
Files:
- scripts/modules/task-manager/models.js
- scripts/modules/ai-services-unified.js
- scripts/modules/supported-models.json
- scripts/modules/config-manager.js
- scripts/modules/commands.js
scripts/modules/task-manager/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/task-manager/*: All core functions in scripts/modules/task-manager/ must accept a context parameter and use it to extract projectRoot and tag
All core functions in scripts/modules/task-manager/ must use readJSON(tasksPath, projectRoot, tag) and writeJSON(tasksPath, data, projectRoot, tag)
Files:
scripts/modules/task-manager/models.js
scripts/modules/task-manager/**/*.js
📄 CodeRabbit inference engine (.cursor/rules/telemetry.mdc)
scripts/modules/task-manager/**/*.js: Functions in scripts/modules/task-manager/ that invoke AI services must call the appropriate AI service function (e.g., generateObjectService), passing commandName and outputType in the params object.
Core logic functions in scripts/modules/task-manager/ must return an object that includes aiServiceResponse.telemetryData.
If the core logic function handles CLI output (outputFormat === 'text' or 'cli'), and aiServiceResponse.telemetryData is available, it must call displayAiUsageSummary(aiServiceResponse.telemetryData, 'cli') from scripts/modules/ui.js.Do not call AI-specific getters (like
getMainModelId,getMainMaxTokens) from core logic functions inscripts/modules/task-manager/*; instead, pass theroleto the unified AI service.
Files:
scripts/modules/task-manager/models.js
scripts/modules/ai-services-unified.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
scripts/modules/ai-services-unified.js: Centralize all LLM calls throughgenerateTextServiceorgenerateObjectService.
Do not import or call anything from the oldai-services.js,ai-client-factory.js, orai-client-utils.jsfiles.
Do not fetch AI-specific parameters (model ID, max tokens, temp) usingconfig-manager.jsgetters for the AI call. Pass theroleinstead.
Do not implement fallback or retry logic outsideai-services-unified.js.
Do not handle API key resolution outside the service layer (it usesutils.jsinternally).The telemetryData object returned by ai-services-unified.js must include the fields: timestamp, userId, commandName, modelUsed, providerName, inputTokens, outputTokens, totalTokens, totalCost, and currency.
Files:
scripts/modules/ai-services-unified.js
scripts/modules/*.js
📄 CodeRabbit inference engine (.cursor/rules/architecture.mdc)
Each module in scripts/modules/ should be focused on a single responsibility, following the modular architecture (e.g., commands.js for CLI command handling, task-manager.js for task data and core logic, dependency-manager.js for dependency management, ui.js for CLI output formatting, ai-services-unified.js for AI service integration, config-manager.js for configuration management, utils.js for utility functions).
scripts/modules/*.js: Export all core functions, helper functions, and utility methods needed by your new function or command from their respective modules. Explicitly review the module's export block to ensure every required dependency is included.
Pass all required parameters to functions you call within your implementation and verify that direct function parameters match their core function counterparts.
Use consistent file naming conventions: 'task_${id.toString().padStart(3, '0')}.txt', use path.join for composing file paths, and use appropriate file extensions (.txt for tasks, .json for data).
Use structured error objects with code and message properties, include clear error messages, and handle both function-specific and file system errors.
Import all silent mode utilities together from 'scripts/modules/utils.js' and always use isSilentMode() to check global silent mode status. Wrap core function calls within direct functions using enableSilentMode() and disableSilentMode() in a try/finally block if the core function might produce console output.
Core functions should check outputFormat === 'text' before displaying UI elements and use internal logging that respects silent mode.
Design functions to accept dependencies as parameters (dependency injection) and avoid hard-coded dependencies that are difficult to mock.
Keep pure logic separate from I/O operations or UI rendering to allow testing the logic without mocking complex dependencies.
When implementing core logic for new features, do so in 'scripts/modules/' before CLI or MCP interfaces, and d...
Files:
- scripts/modules/ai-services-unified.js
- scripts/modules/config-manager.js
- scripts/modules/commands.js
scripts/modules/ai-services*.js
📄 CodeRabbit inference engine (.cursor/rules/new_features.mdc)
Ensure AI calls correctly handle and propagate telemetryData as described in 'telemetry.mdc'.
Files:
scripts/modules/ai-services-unified.js
scripts/modules/*
📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)
scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag
Files:
- scripts/modules/ai-services-unified.js
- scripts/modules/supported-models.json
- scripts/modules/config-manager.js
- scripts/modules/commands.js
scripts/modules/supported-models.json
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Files:
scripts/modules/supported-models.json
scripts/modules/config-manager.js
📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)
scripts/modules/config-manager.js: Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
If adding Ollama or another provider not requiring an API key, add a specific check at the beginning of isApiKeySet and getMcpApiKeyStatus in scripts/modules/config-manager.js to return true immediately for that provider.
scripts/modules/config-manager.js: Import and use specific getters fromscripts/modules/config-manager.jsto access configuration values needed for application logic; pass theexplicitRootparameter to getters if calling from MCP direct functions.
UseisApiKeySet(providerName, session)fromconfig-manager.jsto check if a provider's key is available before attempting an AI call.
Handle potentialConfigurationErrorif the.taskmasterconfigfile is missing or invalid when accessed viagetConfig.
Files:
scripts/modules/config-manager.js
scripts/modules/commands.js
📄 CodeRabbit inference engine (.cursor/rules/ai_services.mdc)
scripts/modules/commands.js: Centralize all LLM calls throughgenerateTextServiceorgenerateObjectService.
Do not import or call anything from the oldai-services.js,ai-client-factory.js, orai-client-utils.jsfiles.
Do not fetch AI-specific parameters (model ID, max tokens, temp) usingconfig-manager.jsgetters for the AI call. Pass theroleinstead.
Do not implement fallback or retry logic outsideai-services-unified.js.
Do not handle API key resolution outside the service layer (it usesutils.jsinternally).
Determine the appropriaterole(main,research,fallback) in your core logic and pass it to the service.
Pass thesessionobject (received in thecontextparameter, especially from direct function wrappers) to the service call when in MCP context.
UsegenerateTextServiceand implement robust manual JSON parsing (with Zod validation after parsing) when structured output is needed, asgenerateObjectServicehas shown unreliability with some providers/schemas.
Be aware of potential reliability issues withgenerateObjectServiceacross different providers and complex schemas. PrefergenerateTextService+ manual parsing as a more robust alternative for structured data needs.
scripts/modules/commands.js: All new user-facing commands should be added to 'scripts/modules/commands.js'.
Use consistent patterns for option naming and help text in CLI commands.
Follow the Commander.js model for subcommand structure in CLI commands.
When using callbacks (like in Commander.js commands), define them separately to allow testing the callback logic independently.
Add help text to the command definition and update 'dev_workflow.mdc' with command reference when adding a new feature.
Follow the established pattern in 'commands.js' for CLI command implementation, using Commander.js for argument parsing, including comprehensive help text and examples, and supporting tagged task context awareness.
Provide clear error messages for common failu...
Files:
scripts/modules/commands.js
🧠 Learnings (45)
📓 Common learnings
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Create a new provider module file in src/ai-providers/ named <provider-name>.js when adding a new AI provider.
Applied to files:
src/ai-providers/index.jssrc/constants/providers.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Applied to files:
src/ai-providers/index.jstests/unit/ai-providers/lmstudio.test.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Applies to src/ai-providers/*.js : Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.
Applied to files:
src/ai-providers/index.jstests/unit/ai-providers/lmstudio.test.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/ai-services-unified.js : Integrate the new provider module with scripts/modules/ai-services-unified.js by importing it and adding an entry to the PROVIDER_FUNCTIONS map.
Applied to files:
src/ai-providers/index.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to tests/unit/ai-providers/*.test.js : Create unit tests for the new provider in tests/unit/ai-providers/<provider-name>.test.js, mocking ai-sdk/<provider-name> and core ai module functions, and testing all exported functions for correct behavior and error handling.
Applied to files:
src/ai-providers/index.jstests/unit/ai-providers/lmstudio.test.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must export three functions: generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object.
Applied to files:
src/ai-providers/index.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/ai-services.js : Features that use AI models belong in 'scripts/modules/ai-services.js'.
Applied to files:
src/ai-providers/index.jsdebug-models.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Implement generate<ProviderName>Text, stream<ProviderName>Text, and generate<ProviderName>Object functions in provider modules with basic validation and try/catch error handling.
Applied to files:
src/ai-providers/index.jstests/unit/ai-providers/lmstudio.test.jssrc/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:10:02.683Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:02.683Z
Learning: Applies to .taskmaster/config.json : Store Taskmaster configuration settings (AI model selections, parameters, logging level, default subtasks/priority, project name, tag management) in `.taskmaster/config.json` in the project root. Do not configure these via environment variables.
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config.json.taskmaster/config-backup.json.taskmaster/config-lmstudio-example.json.taskmaster/LM_STUDIO_CONFIG_GUIDE.mdexamples/lmstudio-config-example.json
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .taskmaster/config.json : All other Taskmaster settings (model choice, max tokens, temperature, log level, custom endpoints) are managed in .taskmaster/config.json via the task-master models command or models MCP tool.
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config.json.taskmaster/config-backup.json.taskmaster/config-lmstudio-example.json.taskmaster/LM_STUDIO_CONFIG_GUIDE.mdexamples/lmstudio-config-example.jsonscripts/modules/commands.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: Applies to .taskmaster/config.json : Store Taskmaster configuration settings (AI model selections, parameters, logging level, default subtasks/priority, project name, etc.) in the `.taskmaster/config.json` file located in the project root directory. Do not configure non-API key settings via environment variables.
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config.json.taskmaster/config-backup.json.taskmaster/config-lmstudio-example.json.taskmaster/LM_STUDIO_CONFIG_GUIDE.mdexamples/lmstudio-config-example.json
📚 Learning: 2025-09-01T09:55:15.061Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/AGENTS.md:0-0
Timestamp: 2025-09-01T09:55:15.061Z
Learning: Applies to assets/.taskmaster/config.json : Never manually edit .taskmaster/config.json; configure models via `task-master models`
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config.json.taskmaster/config-backup.json.taskmaster/config-lmstudio-example.json.taskmaster/LM_STUDIO_CONFIG_GUIDE.mdexamples/lmstudio-config-example.jsonscripts/modules/commands.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: Applies to .taskmaster/config.json : Do not manually edit `.taskmaster/config.json` unless you are certain of the changes; use the `task-master models` command or `models` MCP tool for configuration.
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config.json.taskmaster/config-backup.json.taskmaster/config-lmstudio-example.json.taskmaster/LM_STUDIO_CONFIG_GUIDE.md
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to .taskmasterconfig : Use `.taskmasterconfig` (JSON) in the project root for storing Taskmaster configuration (excluding API keys), and manage it via the `task-master models --setup` CLI command or the `models` MCP tool.
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config-backup.json.taskmaster/config-lmstudio-example.json.taskmaster/LM_STUDIO_CONFIG_GUIDE.md
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .taskmaster/config.json : Do not manually edit the .taskmaster/config.json file; always use the provided CLI or MCP tools for configuration changes.
Applied to files:
.taskmaster/config-lmstudio-comprehensive-example.json.taskmaster/config-backup.json
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.
Applied to files:
src/constants/providers.jssrc/ai-providers/lmstudio.jsscripts/modules/task-manager/models.jsscripts/modules/ai-services-unified.jsscripts/modules/config-manager.jstests/unit/ai-services-unified.test.jsscripts/modules/commands.js
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: For CLI usage, install Taskmaster globally with `npm install -g task-master-ai` or use locally via `npx task-master-ai ...`.
Applied to files:
docs/lmstudio-integration.md
📚 Learning: 2025-07-18T17:10:12.881Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/dev_workflow.mdc:0-0
Timestamp: 2025-07-18T17:10:12.881Z
Learning: Applies to .taskmaster/state.json : Track Taskmaster's current tag context and migration status in `.taskmaster/state.json`, which is automatically created and managed by the system.
Applied to files:
.taskmaster/config-backup.json
📚 Learning: 2025-07-31T22:08:16.039Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/taskmaster.mdc:0-0
Timestamp: 2025-07-31T22:08:16.039Z
Learning: Applies to .taskmaster/config.json : Do not manually edit the .taskmaster/config.json file. Use the included commands either in the MCP or CLI format as needed. Always prioritize MCP tools when available and use the CLI as a fallback.
Applied to files:
.taskmaster/config-backup.json
📚 Learning: 2025-09-01T09:55:15.061Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: assets/AGENTS.md:0-0
Timestamp: 2025-09-01T09:55:15.061Z
Learning: Applies to assets/.taskmaster/tasks/tasks.json : If tasks.json was manually changed, run `task-master generate` to regenerate task files
Applied to files:
.taskmaster/config-backup.json
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.
Applied to files:
debug-models.jsscripts/modules/supported-models.json
📚 Learning: 2025-07-18T17:18:17.759Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/utilities.mdc:0-0
Timestamp: 2025-07-18T17:18:17.759Z
Learning: Applies to scripts/modules/task-manager/**/*.js : Do not call AI-specific getters (like `getMainModelId`, `getMainMaxTokens`) from core logic functions in `scripts/modules/task-manager/*`; instead, pass the `role` to the unified AI service.
Applied to files:
debug-models.jsscripts/modules/task-manager/models.jsscripts/modules/commands.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Determine the appropriate `role` (`main`, `research`, `fallback`) in your core logic and pass it to the service.
Applied to files:
debug-models.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Applied to files:
debug-models.jsscripts/modules/task-manager/models.jsscripts/modules/commands.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Be aware of potential reliability issues with `generateObjectService` across different providers and complex schemas. Prefer `generateTextService` + manual parsing as a more robust alternative for structured data needs.
Applied to files:
debug-models.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not import real AI service clients in tests; create fully mocked versions that return predictable responses.
Applied to files:
tests/unit/ai-providers/lmstudio.test.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not use real AI client initialization logic in tests; create test-specific paths that bypass client initialization.
Applied to files:
tests/unit/ai-providers/lmstudio.test.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-08-11T12:30:23.843Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-11T12:30:23.843Z
Learning: Import Task Master's development workflow commands and guidelines; treat the contents of ./.taskmaster/CLAUDE.md as if included in the main CLAUDE.md
Applied to files:
.taskmaster/LM_STUDIO_CONFIG_GUIDE.md
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Ensure `.taskmasterconfig` exists and has valid provider/model IDs for the roles you intend to use (manage via `task-master models --setup`).
Applied to files:
.taskmaster/LM_STUDIO_CONFIG_GUIDE.md
📚 Learning: 2025-07-18T21:57:56.681Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1011
File: scripts/modules/task-manager/models.js:29-30
Timestamp: 2025-07-18T21:57:56.681Z
Learning: The `task-master init` command creates the config.json file inside the .taskmaster directory, while `task-master models --setup` does not create this file. When the configuration file is missing, users should be directed to run `task-master init`.
Applied to files:
.taskmaster/LM_STUDIO_CONFIG_GUIDE.md
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Applied to files:
src/ai-providers/lmstudio.jsscripts/modules/ai-services-unified.jstests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:12:57.903Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/new_features.mdc:0-0
Timestamp: 2025-07-18T17:12:57.903Z
Learning: Applies to scripts/modules/*.js : Ensure new features work with existing projects seamlessly, supporting both legacy and tagged task data formats, and support silent migration during feature usage.
Applied to files:
scripts/modules/task-manager/models.js
📚 Learning: 2025-07-18T05:38:17.352Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#943
File: scripts/modules/task-manager/move-task.js:24-24
Timestamp: 2025-07-18T05:38:17.352Z
Learning: In the Claude Task Master system, core task-manager functions are designed with fallback mechanisms for missing projectRoot parameters using the pattern `const projectRoot = providedProjectRoot || findProjectRoot();`. The readJSON and writeJSON functions have default parameters (projectRoot = null, tag = null) and handle missing parameters gracefully. Adding strict validation to these core functions would break the established flexible architecture pattern.
Applied to files:
scripts/modules/task-manager/models.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Do not import or call anything from the old `ai-services.js`, `ai-client-factory.js`, or `ai-client-utils.js` files.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/task-manager/*.js : Centralize all LLM calls through `generateTextService` or `generateObjectService`.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/ai-services-unified.js : Do not implement fallback or retry logic outside `ai-services-unified.js`.
Applied to files:
scripts/modules/ai-services-unified.js
📚 Learning: 2025-07-21T14:14:48.694Z
Learnt from: rtmcrc
PR: eyaltoledano/claude-task-master#933
File: scripts/modules/supported-models.json:238-238
Timestamp: 2025-07-21T14:14:48.694Z
Learning: Model version updates in scripts/modules/supported-models.json may be included in feature PRs if they provide practical improvements like reduced error rates, even if not directly related to the main feature being implemented.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: In this repo, the supported models list is auto-generated by CI into docs/models.md from scripts/modules/supported-models.json via .github/workflows/update-models-md.yml and docs/scripts/models-json-to-markdown.js. Don’t request manual edits to the Markdown; ensure the JSON is correct instead.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-08-08T11:34:45.482Z
Learnt from: Crunchyman-ralph
PR: eyaltoledano/claude-task-master#1105
File: .changeset/vast-weeks-fetch.md:5-5
Timestamp: 2025-08-08T11:34:45.482Z
Learning: This repo’s supported models list is CI-generated into docs/models.md from scripts/modules/supported-models.json (workflow: .github/workflows/update-models-md.yml using docs/scripts/models-json-to-markdown.js). Don’t request manual edits to the Markdown; ensure the JSON is correct instead. README links to docs/models.md.
Applied to files:
scripts/modules/supported-models.json
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Mock API calls (e.g., Anthropic/Claude) by mocking the entire module and providing predictable responses.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:07:39.336Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/architecture.mdc:0-0
Timestamp: 2025-07-18T17:07:39.336Z
Learning: Module dependencies should be mocked before importing the test module, following Jest's hoisting behavior.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:16:13.793Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/tests.mdc:0-0
Timestamp: 2025-07-18T17:16:13.793Z
Learning: Applies to **/*.test.js : Do not rely on environment variables for API keys in tests; set mock environment variables in test setup.
Applied to files:
tests/unit/ai-services-unified.test.js
📚 Learning: 2025-07-18T17:06:57.833Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_services.mdc:0-0
Timestamp: 2025-07-18T17:06:57.833Z
Learning: Applies to scripts/modules/commands.js : Do not fetch AI-specific parameters (model ID, max tokens, temp) using `config-manager.js` getters for the AI call. Pass the `role` instead.
Applied to files:
scripts/modules/commands.js
🧬 Code graph analysis (7)
debug-models.js (3)
- scripts/modules/ai-services-unified.js (2)
  - provider (343-343)
  - provider (507-507)
- scripts/modules/config-manager.js (2)
  - provider (1012-1012)
  - providerModels (633-633)
- tests/unit/ai-providers/lmstudio.test.js (1)
  - provider (22-22)

tests/unit/ai-providers/lmstudio.test.js (1)
- src/ai-providers/lmstudio.js (1)
  - LMStudioAIProvider (11-172)

src/ai-providers/lmstudio.js (2)
- src/ai-providers/base-provider.js (1)
  - BaseAIProvider (16-352)
- scripts/modules/ai-services-unified.js (2)
  - apiKey (358-358)
  - apiKey (505-505)

scripts/modules/task-manager/models.js (2)
- scripts/modules/task-manager/update-subtask-by-id.js (1)
  - projectRoot (90-90)
- src/constants/providers.js (2)
  - CUSTOM_PROVIDERS (18-28)
  - CUSTOM_PROVIDERS (18-28)

scripts/modules/ai-services-unified.js (2)
- src/ai-providers/lmstudio.js (1)
  - LMStudioAIProvider (11-172)
- scripts/modules/config-manager.js (1)
  - providerName (626-626)

scripts/modules/config-manager.js (1)
- src/constants/providers.js (2)
  - CUSTOM_PROVIDERS (18-28)
  - CUSTOM_PROVIDERS (18-28)

scripts/modules/commands.js (2)
- scripts/modules/task-manager/update-subtask-by-id.js (1)
  - projectRoot (90-90)
- scripts/modules/task-manager/models.js (4)
  - options (155-155)
  - options (283-283)
  - options (374-374)
  - options (664-664)
🪛 markdownlint-cli2 (0.17.2)
docs/lmstudio-integration.md
67-67: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
68-68: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
72-72: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
73-73: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
77-77: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
78-78: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
81-81: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
82-82: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
85-85: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
86-86: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
89-89: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
90-90: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
.taskmaster/LM_STUDIO_CONFIG_GUIDE.md
112-112: Lists should be surrounded by blank lines
(MD032, blanks-around-lists)
190-190: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
191-191: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
207-207: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
208-208: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
224-224: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below
(MD022, blanks-around-headings)
225-225: Fenced code blocks should be surrounded by blank lines
(MD031, blanks-around-fences)
🔇 Additional comments (18)
.taskmaster/config-lmstudio-example.json (1)
1-39: LGTM — example aligns with supported-models entries and global baseURL pattern

No issues spotted.
.taskmaster/config-backup.json (1)
1-39: LGTM — sensible backup snapshot including lmstudioBaseURL

Ensure this file isn't mistaken for the live config; docs already advise configuring via CLI.
.taskmaster/config-lmstudio-comprehensive-example.json (1)
1-39: LGTM — comprehensive example matches provider IDs present in supported-models

No blocking issues.
tests/unit/config-manager.test.js (1)
147-151: LGTM on adding lmstudioBaseURL default; also add provider validation coverage

The default aligns with docs. Add 'lmstudio' to the valid providers test.
Apply:
```diff
-expect(configManager.validateProvider('openrouter')).toBe(true);
-expect(configManager.validateProvider('bedrock')).toBe(true);
+expect(configManager.validateProvider('openrouter')).toBe(true);
+expect(configManager.validateProvider('bedrock')).toBe(true);
+expect(configManager.validateProvider('lmstudio')).toBe(true);
```

src/ai-providers/index.js (1)
19-19: Resolved — LMStudio provider exported and registered

Export confirmed at src/ai-providers/index.js:19; ai-services-unified imports and registers LMStudio (scripts/modules/ai-services-unified.js:48, 75; provider-name check at 588).
src/constants/providers.js (1)
26-28: Add LM Studio provider constant — looks good

The LMSTUDIO entry and trailing comma on GEMINI_CLI are correct and align with how CUSTOM_PROVIDERS is consumed elsewhere.
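For reference, the shape being approved is roughly the following; the 'lmstudio' string matches the provider id used throughout this PR, while the neighboring values shown are assumed.

```js
// src/constants/providers.js — sketch of the relevant entries
export const CUSTOM_PROVIDERS = {
	// ...existing custom providers (ollama, bedrock, mcp, ...)
	GEMINI_CLI: 'gemini-cli', // assumed value
	LMSTUDIO: 'lmstudio' // new entry approved here
};
```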
scripts/modules/ai-services-unified.js (2)
18-19: Provider wiring for LM Studio is correct

Importing the LMStudio provider and registering it in PROVIDERS is consistent with existing providers.
Also applies to: 48-55, 61-76
574-592: Approve — LM Studio base URL fallback added and runtime exposure verified

getLMStudioBaseURL is implemented/exported in scripts/modules/config-manager.js; LMStudioAIProvider is registered and ai-services-unified.js correctly uses the global LM Studio base URL fallback (scripts/modules/ai-services-unified.js, ~lines 75 and 588–592).
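In code terms, the fallback being verified amounts to something like the following; the variable names are illustrative, not the file's actual code.

```js
// scripts/modules/ai-services-unified.js — sketch of the base URL resolution
let baseURL = roleParams.baseURL; // per-role override from config, if any
if (!baseURL && providerName === CUSTOM_PROVIDERS.LMSTUDIO) {
	baseURL = getLMStudioBaseURL(projectRoot); // global default from .taskmaster/config.json
}
```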
scripts/modules/config-manager.js (2)
75-76: Add lmstudioBaseURL default — looks goodDefault aligns with the provider’s OpenAI-compatible endpoint.
579-583: Expose getLMStudioBaseURL — good
Getter follows the same pattern as other base URL getters.
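As a rough illustration of that getter pattern (not the repo's exact code; the config-reading helper below is a stand-in):

```js
// Sketch of the base-URL getter pattern discussed above; readGlobalConfig is a
// placeholder for however config-manager actually loads .taskmaster/config.json.
const LMSTUDIO_DEFAULT_BASE_URL = 'http://127.0.0.1:1234/v1';

function readGlobalConfig(projectRoot = null) {
	// Placeholder: the real module merges defaults with the project's config file.
	return {};
}

function getLMStudioBaseURL(projectRoot = null) {
	const globalConfig = readGlobalConfig(projectRoot) || {};
	return globalConfig.lmstudioBaseURL || LMSTUDIO_DEFAULT_BASE_URL;
}

// Falls back to the default when no project override is present.
console.log(getLMStudioBaseURL()); // http://127.0.0.1:1234/v1
```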
scripts/modules/task-manager/models.js (3)
12-12: Import alias rename is fine.
Avoids shadowing with the wrapper below.
28-32: Compat wrapper looks good.
Signature preserved without leaking base impl details.
If any callers depended on side effects of the old zero‑arg getter, run a quick repo check to confirm all callsites now pass projectRoot.
710-711: Exporting the wrapper is correct.
Keeps the public API consistent with CLI usage.
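A minimal sketch of the wrapper shape being approved here, with assumed names (the real module imports the base implementation under an alias):

```js
// Assumed alias for the underlying implementation; the real file imports it from
// the models module under a different name to avoid shadowing.
function getAvailableModelsBase({ projectRoot } = {}) {
	// Placeholder catalogue lookup for illustration only.
	return [{ id: 'gpt-oss:latest', provider: 'lmstudio', allowed_roles: ['main'] }];
}

// Public wrapper: same call shape for existing callers, but projectRoot-aware.
export function getAvailableModels(projectRoot = null) {
	return getAvailableModelsBase({ projectRoot });
}
```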
scripts/modules/commands.js (3)
132-133: Import source switch — OK.
Matches the new wrapper location.
296-296: Using getAvailableModels(projectRoot) — OK.
Maintains tag/root awareness in setup.
3840-3844: Provider hint mapping includes LM Studio — LGTM.
Consistent across main/research/fallback.
Also applies to: 3868-3872, 3898-3902
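For context, the mapping being referenced works roughly like this; the flag and hint names are assumptions based on the CLI options discussed in this PR, not the file's exact code:

```js
// Illustrative only: translate mutually exclusive provider flags into a hint that
// set-model validation can use for custom model IDs.
function resolveProviderHint(options) {
	if (options.openrouter) return 'openrouter';
	if (options.ollama) return 'ollama';
	if (options.bedrock) return 'bedrock';
	if (options.azure) return 'azure';
	if (options.vertex) return 'vertex';
	if (options.lmstudio) return 'lmstudio';
	return undefined;
}

// e.g. `task-master models --set-main <id> --lmstudio`
console.log(resolveProviderHint({ lmstudio: true })); // 'lmstudio'
```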
src/ai-providers/lmstudio.js (2)
20-24: Auth override is correct for local LM Studio.
No API key required; keep optional support as you did.
34-45: Client factory — LGTM.
Honors configurable baseURL and optional apiKey.
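A compact sketch of the two pieces praised here, optional auth plus a baseURL-aware client factory, using the Vercel AI SDK's OpenAI-compatible client. The class and method names are illustrative, not the file's exact API:

```js
import { createOpenAI } from '@ai-sdk/openai';

class LMStudioProviderSketch {
	// LM Studio runs locally, so an API key is optional rather than required.
	isRequiredApiKey() {
		return false;
	}

	// Returns a model factory: client('gpt-oss:20b') yields a usable model handle.
	getClient(params = {}) {
		return createOpenAI({
			baseURL: params.baseURL || 'http://127.0.0.1:1234/v1',
			// LM Studio ignores the key, but passing a placeholder keeps parity with
			// remote OpenAI-compatible endpoints that do require one.
			apiKey: params.apiKey || 'lm-studio'
		});
	}
}

// Usage sketch:
const client = new LMStudioProviderSketch().getClient({});
const model = client('gpt-oss:20b');
```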
| "global": { | ||
| "logLevel": "info", | ||
| "debug": false, | ||
| "defaultNumTasks": 10, | ||
| "defaultSubtasks": 5, | ||
| "defaultPriority": "medium", | ||
| "projectName": "Taskmaster", | ||
| "ollamaBaseURL": "http://localhost:11434/api", | ||
| "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", | ||
| "lmstudioBaseURL": "http://127.0.0.1:1234/v1", | ||
| "responseLanguage": "English", | ||
| "enableCodebaseAnalysis": true, | ||
| "userId": "1234567890", | ||
| "azureBaseURL": "https://your-endpoint.azure.com/", | ||
| "defaultTag": "master" | ||
| }, | ||
| "claudeCode": {} | ||
| } |
🛠️ Refactor suggestion
Global lmstudioBaseURL in committed config is unnecessary
Defaults already provide lmstudioBaseURL. Keeping it here hard-codes local host details for everyone. Prefer documenting this in the guide and letting users set it via CLI.
- "lmstudioBaseURL": "http://127.0.0.1:1234/v1",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "global": { | |
| "logLevel": "info", | |
| "debug": false, | |
| "defaultNumTasks": 10, | |
| "defaultSubtasks": 5, | |
| "defaultPriority": "medium", | |
| "projectName": "Taskmaster", | |
| "ollamaBaseURL": "http://localhost:11434/api", | |
| "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", | |
| "lmstudioBaseURL": "http://127.0.0.1:1234/v1", | |
| "responseLanguage": "English", | |
| "enableCodebaseAnalysis": true, | |
| "userId": "1234567890", | |
| "azureBaseURL": "https://your-endpoint.azure.com/", | |
| "defaultTag": "master" | |
| }, | |
| "claudeCode": {} | |
| } | |
| "global": { | |
| "logLevel": "info", | |
| "debug": false, | |
| "defaultNumTasks": 10, | |
| "defaultSubtasks": 5, | |
| "defaultPriority": "medium", | |
| "projectName": "Taskmaster", | |
| "ollamaBaseURL": "http://localhost:11434/api", | |
| "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", | |
| "responseLanguage": "English", | |
| "enableCodebaseAnalysis": true, | |
| "userId": "1234567890", | |
| "azureBaseURL": "https://your-endpoint.azure.com/", | |
| "defaultTag": "master" | |
| }, | |
| "claudeCode": {} | |
| } |
🤖 Prompt for AI Agents
In .taskmaster/config.json around lines 22 to 39, the global lmstudioBaseURL is
hard-coded to a local host value which duplicates defaults and leaks
machine-specific details; remove the "lmstudioBaseURL" entry from the "global"
object so the system falls back to the canonical default, and update the
README/docs to document how users can override this value via the CLI or an
environment variable (and ensure any CLI/env handling code remains unchanged so
overrides still work).
| ## Available LM Studio Models | ||
|
|
||
| Task Master supports these LM Studio models (aligned with Ollama): | ||
|
|
||
| - `gpt-oss:latest` (60.7% SWE Score) | ||
| - `gpt-oss:20b` (60.7% SWE Score) | ||
| - `gpt-oss:120b` (62.4% SWE Score) | ||
| - `devstral:latest` | ||
| - `qwen3:latest` | ||
| - `qwen3:14b` | ||
| - `qwen3:32b` | ||
| - `mistral-small3.1:latest` | ||
| - `llama3.3:latest` | ||
| - `phi4:latest` | ||
| - `custom` (for flexible configuration) | ||
|
|
💡 Verification agent
🧩 Analysis chain
Fix markdownlint issues: add blank lines around headings and code fences
Insert blank lines before/after lists, headings, and fenced blocks to satisfy MD022/MD031/MD032.
@@
-## Available LM Studio Models
+## Available LM Studio Models
@@
-### High-Performance Setup
+
+### High-Performance Setup
@@
-### Balanced Setup
+
+### Balanced Setup
@@
-### Lightweight Setup
+
+### Lightweight Setup
Also applies to: 188-205, 207-222, 224-239
🏁 Script executed:
#!/bin/bash
npx -y markdownlint-cli2 .taskmaster/LM_STUDIO_CONFIG_GUIDE.md || true
Length of output: 1478
Fix markdownlint errors in .taskmaster/LM_STUDIO_CONFIG_GUIDE.md
markdownlint-cli2 reports 9 errors — add blank lines around headings, fenced code blocks, and lists, and wrap long lines to <=80 chars.
- Lines 3, 9: MD013 — wrap lines to ≤80 chars (current lengths 90 and 135).
- Line 112: MD032 — surround the list with blank lines.
- Lines 190, 207, 224: MD022 — add blank line(s) before/after the "### High-Performance/Balanced/Lightweight Setup" headings.
- Lines 191, 208, 225: MD031 — ensure the fenced `json` blocks are surrounded by blank lines.
🤖 Prompt for AI Agents
In .taskmaster/LM_STUDIO_CONFIG_GUIDE.md around lines 81-96 (and referenced
lines 3, 9, 112, 190, 191, 207, 208, 224, 225), fix markdownlint issues by
wrapping long lines to ≤80 characters (address MD013 for lines 3 and 9), add a
blank line before and after the list of models so the list is surrounded by
blank lines (MD032 for line 112), insert blank lines before and after the "###
High-Performance/Balanced/Lightweight Setup" headings (MD022 for lines 190, 207,
224), and ensure each fenced ```json``` block has an empty line above and below
it (MD031 for lines 191, 208, 225); make these minimal formatting changes only,
keeping content intact.
💡 Verification agent
🧩 Analysis chain
Keep model list in sync with supported-models.json
Hard-coding model names can drift. Consider generating this section from scripts/modules/supported-models.json during docs build.
🏁 Script executed:
#!/bin/bash
jq -r '.lmstudio[].id' scripts/modules/supported-models.json
Length of output: 147
Generate LM Studio model list from scripts/modules/supported-models.json
Replace the hard-coded model block in .taskmaster/LM_STUDIO_CONFIG_GUIDE.md (lines 81–96) with content generated from scripts/modules/supported-models.json (source of truth). CI already generates docs/models.md from that JSON — update the JSON, not the markdown.
🤖 Prompt for AI Agents
.taskmaster/LM_STUDIO_CONFIG_GUIDE.md lines 81-96: the LM Studio model list is
hard-coded but should come from scripts/modules/supported-models.json (the
canonical source); replace the static block with generated content by wiring the
existing CI/doc generation step (or a small script) to emit the LM Studio model
list into this file (or include the generated fragment) using the JSON as source
of truth, and remove/guard the hard-coded entries so future updates come from
scripts/modules/supported-models.json (update the generation pipeline/docs build
to write or inject that section rather than editing the markdown manually).
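A small generation step along these lines could keep the guide in sync; the paths follow the files referenced above, while the swe_score field name is an assumption about the JSON shape:

```js
// Hypothetical docs-build step: emit the LM Studio model bullets from the
// supported-models catalogue instead of hand-editing the guide.
import { readFileSync } from 'node:fs';

const catalogue = JSON.parse(
	readFileSync('scripts/modules/supported-models.json', 'utf8')
);

const bullets = (catalogue.lmstudio || []).map((model) => {
	const score = model.swe_score ? ` (${model.swe_score}% SWE Score)` : '';
	return `- \`${model.id}\`${score}`;
});

console.log(bullets.join('\n'));
```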
| ### Manual Configuration | ||
|
|
||
| 1. Copy the example config to your `.taskmaster/config.json` | ||
| 2. Modify the `modelId` to match your loaded model | ||
| 3. Adjust `maxTokens` based on your model's context length | ||
| 4. Set `lmstudioBaseURL` if using a non-default port | ||
|
|
Docs conflict with workflow: do not recommend manual edits to config.json
Project guidelines: manage .taskmaster/config.json via CLI/MCP, not manual edits. Replace “Manual Configuration” steps with CLI/MCP instructions and clearly mark manual edits as advanced/at-your-own-risk.
-### Manual Configuration
-
-1. Copy the example config to your `.taskmaster/config.json`
-2. Modify the `modelId` to match your loaded model
-3. Adjust `maxTokens` based on your model's context length
-4. Set `lmstudioBaseURL` if using a non-default port
+### Configure via CLI (recommended)
+
+Use the models command or MCP tool:
+
+```bash
+task-master models --setup
+task-master models --set-main gpt-oss:20b --lmstudio
+task-master models --set-research gpt-oss:120b --lmstudio
+task-master models --set-fallback gpt-oss:latest --lmstudio
+```
+
+### Advanced: Manual edits (not recommended)
+Only if CLI/MCP is unavailable. Ensure you restore to CLI-managed state afterward.
🤖 Prompt for AI Agents
In .taskmaster/LM_STUDIO_CONFIG_GUIDE.md around lines 150–156, the "Manual
Configuration" section conflicts with the project's workflow that requires
managing .taskmaster/config.json via the CLI/MCP; replace those manual edit
steps with explicit CLI/MCP instructions for setup and model assignment (e.g.,
instruct to run the task-master models --setup command, then use task-master
models --set-main <model> --lmstudio, task-master models --set-research <model>
--lmstudio, and task-master models --set-fallback <model> --lmstudio) and add a
clearly labeled "Advanced: Manual edits (not recommended)" subsection that
explains manual edits are only for emergency use, must be done at-your-own-risk,
and must be reverted to CLI-managed state afterwards.
| // Test the roleChoices construction | ||
| const role = 'main'; | ||
| const roleChoices = Object.entries(modelsByProvider) | ||
| .map(([provider, models]) => { | ||
| const providerModels = models | ||
| .filter((m) => m.allowed_roles.includes(role)) | ||
| .map((m) => ({ | ||
| name: `${provider} / ${m.id} ${ | ||
| m.cost_per_1m_tokens | ||
| ? `($${m.cost_per_1m_tokens.input.toFixed(2)} input | $${m.cost_per_1m_tokens.output.toFixed(2)} output)` | ||
| : '' | ||
| }`, | ||
| value: { id: m.id, provider }, | ||
| short: `${provider}/${m.id}` | ||
| })); | ||
| if (providerModels.length > 0) { | ||
| return [...providerModels]; | ||
| } | ||
| return null; | ||
| }) | ||
| .filter(Boolean) | ||
| .flat(); | ||
|
|
🧹 Nitpick (assertive)
Guard against missing allowed_roles to avoid TypeError
If any model lacks allowed_roles, .includes() will throw. getAvailableModels currently adds defaults, but future changes could regress.
- .filter((m) => m.allowed_roles.includes(role))
+ .filter((m) => Array.isArray(m.allowed_roles) && m.allowed_roles.includes(role))
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Test the roleChoices construction | |
| const role = 'main'; | |
| const roleChoices = Object.entries(modelsByProvider) | |
| .map(([provider, models]) => { | |
| const providerModels = models | |
| .filter((m) => m.allowed_roles.includes(role)) | |
| .map((m) => ({ | |
| name: `${provider} / ${m.id} ${ | |
| m.cost_per_1m_tokens | |
| ? `($${m.cost_per_1m_tokens.input.toFixed(2)} input | $${m.cost_per_1m_tokens.output.toFixed(2)} output)` | |
| : '' | |
| }`, | |
| value: { id: m.id, provider }, | |
| short: `${provider}/${m.id}` | |
| })); | |
| if (providerModels.length > 0) { | |
| return [...providerModels]; | |
| } | |
| return null; | |
| }) | |
| .filter(Boolean) | |
| .flat(); | |
| // Test the roleChoices construction | |
| const role = 'main'; | |
| const roleChoices = Object.entries(modelsByProvider) | |
| .map(([provider, models]) => { | |
| const providerModels = models | |
| .filter((m) => Array.isArray(m.allowed_roles) && m.allowed_roles.includes(role)) | |
| .map((m) => ({ | |
| name: `${provider} / ${m.id} ${ | |
| m.cost_per_1m_tokens | |
| ? `($${m.cost_per_1m_tokens.input.toFixed(2)} input | $${m.cost_per_1m_tokens.output.toFixed(2)} output)` | |
| : '' | |
| }`, | |
| value: { id: m.id, provider }, | |
| short: `${provider}/${m.id}` | |
| })); | |
| if (providerModels.length > 0) { | |
| return [...providerModels]; | |
| } | |
| return null; | |
| }) | |
| .filter(Boolean) | |
| .flat(); |
🤖 Prompt for AI Agents
In debug-models.js around lines 23 to 45, the code calls
m.allowed_roles.includes(role) which will throw if allowed_roles is missing or
not an array; update the filtering to defensively handle missing/invalid
allowed_roles by coalescing to an empty array (or checking Array.isArray) before
calling includes, e.g. treat non-array/missing allowed_roles as [] so models
without allowed_roles are simply excluded; keep the rest of the
mapping/formatting unchanged.
| // Use the raw OpenAI client with function calling | ||
| const response = await openaiClient.chat.completions.create({ | ||
| model: params.modelId, | ||
| messages: params.messages, | ||
| tools: [{ | ||
| type: 'function', | ||
| function: functionSchema | ||
| }], | ||
| tool_choice: 'required', // Use string instead of object | ||
| ...this.prepareTokenParam(params.modelId, params.maxTokens), | ||
| temperature: params.temperature | ||
| }); | ||
|
|
||
| // Extract the function call result | ||
| const choice = response.choices[0]; | ||
| if (!choice.message.tool_calls || choice.message.tool_calls.length === 0) { | ||
| throw new Error('No function call found in response'); | ||
| } | ||
|
|
||
| const toolCall = choice.message.tool_calls[0]; | ||
| const resultObject = JSON.parse(toolCall.function.arguments); | ||
|
|
||
| log( | ||
| 'debug', | ||
| `${this.name} generateObject completed successfully for model: ${params.modelId}` | ||
| ); | ||
|
|
||
| return { | ||
| object: resultObject, | ||
| usage: { | ||
| inputTokens: response.usage?.prompt_tokens || 0, | ||
| outputTokens: response.usage?.completion_tokens || 0, | ||
| totalTokens: response.usage?.total_tokens || 0 | ||
| } | ||
| }; |
🧹 Nitpick (assertive)
Harden tool-call parsing.
Guard and surface better error when arguments aren’t valid JSON to avoid unhandled exceptions.
- const toolCall = choice.message.tool_calls[0];
- const resultObject = JSON.parse(toolCall.function.arguments);
+ const toolCall = choice.message.tool_calls[0];
+ let resultObject;
+ try {
+ resultObject = JSON.parse(toolCall.function.arguments || '{}');
+ } catch (e) {
+ throw new Error(`LM Studio returned non-JSON tool arguments for '${functionSchema.name}': ${e.message}`);
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Use the raw OpenAI client with function calling | |
| const response = await openaiClient.chat.completions.create({ | |
| model: params.modelId, | |
| messages: params.messages, | |
| tools: [{ | |
| type: 'function', | |
| function: functionSchema | |
| }], | |
| tool_choice: 'required', // Use string instead of object | |
| ...this.prepareTokenParam(params.modelId, params.maxTokens), | |
| temperature: params.temperature | |
| }); | |
| // Extract the function call result | |
| const choice = response.choices[0]; | |
| if (!choice.message.tool_calls || choice.message.tool_calls.length === 0) { | |
| throw new Error('No function call found in response'); | |
| } | |
| const toolCall = choice.message.tool_calls[0]; | |
| const resultObject = JSON.parse(toolCall.function.arguments); | |
| log( | |
| 'debug', | |
| `${this.name} generateObject completed successfully for model: ${params.modelId}` | |
| ); | |
| return { | |
| object: resultObject, | |
| usage: { | |
| inputTokens: response.usage?.prompt_tokens || 0, | |
| outputTokens: response.usage?.completion_tokens || 0, | |
| totalTokens: response.usage?.total_tokens || 0 | |
| } | |
| }; | |
| // Use the raw OpenAI client with function calling | |
| const response = await openaiClient.chat.completions.create({ | |
| model: params.modelId, | |
| messages: params.messages, | |
| tools: [{ | |
| type: 'function', | |
| function: functionSchema | |
| }], | |
| tool_choice: 'required', // Use string instead of object | |
| ...this.prepareTokenParam(params.modelId, params.maxTokens), | |
| temperature: params.temperature | |
| }); | |
| // Extract the function call result | |
| const choice = response.choices[0]; | |
| if (!choice.message.tool_calls || choice.message.tool_calls.length === 0) { | |
| throw new Error('No function call found in response'); | |
| } | |
| const toolCall = choice.message.tool_calls[0]; | |
| let resultObject; | |
| try { | |
| resultObject = JSON.parse(toolCall.function.arguments || '{}'); | |
| } catch (e) { | |
| throw new Error(`LM Studio returned non-JSON tool arguments for '${functionSchema.name}': ${e.message}`); | |
| } | |
| log( | |
| 'debug', | |
| `${this.name} generateObject completed successfully for model: ${params.modelId}` | |
| ); | |
| return { | |
| object: resultObject, | |
| usage: { | |
| inputTokens: response.usage?.prompt_tokens || 0, | |
| outputTokens: response.usage?.completion_tokens || 0, | |
| totalTokens: response.usage?.total_tokens || 0 | |
| } | |
| }; |
🤖 Prompt for AI Agents
In src/ai-providers/lmstudio.js around lines 117 to 151, the code assumes
toolCall.function.arguments is valid JSON and does a direct JSON.parse, which
can throw an unhandled exception; wrap the parsing in a try/catch, first
validate that toolCall and toolCall.function and toolCall.function.arguments
exist and are strings, then attempt JSON.parse and on failure throw a new Error
that includes the modelId, tool name (if available) and the raw arguments so the
caller gets a clear, surfaced error; avoid swallowing the original error—include
the original parse error message in the thrown error for debugging.
| const response = await openaiClient.chat.completions.create({ | ||
| model: params.modelId, | ||
| messages: params.messages, | ||
| tools: [{ | ||
| type: 'function', | ||
| function: functionSchema | ||
| }], | ||
| tool_choice: 'required', // Use string instead of object | ||
| ...this.prepareTokenParam(params.modelId, params.maxTokens), | ||
| temperature: params.temperature | ||
| }); |
Wrong token limit parameter for OpenAI Chat Completions.
OpenAI/LM Studio chat completions expect max_tokens; passing maxTokens/max_completion_tokens won’t be honored.
Apply:
- const response = await openaiClient.chat.completions.create({
+ const response = await openaiClient.chat.completions.create({
model: params.modelId,
messages: params.messages,
tools: [{
type: 'function',
function: functionSchema
}],
tool_choice: 'required', // Use string instead of object
- ...this.prepareTokenParam(params.modelId, params.maxTokens),
+ ...(params.maxTokens !== undefined
+ ? { max_tokens: Math.floor(Number(params.maxTokens)) }
+ : {}),
temperature: params.temperature
});
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const response = await openaiClient.chat.completions.create({ | |
| model: params.modelId, | |
| messages: params.messages, | |
| tools: [{ | |
| type: 'function', | |
| function: functionSchema | |
| }], | |
| tool_choice: 'required', // Use string instead of object | |
| ...this.prepareTokenParam(params.modelId, params.maxTokens), | |
| temperature: params.temperature | |
| }); | |
| const response = await openaiClient.chat.completions.create({ | |
| model: params.modelId, | |
| messages: params.messages, | |
| tools: [{ | |
| type: 'function', | |
| function: functionSchema | |
| }], | |
| tool_choice: 'required', // Use string instead of object | |
| ...(params.maxTokens !== undefined | |
| ? { max_tokens: Math.floor(Number(params.maxTokens)) } | |
| : {}), | |
| temperature: params.temperature | |
| }); |
🤖 Prompt for AI Agents
In src/ai-providers/lmstudio.js around lines 118 to 128, the chat completion
call is currently spreading token params that produce
maxTokens/max_completion_tokens which LM Studio/OpenAI chat completions ignore;
replace that with a single max_tokens numeric property (e.g., from
params.maxTokens or params.max_completion_tokens) so the API receives
max_tokens, or update prepareTokenParam to return { max_tokens: <value> }
instead of maxTokens/max_completion_tokens; ensure the value is an integer and
remove any other token-limit keys so only max_tokens is sent.
| import { jest } from '@jest/globals'; | ||
|
|
||
| // Mock the utils module to prevent logging during tests | ||
| jest.mock('../../../scripts/modules/utils.js', () => ({ | ||
| log: jest.fn() | ||
| })); | ||
|
|
||
| // Import the provider | ||
| import { LMStudioAIProvider } from '../../../src/ai-providers/lmstudio.js'; | ||
|
|
Mock external SDKs to avoid real client initialization
Tests should not import real ai/openai clients. Mock them so getClient() returns a lightweight stub and generateObject remains isolated.
import { jest } from '@jest/globals';
+// Mock Vercel AI SDK and OpenAI client before importing the provider
+jest.unstable_mockModule('ai', () => ({
+ createOpenAI: () => (modelId) => ({ modelId })
+}));
+jest.unstable_mockModule('openai', () => ({
+ default: class OpenAI {
+ constructor() {}
+ chat = {
+ completions: {
+ create: jest.fn().mockResolvedValue({
+ choices: [{ message: { tool_calls: [{ function: { arguments: '{}' } }] } }],
+ usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
+ })
+ }
+ };
+ }
+}));
+
// Import the provider
import { LMStudioAIProvider } from '../../../src/ai-providers/lmstudio.js';
Committable suggestion skipped: line range outside the PR's diff.
| describe('getClient', () => { | ||
| it('should create client with default base URL', () => { | ||
| const params = {}; | ||
|
|
||
| // The getClient method should return a function | ||
| const client = provider.getClient(params); | ||
| expect(typeof client).toBe('function'); | ||
|
|
||
| // The client function should be callable and return a model object | ||
| const model = client('llama-3.1-8b-instruct'); | ||
| expect(model).toBeDefined(); | ||
| expect(model.modelId).toBe('llama-3.1-8b-instruct'); | ||
| }); | ||
|
|
💡 Verification agent
🧩 Analysis chain
Assumptions about SDK return shape are fine with mocks
With the above mocks, assertions on model.modelId remain deterministic.
🏁 Script executed:
#!/bin/bash
# Ensure tests don't import real clients
rg -n "from 'ai'|from 'openai'" tests/unit/ai-providers/lmstudio.test.js -NLength of output: 0
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo "Top-level files:"
ls -la | sed -n '1,200p' || true
TARGET="tests/unit/ai-providers/lmstudio.test.js"
echo "Checking for $TARGET"
if [ -f "$TARGET" ]; then
echo "FOUND: $TARGET"
echo "----- file head (lines 1..240) -----"
sed -n '1,240p' "$TARGET" || true
echo "----- searching for imports/requires of real SDKs in $TARGET -----"
rg -n --no-heading "from ['\"]openai['\"]|require\(['\"]openai['\"]\)" "$TARGET" || true
rg -n --no-heading "from ['\"]anthropic['\"]|require\(['\"]anthropic['\"]\)" "$TARGET" || true
rg -n --no-heading "from ['\"]ai['\"]|require\(['\"]ai['\"]\)" "$TARGET" || true
rg -n --no-heading "from ['\"]ai-sdk['\"]|require\(['\"]ai-sdk['\"]\)" "$TARGET" || true
rg -n --no-heading "new\s+OpenAI|new\s+Anthropic|getClient\(" "$TARGET" || true
else
echo "$TARGET not found; listing tests/unit/ai-providers (if present):"
ls -la tests/unit/ai-providers || true
echo "Searching repo for files/occurrences of 'lmstudio'..."
rg -n --hidden -S "lmstudio" || true
fi
Length of output: 10623
🏁 Script executed:
#!/bin/bash
set -euo pipefail
FILE="src/ai-providers/lmstudio.js"
echo "Checking for: $FILE"
if [ -f "$FILE" ]; then
echo "FOUND: $FILE"
echo "----- file (1..400) -----"
sed -n '1,400p' "$FILE" || true
echo "----- quick searches -----"
rg -n --no-heading "getClient\\s*\\(|class\\s+LMStudioAIProvider|export\\s+class|export\\s+default|module\\.exports" "$FILE" || true
rg -n --no-heading "from ['\"](openai|anthropic|ai|ai-sdk|@anthropic|axios|node-fetch|undici|fetch)['\"]|require\\(['\"](openai|anthropic|ai|ai-sdk|axios|node-fetch|undici|fetch)['\"]" "$FILE" || true
echo "----- repo-wide lmstudio references (context lines) -----"
rg -n -C2 --hidden -S "lmstudio" || true
else
echo "$FILE not found"
rg -n --hidden -S "lmstudio" || true
fi
Length of output: 33633
Mock @ai-sdk/openai and openai in LMStudio tests
tests/unit/ai-providers/lmstudio.test.js instantiates LMStudioAIProvider which imports createOpenAI and OpenAI (src/ai-providers/lmstudio.js); add explicit jest.mock for '@ai-sdk/openai' (returning a createOpenAI factory that yields a deterministic model function) and mock 'openai' — or refactor getClient to accept an injected stub — to avoid loading real SDKs during unit tests.
Locations: src/ai-providers/lmstudio.js, tests/unit/ai-providers/lmstudio.test.js.
🤖 Prompt for AI Agents
In tests/unit/ai-providers/lmstudio.test.js around lines 55 to 68, the test
instantiates LMStudioAIProvider which imports real createOpenAI/OpenAI SDKs; add
explicit Jest mocks at the top of this test file to mock '@ai-sdk/openai'
(exporting a createOpenAI factory that returns a deterministic client function
which, when called with a modelId, returns a predictable model object) and mock
the 'openai' package similarly so no real SDK is loaded during unit tests;
alternatively, refactor src/ai-providers/lmstudio.js to accept an injected
createOpenAI/OpenAI via getClient params and update the test to pass a stubbed
factory—ensure the mock returns the expected shape used in the provider (a
callable client that returns an object with modelId) and keep the rest of the
test assertions unchanged.
| getLMStudioBaseURL: jest.fn(() => 'http://127.0.0.1:1234/v1'), | ||
| getAzureBaseURL: mockGetAzureBaseURL, | ||
| getBedrockBaseURL: mockGetBedrockBaseURL, | ||
| getVertexProjectId: mockGetVertexProjectId, | ||
| getVertexLocation: mockGetVertexLocation, | ||
| getMcpApiKeyStatus: mockGetMcpApiKeyStatus, | ||
|
|
||
| // Providers without API keys | ||
| providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli'] | ||
| providersWithoutApiKeys: ['ollama', 'bedrock', 'gemini-cli', 'lmstudio'] | ||
| })); |
🧹 Nitpick (assertive)
LM Studio test scaffolding present
Mocking getLMStudioBaseURL and LMStudioAIProvider is correct. Consider adding a targeted test to assert LM Studio baseURL fallback is passed to the provider when role baseURL is not set.
@@
describe('generateTextService', () => {
+ test('uses LM Studio global base URL when provider is lmstudio and no role baseURL set', async () => {
+ // Arrange: set main role to lmstudio
+ mockGetMainProvider.mockReturnValue('lmstudio');
+ mockGetMainModelId.mockReturnValue('gpt-oss:latest');
+ // Provider mock result
+ const lmstudio = (await import('../../src/ai-providers/index.js')).LMStudioAIProvider();
+ lmstudio.generateText.mockResolvedValue({
+ text: 'LM Studio OK',
+ usage: { inputTokens: 1, outputTokens: 1, totalTokens: 2 }
+ });
+ // Act
+ const result = await generateTextService({ role: 'main', prompt: 'ping' });
+ // Assert
+ expect(result.mainResult).toBe('LM Studio OK');
+ expect(lmstudio.generateText).toHaveBeenCalledWith(
+ expect.objectContaining({ baseURL: 'http://127.0.0.1:1234/v1' })
+ );
+ });
Also applies to: 231-237
🤖 Prompt for AI Agents
In tests/unit/ai-services-unified.test.js around lines 118-127 (and likewise at
231-237), there is LM Studio mocking but no targeted test verifying the fallback
baseURL is passed to LMStudioAIProvider when the role-specific baseURL is unset;
add a unit test that clears the role baseURL, ensures getLMStudioBaseURL is
mocked to a known URL, invokes the code that constructs/initializes
LMStudioAIProvider, and assert the provider was called/constructed with the
mocked baseURL value (use existing jest.fn mocks and spy on LMStudioAIProvider
constructor or factory to validate the baseURL argument).
Actionable comments posted: 25
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
tests/README-AUTO-FEATURES.md (1)
1-313: Fix markdownlint violations in this new README.
Address MD022 (blank lines around headings) and MD032 (lists need surrounding blank lines); there’s also a trailing space at Line 18. A quick pass with markdownlint --fix or Biome will clear these.
scripts/modules/commands.js (1)
502-677: Implement LM Studio custom handler in setup.
Without this branch, selecting “Custom LM Studio model” will fall into the unexpected selection path.
} else if (selectedValue === '__CUSTOM_VERTEX__') { … +} else if (selectedValue === '__CUSTOM_LMSTUDIO__') { + isCustomSelection = true; + const { customId } = await inquirer.prompt([ + { + type: 'input', + name: 'customId', + message: `Enter the LM Studio Model ID for the ${role} role (OpenAI-compatible ID):` + } + ]); + if (!customId) { + console.log(chalk.yellow('No custom ID entered. Skipping role.')); + return true; + } + modelIdToSet = customId; + providerHint = CUSTOM_PROVIDERS.LMSTUDIO;tests/unit/scripts/modules/task-manager/parse-prd.test.js (1)
529-536: Avoid asserting undefined fields; use partial shape to future‑proof tests.Asserting
tagInfo: undefinedmakes the tests brittle. Preferexpect.objectContainingand omit undefined fields. Apply to all similar assertions in this file.- expect(result).toEqual({ - success: true, - tasksPath: 'tasks/tasks.json', - telemetryData: {}, - autoExpansion: null, - tagInfo: undefined - }); + expect(result).toEqual(expect.objectContaining({ + success: true, + tasksPath: 'tasks/tasks.json', + telemetryData: {}, + autoExpansion: null + }));Also applies to: 737-744, 837-844, 953-960, 1036-1042, 1353-1359
mcp-server/src/core/direct-functions/parse-prd.js (1)
183-190: Bug: return payload uses outputPath but MCP tool expects tasksPath.
mcp-server/src/tools/parse-prd.jsreadsresult.data.tasksPath, but here you returnoutputPath, causing auto‑expansion to receiveundefined. ReturntasksPath(and keepoutputPathfor backward compatibility).- return { - success: true, - data: { - message: successMsg, - outputPath: result.tasksPath, - telemetryData: result.telemetryData, - tagInfo: result.tagInfo - } - }; + return { + success: true, + data: { + message: successMsg, + tasksPath: result.tasksPath, + // Back-compat for any callers expecting outputPath + outputPath: result.tasksPath, + telemetryData: result.telemetryData, + tagInfo: result.tagInfo + } + };
♻️ Duplicate comments (5)
.taskmaster/config.json (2)
1-39: Do not commit edits to .taskmaster/config.json — configure via CLI/MCP and keep repo-safe defaults.
Per repo rules, this file should not be manually edited or set to machine-local defaults. Move LM Studio examples to docs/examples and revert the committed config to remote-safe defaults so CI and other devs aren’t forced into local runtimes.
Option A — revert to remote-safe defaults for main model and drop machine-local/global user entries:
- "provider": "claude-code", - "modelId": "sonnet", - "maxTokens": 64000, + "provider": "anthropic", + "modelId": "claude-3-7-sonnet-20250219", + "maxTokens": 64000,- "lmstudioBaseURL": "http://127.0.0.1:1234/v1",- "enableCodebaseAnalysis": true, + "enableCodebaseAnalysis": false,- "userId": "1234567890",- "defaultTag": "master" + "defaultTag": "main"
31-31: Remove hard-coded lmstudioBaseURL from committed config.
This duplicates defaults and bakes a localhost assumption into shared config; keep it configurable via CLI/MCP.
- "lmstudioBaseURL": "http://127.0.0.1:1234/v1",scripts/modules/commands.js (2)
315-322: Interactive setup parity: add “Custom LM Studio model”.
The CLI adds --lmstudio but the interactive setup lacks a matching choice, causing discoverability drift.
const customProviderOptions = [ { name: '* Custom OpenRouter model', value: '__CUSTOM_OPENROUTER__' }, { name: '* Custom Ollama model', value: '__CUSTOM_OLLAMA__' }, { name: '* Custom Bedrock model', value: '__CUSTOM_BEDROCK__' }, { name: '* Custom Azure model', value: '__CUSTOM_AZURE__' }, - { name: '* Custom Vertex model', value: '__CUSTOM_VERTEX__' } + { name: '* Custom Vertex model', value: '__CUSTOM_VERTEX__' }, + { name: '* Custom LM Studio model', value: '__CUSTOM_LMSTUDIO__' } ];Add a handler branch in handleSetModel for CUSTOM_LMSTUDIO that prompts for model ID and sets providerHint = CUSTOM_PROVIDERS.LMSTUDIO (no live validation needed).
3798-3800: Add LM Studio example in models help text.
Include an example mirroring other providers so users discover the flag.
$ task-master models --set-main gemini-2.5-pro --gemini-cli # Set Gemini CLI model for main role + $ task-master models --set-main llama-3.1-8b-instruct --lmstudio # Set LM Studio local model for main role $ task-master models --setup # Run interactive setuptests/unit/ai-providers/lmstudio.test.js (1)
11-20: Mock external SDKs to prevent real client initializationThe test should mock
@ai-sdk/openaiandopenaipackages to avoid loading real SDKs during unit tests.import { jest } from '@jest/globals'; +// Mock Vercel AI SDK and OpenAI client before importing the provider +jest.unstable_mockModule('@ai-sdk/openai', () => ({ + createOpenAI: jest.fn(() => (modelId) => ({ modelId })) +})); + +jest.unstable_mockModule('openai', () => ({ + default: class OpenAI { + constructor() {} + chat = { + completions: { + create: jest.fn().mockResolvedValue({ + choices: [{ message: { tool_calls: [{ function: { arguments: '{}' } }] } }], + usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 } + }) + } + }; + } +})); + // Mock the utils module to prevent logging during tests jest.mock('../../../scripts/modules/utils.js', () => ({ log: jest.fn() })); // Import the provider import { LMStudioAIProvider } from '../../../src/ai-providers/lmstudio.js';
| * `append`: `Append generated tasks to existing file.` (CLI: `--append`) | ||
| * `research`: `Enable Taskmaster to use the research role for potentially more informed task generation. Requires appropriate API key.` (CLI: `-r, --research`) | ||
| * `auto`: `Automatically analyze complexity and expand high-complexity tasks after PRD parsing.` (CLI: `--auto`) | ||
| * `autoThreshold`: `Complexity threshold for auto-expansion (default: 7).` (CLI: `--auto-threshold <number>`) |
🧹 Nitpick (assertive)
Clarify auto workflow outputs and fix lint around the new options block.
- Add a one‑liner noting the complexity report is written to a tag‑aware path (.taskmaster/reports/task-complexity-report[_].json) and is used by expand --all.
- Address markdownlint MD022/MD032 near this list by ensuring blank lines before/after headings and lists in this section.
I can open a small doc PR to add the path note and fix spacing—want me to proceed?
Also applies to: 62-62
🤖 Prompt for AI Agents
In .cursor/rules/taskmaster.mdc around lines 56 to 59 (and also apply the same
fix at line 62), add a one‑line note after the `autoThreshold` bullet that the
complexity report is written to a tag‑aware path:
.taskmaster/reports/task-complexity-report[_<tag>].json and that this report is
consumed by `expand --all`; also fix markdownlint MD022/MD032 by ensuring there
is a blank line before the heading that precedes this list and a blank line
after the list (or before the next heading/paragraph) so the list is properly
separated from surrounding content.
.taskmaster/config.json
Outdated
| "bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", | ||
| "lmstudioBaseURL": "http://127.0.0.1:1234/v1", | ||
| "responseLanguage": "English", | ||
| "enableCodebaseAnalysis": true, |
🧹 Nitpick (assertive)
Disable enableCodebaseAnalysis by default or gate behind a flag.
On large repos this can add noticeable latency and unexpected CPU/disk churn for users and CI.
- "enableCodebaseAnalysis": true,
+ "enableCodebaseAnalysis": false,📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "enableCodebaseAnalysis": true, | |
| "enableCodebaseAnalysis": false, |
🤖 Prompt for AI Agents
In .taskmaster/config.json around line 33, the enableCodebaseAnalysis flag is
set to true which causes heavy CPU/disk work on large repos; change the default
to false (or move this setting behind a runtime flag/env var) so analysis is
opt-in. Update the config to set "enableCodebaseAnalysis": false and adjust any
code that reads this config to also respect an environment variable or CLI flag
(e.g., TASKMASTER_ENABLE_CODEBASE_ANALYSIS=true) so callers can explicitly
enable it when needed.
| beforeAll(() => { | ||
| // Create temporary directory for test files | ||
| tempDir = fs.mkdtempSync(path.join(os.tmpdir(), 'auto-complexity-test-')); | ||
| mockTasksPath = path.join(tempDir, 'tasks.json'); | ||
| mockComplexityReportPath = path.join(tempDir, 'complexity-report.json'); | ||
|
|
||
| // Write mock tasks file | ||
| fs.writeFileSync(mockTasksPath, JSON.stringify(mockTasksData, null, 2)); | ||
| }); | ||
|
|
||
| afterAll(() => { | ||
| // Clean up temporary directory | ||
| fs.rmSync(tempDir, { recursive: true, force: true }); | ||
| }); | ||
|
|
🧹 Nitpick (assertive)
Prefer mock-fs over real filesystem I/O in unit tests.
Replace temp dir setup with mock-fs to avoid disk writes and improve isolation.
Do you want a follow-up patch switching to mock-fs with setup/teardown in beforeEach/afterEach?
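A minimal sketch of what that swap could look like, assuming mock-fs is available as a dev dependency (the file path and task shape here are illustrative):

```js
import mock from 'mock-fs';
import fs from 'fs';

describe('auto-complexity expansion (mocked fs)', () => {
	beforeEach(() => {
		// In-memory filesystem: no temp directories touch the real disk.
		mock({
			'/project/.taskmaster/tasks/tasks.json': JSON.stringify({
				master: { tasks: [{ id: 1, title: 'Sample task', status: 'pending' }] }
			})
		});
	});

	afterEach(() => {
		mock.restore();
	});

	it('reads tasks from the mocked filesystem', () => {
		const data = JSON.parse(
			fs.readFileSync('/project/.taskmaster/tasks/tasks.json', 'utf8')
		);
		expect(data.master.tasks).toHaveLength(1);
	});
});
```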
… tool calls

- Added isUpdateTodosToolCall() function to detect update_todos JSON events
- Enhanced stdout data handler to automatically update tasks.json when tool calls are received
- Updated directive prompt to instruct cursor-agent to use specific JSON format
- Added proper tag resolution and atomic file updates using setTaskStatus
- Supports both main task IDs (22) and subtask IDs (22.1) with dot notation
- Works in both normal and --silent modes
- Added comprehensive unit tests for new functionality
- Maintains compatibility with Task-Master's tagged task system

The cursor-agent integration now automatically keeps .taskmaster/tasks/tasks.json in sync with progress as Cursor-Agent emits structured tool call events.
Exclude temporary files and directories from Git tracking to keep the repository clean.
- Fix .gitignore patterns for tmp/ directory
- Remove generated report file from repo
- Remove unused g dependency
- Add test:ci script
- Fix config defaults and remove sensitive data
- Fix cursor rules formatting
- Fix auto/autoThreshold validation across MCP and CLI
- Fix test property names and error assertions
- Fix hardcoded API key in tests

All PR review comments addressed.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
scripts/modules/config-manager.js (2)
761-769: Add LM Studio to no-key providers in isApiKeySet.
LM Studio doesn’t require an API key. The local providersWithoutApiKeys inside isApiKeySet omits LMSTUDIO, so LM Studio users will see false “missing key” flows. Add it.
Apply:
// Providers that don't require API keys for authentication const providersWithoutApiKeys = [ CUSTOM_PROVIDERS.OLLAMA, CUSTOM_PROVIDERS.BEDROCK, CUSTOM_PROVIDERS.MCP, - CUSTOM_PROVIDERS.GEMINI_CLI + CUSTOM_PROVIDERS.GEMINI_CLI, + CUSTOM_PROVIDERS.LMSTUDIO ];
847-898: Return true for LM Studio in MCP key status check.
getMcpApiKeyStatus should mirror isApiKeySet: LM Studio needs no key. Add a case to return true.
case 'ollama': return true; // No key needed case 'claude-code': return true; // No key needed + case 'lmstudio': + return true; // No key needed
♻️ Duplicate comments (13)
.cursor/rules/cursor_rules.mdc (1)
55-72: Temporary Work Directory rule is now enforceable — nice upgrade.
Clear guidance, gitignore snippet, tool exclusions, and “no prod imports” requirement all included. This addresses prior feedback.
package.json (2)
21-27: Test script additions look good.
Scripts cover CI, unit, integration, E2E, and targeted suites per guidelines.
21-21: Make test:ci more CI-stable by running in-band.
Serializing tests reduces flakiness on constrained runners.
Apply:
- "test:ci": "node --experimental-vm-modules node_modules/.bin/jest --ci --coverage", + "test:ci": "node --experimental-vm-modules node_modules/.bin/jest --ci --coverage --runInBand",tests/unit/task-manager/auto-complexity-expansion.test.js (1)
175-189: Use mock-fs for unit tests instead of real FS.Keeps unit tests fast and isolated.
I can provide a patch to wrap setup/teardown with mock-fs on request.
README.md (1)
432-439: Add blank line before list under “How it works”.Fixes MD032.
Apply:
-**How it works:** +**How it works:** + 1. **Parses your PRD** and generates initial tasksscripts/modules/task-manager/parse-prd/parse-prd.js (1)
92-124: Preserve error diagnostics in auto-expansion result.Include stack/name to aid debugging; severity can remain warn.
Apply:
- const errorType = autoError.message?.includes('threshold') ? 'INVALID_THRESHOLD' : 'AUTO_EXPANSION_FAILED'; - logger.report(`Auto-expansion failed (${errorType}): ${autoError.message}`, 'warn'); + const errorType = autoError.message?.includes('threshold') ? 'INVALID_THRESHOLD' : 'AUTO_EXPANSION_FAILED'; + logger.report(`Auto-expansion failed (${errorType}): ${autoError.message}`, 'warn'); autoExpansionResult = { success: false, - error: autoError.message, - errorType + error: autoError.message, + errorType, + errorStack: autoError.stack };.taskmaster/config.json (2)
2-36: Do not manually edit .taskmaster/config.json in PRsProject-wide config changes should be applied via CLI/MCP and not committed, or moved to an example config under examples/ with documentation.
31-31: Remove hard-coded lmstudioBaseURL from committed configAvoid committing machine-specific localhost defaults; rely on defaults and let users set via CLI/MCP if needed.
"bedrockBaseURL": "https://bedrock.us-east-1.amazonaws.com", - "lmstudioBaseURL": "http://127.0.0.1:1234/v1", "responseLanguage": "English",tests/unit/ai-providers/lmstudio.test.js (1)
11-20: Mock external SDKs before importing the provider (ESM-safe)Tests currently import real SDKs via the provider; add explicit mocks and switch to dynamic import to avoid loading real clients.
-import { jest } from '@jest/globals'; +import { jest } from '@jest/globals'; + +// Mock external SDKs before importing the provider (ESM) +jest.unstable_mockModule('@ai-sdk/openai', () => ({ + // createOpenAI returns a factory: (opts) => (modelId) => ({ modelId, opts }) + createOpenAI: () => (opts) => (modelId) => ({ modelId, opts }) +})); +jest.unstable_mockModule('openai', () => ({ + default: class OpenAI { + constructor() {} + chat = { + completions: { + create: jest.fn().mockResolvedValue({ + choices: [{ message: { tool_calls: [{ function: { arguments: '{}' } }] } }], + usage: { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 } + }) + } + }; + } +})); -// Import the provider -import { LMStudioAIProvider } from '../../../src/ai-providers/lmstudio.js'; +// Import the provider after mocks +let LMStudioAIProvider; +beforeAll(async () => { + ({ LMStudioAIProvider } = await import('../../../src/ai-providers/lmstudio.js')); +});tests/integration/mcp-server/parse-prd-auto-mcp.test.js (1)
334-338: Return autoExpansion from parsePRDDirectparsePRDDirect returns data without autoExpansion while the core parsePRD implementation produces autoExpansion; add it to the success return so tests that assert result.data.autoExpansion pass.
Location: mcp-server/src/core/direct-functions/parse-prd.js — return around lines 179–190.
return { success: true, data: { message: successMsg, - outputPath: result.tasksPath, + outputPath: result.tasksPath, + autoExpansion: result.autoExpansionResult || result.autoExpansion, telemetryData: result.telemetryData, tagInfo: result.tagInfo } };scripts/modules/commands.js (3)
3800-3823: Add LM Studio example to models help.You added --lmstudio but didn’t expose an example in help text.
$ task-master models --set-main gemini-2.5-pro --gemini-cli # Set Gemini CLI model for main role + $ task-master models --set-main qwen2.5-32b-instruct --lmstudio # Set LM Studio model for main role $ task-master models --setup # Run interactive setup
319-325: Add “Custom LM Studio model” to interactive setup choices.Parity with the new --lmstudio flag; improves discoverability.
const customProviderOptions = [ { name: '* Custom OpenRouter model', value: '__CUSTOM_OPENROUTER__' }, { name: '* Custom Ollama model', value: '__CUSTOM_OLLAMA__' }, { name: '* Custom Bedrock model', value: '__CUSTOM_BEDROCK__' }, { name: '* Custom Azure model', value: '__CUSTOM_AZURE__' }, - { name: '* Custom Vertex model', value: '__CUSTOM_VERTEX__' } + { name: '* Custom Vertex model', value: '__CUSTOM_VERTEX__' }, + { name: '* Custom LM Studio model', value: '__CUSTOM_LMSTUDIO__' } ];And in handleSetModel, support the new selection:
+ } else if (selectedValue === '__CUSTOM_LMSTUDIO__') { + isCustomSelection = true; + const { customId } = await inquirer.prompt([ + { type: 'input', name: 'customId', message: `Enter the LM Studio Model ID for the ${role} role:` } + ]); + if (!customId) { + console.log(chalk.yellow('No custom ID entered. Skipping role.')); + return true; + } + modelIdToSet = customId; + providerHint = CUSTOM_PROVIDERS.LMSTUDIO;
969-990: Surface auto-expansion summary counts in CLI output.After runAutoComplexityExpansion, print expanded/skipped/failed counts to align with other commands’ UX.
- await runAutoComplexityExpansion({ + const autoResult = await runAutoComplexityExpansion({ tasksPath: outputPath, threshold: autoThreshold, research: research, projectRoot: taskMaster.getProjectRoot(), tag: tag }); - - console.log(chalk.green('✅ Auto-expansion completed successfully!')); + console.log(chalk.green('✅ Auto-expansion completed successfully!')); + if (autoResult) { + console.log( + chalk.blue( + `📋 Expanded: ${autoResult.expandedTasks} Skipped: ${autoResult.skippedTasks} Failed: ${autoResult.failedTasks}` + ) + ); + }
| # Generated reports | ||
| .taskmaster/reports/*.json |
🧹 Nitpick (assertive)
Broaden report ignore to nested paths (if applicable).
If reports may appear in subfolders, prefer a recursive pattern.
Apply if nested structure is possible:
-.taskmaster/reports/*.json
+.taskmaster/reports/**/*.json
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| # Generated reports | |
| .taskmaster/reports/*.json | |
| # Generated reports | |
| .taskmaster/reports/**/*.json |
🤖 Prompt for AI Agents
In .gitignore around lines 102-103, the current ignore entry
".taskmaster/reports/*.json" only matches JSON files directly under that folder;
update it to a recursive pattern like ".taskmaster/reports/**/*.json" (or
".taskmaster/**/reports/**/*.json" if reports folders may appear at multiple
depths) so JSON report files in nested subdirectories are also ignored.
| .describe('Append generated tasks to existing file.'), | ||
| auto: z | ||
| .boolean() | ||
| .optional() | ||
| .default(false) | ||
| .describe('Automatically analyze complexity and expand high-complexity tasks after PRD parsing.'), | ||
| autoThreshold: z | ||
| .preprocess(v => { | ||
| if (v === undefined || v === null || v === '') return 7; | ||
| const n = Number(v); | ||
| return Number.isFinite(n) ? n : 7; | ||
| }, z.number().min(0)) | ||
| .optional() | ||
| .default(7) | ||
| .describe('Complexity threshold for auto-expansion (default: 7).') | ||
| }), |
Constrain autoThreshold to [1,10] and keep numeric at the boundary.
Schema currently allows 0 and >10.
Apply:
- autoThreshold: z
- .preprocess(v => {
- if (v === undefined || v === null || v === '') return 7;
- const n = Number(v);
- return Number.isFinite(n) ? n : 7;
- }, z.number().min(0))
+ autoThreshold: z
+ .preprocess((v) => {
+ if (v === undefined || v === null || v === '') return 7;
+ const n = Number(v);
+ return Number.isFinite(n) ? n : 7;
+ }, z.number().min(1).max(10))
.optional()
.default(7)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| .describe('Append generated tasks to existing file.'), | |
| auto: z | |
| .boolean() | |
| .optional() | |
| .default(false) | |
| .describe('Automatically analyze complexity and expand high-complexity tasks after PRD parsing.'), | |
| autoThreshold: z | |
| .preprocess(v => { | |
| if (v === undefined || v === null || v === '') return 7; | |
| const n = Number(v); | |
| return Number.isFinite(n) ? n : 7; | |
| }, z.number().min(0)) | |
| .optional() | |
| .default(7) | |
| .describe('Complexity threshold for auto-expansion (default: 7).') | |
| }), | |
| .describe('Append generated tasks to existing file.'), | |
| auto: z | |
| .boolean() | |
| .optional() | |
| .default(false) | |
| .describe('Automatically analyze complexity and expand high-complexity tasks after PRD parsing.'), | |
| autoThreshold: z | |
| .preprocess(v => { | |
| if (v === undefined || v === null || v === '') return 7; | |
| const n = Number(v); | |
| return Number.isFinite(n) ? n : 7; | |
| }, z.number().min(1).max(10)) | |
| .optional() | |
| .default(7) | |
| .describe('Complexity threshold for auto-expansion (default: 7).') | |
| }), |
🤖 Prompt for AI Agents
In mcp-server/src/tools/parse-prd.js around lines 66 to 81, the autoThreshold
schema currently allows 0 and values >10; change the preprocessing and
validation so numeric inputs are constrained to the inclusive range [1,10] while
preserving numeric boundary values: in the preprocess, coerce
empty/undefined/null to 7, coerce to Number, then if finite clamp the value to
the range 1–10 (e.g., Math.max(1, Math.min(10, n))); update the z.number()
validator to .min(1).max(10) and keep .default(7) so non-numeric or missing
inputs still default to 7.
| // Handle auto-expansion if enabled and PRD parsing was successful | ||
| if (args.auto && result.success && result.data) { | ||
| try { | ||
| log.info('Running automatic complexity analysis and expansion...'); | ||
|
|
||
| // Import the auto workflow function | ||
| const { runAutoComplexityExpansion } = await import('../../../../scripts/modules/task-manager/auto-complexity-expansion.js'); | ||
|
|
||
| const tasksPath = result.data.tasksPath || result.data.outputPath; | ||
| if (!tasksPath) { | ||
| log.warn('Auto-expansion skipped: tasksPath not found in parse result.'); | ||
| } else { | ||
| const parsed = Number(args.autoThreshold); | ||
| const threshold = Number.isFinite(parsed) ? parsed : 7; | ||
| const autoResult = await runAutoComplexityExpansion({ | ||
| tasksPath, | ||
| threshold, | ||
| research: !!args.research, | ||
| projectRoot: args.projectRoot, | ||
| tag: resolvedTag | ||
| }); | ||
| // Add auto-expansion results to the response | ||
| result.data.autoExpansion = autoResult; | ||
| log.info(`Auto-expansion completed: ${autoResult.expandedTasks} tasks expanded`); | ||
| } | ||
|
|
||
| } catch (autoError) { | ||
| log.warn(`Auto-expansion failed: ${autoError.message}`); | ||
| // Don't fail the entire operation, just log the auto-expansion failure | ||
| result.data.autoExpansionError = autoError.message; | ||
| } | ||
| } |
🛠️ Refactor suggestion
Unify auto-expansion result shape on error.
Expose a consistent result object instead of mixing autoExpansion and autoExpansionError.
Apply:
- } catch (autoError) {
- log.warn(`Auto-expansion failed: ${autoError.message}`);
- // Don't fail the entire operation, just log the auto-expansion failure
- result.data.autoExpansionError = autoError.message;
- }
+ } catch (autoError) {
+ log.warn(`Auto-expansion failed: ${autoError.message}`);
+ result.data.autoExpansion = {
+ success: false,
+ error: autoError.message,
+ errorType: autoError.name,
+ errorStack: autoError.stack
+ };
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
// Handle auto-expansion if enabled and PRD parsing was successful
if (args.auto && result.success && result.data) {
	try {
		log.info('Running automatic complexity analysis and expansion...');
		// Import the auto workflow function
		const { runAutoComplexityExpansion } = await import('../../../../scripts/modules/task-manager/auto-complexity-expansion.js');
		const tasksPath = result.data.tasksPath || result.data.outputPath;
		if (!tasksPath) {
			log.warn('Auto-expansion skipped: tasksPath not found in parse result.');
		} else {
			const parsed = Number(args.autoThreshold);
			const threshold = Number.isFinite(parsed) ? parsed : 7;
			const autoResult = await runAutoComplexityExpansion({
				tasksPath,
				threshold,
				research: !!args.research,
				projectRoot: args.projectRoot,
				tag: resolvedTag
			});
			// Add auto-expansion results to the response
			result.data.autoExpansion = autoResult;
			log.info(`Auto-expansion completed: ${autoResult.expandedTasks} tasks expanded`);
		}
	} catch (autoError) {
		log.warn(`Auto-expansion failed: ${autoError.message}`);
		result.data.autoExpansion = {
			success: false,
			error: autoError.message,
			errorType: autoError.name,
			errorStack: autoError.stack
		};
	}
}
```
🤖 Prompt for AI Agents
mcp-server/src/tools/parse-prd.js around lines 102 to 133: the code currently
sets result.data.autoExpansion on success but sets
result.data.autoExpansionError on failure, creating inconsistent response
shapes; always ensure result.data exists then replace both branches to set a
single result.data.autoExpansion object with a consistent shape (e.g. { success:
true, expandedTasks: number, details: {...} } on success and { success: false,
error: string } on failure), remove any separate autoExpansionError assignment,
and update the success log to read from the new object.
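A compact sketch of what normalizing both branches to one shape could look like, per the prompt above (it assumes `autoResult` already carries `expandedTasks`, as in the quoted code; this is not the committed implementation):

```js
// Illustrative sketch: one consistent result.data.autoExpansion shape for both branches.
try {
	const autoResult = await runAutoComplexityExpansion({
		tasksPath,
		threshold,
		research: !!args.research,
		projectRoot: args.projectRoot,
		tag: resolvedTag
	});
	result.data.autoExpansion = { success: true, ...autoResult };
	log.info(`Auto-expansion completed: ${result.data.autoExpansion.expandedTasks} tasks expanded`);
} catch (autoError) {
	log.warn(`Auto-expansion failed: ${autoError.message}`);
	result.data.autoExpansion = { success: false, error: autoError.message };
}
```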
```
🚀 Starting automated task execution pipeline...
📋 Parsing PRD and generating tasks...
✅ Generated 8 tasks from PRD
🔍 Analyzing task complexity and expanding...
✅ Expanded tasks. Total tasks: 12
🤖 Running cursor-agent on 12 pending tasks...
📊 Task 1 status: in-progress
📊 Task 2 status: done
📊 Task 3 status: in-progress
📊 New task detected: Add error handling for API failures
🎉 Automated execution completed successfully!
📊 Execution Statistics:
• Total tasks: 12
• Completed: 10
• Failed: 1
• New tasks added: 2
• Iterations: 3
• Duration: 4m 32s
```
🧹 Nitpick (assertive)
Specify language for fenced code block.
Fixes MD040.
Apply:
-```
+```text
🚀 Starting automated task execution pipeline...
...
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
263-263: Fenced code blocks should have a language specified (MD040, fenced-code-language)
🤖 Prompt for AI Agents
In README.md around lines 263 to 287, the fenced code block is missing a language spec, which triggers MD040; update the opening fence to specify a language such as `text` (or another appropriate label) so the block is a fenced code block with a language, keep the block content unchanged (including the leading emoji/spacing as desired), and ensure the closing fence remains; commit the change to satisfy MD040.
```js
async function isCursorAgentAvailable() {
	return new Promise((resolve) => {
		const process = spawn('cursor-agent', ['--version'], {
			stdio: ['ignore', 'pipe', 'pipe']
		});

		process.on('error', () => {
			resolve(false);
		});

		process.on('close', (code) => {
			resolve(code === 0);
		});

		// Timeout after 5 seconds
		setTimeout(() => {
			process.kill();
			resolve(false);
		}, 5000);
	});
}
```
🧹 Nitpick (assertive)
Avoid shadowing global process and resolve-timeout race.
Naming the child process variable process is confusing and kill() may fire after close. Use a different name and clear the timeout.
```diff
- return new Promise((resolve) => {
-   const process = spawn('cursor-agent', ['--version'], {
+ return new Promise((resolve) => {
+   const child = spawn('cursor-agent', ['--version'], {
      stdio: ['ignore', 'pipe', 'pipe']
    });
-
-   process.on('error', () => {
+   let to;
+   child.on('error', () => {
      resolve(false);
    });
-
-   process.on('close', (code) => {
+   child.on('close', (code) => {
+     if (to) clearTimeout(to);
      resolve(code === 0);
    });
-
-   // Timeout after 5 seconds
-   setTimeout(() => {
-     process.kill();
-     resolve(false);
-   }, 5000);
+
+   // Timeout after 5 seconds
+   to = setTimeout(() => {
+     try { child.kill(); } catch {}
+     resolve(false);
+   }, 5000);
  });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
async function isCursorAgentAvailable() {
	return new Promise((resolve) => {
		const child = spawn('cursor-agent', ['--version'], {
			stdio: ['ignore', 'pipe', 'pipe']
		});
		let to;
		child.on('error', () => {
			resolve(false);
		});
		child.on('close', (code) => {
			if (to) clearTimeout(to);
			resolve(code === 0);
		});
		// Timeout after 5 seconds
		to = setTimeout(() => {
			try { child.kill(); } catch {}
			resolve(false);
		}, 5000);
	});
}
```
🤖 Prompt for AI Agents
In scripts/modules/cursor-agent.js around lines 25 to 45, the child process is
named process (shadowing global process) and the timeout can race with the child
exiting; change the variable name to something like child or childProcess, store
the timeout id returned by setTimeout, and in both the 'error' and 'close'
handlers call clearTimeout(timeoutId) before resolving; in the timeout handler
check that the child hasn't already exited (e.g., child.exitCode === null)
before calling child.kill(), then resolve(false); also remove or guard duplicate
resolves by ensuring handlers run only once (for example, set a local settled
flag or rely on clearTimeout to prevent the timeout handler from firing after
close/error).
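A variant with an explicit settled flag, roughly as the prompt above describes. This is only a sketch under stated assumptions (the helper name and 5-second timeout mirror the snippet above; it is not the code in this PR):

```js
// Sketch only: settled-flag variant of the availability check described above.
import { spawn } from 'child_process';

function isCursorAgentAvailable() {
	return new Promise((resolve) => {
		const child = spawn('cursor-agent', ['--version'], {
			stdio: ['ignore', 'pipe', 'pipe']
		});
		let settled = false;
		let timeoutId;
		const settle = (value) => {
			if (settled) return; // guard against duplicate resolves
			settled = true;
			clearTimeout(timeoutId);
			resolve(value);
		};
		child.on('error', () => settle(false));
		child.on('close', (code) => settle(code === 0));
		timeoutId = setTimeout(() => {
			if (child.exitCode === null) child.kill(); // only kill if still running
			settle(false);
		}, 5000);
	});
}
```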
```js
import fs from 'fs';
import path from 'path';
import os from 'os';
import { spawn } from 'child_process';
```
🧹 Nitpick (assertive)
Remove unused import.
spawn is not used.
Apply:
```diff
-import { spawn } from 'child_process';
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
`import { spawn } from 'child_process';`
🤖 Prompt for AI Agents
In tests/integration/parse-prd-auto-workflow.test.js around line 17, the file
imports spawn from 'child_process' but never uses it; remove the unused import
statement (delete "import { spawn } from 'child_process';") to eliminate the
dead dependency and any linter warnings.
```js
it('should continue with PRD parsing if auto-expansion fails', async () => {
	const { parsePRD } = await import('../../scripts/modules/task-manager/parse-prd/parse-prd.js');

	// Mock auto-expansion failure
	const { runAutoComplexityExpansion } = await import('../../scripts/modules/task-manager/auto-complexity-expansion.js');
	runAutoComplexityExpansion.mockRejectedValue(new Error('Auto-expansion failed'));

	const result = await parsePRD(testPRDPath, tasksPath, 3, {
		force: true,
		auto: true,
		autoThreshold: '7',
		mcpLog: {
			info: jest.fn(),
			warn: jest.fn(),
			error: jest.fn(),
			debug: jest.fn(),
			success: jest.fn()
		},
		projectRoot: tempDir
	});

	// PRD parsing should still succeed
	expect(result.success).toBe(true);
	expect(result.tasksPath).toBe(tasksPath);

	// Auto-expansion should be marked as failed
	expect(result.autoExpansion).toBeUndefined();
});
```
Align expectation with implementation: autoExpansion returns an object on failure.
parse-prd sets autoExpansion to { success: false, … } when auto-expansion fails; test expects undefined.
Apply:
```diff
- // Auto-expansion should be marked as failed
- expect(result.autoExpansion).toBeUndefined();
+ // Auto-expansion should be marked as failed but still present
+ expect(result.autoExpansion).toBeDefined();
+ expect(result.autoExpansion.success).toBe(false);
+ expect(result.autoExpansion.error).toBeDefined();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
it('should continue with PRD parsing if auto-expansion fails', async () => {
	const { parsePRD } = await import('../../scripts/modules/task-manager/parse-prd/parse-prd.js');
	// Mock auto-expansion failure
	const { runAutoComplexityExpansion } = await import('../../scripts/modules/task-manager/auto-complexity-expansion.js');
	runAutoComplexityExpansion.mockRejectedValue(new Error('Auto-expansion failed'));
	const result = await parsePRD(testPRDPath, tasksPath, 3, {
		force: true,
		auto: true,
		autoThreshold: '7',
		mcpLog: {
			info: jest.fn(),
			warn: jest.fn(),
			error: jest.fn(),
			debug: jest.fn(),
			success: jest.fn()
		},
		projectRoot: tempDir
	});
	// PRD parsing should still succeed
	expect(result.success).toBe(true);
	expect(result.tasksPath).toBe(tasksPath);
	// Auto-expansion should be marked as failed but still present
	expect(result.autoExpansion).toBeDefined();
	expect(result.autoExpansion.success).toBe(false);
	expect(result.autoExpansion.error).toBeDefined();
});
```
🤖 Prompt for AI Agents
In tests/integration/parse-prd-auto-workflow.test.js around lines 448 to 475,
the test assumes parsePRD sets result.autoExpansion to undefined on
auto-expansion failure but the implementation returns an object { success:
false, ... }; update the assertions to reflect that by checking
result.autoExpansion is an object and result.autoExpansion.success is false (and
optionally check any error/message fields if needed) instead of expecting
undefined.
```js
it('should handle null token values', () => {
	const result = provider.prepareTokenParam('test-model', null);
	expect(result).toEqual({ maxTokens: 0 });
});
```
🧹 Nitpick (assertive)
Unify null/undefined handling for maxTokens
Prefer treating null like undefined (omit param) to avoid sending maxTokens: 0 downstream.
```diff
-const result = provider.prepareTokenParam('test-model', null);
-expect(result).toEqual({ maxTokens: 0 });
+const result = provider.prepareTokenParam('test-model', null);
+expect(result).toEqual({});
```

If the provider intentionally floors null to 0, keep the test and document the rationale.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
it('should handle null token values', () => {
	const result = provider.prepareTokenParam('test-model', null);
	expect(result).toEqual({});
});
```
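On the provider side, a minimal sketch of what "treat null like undefined" could look like (illustrative only; the actual `prepareTokenParam` implementation is not shown in this comment, and the flooring behaviour below is an assumption):

```js
// Hypothetical sketch, not the LM Studio provider's committed code.
// Null and undefined are treated the same: the token parameter is omitted.
function prepareTokenParam(modelId, maxTokens) {
	if (maxTokens == null) return {}; // covers both null and undefined
	return { maxTokens: Math.floor(maxTokens) };
}
```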
```js
it('should return false on timeout', async () => {
	// Mock timeout scenario
	mockProcess.on.mockImplementation((event, callback) => {
		// Don't call any callbacks to simulate timeout
	});

	const result = await isCursorAgentAvailable();
	expect(result).toBe(false);
	expect(mockProcess.kill).toHaveBeenCalled();
});
```
Make timeout test deterministic with fake timers
Without fake timers, this test may hang waiting for real timeouts; use Jest timers and advance them.
```diff
- it('should return false on timeout', async () => {
-   // Mock timeout scenario
-   mockProcess.on.mockImplementation((event, callback) => {
-     // Don't call any callbacks to simulate timeout
-   });
-
-   const result = await isCursorAgentAvailable();
-   expect(result).toBe(false);
-   expect(mockProcess.kill).toHaveBeenCalled();
- });
+ it('should return false on timeout', async () => {
+   jest.useFakeTimers();
+   // Mock no events fired
+   mockProcess.on.mockImplementation(() => {});
+   const promise = isCursorAgentAvailable();
+   // Fast-forward internal timeout
+   jest.runOnlyPendingTimers();
+   const result = await promise;
+   expect(result).toBe(false);
+   expect(mockProcess.kill).toHaveBeenCalled();
+   jest.useRealTimers();
+ });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
it('should return false on timeout', async () => {
	jest.useFakeTimers();
	// Mock no events fired
	mockProcess.on.mockImplementation(() => {});
	const promise = isCursorAgentAvailable();
	// Fast-forward internal timeout
	jest.runOnlyPendingTimers();
	const result = await promise;
	expect(result).toBe(false);
	expect(mockProcess.kill).toHaveBeenCalled();
	jest.useRealTimers();
});
```
🤖 Prompt for AI Agents
In tests/unit/cursor-agent.test.js around lines 128 to 137, the timeout test can
hang because it relies on real timers; change the test to use Jest fake timers
(jest.useFakeTimers()), trigger the code that sets the timeout, advance the
timers (jest.advanceTimersByTime or jest.runAllTimers) to simulate the timeout,
await any pending promises if needed, then assert result is false and that
mockProcess.kill was called, and finally restore real timers with
jest.useRealTimers().
```js
jest.spyOn(console, 'log').mockImplementation(() => {});
jest.spyOn(console, 'error').mockImplementation(() => {});
```
🧹 Nitpick (assertive)
Restore console spies after suite.
Prevents leakage into other tests.
Apply:
```diff
 // Mock console methods to prevent output during tests
 jest.spyOn(console, 'log').mockImplementation(() => {});
 jest.spyOn(console, 'error').mockImplementation(() => {});
+
+afterAll(() => {
+  console.log.mockRestore?.();
+  console.error.mockRestore?.();
+});
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
jest.spyOn(console, 'log').mockImplementation(() => {});
jest.spyOn(console, 'error').mockImplementation(() => {});

afterAll(() => {
	console.log.mockRestore?.();
	console.error.mockRestore?.();
});
```
🤖 Prompt for AI Agents
In tests/unit/task-manager/auto-complexity-expansion.test.js around lines 51-53,
the console.log and console.error spies are created but not restored, which can
leak into other tests; add teardown to restore them (e.g., call mockRestore on
both spies in an afterAll or afterEach block) so the original console methods
are reinstated once the test suite finishes.
please rebase and review the coderabbit comments; I'm going to be reviewing this shortly!
Looks like a cool PR, but first of all, it feels like we're trying to do too many features in one PR. You added something called "auto complexity expansion", which is a cool feature but should be its own PR.
And then LM Studio is a separate PR.
Also, we just moved to AI SDK v5; we're about to merge that PR #1246 into next soon. I think you should maybe target your PR there, and also rebase so that you're using AI SDK v5.
Also, LM Studio has its own provider on AI SDK v5; maybe you can use that instead?
https://ai-sdk.dev/providers/openai-compatible-providers/lmstudio
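For reference, a rough sketch of what that could look like with the AI SDK's OpenAI-compatible provider pointed at LM Studio (the model id and base URL below are illustrative examples, not values taken from this PR; see the linked docs for the exact setup):

```js
// Sketch only: LM Studio via the AI SDK v5 OpenAI-compatible provider.
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';
import { generateText } from 'ai';

const lmstudio = createOpenAICompatible({
	name: 'lmstudio',
	baseURL: 'http://127.0.0.1:1234/v1' // LM Studio's default local server
});

const { text } = await generateText({
	model: lmstudio('llama-3.1-8b-instruct'), // any model loaded in LM Studio
	prompt: 'Summarize this PRD in three bullet points.'
});
```

That would let the integration lean on a maintained provider package, with Task Master only wiring the base URL through its config.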
What type of PR is this?
Description
This PR adds comprehensive LM Studio integration to Task Master, enabling users to run AI models locally without requiring API keys. The integration includes:
- Configurable LM Studio base URL (default: http://127.0.0.1:1234/v1)

Related Issues
N/A - New feature implementation
How to Test This
Expected result:
Contributor Checklist
- npm run changeset
- npm test
- npm run format-check (or npm run format to fix)

Changelog Entry
Added LM Studio integration for local AI model usage without API keys
For Maintainers
Additional Details
Files Added/Modified:
- src/ai-providers/lmstudio.js - Complete LM Studio provider implementation
- docs/lmstudio-integration.md - Comprehensive integration guide
- tests/unit/ai-providers/lmstudio.test.js - Full test coverage
- Updated supported-models.json with LM Studio models

Key Features:
Testing Coverage:
This integration expands Task Master's capabilities by enabling LM Studio usage.
Summary by CodeRabbit
New Features
Documentation
Chores
Tests