
Conversation

@Crunchyman-ralph (Collaborator) commented Oct 14, 2025

  • Add azure, openrouter, bedrock, and ollama to VALIDATED_PROVIDERS array
  • Add Azure reasoning models (GPT-5, o1, o3, o3-mini, o4-mini) to supported-models.json
  • Implement automatic API endpoint detection for Azure reasoning models
  • Add dual endpoint support (chat/completions vs responses) in AzureProvider
  • Add smart URL adjustment logic for different Azure configurations
  • Maintain backward compatibility with existing Azure setups
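The detection flow those bullets describe can be sketched as follows (a minimal illustration, not the actual AzureProvider code; the metadata shape and function names are assumed for the example):

```javascript
// Illustrative model metadata (shape assumed from the PR description).
const AZURE_MODELS = [
  { id: 'gpt-5', api_type: 'responses' },
  { id: 'o3-mini', api_type: 'responses' },
  { id: 'gpt-4o', api_type: 'chat' }
];

// A reasoning model is one whose metadata requests the Responses API.
function isReasoningModel(modelId) {
  const def = AZURE_MODELS.find((m) => m.id === modelId);
  return def?.api_type === 'responses';
}

// Endpoint selection: reasoning models use /responses, everything else
// keeps the classic chat/completions path.
function pickEndpoint(modelId) {
  return isReasoningModel(modelId) ? '/responses' : '/chat/completions';
}

console.log(pickEndpoint('o3-mini')); // /responses
console.log(pickEndpoint('gpt-4o')); // /chat/completions
```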

Fixes #638

🤖 Generated with Claude Code

What type of PR is this?

  • 🐛 Bug fix
  • ✨ Feature
  • 🔌 Integration
  • 📝 Docs
  • 🧹 Refactor
  • Other:

Description

Related Issues

How to Test This

# Example commands or steps

Expected result:

Contributor Checklist

  • Created changeset: npm run changeset
  • Tests pass: npm test
  • Format check passes: npm run format-check (or npm run format to fix)
  • Addressed CodeRabbit comments (if any)
  • Linked related issues (if any)
  • Manually tested the changes

Changelog Entry


For Maintainers

  • PR title follows conventional commits
  • Target branch correct
  • Labels added
  • Milestone assigned (if applicable)

Summary by CodeRabbit

  • New Features

    • Added support for new Azure OpenAI models: gpt-5, o1, o3, o3-mini, and o4-mini, with consistent max token limits (up to 100k).
  • Improvements

    • Automatic endpoint selection for reasoning models using the Responses API, reducing setup and errors.
    • Expanded provider validation to include Azure, OpenRouter, Bedrock, and Ollama for more reliable compatibility checks.


Co-authored-by: Ralph Khreish <Crunchyman-ralph@users.noreply.github.com>
@changeset-bot

changeset-bot bot commented Oct 14, 2025

⚠️ No Changeset found

Latest commit: 59ee1e7

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.


coderabbitai bot (Contributor) commented Oct 14, 2025

Walkthrough

Adds new Azure OpenAI reasoning model entries to the model registry, updates the Azure provider to auto-select the Responses API endpoint based on model metadata, and expands the validated provider list to include Azure, OpenRouter, Bedrock, and Ollama.

Changes

  • Model registry updates (scripts/modules/supported-models.json): Added five Azure OpenAI models (gpt-5, o1, o3, o3-mini, o4-mini) with api_type: "responses", max_tokens: 100000, cost fields, roles, and swe_score. gpt-5 includes temperature: 1. No existing entries changed.
  • Azure provider endpoint routing logic (src/ai-providers/azure.js): Imported the model metadata map; added isReasoningModel(modelId); added adjustBaseURL(baseURL, modelId) to route reasoning models to /responses; updated getClient(params) to accept modelId and use the adjusted base URL with the existing error handling.
  • Provider validation list expansion (src/constants/providers.js): Expanded VALIDATED_PROVIDERS to include 'azure', 'openrouter', 'bedrock', and 'ollama' ('mistral' remains).

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Caller as Caller
  participant AzureProvider as AzureProvider
  participant ModelMap as Supported Models JSON
  participant AzureSDK as createAzure()

  Caller->>AzureProvider: getClient({ baseURL, modelId, ... })
  AzureProvider->>ModelMap: Lookup modelId metadata
  ModelMap-->>AzureProvider: api_type (e.g., "responses" or other)
  alt Reasoning model (api_type = "responses")
    AzureProvider->>AzureProvider: adjustBaseURL(baseURL) → ensure /responses
  else Non-reasoning model
    AzureProvider->>AzureProvider: Leave baseURL unchanged
  end
  AzureProvider->>AzureSDK: createAzure({ baseURL: adjustedBaseURL, ... })
  AzureSDK-->>AzureProvider: Client
  AzureProvider-->>Caller: Client
  note over AzureProvider,AzureSDK: Errors follow existing handling path

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Linked Issues Check ⚠️ Warning: The pull request implements the core coding fix for issue #638 by adding Azure to the validated providers and enabling reasoning model endpoint detection, but it does not include the requested documentation updates for configuring Azure via MCP and CLI. Resolution: add or update the relevant documentation files to show how to configure Azure as a provider through both MCP and the CLI.
  • Out of Scope Changes Check ⚠️ Warning: While the changes correctly address Azure support, the addition of openrouter, bedrock, and ollama to the VALIDATED_PROVIDERS list falls outside the scope of the linked issue #638, which only requested Azure provider support. Resolution: move the openrouter, bedrock, and ollama provider additions into a separate pull request so this change set focuses solely on Azure support.
✅ Passed checks (3 passed)

  • Description Check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: The title "fix: support Azure provider with reasoning models" succinctly and accurately reflects the main change of adding Azure provider support and reasoning model endpoint logic, giving clear context without extra noise.
  • Docstring Coverage ✅ Passed: No functions found in the changes; docstring coverage check skipped.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch claude/issue-638-20251014-0756

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a98d96e and 59ee1e7.

📒 Files selected for processing (3)
  • scripts/modules/supported-models.json (1 hunks)
  • src/ai-providers/azure.js (2 hunks)
  • src/constants/providers.js (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.js

📄 CodeRabbit inference engine (.cursor/rules/tests.mdc)

**/*.js: Declare and initialize global variables at the top of modules to avoid hoisting issues.
Use proper function declarations to avoid hoisting issues and initialize variables before they are referenced.
Do not reference variables before their declaration in module scope.
Use dynamic imports (import()) to avoid initialization order issues in modules.

Files:

  • src/constants/providers.js
  • src/ai-providers/azure.js
src/ai-providers/*.js

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

src/ai-providers/*.js: Create a new provider module file in src/ai-providers/ named .js when adding a new AI provider.
Provider modules must export three functions: generateText, streamText, and generateObject.
Provider modules must import the provider's create function from @ai-sdk/, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.
Implement generateText, streamText, and generateObject functions in provider modules with basic validation and try/catch error handling.

Provider-specific wrappers for Vercel AI SDK functions must be implemented in src/ai-providers/*.js, each file corresponding to a provider.

Files:

  • src/ai-providers/azure.js
scripts/modules/supported-models.json

📄 CodeRabbit inference engine (.cursor/rules/ai_providers.mdc)

Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Files:

  • scripts/modules/supported-models.json
scripts/modules/**

📄 CodeRabbit inference engine (.cursor/rules/dev_workflow.mdc)

When using the MCP server, restart it if core logic in scripts/modules or MCP tool/direct function definitions change.

Files:

  • scripts/modules/supported-models.json
scripts/modules/*

📄 CodeRabbit inference engine (.cursor/rules/tags.mdc)

scripts/modules/*: Every command that reads or writes tasks.json must be tag-aware
All command files must import getCurrentTag from utils.js
Every CLI command that operates on tasks must include the --tag CLI option
All commands must resolve the tag using the pattern: options.tag || getCurrentTag(projectRoot) || 'master'
All commands must find projectRoot with error handling before proceeding
All commands must pass { projectRoot, tag } as context to core functions
MCP direct functions must accept and use a context object containing projectRoot and tag, and pass them to core functions
Do not hard-code tag resolution (e.g., const tag = options.tag || 'master';); always use getCurrentTag
Do not omit the --tag CLI option in commands that operate on tasks
Do not omit the context parameter when calling core functions from commands
Do not call readJSON or writeJSON without passing projectRoot and tag

Files:

  • scripts/modules/supported-models.json
🧠 Learnings (3)
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/config-manager.js : Update scripts/modules/config-manager.js to add the new provider to MODEL_MAP, ensure it is included in VALID_PROVIDERS, and update API key handling logic.

Applied to files:

  • src/constants/providers.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to src/ai-providers/*.js : Provider modules must import the provider's create<ProviderName> function from ai-sdk/<provider-name>, and import generateText, streamText, generateObject from the core ai package, as well as the log utility from ../../scripts/modules/utils.js.

Applied to files:

  • src/ai-providers/azure.js
📚 Learning: 2025-07-18T17:06:04.909Z
Learnt from: CR
PR: eyaltoledano/claude-task-master#0
File: .cursor/rules/ai_providers.mdc:0-0
Timestamp: 2025-07-18T17:06:04.909Z
Learning: Applies to scripts/modules/supported-models.json : Add a new key for the provider and an array of model objects under it in scripts/modules/supported-models.json, including id, name, allowed_roles, and optionally swe_score, cost_per_1m_tokens, and max_tokens.

Applied to files:

  • scripts/modules/supported-models.json
🧬 Code graph analysis (1)
src/ai-providers/azure.js (2)
scripts/modules/ai-services-unified.js (2)
  • modelId (529-529)
  • baseURL (533-533)
scripts/modules/config-manager.js (1)
  • modelId (703-703)
🪛 GitHub Actions: CI
src/ai-providers/azure.js

[error] 8-9: Formatter would have printed content with import MODEL_MAP from '../../scripts/modules/supported-models.json' with { type: 'json' } - formatting mismatch detected by biome format. Run 'biome format .' to fix.


[error] 46-49: Formatter would have printed content for: const modelDef = azureModels.find(m => m.id === modelId); - ensure spacing and arrow function formatting matches formatter output. Run 'biome format .' to fix.

🔇 Additional comments (2)
src/ai-providers/azure.js (2)

90-100: AzureProvider.getClient always receives modelId. All invocations (via ai-services-unified) pass modelId from config, so the reasoning-URL switch is safe.

Likely an incorrect or invalid review comment.


8-8: No Biome formatting errors remain; the JSON import already adheres to the formatter's style.

Comment on lines +1001 to 1061
{
  "id": "gpt-5",
  "swe_score": 0.749,
  "cost_per_1m_tokens": {
    "input": 5.0,
    "output": 20.0
  },
  "allowed_roles": ["main", "fallback"],
  "max_tokens": 100000,
  "temperature": 1,
  "supported": true,
  "api_type": "responses"
},
{
  "id": "o1",
  "swe_score": 0.489,
  "cost_per_1m_tokens": {
    "input": 15.0,
    "output": 60.0
  },
  "allowed_roles": ["main"],
  "max_tokens": 100000,
  "supported": true,
  "api_type": "responses"
},
{
  "id": "o3",
  "swe_score": 0.5,
  "cost_per_1m_tokens": {
    "input": 2.0,
    "output": 8.0
  },
  "allowed_roles": ["main", "fallback"],
  "max_tokens": 100000,
  "supported": true,
  "api_type": "responses"
},
{
  "id": "o3-mini",
  "swe_score": 0.493,
  "cost_per_1m_tokens": {
    "input": 1.1,
    "output": 4.4
  },
  "allowed_roles": ["main"],
  "max_tokens": 100000,
  "supported": true,
  "api_type": "responses"
},
{
  "id": "o4-mini",
  "swe_score": 0.45,
  "cost_per_1m_tokens": {
    "input": 1.1,
    "output": 4.4
  },
  "allowed_roles": ["main", "fallback"],
  "max_tokens": 100000,
  "supported": true,
  "api_type": "responses"
}
Contributor

🧹 Nitpick | 🔵 Trivial

Azure reasoning models metadata looks good; add display names to align with guidelines

  • The new Azure entries correctly mark reasoning models with api_type: "responses" and include roles/max_tokens. Good.
  • Please add a human-friendly "name" for each model (e.g., "GPT‑5", "o3", "o3‑mini", "o4‑mini", "o1") to match our model schema expectations for provider entries. This improves UX and aligns with our JSON structure across providers.

As per coding guidelines
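For example, the o3-mini entry with a display name added would read (sketch; only the "name" field is new, and its exact wording is a suggestion — the other values are copied from the hunk above):

```json
{
  "id": "o3-mini",
  "name": "o3-mini",
  "swe_score": 0.493,
  "cost_per_1m_tokens": {
    "input": 1.1,
    "output": 4.4
  },
  "allowed_roles": ["main"],
  "max_tokens": 100000,
  "supported": true,
  "api_type": "responses"
}
```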

Comment on lines +41 to +50
/**
 * Determines if a model requires the responses API endpoint instead of chat/completions
 * @param {string} modelId - The model ID to check
 * @returns {boolean} True if the model needs the responses API
 */
isReasoningModel(modelId) {
  const azureModels = MODEL_MAP.azure || [];
  const modelDef = azureModels.find(m => m.id === modelId);
  return modelDef?.api_type === 'responses';
}
Contributor

🧹 Nitpick | 🔵 Trivial

Model detection may miss Azure deployments named differently; consider robust detection

isReasoningModel strictly matches modelId against MODEL_MAP.azure ids. Azure deployments often use custom deployment names, so this can return false even for reasoning models.

Consider:

  • Matching by normalized base model names (e.g., o1, o3, o3-mini, o4-mini, gpt-5) in addition to JSON lookup, or
  • Allow an explicit useResponses flag/override from config, or
  • Map deploymentName → model id in config and pass the canonical model id here.

At minimum, add a fallback heuristic:

const canonical = (modelId || '').toLowerCase();
return modelDef?.api_type === 'responses' || /^(gpt-5|o1|o3|o4)/.test(canonical);

Please confirm callers pass canonical model ids, not arbitrary deployment names. Based on learnings

🧰 Tools
🪛 GitHub Actions: CI

[error] 46-49: Formatter would have printed content for: const modelDef = azureModels.find(m => m.id === modelId); - ensure spacing and arrow function formatting matches formatter output. Run 'biome format .' to fix.

🤖 Prompt for AI Agents
In src/ai-providers/azure.js around lines 41 to 50, isReasoningModel only checks
MODEL_MAP.azure ids and can miss custom Azure deployment names; update the
function to first attempt the existing MODEL_MAP lookup, then fallback to a
normalized heuristic that lowercases modelId and tests for canonical reasoning
base names (e.g., gpt-5, o1, o3, o4 and their -mini variants) OR honor an
explicit useResponses flag/override from config (or a deploymentName→modelId
mapping) if provided; also add a short comment documenting the fallback and add
a TODO to verify callers supply canonical model ids or mapping.

Comment on lines +58 to +79
adjustBaseURL(baseURL, modelId) {
  if (!this.isReasoningModel(modelId)) {
    return baseURL;
  }

  // Convert chat/completions URL to responses URL for reasoning models
  if (baseURL.includes('/chat/completions')) {
    return baseURL.replace('/chat/completions', '/responses');
  }

  // If baseURL ends with deployments/<model-name>, add responses endpoint
  if (baseURL.includes('/deployments/')) {
    return baseURL.replace(/\/deployments\/[^\/]+$/, '/responses');
  }

  // If baseURL is just the base, ensure it ends with /responses
  if (!baseURL.endsWith('/responses')) {
    return baseURL.replace(/\/$/, '') + '/responses';
  }

  return baseURL;
}
Contributor

⚠️ Potential issue | 🔴 Critical

Bug: URL rewrite can corrupt query strings; append /responses before ?api-version

When baseURL includes query params (common in Azure, e.g., ?api-version=...), appending '/responses' via string concat can place it after the query, yielding invalid URLs. Also the /deployments/ regex only matches at end and misses cases with queries or trailing slashes.
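A quick demonstration of the hazard and of the URL-based fix (the resource name and api-version value are hypothetical):

```javascript
// Hypothetical Azure base URL carrying a query string.
const base = 'https://myres.openai.azure.com/openai/deployments/o3?api-version=2024-08-01';

// Naive concatenation lands the path segment after the query string:
console.log(base.replace(/\/$/, '') + '/responses');
// ...deployments/o3?api-version=2024-08-01/responses  (invalid)

// Parsing with the WHATWG URL API keeps the query in place:
const url = new URL(base);
url.pathname = url.pathname.replace(/\/+$/, '') + '/responses';
console.log(url.toString());
// https://myres.openai.azure.com/openai/deployments/o3/responses?api-version=2024-08-01
```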

Apply this safer, query-preserving rewrite:

-  adjustBaseURL(baseURL, modelId) {
-    if (!this.isReasoningModel(modelId)) {
-      return baseURL;
-    }
-
-    // Convert chat/completions URL to responses URL for reasoning models
-    if (baseURL.includes('/chat/completions')) {
-      return baseURL.replace('/chat/completions', '/responses');
-    }
-
-    // If baseURL ends with deployments/<model-name>, add responses endpoint
-    if (baseURL.includes('/deployments/')) {
-      return baseURL.replace(/\/deployments\/[^\/]+$/, '/responses');
-    }
-
-    // If baseURL is just the base, ensure it ends with /responses
-    if (!baseURL.endsWith('/responses')) {
-      return baseURL.replace(/\/$/, '') + '/responses';
-    }
-
-    return baseURL;
-  }
+  adjustBaseURL(baseURL, modelId) {
+    if (!this.isReasoningModel(modelId) || !baseURL) return baseURL;
+    try {
+      const url = new URL(baseURL);
+      // Normalize trailing slashes (don’t touch query/search)
+      let pathname = url.pathname.replace(/\/+$/, '');
+
+      // 1) Replace .../chat/completions with .../responses
+      pathname = pathname.replace(/\/chat\/completions$/, '/responses');
+
+      // 2) If path ends with /openai/deployments/<dep>, append /responses
+      if (/\/openai\/deployments\/[^/]+$/.test(pathname)) {
+        pathname = `${pathname}/responses`;
+      }
+      // 3) If path ends with /openai, append /responses
+      else if (/\/openai$/.test(pathname)) {
+        pathname = `${pathname}/responses`;
+      }
+
+      url.pathname = pathname;
+      return url.toString();
+    } catch {
+      // Fallback that preserves query string positioning
+      const qIdx = baseURL.indexOf('?');
+      const path = qIdx >= 0 ? baseURL.slice(0, qIdx) : baseURL;
+      const qs = qIdx >= 0 ? baseURL.slice(qIdx) : '';
+      let newPath = path.replace('/chat/completions', '/responses');
+      if (!/\/responses$/.test(newPath)) {
+        if (/\/openai\/deployments\/[^/]+$/.test(newPath) || /\/openai$/.test(newPath)) {
+          newPath = newPath.replace(/\/+$/, '') + '/responses';
+        }
+      }
+      return newPath + qs;
+    }
+  }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

adjustBaseURL(baseURL, modelId) {
  if (!this.isReasoningModel(modelId) || !baseURL) return baseURL;
  try {
    const url = new URL(baseURL);
    // Normalize trailing slashes (don’t touch query/search)
    let pathname = url.pathname.replace(/\/+$/, '');

    // 1) Replace .../chat/completions with .../responses
    pathname = pathname.replace(/\/chat\/completions$/, '/responses');

    // 2) If path ends with /openai/deployments/<dep>, append /responses
    if (/\/openai\/deployments\/[^/]+$/.test(pathname)) {
      pathname = `${pathname}/responses`;
    }
    // 3) If path ends with /openai, append /responses
    else if (/\/openai$/.test(pathname)) {
      pathname = `${pathname}/responses`;
    }

    url.pathname = pathname;
    return url.toString();
  } catch {
    // Fallback that preserves query string positioning
    const qIdx = baseURL.indexOf('?');
    const path = qIdx >= 0 ? baseURL.slice(0, qIdx) : baseURL;
    const qs = qIdx >= 0 ? baseURL.slice(qIdx) : '';
    let newPath = path.replace('/chat/completions', '/responses');
    if (!/\/responses$/.test(newPath)) {
      if (/\/openai\/deployments\/[^/]+$/.test(newPath) || /\/openai$/.test(newPath)) {
        newPath = newPath.replace(/\/+$/, '') + '/responses';
      }
    }
    return newPath + qs;
  }
}
🤖 Prompt for AI Agents
In src/ai-providers/azure.js around lines 58 to 79, the adjustBaseURL
implementation can corrupt query strings and misses deployments/ matches when
queries or trailing slashes exist; instead of naive string replaces, parse the
URL (or split on the first '?') to separate path and query, operate only on the
path: for reasoning models, if path contains '/chat/completions' replace that
segment with '/responses'; if path contains '/deployments/' replace a trailing
'/deployments/<name>' (allowing optional trailing slash) with '/responses';
otherwise ensure the path ends with '/responses' (avoiding duplicate slashes);
finally reattach the original query string (including the leading '?') and
return the reconstructed URL.

Comment on lines +14 to +18
'mistral',
'azure',
'openrouter',
'bedrock',
'ollama'
Contributor

⚠️ Potential issue | 🟠 Major

Avoid duplicates in ALL_PROVIDERS (azure/openrouter/bedrock/ollama appear twice)

Adding these to VALIDATED_PROVIDERS while they also exist in CUSTOM_PROVIDERS causes duplicates in ALL_PROVIDERS. Use a Set to ensure uniqueness.

Apply this diff:

 export const ALL_PROVIDERS = [
-  ...VALIDATED_PROVIDERS,
-  ...CUSTOM_PROVIDERS_ARRAY
-];
+  ...new Set([...VALIDATED_PROVIDERS, ...CUSTOM_PROVIDERS_ARRAY]),
+];

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In src/constants/providers.js around lines 14 to 18, VALIDATED_PROVIDERS entries
are being concatenated with CUSTOM_PROVIDERS which causes duplicates
(azure/openrouter/bedrock/ollama appear twice); replace the current
concatenation with a unique union using a Set (e.g. construct ALL_PROVIDERS =
Array.from(new Set([...VALIDATED_PROVIDERS, ...CUSTOM_PROVIDERS]))) and export
that deduplicated array so ALL_PROVIDERS contains each provider only once.

