feat: add MiniMax as direct LLM provider with M2.7 default#1457

Closed
octo-patch wants to merge 2 commits into vercel:main-2026-03-20 from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch octo-patch commented Mar 16, 2026

Summary

Add MiniMax as a direct LLM provider via the OpenAI-compatible API (@ai-sdk/openai), with MiniMax-M2.7 as the default model.

Changes

  • Add getMiniMaxProvider() factory using @ai-sdk/openai with custom base URL
  • Route minimax/* model IDs directly to MiniMax API (bypassing AI Gateway)
  • Register 4 MiniMax models:
    • MiniMax-M2.7 (default) – Latest flagship model with enhanced reasoning and coding
    • MiniMax-M2.7-highspeed – High-speed version for low-latency scenarios
    • MiniMax-M2.5 – Peak performance with 204K context window
    • MiniMax-M2.5-highspeed – Same performance as MiniMax-M2.5, optimized for faster responses
  • Add MiniMax provider logo in model selector UI
  • Update README with MiniMax setup instructions
  • Add integration tests (model list + API chat + streaming)
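The routing change above can be sketched as follows. The helper names (`isMiniMaxModel`, `toMiniMaxModelName`) and the base URL constant are illustrative assumptions, not the PR's actual code; only `getMiniMaxProvider` and the `minimax/` prefix convention come from the PR description. Check MiniMax's docs for the current OpenAI-compatible endpoint URL.

```typescript
// Assumed endpoint; verify against MiniMax's API documentation.
const MINIMAX_BASE_URL = "https://api.minimax.io/v1";

// True when a model ID should be routed directly to MiniMax
// instead of through the AI Gateway.
export function isMiniMaxModel(modelId: string): boolean {
  return modelId.startsWith("minimax/");
}

// Strips the provider prefix so "minimax/MiniMax-M2.7" becomes
// the bare API model name "MiniMax-M2.7".
export function toMiniMaxModelName(modelId: string): string {
  return modelId.slice("minimax/".length);
}

// The factory named in the PR would wrap @ai-sdk/openai's
// OpenAI-compatible client, roughly:
//
//   import { createOpenAI } from "@ai-sdk/openai";
//   export function getMiniMaxProvider(apiKey: string) {
//     return createOpenAI({ baseURL: MINIMAX_BASE_URL, apiKey });
//   }
```

Keeping the prefix check and the prefix strip as separate pure functions makes the gateway-bypass decision easy to unit-test without hitting the MiniMax API.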

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. This gives users access to MiniMax models alongside existing providers.

Configuration

Set the MINIMAX_API_KEY environment variable. Get your key at https://platform.minimax.io
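A minimal sketch of reading the key with a fail-fast check; the function name and error text are hypothetical, only the MINIMAX_API_KEY variable name and the signup URL come from this PR.

```typescript
// Reads the MiniMax API key from an environment map, throwing a
// clear error when it is unset. Pass process.env in application code.
export function getMiniMaxApiKey(
  env: Record<string, string | undefined>
): string {
  const key = env.MINIMAX_API_KEY;
  if (!key) {
    throw new Error(
      "MINIMAX_API_KEY is not set. Create a key at https://platform.minimax.io"
    );
  }
  return key;
}
```

Taking the environment map as a parameter rather than reading `process.env` directly keeps the check trivially testable.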

Testing

  • Integration tests passing (model list, basic chat, streaming)
  • All existing tests unaffected

- Add MiniMax-M2.5 and MiniMax-M2.5-highspeed models to the model selector
- Route MiniMax models directly via @ai-sdk/openai (OpenAI-compatible API)
  instead of through the AI Gateway, enabling users to use their own API key
- Add MINIMAX_API_KEY environment variable support
- Add MiniMax provider group in the model selector UI
- Add integration tests for MiniMax API
- Update README to mention MiniMax support

vercel bot commented Mar 16, 2026

@octo-patch is attempting to deploy a commit to Zeb Hermann's projects Team on Vercel.

A member of the Team first needs to authorize it.

- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to model list
- Place M2.7 models before existing M2.5 models
- Keep all previous models as alternatives
- Update integration tests to verify 4 models with M2.7 first
- Update README with M2.7 model descriptions
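The ordering described in the commit above (M2.7 models first, M2.5 kept as alternatives) could look like this as a registry; the shape and labels are an assumption, while the four model IDs are taken from the PR description.

```typescript
type MiniMaxModel = { id: string; label: string };

// M2.7 entries placed before M2.5 so the flagship model is the
// default; older models remain selectable as alternatives.
export const miniMaxModels: MiniMaxModel[] = [
  { id: "minimax/MiniMax-M2.7", label: "MiniMax-M2.7" },
  { id: "minimax/MiniMax-M2.7-highspeed", label: "MiniMax-M2.7 Highspeed" },
  { id: "minimax/MiniMax-M2.5", label: "MiniMax-M2.5" },
  { id: "minimax/MiniMax-M2.5-highspeed", label: "MiniMax-M2.5 Highspeed" },
];

// First entry doubles as the default, matching the PR title.
export const defaultMiniMaxModelId = miniMaxModels[0].id;
```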
@octo-patch octo-patch changed the title feat: add MiniMax as a direct LLM provider feat: add MiniMax as direct LLM provider with M2.7 default Mar 18, 2026
@dancer dancer deleted the branch vercel:main-2026-03-20 March 20, 2026 11:23
@dancer dancer closed this Mar 20, 2026