
feat: add MiniMax as first-class LLM provider with M2.7 default#753

Open
octo-patch wants to merge 2 commits into TheR1D:main from octo-patch:feature/add-minimax-provider

Conversation


octo-patch commented Mar 17, 2026

Summary

Add first-class MiniMax LLM provider support with the latest M2.7 model as default.

Changes

  • Auto-detect MINIMAX_API_KEY environment variable and route to MiniMax API
  • Temperature clamping for MiniMax models (0.0 → 0.01)
  • MiniMax-M2.7 as recommended default model
  • MiniMax-M2.7-highspeed for low-latency scenarios
  • MiniMax-M2.5 and MiniMax-M2.5-highspeed retained as alternatives
  • README documentation with usage examples and Docker support
  • 6 unit tests + 4 integration tests
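The auto-detection and routing described above might look like the following sketch. The base URL, function name, and fallback values are illustrative assumptions for this write-up, not necessarily what the PR implements:

```python
import os

# Assumed MiniMax OpenAI-compatible endpoint; illustrative only.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"

def resolve_provider() -> dict:
    """Route requests to MiniMax when MINIMAX_API_KEY is set.

    Falls back to OpenAI otherwise. All names and defaults here are
    hypothetical, sketched from the PR description.
    """
    minimax_key = os.environ.get("MINIMAX_API_KEY")
    if minimax_key:
        return {
            "api_key": minimax_key,
            "base_url": MINIMAX_BASE_URL,
            "default_model": "MiniMax-M2.7",
        }
    # No MiniMax key present: keep the existing OpenAI behavior.
    return {
        "api_key": os.environ.get("OPENAI_API_KEY", ""),
        "base_url": "https://api.openai.com/v1",
        "default_model": "gpt-4o",
    }
```

Because the check happens at call time, exporting `MINIMAX_API_KEY` is all a user would need to do to switch providers.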

Why

MiniMax-M2.7 is the latest flagship model with enhanced reasoning and coding capabilities. The OpenAI-compatible API makes integration seamless with ShellGPT's existing architecture.

Testing

  • All 6 unit tests passing
  • Integration tests verified with real MiniMax API

PR Bot and others added 2 commits March 17, 2026 17:23
Add first-class support for MiniMax LLM models (MiniMax-M2.5,
MiniMax-M2.5-highspeed) via their OpenAI-compatible API endpoint.

Changes:
- Auto-detect MINIMAX_API_KEY env var and configure the API endpoint
- Temperature clamping for MiniMax models (0.0 → 0.01) since MiniMax
  requires temperature in (0.0, 1.0]
- Documentation in README with usage examples and Docker setup
- 5 unit tests + 3 integration tests
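The clamping rule above can be sketched as a small helper. The function name is an assumption; the upper cap follows from the stated (0.0, 1.0] constraint rather than from the PR diff itself:

```python
def clamp_temperature(temperature: float) -> float:
    """Clamp a temperature into MiniMax's accepted range (0.0, 1.0].

    Values at or below 0.0 map to 0.01, as described in the commit;
    values above 1.0 are capped at 1.0. Hypothetical helper name.
    """
    if temperature <= 0.0:
        return 0.01
    return min(temperature, 1.0)
```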

Co-Authored-By: Octopus <liyuan851277048@icloud.com>
- Update default model from MiniMax-M2.5 to MiniMax-M2.7 in README examples
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed to available models list
- Update unit tests to use M2.7 as primary model
- Update integration tests to use M2.7 as primary model
- Add backward compatibility tests for MiniMax-M2.5
- Keep all previous models as available alternatives
octo-patch changed the title from "feat: add MiniMax as first-class LLM provider" to "feat: add MiniMax as first-class LLM provider with M2.7 default" on Mar 18, 2026
