
feat: add MiniMax provider support for voice agent LLM#15590

Open
octo-patch wants to merge 4 commits into NVIDIA-NeMo:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMaxService LLM provider class that connects to MiniMax's OpenAI-compatible API
  • Support MiniMax-M2.7 (default) and MiniMax-M2.7-highspeed models
  • Add minimax as a new backend option in get_llm_service_from_config
  • Add example config llm_configs/minimax.yaml for voice agent setup
  • Add unit tests for MiniMaxService and the factory function

Changes

nemo/agents/voice_agent/pipecat/services/nemo/llm.py

  • New MiniMaxService(OpenAILLMService) class:
    • Reads API key from MINIMAX_API_KEY environment variable (or explicit api_key)
    • Default base URL: https://api.minimax.io/v1 (overseas endpoint)
  • Temperature must be in the range (0.0, 1.0]; MiniMax does not support temperature=0
  • Updated get_llm_service_from_config to handle type: minimax
  • Updated assertion to include "minimax" in supported backends
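The validation described above can be sketched as follows. This is a hypothetical stand-in, not the PR's actual code: the helper name `build_minimax_kwargs` and its exact signature are illustrative, but the key resolution, default base URL, and temperature range match the behavior listed in the bullets.

```python
import os

# Default overseas endpoint, as described in the PR summary.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"


def build_minimax_kwargs(api_key=None, model="MiniMax-M2.7", temperature=1.0):
    """Hypothetical sketch of MiniMaxService's argument validation."""
    # Explicit api_key wins; otherwise fall back to the environment variable.
    key = api_key or os.environ.get("MINIMAX_API_KEY")
    if not key:
        raise ValueError("Set MINIMAX_API_KEY or pass api_key explicitly")
    # MiniMax rejects temperature=0, so the valid range is (0.0, 1.0].
    if not 0.0 < temperature <= 1.0:
        raise ValueError("temperature must be in (0.0, 1.0] for MiniMax")
    return {
        "api_key": key,
        "base_url": MINIMAX_BASE_URL,
        "model": model,
        "temperature": temperature,
    }
```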

examples/voice_agent/server/server_configs/llm_configs/minimax.yaml

New config file for using MiniMax as the voice agent LLM backend.
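A config of this shape might look like the sketch below. The field names are illustrative assumptions based on the summary (model, endpoint, temperature), not the file's verified contents:

```yaml
# Hypothetical sketch of llm_configs/minimax.yaml; field names are illustrative.
model: "MiniMax-M2.7"
base_url: "https://api.minimax.io/v1"
temperature: 0.7   # must be in (0.0, 1.0]; MiniMax rejects 0
```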

examples/voice_agent/tests/test_minimax_llm.py

Unit tests covering instantiation, env-var key resolution, missing key error, default URLs, supported models, and factory function routing.
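The key-resolution tests listed above could take roughly this shape. The `_resolve_key` helper below is a self-contained stand-in for `MiniMaxService`'s lookup logic, not the PR's actual test code:

```python
import os
import unittest


def _resolve_key(api_key=None):
    # Stand-in for MiniMaxService's key lookup: an explicit api_key wins,
    # otherwise fall back to the MINIMAX_API_KEY environment variable.
    key = api_key or os.environ.get("MINIMAX_API_KEY")
    if not key:
        raise ValueError("No MiniMax API key provided")
    return key


class TestKeyResolution(unittest.TestCase):
    def test_explicit_key_wins(self):
        self.assertEqual(_resolve_key(api_key="explicit"), "explicit")

    def test_env_var_fallback(self):
        os.environ["MINIMAX_API_KEY"] = "from-env"
        try:
            self.assertEqual(_resolve_key(), "from-env")
        finally:
            del os.environ["MINIMAX_API_KEY"]

    def test_missing_key_raises(self):
        os.environ.pop("MINIMAX_API_KEY", None)
        with self.assertRaises(ValueError):
            _resolve_key()
```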


Usage

Set MINIMAX_API_KEY environment variable, then configure the voice agent server:

llm:
  type: minimax
  model: "MiniMax-M2.7"
  model_config: "./server_configs/llm_configs/minimax.yaml"

