
feat: add LLM provider preset system with MiniMax support #594

Open

octo-patch wants to merge 1 commit into groupultra:main from octo-patch:feat/add-minimax-provider

Conversation


@octo-patch octo-patch commented Mar 22, 2026

Summary

This PR adds a provider preset system to the LLM settings UI, making it easy to configure MiniMax and other OpenAI-compatible LLM providers.

Changes

  • packages/core/src/llm-providers.ts — New LLM_PROVIDERS registry with OpenAI and MiniMax presets (apiBase, defaultModel, available models), plus detectProviderFromApiBase() utility
  • apps/web/src/pages/settings.vue — Added provider selector dropdown to the LLM section; selecting a provider auto-fills API base URL and default model
  • apps/web/src/locales/{en,zh-CN}.json — i18n translations for the new "Custom" provider option
  • README.md + docs/README_EN.md — Updated docs to mention multi-provider support (OpenAI, MiniMax)
  • packages/core/src/__test__/llm-providers.test.ts — 13 unit tests covering provider presets, keys, and detection logic

MiniMax Provider Details

| Field | Value |
| --- | --- |
| API Base | https://api.minimax.io/v1 (OpenAI-compatible) |
| Default Model | MiniMax-M2.7 |
| Available Models | MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5, MiniMax-M2.5-highspeed |
| Context Window | Up to 1M tokens (M2.7), 204K tokens (M2.5-highspeed) |
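
For orientation, a minimal sketch of what the preset registry might look like. The MiniMax values come straight from the table above; the field names, preset key names, and the OpenAI defaults shown here are assumptions rather than the shipped code:

```ts
// Sketch only: actual field/key names in packages/core/src/llm-providers.ts may differ.
export interface LLMProviderPreset {
  name: string
  apiBase: string
  defaultModel: string
  models: string[] // the "available models" from the PR description
}

export const LLM_PROVIDERS = {
  openai: { // OpenAI defaults here are illustrative, not taken from the diff
    name: 'OpenAI',
    apiBase: 'https://api.openai.com/v1',
    defaultModel: 'gpt-4o-mini',
    models: ['gpt-4o', 'gpt-4o-mini'],
  },
  minimax: { // values from the table above
    name: 'MiniMax',
    apiBase: 'https://api.minimax.io/v1',
    defaultModel: 'MiniMax-M2.7',
    models: ['MiniMax-M2.7', 'MiniMax-M2.7-highspeed', 'MiniMax-M2.5', 'MiniMax-M2.5-highspeed'],
  },
} satisfies Record<string, LLMProviderPreset>

export type LLMProviderKey = keyof typeof LLM_PROVIDERS
export const LLM_PROVIDER_KEYS = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]
```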

How it works

The settings page now has a Provider dropdown at the top of the LLM section:

  • Selecting MiniMax auto-fills apiBase to https://api.minimax.io/v1 and model to MiniMax-M2.7
  • Selecting OpenAI auto-fills the OpenAI defaults
  • Selecting Custom keeps the fields editable for any OpenAI-compatible endpoint
  • The provider is detected from apiBase — no new stored field needed

No changes to the LLM call logic — MiniMax's API is fully OpenAI-compatible, so xsai handles it transparently.
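
Building on the registry sketch above, the detection step reduces to a single lookup (the shipped detectProviderFromApiBase in packages/core/src/llm-providers.ts may differ in signature):

```ts
// Map a stored apiBase back to a preset key; undefined means "Custom".
export function detectProviderFromApiBase(apiBase: string): LLMProviderKey | undefined {
  return LLM_PROVIDER_KEYS.find(key => LLM_PROVIDERS[key].apiBase === apiBase)
}

// Assuming the preset keys sketched above ('openai', 'minimax'):
detectProviderFromApiBase('https://api.minimax.io/v1') // -> 'minimax'
detectProviderFromApiBase('https://my-gateway.example.com/v1') // -> undefined, rendered as "Custom"
```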

Test plan

  • pnpm run test:run — 13 new unit tests pass, no regressions (the shape of these tests is sketched after this list)
  • pnpm run typecheck — All 14 typecheck tasks pass
  • pnpm run lint:fix — Clean lint
  • Manual: Open Settings → LLM section, verify provider dropdown appears
  • Manual: Select MiniMax, verify apiBase and model auto-fill
  • Manual: Switch to Custom, verify fields remain editable
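
To make the unit-test bullet concrete, here is the shape such tests might take (a sketch assuming Vitest and the key names guessed above; the actual 13 tests in llm-providers.test.ts will differ in detail):

```ts
import { describe, expect, it } from 'vitest'
import { detectProviderFromApiBase, LLM_PROVIDER_KEYS, LLM_PROVIDERS } from '../llm-providers'

describe('llm-providers', () => {
  it('detects MiniMax from its API base', () => {
    expect(detectProviderFromApiBase('https://api.minimax.io/v1')).toBe('minimax')
  })

  it('returns undefined for unrecognized endpoints', () => {
    expect(detectProviderFromApiBase('https://example.com/v1')).toBeUndefined()
  })

  it('keeps every default model inside its preset model list', () => {
    for (const key of LLM_PROVIDER_KEYS)
      expect(LLM_PROVIDERS[key].models).toContain(LLM_PROVIDERS[key].defaultModel)
  })
})
```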

Note

Low risk: primarily adds preset metadata and UI selection logic, with no changes to the LLM request/authorization flow. The main risk is misconfigured defaults leading to incorrect endpoints or models.

Overview
Adds an LLM provider preset registry in @tg-search/core (LLM_PROVIDERS, LLM_PROVIDER_KEYS, and detectProviderFromApiBase) with initial presets for OpenAI and MiniMax, and exports these from packages/core/src/index.ts.

Updates the Settings UI (apps/web/src/pages/settings.vue) to include a provider dropdown that detects the preset based on llm.apiBase and, when selected, auto-fills the LLM API base URL and default model (with a translated Custom option). Documentation is updated to mention multi-provider support, and new unit tests validate the presets and detection behavior.

Written by Cursor Bugbot for commit b442f60. This will update automatically on new commits.

Add a provider preset dropdown to the LLM settings UI, allowing users
to quickly configure MiniMax (or other OpenAI-compatible providers)
without manually entering API endpoints and model names.

- Add LLM_PROVIDERS registry with OpenAI and MiniMax presets
- Add provider selector to settings page that auto-fills apiBase/model
- Add detectProviderFromApiBase() utility for provider detection
- Add i18n translations for provider selector (en + zh-CN)
- Update README (zh-CN + EN) with multi-provider documentation
- Add 13 unit tests for provider presets and detection logic

MiniMax models: M2.7, M2.7-highspeed, M2.5, M2.5-highspeed
API endpoint: https://api.minimax.io/v1 (OpenAI-compatible)
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the LLM configuration experience by introducing a flexible provider preset system. Users can now easily select from predefined LLM providers like OpenAI and MiniMax, which automatically configure the necessary API base and default model settings. This change streamlines the setup process, reduces manual configuration errors, and lays the groundwork for supporting additional OpenAI-compatible LLM services in the future, making the application more adaptable to various LLM ecosystems.

Highlights

  • LLM Provider Preset System: A new system has been introduced to manage LLM provider configurations, simplifying the setup process for different services within the application's settings.
  • MiniMax Integration: Built-in support for MiniMax has been added, including its API base URL, default model, and a list of available models.
  • Dynamic Settings UI: A provider selector dropdown has been implemented in the LLM settings UI, which automatically populates the API base URL and default model based on the selected provider.
  • Internationalization Updates: English and Chinese localization files have been updated to include the new 'Custom' provider option for the settings UI.
  • Comprehensive Testing: New unit tests were added to ensure the correctness and robustness of the LLM provider detection and preset logic.
  • Documentation Updates: The main README.md and docs/README_EN.md files have been updated to reflect the new multi-provider support.




@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces a provider preset system for LLM settings, which is a great addition for user convenience. The implementation is solid, with new core logic, UI updates, and comprehensive unit tests. I've identified a few opportunities in apps/web/src/pages/settings.vue to refactor for better code reuse and type safety by leveraging more of the new utilities from the core package. My comments provide specific suggestions to address this.


const embeddingDimensions = Object.values([1536, 1024, 768])

const providerKeys = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]


medium

To avoid code duplication, it's better to use the LLM_PROVIDER_KEYS constant already exported from the @tg-search/core package instead of recreating the list of keys here. You'll need to add LLM_PROVIDER_KEYS to your imports from @tg-search/core.

const providerKeys = LLM_PROVIDER_KEYS

Comment on lines +33 to +38
const apiBase = accountSettings.value?.llm?.apiBase ?? ''
for (const key of providerKeys) {
  if (apiBase === LLM_PROVIDERS[key].apiBase)
    return key
}
return '' // custom


medium

This logic for detecting a provider from the API base URL is already implemented in the detectProviderFromApiBase function in @tg-search/core. To reduce code duplication and centralize the logic, you should use that utility function here. Remember to import detectProviderFromApiBase from @tg-search/core.

    const apiBase = accountSettings.value?.llm?.apiBase ?? ''
    return detectProviderFromApiBase(apiBase) ?? ''

Comment on lines 269 to 274
<input
  v-model="accountSettings.llm.model"
  type="text"
  placeholder="gpt-4o-mini"
  :placeholder="LLM_PROVIDERS[selectedProvider as LLMProviderKey]?.defaultModel ?? 'gpt-4o-mini'"
  class="h-10 w-full flex border border-input rounded-md bg-background px-3 py-2 text-sm ring-offset-background disabled:cursor-not-allowed file:border-0 file:bg-transparent file:text-sm placeholder:text-muted-foreground file:font-medium disabled:opacity-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-ring"
>


medium

The expression for the dynamic placeholder is repeated for both the model and API base inputs, and it uses an unsafe type cast (as LLMProviderKey). This can make the template harder to read and maintain.

Consider creating a computed property for the currently selected preset to simplify the template and improve type safety. For example:

// In <script setup>
const selectedPreset = computed(() => {
  if (selectedProvider.value) {
    return LLM_PROVIDERS[selectedProvider.value as LLMProviderKey];
  }
  return null;
});

Then you can use it in the template like this, which is much cleaner:

<input
  v-model="accountSettings.llm.model"
  type="text"
  :placeholder="selectedPreset?.defaultModel ?? 'gpt-4o-mini'"
  ...
>

This approach would also apply to the API Base URL input.
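
Applied to the API base input, the same computed could be used like this (sketch; the class attribute is elided and the fallback URL is illustrative):

```vue
<input
  v-model="accountSettings.llm.apiBase"
  type="text"
  :placeholder="selectedPreset?.apiBase ?? 'https://api.openai.com/v1'"
>
```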


@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 2 potential issues.


},
set(key: string) {
  if (!key || !accountSettings.value)
    return


Selecting "Custom" provider snaps back to detected provider

Medium Severity

The selectedProvider computed setter returns early when key is '' (the "Custom" option), so it never modifies apiBase. The getter then re-derives the provider from the unchanged apiBase and returns the previously matched provider key. This causes the dropdown to immediately snap back — making the "Custom" option completely unselectable whenever a recognized provider is active. The user can only reach "Custom" state by manually editing the API base URL field.

Additional Locations (1)
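
One way this could be addressed (a sketch, not Bugbot's prescribed fix; accountSettings and the preset utilities are the ones introduced in this PR) is to track an explicit Custom selection so the getter stops re-deriving a preset key:

```ts
import { computed, ref, type Ref } from 'vue'
import { detectProviderFromApiBase, LLM_PROVIDERS, type LLMProviderKey } from '@tg-search/core'

// From the surrounding component; the shape is assumed from the diff.
declare const accountSettings: Ref<{ llm: { apiBase: string, model: string } } | undefined>

// Remembers that the user explicitly picked "Custom" (key === '').
const customSelected = ref(false)

const selectedProvider = computed({
  get() {
    if (customSelected.value)
      return ''
    const apiBase = accountSettings.value?.llm?.apiBase ?? ''
    return detectProviderFromApiBase(apiBase) ?? ''
  },
  set(key: string) {
    customSelected.value = key === ''
    if (!key || !accountSettings.value)
      return
    const preset = LLM_PROVIDERS[key as LLMProviderKey]
    accountSettings.value.llm.apiBase = preset.apiBase
    accountSettings.value.llm.model = preset.defaultModel
  },
})
```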


const embeddingDimensions = Object.values([1536, 1024, 768])

const providerKeys = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]


Duplicated provider key list and detection logic

Low Severity

providerKeys in settings.vue is identical to the newly exported LLM_PROVIDER_KEYS from the core package — both compute Object.keys(LLM_PROVIDERS) as LLMProviderKey[]. Similarly, the computed getter's detection loop reimplements detectProviderFromApiBase. Both utilities are introduced and exported in this same PR but not actually imported where they're needed.

Additional Locations (2)
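
The consolidation both reviewers point at would amount to an import swap in settings.vue, along these lines (sketch):

```ts
// Reuse the shared utilities instead of recomputing them locally.
import { detectProviderFromApiBase, LLM_PROVIDER_KEYS, LLM_PROVIDERS, type LLMProviderKey } from '@tg-search/core'

const providerKeys = LLM_PROVIDER_KEYS // replaces: Object.keys(LLM_PROVIDERS) as LLMProviderKey[]
```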
