feat: add LLM provider preset system with MiniMax support #594

octo-patch wants to merge 1 commit into groupultra:main
Conversation
Add a provider preset dropdown to the LLM settings UI, allowing users to quickly configure MiniMax (or other OpenAI-compatible providers) without manually entering API endpoints and model names.

- Add LLM_PROVIDERS registry with OpenAI and MiniMax presets
- Add provider selector to settings page that auto-fills apiBase/model
- Add detectProviderFromApiBase() utility for provider detection
- Add i18n translations for provider selector (en + zh-CN)
- Update README (zh-CN + EN) with multi-provider documentation
- Add 13 unit tests for provider presets and detection logic

MiniMax models: M2.7, M2.7-highspeed, M2.5, M2.5-highspeed
API endpoint: https://api.minimax.io/v1 (OpenAI-compatible)
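For a concrete picture, here is a minimal sketch of what such a preset registry could look like in TypeScript. The MiniMax values come from this PR's description; the `label`/`models` field names, the `openai`/`minimax` keys, and the OpenAI model list are illustrative assumptions, not the PR's actual code.

```ts
// Minimal sketch of a provider preset registry (field names beyond
// apiBase/defaultModel are assumed; the MiniMax values are from the PR).
export interface LLMProviderPreset {
  label: string
  apiBase: string
  defaultModel: string
  models: string[]
}

export const LLM_PROVIDERS = {
  openai: {
    label: 'OpenAI',
    apiBase: 'https://api.openai.com/v1',
    defaultModel: 'gpt-4o-mini',
    models: ['gpt-4o-mini', 'gpt-4o'], // assumed list, for illustration only
  },
  minimax: {
    label: 'MiniMax',
    apiBase: 'https://api.minimax.io/v1',
    defaultModel: 'MiniMax-M2.7',
    models: [
      'MiniMax-M2.7',
      'MiniMax-M2.7-highspeed',
      'MiniMax-M2.5',
      'MiniMax-M2.5-highspeed',
    ],
  },
} satisfies Record<string, LLMProviderPreset>

export type LLMProviderKey = keyof typeof LLM_PROVIDERS
```

Deriving `LLMProviderKey` from the registry keeps the dropdown, detection, and auto-fill logic in sync as new providers are added.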
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the LLM configuration experience by introducing a flexible provider preset system. Users can now easily select from predefined LLM providers like OpenAI and MiniMax, which automatically configure the necessary API base and default model settings. This change streamlines the setup process, reduces manual configuration errors, and lays the groundwork for supporting additional OpenAI-compatible LLM services in the future, making the application more adaptable to various LLM ecosystems.
Code Review
This pull request introduces a provider preset system for LLM settings, which is a great addition for user convenience. The implementation is solid, with new core logic, UI updates, and comprehensive unit tests. I've identified a few opportunities in apps/web/src/pages/settings.vue to refactor for better code reuse and type safety by leveraging more of the new utilities from the core package. My comments provide specific suggestions to address this.
```ts
const embeddingDimensions = Object.values([1536, 1024, 768])

const providerKeys = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]
```
```ts
const apiBase = accountSettings.value?.llm?.apiBase ?? ''
for (const key of providerKeys) {
  if (apiBase === LLM_PROVIDERS[key].apiBase)
    return key
}
return '' // custom
```
This logic for detecting a provider from the API base URL is already implemented in the detectProviderFromApiBase function in @tg-search/core. To reduce code duplication and centralize the logic, you should use that utility function here. Remember to import detectProviderFromApiBase from @tg-search/core.
```ts
const apiBase = accountSettings.value?.llm?.apiBase ?? ''
return detectProviderFromApiBase(apiBase) ?? ''
```
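For reference, the core utility being suggested presumably amounts to something like the following sketch, living alongside the registry in `@tg-search/core` (assumed behavior: return the matching key, or a nullish value for unrecognized URLs, consistent with the `?? ''` fallback above):

```ts
// Sketch of detectProviderFromApiBase (assumed, not the PR's exact code).
// LLM_PROVIDERS, LLM_PROVIDER_KEYS, and LLMProviderKey are in scope in core.
export function detectProviderFromApiBase(apiBase: string): LLMProviderKey | undefined {
  for (const key of LLM_PROVIDER_KEYS) {
    if (LLM_PROVIDERS[key].apiBase === apiBase)
      return key
  }
  return undefined // no preset matches: treat as a custom endpoint
}
```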
```diff
 <input
   v-model="accountSettings.llm.model"
   type="text"
-  placeholder="gpt-4o-mini"
+  :placeholder="LLM_PROVIDERS[selectedProvider as LLMProviderKey]?.defaultModel ?? 'gpt-4o-mini'"
   class="h-10 w-full flex border border-input rounded-md bg-background px-3 py-2 text-sm ring-offset-background disabled:cursor-not-allowed file:border-0 file:bg-transparent file:text-sm placeholder:text-muted-foreground file:font-medium disabled:opacity-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-ring"
 >
```
The expression for the dynamic placeholder is repeated for both the model and API base inputs, and it uses an unsafe type cast (as LLMProviderKey). This can make the template harder to read and maintain.
Consider creating a computed property for the currently selected preset to simplify the template and improve type safety. For example:
```ts
// In <script setup>
const selectedPreset = computed(() => {
  if (selectedProvider.value) {
    return LLM_PROVIDERS[selectedProvider.value as LLMProviderKey]
  }
  return null
})
```

Then you can use it in the template like this, which is much cleaner:
```html
<input
  v-model="accountSettings.llm.model"
  type="text"
  :placeholder="selectedPreset?.defaultModel ?? 'gpt-4o-mini'"
  ...
>
```

This approach would also apply to the API Base URL input.
Cursor Bugbot has reviewed your changes and found 2 potential issues.
```ts
  },
  set(key: string) {
    if (!key || !accountSettings.value)
      return
```
Selecting "Custom" provider snaps back to detected provider
Medium Severity
The selectedProvider computed setter returns early when key is '' (the "Custom" option), so it never modifies apiBase. The getter then re-derives the provider from the unchanged apiBase and returns the previously matched provider key. This causes the dropdown to immediately snap back — making the "Custom" option completely unselectable whenever a recognized provider is active. The user can only reach "Custom" state by manually editing the API base URL field.
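One possible fix, sketched below using the field names visible in the diff above (this is an illustration under those assumptions, not the project's actual fix): track an explicit "Custom" selection in a ref so the getter stops re-deriving the provider from the unchanged `apiBase`.

```ts
import type { LLMProviderKey } from '@tg-search/core'
import { detectProviderFromApiBase, LLM_PROVIDERS } from '@tg-search/core'
import { computed, ref } from 'vue'

// accountSettings is assumed to be the existing ref from the surrounding component.
// Remember when the user explicitly picked "Custom" so the getter
// doesn't immediately snap back to the preset detected from apiBase.
const customSelected = ref(false)

const selectedProvider = computed({
  get() {
    if (customSelected.value)
      return ''
    const apiBase = accountSettings.value?.llm?.apiBase ?? ''
    return detectProviderFromApiBase(apiBase) ?? ''
  },
  set(key: string) {
    customSelected.value = key === ''
    if (!key || !accountSettings.value)
      return
    const preset = LLM_PROVIDERS[key as LLMProviderKey]
    accountSettings.value.llm.apiBase = preset.apiBase
    accountSettings.value.llm.model = preset.defaultModel
  },
})
```

A remaining design question is when to clear `customSelected` again: re-selecting a preset handles it (as above), but one might also want to reset it if the user hand-edits the API base to exactly match a preset.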
```ts
const embeddingDimensions = Object.values([1536, 1024, 768])

const providerKeys = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]
```
Duplicated provider key list and detection logic
Low Severity
providerKeys in settings.vue is identical to the newly exported LLM_PROVIDER_KEYS from the core package — both compute Object.keys(LLM_PROVIDERS) as LLMProviderKey[]. Similarly, the computed getter's detection loop reimplements detectProviderFromApiBase. Both utilities are introduced and exported in this same PR but not actually imported where they're needed.
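Concretely, the fix is to import what the PR already exports (a sketch assuming these export names, which this comment itself cites):

```ts
import type { LLMProviderKey } from '@tg-search/core' // assumed type export location
import { detectProviderFromApiBase, LLM_PROVIDER_KEYS } from '@tg-search/core'

// Replaces the local `Object.keys(LLM_PROVIDERS) as LLMProviderKey[]` duplicate:
const providerKeys: readonly LLMProviderKey[] = LLM_PROVIDER_KEYS

// Replaces the hand-rolled detection loop in the computed getter:
function providerForApiBase(apiBase: string): string {
  return detectProviderFromApiBase(apiBase) ?? ''
}
```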


Summary
This PR adds a provider preset system to the LLM settings UI, making it easy to configure MiniMax and other OpenAI-compatible LLM providers.
Changes
- `packages/core/src/llm-providers.ts` — New `LLM_PROVIDERS` registry with OpenAI and MiniMax presets (apiBase, defaultModel, available models), plus `detectProviderFromApiBase()` utility
- `apps/web/src/pages/settings.vue` — Added provider selector dropdown to the LLM section; selecting a provider auto-fills API base URL and default model
- `apps/web/src/locales/{en,zh-CN}.json` — i18n translations for the new "Custom" provider option
- `README.md` + `docs/README_EN.md` — Updated docs to mention multi-provider support (OpenAI, MiniMax)
- `packages/core/src/__test__/llm-providers.test.ts` — 13 unit tests covering provider presets, keys, and detection logic

MiniMax Provider Details
- API endpoint: `https://api.minimax.io/v1` (OpenAI-compatible)
- Default model: `MiniMax-M2.7`
- Available models: `MiniMax-M2.7`, `MiniMax-M2.7-highspeed`, `MiniMax-M2.5`, `MiniMax-M2.5-highspeed`

How it works
The settings page now has a Provider dropdown at the top of the LLM section:
- Selecting MiniMax auto-fills `apiBase` to `https://api.minimax.io/v1` and `model` to `MiniMax-M2.7`
- The selected provider is derived from the current `apiBase` — no new stored field needed

No changes to the LLM call logic — MiniMax's API is fully OpenAI-compatible, so `xsai` handles it transparently.

Test plan
- `pnpm run test:run` — 13 new unit tests pass, no regressions
- `pnpm run typecheck` — All 14 typecheck tasks pass
- `pnpm run lint:fix` — Clean lint
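As a flavor of what those tests likely cover, here's a minimal sketch in Vitest (the `minimax` registry key and the nullish return for unknown URLs are assumptions, as is the test file layout):

```ts
import { describe, expect, it } from 'vitest'
import { detectProviderFromApiBase, LLM_PROVIDERS } from '../llm-providers'

describe('LLM provider presets', () => {
  it('detects MiniMax from its API base', () => {
    // 'minimax' as the registry key is an assumption for illustration
    expect(detectProviderFromApiBase('https://api.minimax.io/v1')).toBe('minimax')
  })

  it('returns a nullish value for unrecognized base URLs', () => {
    expect(detectProviderFromApiBase('https://example.com/v1') ?? null).toBeNull()
  })

  it('every preset declares an apiBase and a defaultModel', () => {
    for (const preset of Object.values(LLM_PROVIDERS)) {
      expect(preset.apiBase).toBeTruthy()
      expect(preset.defaultModel).toBeTruthy()
    }
  })
})
```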
Note

Low Risk
Low risk: primarily adds preset metadata and UI selection logic, with no changes to LLM request/authorization flow; main risk is misconfigured defaults leading to incorrect endpoints/models.
Overview
Adds an LLM provider preset registry in `@tg-search/core` (`LLM_PROVIDERS`, `LLM_PROVIDER_KEYS`, and `detectProviderFromApiBase`) with initial presets for OpenAI and MiniMax, and exports these from `packages/core/src/index.ts`.

Updates the Settings UI (`apps/web/src/pages/settings.vue`) to include a provider dropdown that detects the preset based on `llm.apiBase` and, when selected, auto-fills the LLM API base URL and default model (with a translated "Custom" option). Documentation is updated to mention multi-provider support, and new unit tests validate the presets and detection behavior.

Written by Cursor Bugbot for commit b442f60.