2 changes: 2 additions & 0 deletions README.md
@@ -128,6 +128,8 @@ docker run -d --name telegram-search -p 3333:3333 ghcr.io/groupultra/telegram-se
> [!IMPORTANT]
> AI Embedding & LLM 设置现在在应用内**按账户**配置(设置 → API)。
>
> 支持多种 LLM 提供商,包括 [OpenAI](https://platform.openai.com/)、[MiniMax](https://www.minimaxi.com/) 等 OpenAI 兼容的 API 服务。在设置页面选择提供商后,API 地址和默认模型将自动填充。
>
> 请在修改完成 `.env` 文件后,再次执行 `docker compose -f docker-compose.yml up -d` 启动服务。

以下环境变量全部为可选,如果不填写,则会使用默认值。
1 change: 1 addition & 0 deletions apps/web/src/locales/en.json
@@ -194,6 +194,7 @@
"llm": "Large Language Model",
"llmModel": "LLM Model",
"llmProvider": "LLM Provider",
"customProvider": "Custom",
"visionLLM": "Vision LLM (Multimodal)",
"visionLLMDescription": "Multimodal large language model for image understanding. Used to generate image descriptions for semantic search.",
"login": "Login",
1 change: 1 addition & 0 deletions apps/web/src/locales/zh-CN.json
@@ -192,6 +192,7 @@
"llm": "大语言模型",
"llmModel": "LLM 模型",
"llmProvider": "LLM 提供商",
"customProvider": "自定义",
"visionLLM": "视觉大模型(多模态)",
"visionLLMDescription": "多模态大语言模型,用于图片理解。用于生成图片描述以支持语义搜索。",
"login": "登录",
45 changes: 42 additions & 3 deletions apps/web/src/pages/settings.vue
@@ -1,6 +1,8 @@
<script setup lang="ts">
import type { LLMProviderKey } from '@tg-search/core'

import { useAccountStore, useBridge } from '@tg-search/client'
import { CoreEventType } from '@tg-search/core'
import { CoreEventType, LLM_PROVIDERS } from '@tg-search/core'
import { storeToRefs } from 'pinia'
import { computed, watch } from 'vue'
import { useI18n } from 'vue-i18n'
@@ -24,6 +26,28 @@ const messageResolvers = [

const embeddingDimensions = Object.values([1536, 1024, 768])

const providerKeys = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]
Contributor review comment (severity: medium):

To avoid code duplication, it's better to use the LLM_PROVIDER_KEYS constant already exported from the @tg-search/core package instead of recreating the list of keys here. You'll need to add LLM_PROVIDER_KEYS to your imports from @tg-search/core.

    const providerKeys = LLM_PROVIDER_KEYS

Review comment: Duplicated provider key list and detection logic (severity: low)

providerKeys in settings.vue is identical to the newly exported LLM_PROVIDER_KEYS from the core package — both compute Object.keys(LLM_PROVIDERS) as LLMProviderKey[]. Similarly, the computed getter's detection loop reimplements detectProviderFromApiBase. Both utilities are introduced and exported in this same PR but not actually imported where they're needed.


const selectedProvider = computed({
get() {
const apiBase = accountSettings.value?.llm?.apiBase ?? ''
for (const key of providerKeys) {
if (apiBase === LLM_PROVIDERS[key].apiBase)
return key
}
return '' // custom
Comment on lines +33 to +38, contributor review comment (severity: medium):

This logic for detecting a provider from the API base URL is already implemented in the detectProviderFromApiBase function in @tg-search/core. To reduce code duplication and centralize the logic, use that utility function here. Remember to import detectProviderFromApiBase from @tg-search/core.

    const apiBase = accountSettings.value?.llm?.apiBase ?? ''
    return detectProviderFromApiBase(apiBase) ?? ''

},
set(key: string) {
if (!key || !accountSettings.value)
return
Review comment: Selecting "Custom" provider snaps back to detected provider (severity: medium)

The selectedProvider computed setter returns early when key is '' (the "Custom" option), so it never modifies apiBase. The getter then re-derives the provider from the unchanged apiBase and returns the previously matched provider key. This causes the dropdown to immediately snap back, making the "Custom" option unselectable whenever a recognized provider is active. The user can only reach the "Custom" state by manually editing the API base URL field.
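One way to address the snap-back is to track the explicit "Custom" choice in its own flag instead of deriving it solely from apiBase. The sketch below is an assumption, not the PR's actual code: Vue's reactivity is replaced by a plain state object, and `getSelectedProvider`/`setSelectedProvider` stand in for the computed getter and setter.

```typescript
// Minimal sketch of a fix: remember an explicit "Custom" selection so the
// getter stops re-deriving the provider from apiBase after the user picks it.
interface LLMPreset { label: string, apiBase: string, defaultModel: string }

const LLM_PROVIDERS: Record<string, LLMPreset> = {
  openai: { label: 'OpenAI', apiBase: 'https://api.openai.com/v1', defaultModel: 'gpt-4o-mini' },
}

// Stand-in for the reactive account settings (hypothetical names).
const state = { apiBase: 'https://api.openai.com/v1', model: 'gpt-4o-mini', customSelected: false }

function getSelectedProvider(): string {
  if (state.customSelected)
    return '' // stay on "Custom" even though apiBase still matches a preset
  for (const [key, preset] of Object.entries(LLM_PROVIDERS)) {
    if (state.apiBase === preset.apiBase)
      return key
  }
  return ''
}

function setSelectedProvider(key: string): void {
  if (!key) {
    state.customSelected = true // record the explicit "Custom" choice instead of returning early
    return
  }
  state.customSelected = false
  const preset = LLM_PROVIDERS[key]
  if (!preset)
    return
  state.apiBase = preset.apiBase
  state.model = preset.defaultModel
}
```

In the real component the flag would live in a `ref` and the two functions would become the computed's `get`/`set`; editing the API base input by hand could additionally clear the flag.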

const preset = LLM_PROVIDERS[key as LLMProviderKey]
if (!preset)
return
accountSettings.value.llm.apiBase = preset.apiBase
accountSettings.value.llm.model = preset.defaultModel
},
})

function buildDefaultMessageProcessing() {
return {
receiveMessages: { receiveAll: true, downloadMedia: true },
@@ -224,13 +248,28 @@ function updateConfig() {
</div>

<div class="grid gap-6">
<div class="space-y-2">
<label class="text-sm font-medium">{{ t('settings.llmProvider') }}</label>
<select
v-model="selectedProvider"
class="h-10 w-full flex border border-input rounded-md bg-background px-3 py-2 text-sm ring-offset-background disabled:cursor-not-allowed disabled:opacity-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-ring"
>
<option v-for="key in providerKeys" :key="key" :value="key">
{{ LLM_PROVIDERS[key].label }}
</option>
<option value="">
{{ t('settings.customProvider') }}
</option>
</select>
</div>

<div class="grid gap-4 sm:grid-cols-2">
<div class="space-y-2">
<label class="text-sm font-medium">{{ t('settings.llmModel') }}</label>
<input
v-model="accountSettings.llm.model"
type="text"
placeholder="gpt-4o-mini"
:placeholder="LLM_PROVIDERS[selectedProvider as LLMProviderKey]?.defaultModel ?? 'gpt-4o-mini'"
class="h-10 w-full flex border border-input rounded-md bg-background px-3 py-2 text-sm ring-offset-background disabled:cursor-not-allowed file:border-0 file:bg-transparent file:text-sm placeholder:text-muted-foreground file:font-medium disabled:opacity-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-ring"
>
Comment on lines 269 to 274, contributor review comment (severity: medium):

The expression for the dynamic placeholder is repeated for both the model and API base inputs, and it uses an unsafe type cast (as LLMProviderKey). This can make the template harder to read and maintain.

Consider creating a computed property for the currently selected preset to simplify the template and improve type safety. For example:

    // In <script setup>
    const selectedPreset = computed(() => {
      if (selectedProvider.value) {
        return LLM_PROVIDERS[selectedProvider.value as LLMProviderKey];
      }
      return null;
    });

Then you can use it in the template like this, which is much cleaner:

    <input
      v-model="accountSettings.llm.model"
      type="text"
      :placeholder="selectedPreset?.defaultModel ?? 'gpt-4o-mini'"
      ...
    >

This approach would also apply to the API Base URL input.

</div>
@@ -239,7 +278,7 @@ function updateConfig() {
<input
v-model="accountSettings.llm.apiBase"
type="text"
placeholder="https://api.openai.com/v1"
:placeholder="LLM_PROVIDERS[selectedProvider as LLMProviderKey]?.apiBase ?? 'https://api.openai.com/v1'"
class="h-10 w-full flex border border-input rounded-md bg-background px-3 py-2 text-sm ring-offset-background disabled:cursor-not-allowed file:border-0 file:bg-transparent file:text-sm placeholder:text-muted-foreground file:font-medium disabled:opacity-50 focus-visible:outline-none focus-visible:ring-2 focus-visible:ring-offset-2 focus-visible:ring-ring"
>
</div>
2 changes: 2 additions & 0 deletions docs/README_EN.md
@@ -133,6 +133,8 @@ docker run -d --name telegram-search -p 3333:3333 ghcr.io/groupultra/telegram-se
> [!IMPORTANT]
> AI Embedding & LLM settings are now **per-account** in-app (Settings → API).
>
> Multiple LLM providers are supported, including [OpenAI](https://platform.openai.com/), [MiniMax](https://www.minimaxi.com/), and other OpenAI-compatible API services. Select a provider on the settings page to auto-fill the API endpoint and default model.
>
> Please restart the service after modifying the `.env` file by running `docker compose -f docker-compose.yml up -d`.

All environment variables are optional. If not provided, the default values will be used.
84 changes: 84 additions & 0 deletions packages/core/src/__test__/llm-providers.test.ts
@@ -0,0 +1,84 @@
import { describe, expect, it } from 'vitest'

import { detectProviderFromApiBase, LLM_PROVIDER_KEYS, LLM_PROVIDERS } from '../llm-providers'

describe('llm-providers', () => {
describe('lLM_PROVIDERS', () => {
it('should contain openai and minimax providers', () => {
expect(LLM_PROVIDERS).toHaveProperty('openai')
expect(LLM_PROVIDERS).toHaveProperty('minimax')
})

it('should have valid openai preset', () => {
const openai = LLM_PROVIDERS.openai
expect(openai.label).toBe('OpenAI')
expect(openai.apiBase).toBe('https://api.openai.com/v1')
expect(openai.defaultModel).toBe('gpt-4o-mini')
expect(openai.models.length).toBeGreaterThan(0)
})

it('should have valid minimax preset', () => {
const minimax = LLM_PROVIDERS.minimax
expect(minimax.label).toBe('MiniMax')
expect(minimax.apiBase).toBe('https://api.minimax.io/v1')
expect(minimax.defaultModel).toBe('MiniMax-M2.7')
expect(minimax.models).toContain('MiniMax-M2.7')
expect(minimax.models).toContain('MiniMax-M2.7-highspeed')
expect(minimax.models).toContain('MiniMax-M2.5')
expect(minimax.models).toContain('MiniMax-M2.5-highspeed')
})

it('should have unique apiBase for each provider', () => {
const bases = Object.values(LLM_PROVIDERS).map(p => p.apiBase)
expect(new Set(bases).size).toBe(bases.length)
})

it('should have non-empty label and defaultModel for each provider', () => {
for (const [key, preset] of Object.entries(LLM_PROVIDERS)) {
expect(preset.label, `${key} label`).toBeTruthy()
expect(preset.defaultModel, `${key} defaultModel`).toBeTruthy()
expect(preset.models.length, `${key} models`).toBeGreaterThan(0)
}
})

it('should include defaultModel in models list', () => {
for (const [key, preset] of Object.entries(LLM_PROVIDERS)) {
expect(preset.models, `${key} models should include defaultModel`).toContain(preset.defaultModel)
}
})
})

describe('lLM_PROVIDER_KEYS', () => {
it('should contain all provider keys', () => {
expect(LLM_PROVIDER_KEYS).toContain('openai')
expect(LLM_PROVIDER_KEYS).toContain('minimax')
})

it('should match Object.keys of LLM_PROVIDERS', () => {
expect(LLM_PROVIDER_KEYS).toEqual(Object.keys(LLM_PROVIDERS))
})
})

describe('detectProviderFromApiBase', () => {
it('should detect openai provider', () => {
expect(detectProviderFromApiBase('https://api.openai.com/v1')).toBe('openai')
})

it('should detect minimax provider', () => {
expect(detectProviderFromApiBase('https://api.minimax.io/v1')).toBe('minimax')
})

it('should return undefined for unknown URL', () => {
expect(detectProviderFromApiBase('https://api.example.com/v1')).toBeUndefined()
})

it('should return undefined for empty string', () => {
expect(detectProviderFromApiBase('')).toBeUndefined()
})

it('should not match partial URLs', () => {
expect(detectProviderFromApiBase('https://api.openai.com')).toBeUndefined()
expect(detectProviderFromApiBase('https://api.minimax.io')).toBeUndefined()
})
})
})
2 changes: 2 additions & 0 deletions packages/core/src/index.ts
@@ -4,6 +4,8 @@ export { initDrizzle } from './db'
export type { CoreDB, InitDrizzleResult } from './db'
export type * from './event-handlers'
export { createCoreInstance, destroyCoreInstance } from './instance'
export { detectProviderFromApiBase, LLM_PROVIDER_KEYS, LLM_PROVIDERS } from './llm-providers'
export type { LLMProviderKey, LLMProviderPreset } from './llm-providers'
export * from './models'
export type * from './types'
export { CoreEventType } from './types/events'
38 changes: 38 additions & 0 deletions packages/core/src/llm-providers.ts
@@ -0,0 +1,38 @@
export interface LLMProviderPreset {
label: string
apiBase: string
defaultModel: string
models: string[]
}

export const LLM_PROVIDERS = {
openai: {
label: 'OpenAI',
apiBase: 'https://api.openai.com/v1',
defaultModel: 'gpt-4o-mini',
models: ['gpt-4o', 'gpt-4o-mini', 'gpt-4.1', 'gpt-4.1-mini', 'gpt-4.1-nano', 'o3-mini'],
},
minimax: {
label: 'MiniMax',
apiBase: 'https://api.minimax.io/v1',
defaultModel: 'MiniMax-M2.7',
models: ['MiniMax-M2.7', 'MiniMax-M2.7-highspeed', 'MiniMax-M2.5', 'MiniMax-M2.5-highspeed'],
},
} as const satisfies Record<string, LLMProviderPreset>

export type LLMProviderKey = keyof typeof LLM_PROVIDERS

export const LLM_PROVIDER_KEYS = Object.keys(LLM_PROVIDERS) as LLMProviderKey[]

/**
* Detect provider key from an API base URL.
* Returns `undefined` for unrecognised or custom endpoints.
*/
export function detectProviderFromApiBase(apiBase: string): LLMProviderKey | undefined {
for (const [key, preset] of Object.entries(LLM_PROVIDERS)) {
if (apiBase === preset.apiBase) {
return key as LLMProviderKey
}
}
return undefined
}
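The exact-match semantics of detectProviderFromApiBase (a preset URL maps back to its key; anything else, including a preset host without the /v1 suffix, is treated as custom) can be exercised with a short standalone sketch. The preset table and helper are inlined from the new module above so the snippet runs on its own:

```typescript
// Inlined from packages/core/src/llm-providers.ts so the sketch is self-contained.
interface LLMProviderPreset {
  label: string
  apiBase: string
  defaultModel: string
  models: string[]
}

const LLM_PROVIDERS = {
  openai: {
    label: 'OpenAI',
    apiBase: 'https://api.openai.com/v1',
    defaultModel: 'gpt-4o-mini',
    models: ['gpt-4o', 'gpt-4o-mini', 'gpt-4.1', 'gpt-4.1-mini', 'gpt-4.1-nano', 'o3-mini'],
  },
  minimax: {
    label: 'MiniMax',
    apiBase: 'https://api.minimax.io/v1',
    defaultModel: 'MiniMax-M2.7',
    models: ['MiniMax-M2.7', 'MiniMax-M2.7-highspeed', 'MiniMax-M2.5', 'MiniMax-M2.5-highspeed'],
  },
} as const satisfies Record<string, LLMProviderPreset>

type LLMProviderKey = keyof typeof LLM_PROVIDERS

// Exact string comparison against each preset's apiBase.
function detectProviderFromApiBase(apiBase: string): LLMProviderKey | undefined {
  for (const [key, preset] of Object.entries(LLM_PROVIDERS)) {
    if (apiBase === preset.apiBase)
      return key as LLMProviderKey
  }
  return undefined
}

console.log(detectProviderFromApiBase('https://api.minimax.io/v1')) // minimax
console.log(detectProviderFromApiBase('https://api.openai.com'))    // undefined (no /v1 suffix)
```

This mirrors how the settings page decides between a named provider and the "Custom" option: any URL that is not byte-for-byte identical to a preset apiBase falls through to custom.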