Morph updates #5918

Draft: wants to merge 1 commit into base: main
7 changes: 4 additions & 3 deletions core/control-plane/schema.ts
@@ -20,7 +20,7 @@ const modelDescriptionSchema = z.object({
"nebius",
"siliconflow",
"scaleway",
"watsonx"
"watsonx",
]),
model: z.string(),
apiKey: z.string().optional(),
@@ -88,13 +88,14 @@ const embeddingsProviderSchema = z.object({
"ollama",
"openai",
"cohere",
"morph",
"free-trial",
"gemini",
"ovhcloud",
"nebius",
"siliconflow",
"scaleway",
"watsonx"
"watsonx",
]),
apiBase: z.string().optional(),
apiKey: z.string().optional(),
@@ -116,7 +117,7 @@ const embeddingsProviderSchema = z.object({
});

const rerankerSchema = z.object({
name: z.enum(["cohere", "voyage", "llm", "watsonx"]),
name: z.enum(["cohere", "morph", "voyage", "llm", "watsonx"]),
params: z.record(z.any()).optional(),
});

35 changes: 35 additions & 0 deletions core/llm/llms/Morph.ts
@@ -0,0 +1,35 @@
import { Chunk, LLMOptions } from "../../index.js";
import OpenAI from "./OpenAI.js";

class Morph extends OpenAI {
static providerName = "morph";
static defaultOptions: Partial<LLMOptions> = {
apiBase: "https://api.morphllm.com/v1",
maxEmbeddingBatchSize: 96,
};
static maxStopSequences = 5;

async rerank(query: string, chunks: Chunk[]): Promise<number[]> {
const resp = await this.fetch(new URL("rerank", this.apiBase), {
method: "POST",
headers: {
Authorization: `Bearer ${this.apiKey}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
model: this.model,
query,
documents: chunks.map((chunk) => chunk.content),
}),
});

if (!resp.ok) {
throw new Error(await resp.text());
}

const data = (await resp.json()) as any;
const results = data.results.sort((a: any, b: any) => a.index - b.index);
The code assumes data.results exists and is an array without validation. If the API response doesn't include a 'results' property or if it's not an array, this will cause a runtime error. Additionally, using 'any' type for sort parameters loses type safety. The code should validate the response structure and use proper typing.


return results.map((result: any) => result.relevance_score);
}
}
export default Morph;
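The review comment above notes that `rerank` assumes `data.results` exists and leans on `any`. A minimal defensive sketch of that parsing step, under stated assumptions: `RerankResult` is an illustrative name, and the `index`/`relevance_score` fields are taken from the handler above, not from official Morph documentation.

```typescript
// Illustrative shape for one rerank result; field names mirror the
// handler above (`index`, `relevance_score`) and are assumptions.
interface RerankResult {
  index: number;
  relevance_score: number;
}

// Parse an untyped rerank payload, validating that `results` is an
// array before sorting, instead of casting the whole response to `any`.
function parseRerankResponse(data: unknown): number[] {
  const results = (data as { results?: unknown } | null)?.results;
  if (!Array.isArray(results)) {
    throw new Error("Unexpected rerank response: missing `results` array");
  }
  return (results as RerankResult[])
    .slice() // avoid mutating the parsed response in place
    .sort((a, b) => a.index - b.index)
    .map((r) => r.relevance_score);
}
```

A malformed payload then fails with a descriptive error rather than a `TypeError` deep inside `sort`.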
2 changes: 2 additions & 0 deletions core/llm/llms/index.ts
@@ -18,6 +18,7 @@ import BedrockImport from "./BedrockImport";
import Cerebras from "./Cerebras";
import Cloudflare from "./Cloudflare";
import Cohere from "./Cohere";
import Morph from "./Morph"
The code attempts to import a non-existent module './Morph'. The Morph.ts file does not exist in the core/llm/llms directory. This will cause a runtime error when the module tries to load, as Node.js/TypeScript will fail to resolve the import. The module must be created before it can be imported.


import DeepInfra from "./DeepInfra";
import Deepseek from "./Deepseek";
import Docker from "./Docker";
@@ -116,6 +117,7 @@ export const LLMClasses = [
SiliconFlow,
Scaleway,
Relace,
Morph,
Adding the non-existent Morph class to LLMClasses array will cause a runtime error. The Morph class is undefined since its module doesn't exist, and this will cause a runtime error when the array is used in llmFromDescription or llmFromProviderAndOptions functions. The Morph implementation must be created and properly implement the ILLM interface before being added to LLMClasses.


Inception,
Voyage,
];
5 changes: 3 additions & 2 deletions docs/docs/customize/model-providers/more/morph.mdx
@@ -9,7 +9,7 @@ Morph provides a fast apply model that helps you quickly and accurately apply co
<TabItem value="yaml" label="YAML">
```yaml title="config.yaml"
models:
- uses: morphllm/morph-v0
- uses: morphllm/morph-v2
with:
MORPH_API_KEY: ${{ secrets.MORPH_API_KEY }}
```
@@ -21,7 +21,7 @@ Morph provides a fast apply model that helps you quickly and accurately apply co
{
"title": "Morph Fast Apply",
"provider": "openai",
"model": "morph-v0",
"model": "morph-v2",
"apiKey": "<YOUR_MORPH_API_KEY>",
"apiBase": "https://api.morphllm.com/v1/",
"roles": ["apply", "chat"],
@@ -34,3 +34,4 @@ Morph provides a fast apply model that helps you quickly and accurately apply co
```
</TabItem>
</Tabs>

5 changes: 5 additions & 0 deletions docs/docs/customize/model-roles/embeddings.mdx
@@ -152,3 +152,8 @@ See [here](../model-providers/more/watsonx.mdx#embeddings-model) for instruction
### LMStudio

See [here](../model-providers/more/lmstudio.mdx#embeddings-model) for instructions on how to use LMStudio for embeddings.

### Morph

See [here](../model-providers/more/morph.mdx#embeddings-model) for instructions on how to use Morph for embeddings.

6 changes: 6 additions & 0 deletions docs/docs/customize/model-roles/reranking.mdx
@@ -85,6 +85,8 @@ See Cohere's documentation for rerankers [here](https://docs.cohere.com/docs/rer
</TabItem>
</Tabs>



### LLM

If you only have access to a single LLM, then you can use it as a reranker. This is discouraged unless truly necessary, because it will be much more expensive and still less accurate than any of the above models trained specifically for the task. Note that this will not work if you are using a local model, for example with Ollama, because too many parallel requests need to be made.
@@ -154,3 +156,7 @@ The `"modelTitle"` field must match one of the models in your "models" array in
```
</TabItem>
</Tabs>

### Morph

See the [Morph guide](../model-providers/more/morph.mdx) for details on how to use MorphLLM for reranking.
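Following the shape of the other reranker entries in this file, a Morph reranker block in `config.json` might look like the sketch below. The `name`/`params` fields match the `rerankerSchema` added in this PR; the keys inside `params` and the model name `morph-rerank-v2` are assumptions drawn from elsewhere in this diff, not from a published example.

```json title="config.json"
{
  "reranker": {
    "name": "morph",
    "params": {
      "model": "morph-rerank-v2",
      "apiKey": "<YOUR_MORPH_API_KEY>"
    }
  }
}
```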
23 changes: 22 additions & 1 deletion extensions/vscode/config_schema.json
@@ -900,6 +900,23 @@
}
}
},
{
"if": {
"properties": {
"provider": {
"enum": ["morph"]
}
},
"required": ["provider"]
},
"then": {
"properties": {
"model": {
"enum": ["morph-embedding-v2", "morph-rerank-v2", "morph-v2"]
}
}
}
},
{
"if": {
"properties": {
@@ -1515,7 +1532,11 @@
"then": {
"properties": {
"model": {
"enum": ["llama3.1-8b", "llama3.1-70b", "llama-4-scout-17b-16e-instruct"]
"enum": [
"llama3.1-8b",
"llama3.1-70b",
"llama-4-scout-17b-16e-instruct"
]
}
}
}
Binary file added gui/public/logos/morph.png
34 changes: 27 additions & 7 deletions gui/src/pages/AddNewModel/configs/providers.ts
@@ -353,6 +353,28 @@ Select the \`GPT-4o\` model below to complete your provider configuration, but n
packages: [models.commandR, models.commandRPlus],
apiKeyUrl: "https://docs.cohere.com/v2/docs/rate-limits",
},
morph: {
title: "Morph",
provider: "morph",
refPage: "morph",
description: "Fast Apply, Embed, and Rerank models",
icon: "morph.png",
tags: [ModelProviderTags.RequiresApiKey],
longDescription:
"To use Morph, visit the [Morph dashboard](https://morphllm.com/dashboard) to create an API key.",
collectInputFor: [
{
inputType: "text",
key: "apiKey",
label: "API Key",
placeholder: "Enter your Morph API key",
required: true,
},
...completionParamsInputsConfigs,
],
packages: [models.morphFastApply, models.morphEmbed, models.morphRerank],
apiKeyUrl: "https://morphllm.com/dashboard",
},
groq: {
title: "Groq",
provider: "groq",
@@ -797,7 +819,7 @@ To get started, [register](https://dataplatform.cloud.ibm.com/registration/stepo
},
...completionParamsInputsConfigs,
],
packages:[
packages: [
models.llama4Scout,
models.llama4Maverick,
models.llama3370BInstruct,
@@ -808,7 +830,7 @@ To get started, [register](https://dataplatform.cloud.ibm.com/registration/stepo
models.qwq32B,
models.deepseekR1DistillLlama70B,
models.deepseekR1,
models.deepseekV3
models.deepseekV3,
],
apiKeyUrl: "https://cloud.sambanova.ai/apis",
},
@@ -1011,9 +1033,7 @@ To get started, [register](https://dataplatform.cloud.ibm.com/registration/stepo
required: true,
},
],
packages: [
{...models.AUTODETECT}
],
apiKeyUrl: "https://venice.ai/chat"
}
packages: [{ ...models.AUTODETECT }],
apiKeyUrl: "https://venice.ai/chat",
},
};
15 changes: 12 additions & 3 deletions packages/config-types/src/index.ts
@@ -46,6 +46,7 @@ export const modelDescriptionSchema = z.object({
"openai",
"anthropic",
"cohere",
"morph",
"ollama",
"huggingface-tgi",
"huggingface-inference-api",
@@ -60,7 +61,7 @@
"continue-proxy",
"nebius",
"scaleway",
"watsonx"
"watsonx",
]),
model: z.string(),
apiKey: z.string().optional(),
@@ -109,13 +110,14 @@ export const embeddingsProviderSchema = z.object({
"ollama",
"openai",
"cohere",
"morph",
"free-trial",
"gemini",
"ovhcloud",
"continue-proxy",
"nebius",
"scaleway",
"watsonx"
"watsonx",
]),
apiBase: z.string().optional(),
apiKey: z.string().optional(),
@@ -178,7 +180,14 @@ export const contextProviderSchema = z.object({
export type ContextProvider = z.infer<typeof contextProviderSchema>;

export const rerankerSchema = z.object({
name: z.enum(["cohere", "voyage", "watsonx", "llm", "continue-proxy"]),
name: z.enum([
"cohere",
"morph",
"voyage",
"watsonx",
"llm",
"continue-proxy",
]),
params: z.record(z.any()).optional(),
});
export type Reranker = z.infer<typeof rerankerSchema>;
2 changes: 2 additions & 0 deletions packages/llm-info/src/index.ts
@@ -2,6 +2,7 @@ import { Anthropic } from "./providers/anthropic.js";
import { Azure } from "./providers/azure.js";
import { Bedrock } from "./providers/bedrock.js";
import { Cohere } from "./providers/cohere.js";
import { Morph } from "./providers/morph.js";
import { Gemini } from "./providers/gemini.js";
import { Mistral } from "./providers/mistral.js";
import { Ollama } from "./providers/ollama.js";
@@ -22,6 +23,7 @@ export const allModelProviders: ModelProvider[] = [
Vllm,
Bedrock,
Cohere,
Morph,
xAI,
];

27 changes: 27 additions & 0 deletions packages/llm-info/src/providers/morph.ts
@@ -0,0 +1,27 @@
import { ModelProvider } from "../types.js";

export const Morph: ModelProvider = {
models: [
{
model: "morph-rerank-v2",
displayName: "Morph Rerank v2",
// contextLength: 128000,
// maxCompletionTokens: 4000,
// recommendedFor: ["rerank"],
},
{
model: "morph-v2",
displayName: "Morph Fast Apply v2",
// contextLength: 128000,
// maxCompletionTokens: 4000,
},
{
model: "morph-embedding-v2",
displayName: "Morph Embedding v2",
// recommendedFor: ["embed"],
// contextLength: 512,
},
],
id: "morph",
displayName: "Morph",
};
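To illustrate how a registry entry like the one above gets consumed, here is a small sketch of looking up a model's display info by provider id. The `Provider`/`ModelInfo` interfaces are simplified stand-ins for the package's real `ModelProvider` type, trimmed to just the fields used; the lookup function is hypothetical, not an API of `llm-info`.

```typescript
// Simplified stand-ins for the package's provider/model types.
interface ModelInfo {
  model: string;
  displayName: string;
}
interface Provider {
  id: string;
  displayName: string;
  models: ModelInfo[];
}

// A tiny registry mirroring the `Morph` entry defined above.
const registry: Provider[] = [
  {
    id: "morph",
    displayName: "Morph",
    models: [{ model: "morph-v2", displayName: "Morph Fast Apply v2" }],
  },
];

// Resolve display info for a provider/model pair, as a consumer of a
// list like `allModelProviders` might.
function findModelInfo(
  providerId: string,
  model: string,
): ModelInfo | undefined {
  const provider = registry.find((p) => p.id === providerId);
  return provider?.models.find((m) => m.model === model);
}
```

Unknown provider or model ids simply yield `undefined`, so callers can fall back to defaults.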
3 changes: 2 additions & 1 deletion packages/openai-adapters/README.md
@@ -21,7 +21,7 @@ They are concerned with:
- Cache behavior
- max stop words
- use legacy completions endpoint?
- anything else that couldn't possibly be guess by the client since it won't know the endpoint behind the proxy
- anything else that couldn't possibly be guess by the client ssince it won't know the endpoint behind the proxy

## Supported APIs

Expand All @@ -33,6 +33,7 @@ They are concerned with:
- [x] Cerebras
- [ ] Cloudflare
- [x] Cohere
- [x] Morph
- [x] DeepInfra
- [x] Deepseek
- [ ] Flowise
12 changes: 12 additions & 0 deletions packages/openai-adapters/src/test/main.test.ts
@@ -85,6 +85,12 @@ const TESTS: Omit<ModelConfig, "name">[] = [
// roles: ["embed"],
// },
// {
// provider: "morph",
// model: "morph-embedding-v2",
// apiKey: process.env.MORPH_API_KEY!,
// roles: ["embed"],
// },
// {
// provider: "gemini",
// model: "models/text-embedding-004",
// apiKey: process.env.GEMINI_API_KEY!,
Expand All @@ -102,6 +108,12 @@ const TESTS: Omit<ModelConfig, "name">[] = [
// apiKey: process.env.COHERE_API_KEY!,
// roles: ["rerank"],
// },
// {
// provider: "morph",
// model: "morph-rerank-v2",
// apiKey: process.env.MORPH_API_KEY!,
// roles: ["rerank"],
// },
];

TESTS.forEach((config) => {