chore: update actions/upload-artifact to v4 #1736

Open · wants to merge 2 commits into `main`

2 changes: 1 addition & 1 deletion .github/workflows/fulltest.yaml
@@ -70,7 +70,7 @@ jobs:
             exit 1
           fi
       - name: Upload pytest test results
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: pytest-results-${{ matrix.python-version }}
           path: |

2 changes: 1 addition & 1 deletion .github/workflows/unittest.yaml
@@ -48,7 +48,7 @@ jobs:
             exit 1
           fi
       - name: Upload pytest test results
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: pytest-results-${{ matrix.python-version }}
           path: |

21 changes: 21 additions & 0 deletions config/examples/ollama-third-party-wrapper.yaml
@@ -0,0 +1,21 @@
# Example configuration for using Ollama with a third-party URL wrapper
llm:
  api_type: "ollama"  # Using the Ollama provider
  model: "llama2"  # Specify the model name
  base_url: "http://localhost:8989/ollama/api"  # Third-party wrapper URL with /api path
  api_key: "not-needed-for-ollama"  # Ollama doesn't require an API key, but the config needs this field

# Alternative configuration if your wrapper doesn't include /api in the URL
# llm:
#   api_type: "ollama"
#   model: "llama2"
#   base_url: "http://localhost:8989/ollama"  # The code will handle adding /api before /chat
#   api_key: "not-needed-for-ollama"

# You can also use the proxy parameter if needed
# llm:
#   api_type: "ollama"
#   model: "llama2"
#   base_url: "http://localhost:11434"  # Direct Ollama URL
#   proxy: "http://localhost:8989"  # Proxy server
#   api_key: "not-needed-for-ollama"

96 changes: 96 additions & 0 deletions docs/tutorial/ollama_third_party_wrapper.md
@@ -0,0 +1,96 @@
# Using Ollama with Third-Party URL Wrappers

This guide explains how to configure MetaGPT to use Ollama through a third-party URL wrapper.

## Background

Ollama serves its endpoints under the `/api` path prefix:
- `/api/chat` for chat completions
- `/api/generate` for text generation
- `/api/embeddings` for embeddings

When using a third-party URL wrapper or proxy, you need to ensure this path structure is preserved.
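
For example, a direct chat call to a locally running Ollama targets `/api/chat`. A minimal sketch (assuming Ollama listens on its default port 11434 and the `llama2` model has been pulled):

```python
# Minimal sketch: call Ollama's chat endpoint directly.
# Assumes a local Ollama on the default port 11434 with "llama2" pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # note the /api prefix
    json={
        "model": "llama2",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```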

## Configuration Options

### Option 1: Include `/api` in the Base URL (Recommended)

```yaml
llm:
  api_type: "ollama"
  model: "llama2"
  base_url: "http://localhost:8989/ollama/api"  # Note the /api at the end
  api_key: "not-needed-for-ollama"
```

With this configuration, MetaGPT will correctly form URLs like:
- `http://localhost:8989/ollama/api/chat`
- `http://localhost:8989/ollama/api/generate`
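
You can sanity-check the constructed URL without any network traffic by calling the URL helper this PR adds to the provider; a quick sketch using the same imports as the example scripts below:

```python
from metagpt.configs.llm_config import LLMConfig, LLMType
from metagpt.provider.ollama_api import OllamaLLM

config = LLMConfig(
    api_type=LLMType.OLLAMA,
    model="llama2",
    base_url="http://localhost:8989/ollama/api",
    api_key="not-needed-for-ollama",
)
llm = OllamaLLM(config)
# base_url already ends in /api, so the suffix is appended directly
print(llm._get_api_url("/chat"))  # http://localhost:8989/ollama/api/chat
```

If the printed URL is not what your wrapper expects, adjust `base_url` before making real calls.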

### Option 2: Let MetaGPT Handle the `/api` Path

```yaml
llm:
  api_type: "ollama"
  model: "llama2"
  base_url: "http://localhost:8989/ollama"  # No /api at the end
  api_key: "not-needed-for-ollama"
```

MetaGPT will automatically insert `/api` before the specific endpoint, resulting in:
- `http://localhost:8989/ollama/api/chat`
- `http://localhost:8989/ollama/api/generate`

### Option 3: Using the Proxy Parameter

```yaml
llm:
  api_type: "ollama"
  model: "llama2"
  base_url: "http://localhost:11434"  # Direct Ollama URL
  proxy: "http://localhost:8989"  # Proxy server
  api_key: "not-needed-for-ollama"
```

Note: The proxy parameter is passed to the HTTP client but may not work with all wrapper configurations.
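
For illustration, this is the general mechanism a proxy parameter drives in an HTTP client; a minimal sketch using aiohttp, not necessarily MetaGPT's exact internals:

```python
# Sketch: route a request through an HTTP proxy with aiohttp.
# Illustrates the mechanism only; MetaGPT's internal client may differ.
import asyncio
import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:11434/api/chat",  # direct Ollama URL
            json={"model": "llama2", "messages": [], "stream": False},
            proxy="http://localhost:8989",  # the wrapper/proxy server
        ) as resp:
            print(await resp.json())

asyncio.run(main())
```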

## Troubleshooting

If you encounter a 404 error, check that:

1. Your wrapper correctly forwards requests to Ollama
2. The URL structure includes `/api` before the specific endpoint (e.g., `/chat`, `/generate`)
3. Your wrapper preserves the complete path when forwarding requests (a quick probe, sketched below, can confirm this)
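
One way to verify points 1 and 3 is to probe Ollama's `/api/tags` endpoint (it lists locally available models) both directly and through the wrapper; a minimal sketch, assuming the wrapper runs on port 8989 and exposes Ollama under `/ollama`:

```python
# Minimal probe: hit Ollama's /api/tags endpoint directly and via the
# wrapper. Assumes the wrapper listens on port 8989 under /ollama.
import requests

for name, url in [
    ("direct", "http://localhost:11434/api/tags"),
    ("wrapper", "http://localhost:8989/ollama/api/tags"),
]:
    try:
        r = requests.get(url, timeout=5)
        print(f"{name}: HTTP {r.status_code}")
    except requests.RequestException as e:
        print(f"{name}: request failed: {e}")
```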

## Example Wrapper Configuration

If you're implementing a wrapper for Ollama, ensure it correctly handles the path structure:

```javascript
// Example Node.js proxy for Ollama, using express + http-proxy-middleware
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();
app.use('/ollama', createProxyMiddleware({
  target: 'http://localhost:11434',
  pathRewrite: {
    '^/ollama': '/api', // Rewrite /ollama to /api: /ollama/chat -> /api/chat
  },
  changeOrigin: true,
}));
app.listen(8989);
```

With this rewrite, a request to `/ollama/chat` is forwarded as `/api/chat`, which suits clients that send paths without `/api`. Alternatively, to pass the complete path through (this matches the configuration options above):

```javascript
// Pass through the complete path, stripping only the /ollama prefix
app.use('/ollama', createProxyMiddleware({
  target: 'http://localhost:11434',
  pathRewrite: {
    '^/ollama': '', // Remove /ollama prefix: /ollama/api/chat -> /api/chat
  },
  changeOrigin: true,
}));
```

## Related Configuration Files

For a complete example configuration, see:
- `config/examples/ollama-third-party-wrapper.yaml`
71 changes: 71 additions & 0 deletions examples/ollama_url_test.py
@@ -0,0 +1,71 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : Simple test for Ollama URL construction

def get_api_url(base_url, suffix):
    """
    Ensure the API URL is correctly formed by handling both direct Ollama URLs and third-party wrappers.
    For direct Ollama, the URL should be: base_url + /api + suffix.
    For wrappers, we need to check if /api is already in the base_url.
    """
    base_url = base_url.rstrip('/')

    # If base_url already ends with /api, just append the suffix
    if base_url.endswith('/api'):
        return f"{base_url}{suffix}"

    # If base_url contains /api/ somewhere in the middle (like in a wrapper URL),
    # we should just append the suffix directly
    if '/api/' in base_url:
        return f"{base_url}{suffix}"

    # For a standard Ollama URL, insert /api before the suffix
    return f"{base_url}/api{suffix}"


def test_url_construction():
    """Test URL construction with different base URLs."""
    # Test cases
    test_cases = [
        {
            "name": "Direct Ollama URL",
            "base_url": "http://localhost:11434",
            "suffix": "/chat",
            "expected": "http://localhost:11434/api/chat",
        },
        {
            "name": "Wrapper URL with /api at end",
            "base_url": "http://localhost:8989/ollama/api",
            "suffix": "/chat",
            "expected": "http://localhost:8989/ollama/api/chat",
        },
        {
            "name": "Wrapper URL without /api",
            "base_url": "http://localhost:8989/ollama",
            "suffix": "/chat",
            "expected": "http://localhost:8989/ollama/api/chat",
        },
        {
            "name": "Wrapper URL with /api/ in middle",
            "base_url": "http://localhost:8989/api/ollama",
            "suffix": "/chat",
            "expected": "http://localhost:8989/api/ollama/chat",
        },
    ]

    # Run tests
    print("Testing Ollama URL construction...")
    print("-" * 50)

    for case in test_cases:
        result = get_api_url(case["base_url"], case["suffix"])

        print(f"Test: {case['name']}")
        print(f"Base URL: {case['base_url']}")
        print(f"Suffix: {case['suffix']}")
        print(f"Result URL: {result}")
        print(f"Expected: {case['expected']}")
        print(f"{'✅ PASS' if result == case['expected'] else '❌ FAIL'}")
        print("-" * 50)


if __name__ == "__main__":
    test_url_construction()

70 changes: 70 additions & 0 deletions examples/ollama_wrapper_test.py
@@ -0,0 +1,70 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Desc : Test script for Ollama with third-party URL wrapper

import asyncio
import os
import sys

# Add the project root to the path so we can import metagpt
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from metagpt.configs.llm_config import LLMConfig, LLMType
from metagpt.provider.ollama_api import OllamaLLM


async def test_ollama_wrapper():
    """Test Ollama with a third-party URL wrapper."""
    print("Testing Ollama with third-party URL wrapper...")

    # Configuration for direct Ollama (for comparison)
    direct_config = LLMConfig(
        api_type=LLMType.OLLAMA,
        model="llama2",  # Change to a model you have installed
        base_url="http://localhost:11434",
        api_key="not-needed-for-ollama",
    )

    # Configuration for Ollama with wrapper (with /api in the URL)
    wrapper_config_with_api = LLMConfig(
        api_type=LLMType.OLLAMA,
        model="llama2",  # Change to a model you have installed
        base_url="http://localhost:8989/ollama/api",  # Your wrapper URL with /api
        api_key="not-needed-for-ollama",
    )

    # Configuration for Ollama with wrapper (without /api in the URL)
    wrapper_config_without_api = LLMConfig(
        api_type=LLMType.OLLAMA,
        model="llama2",  # Change to a model you have installed
        base_url="http://localhost:8989/ollama",  # Your wrapper URL without /api
        api_key="not-needed-for-ollama",
    )

    # Choose which configuration to test
    # config = direct_config
    config = wrapper_config_with_api
    # config = wrapper_config_without_api

    # Initialize the Ollama LLM
    ollama = OllamaLLM(config)

    # Test the URL construction; note that _get_api_url already returns the
    # full request URL (base_url + /api + suffix), so don't prefix it again
    api_url = ollama._get_api_url(ollama.ollama_message.api_suffix)
    print(f"Base URL: {config.base_url}")
    print(f"API suffix: {ollama.ollama_message.api_suffix}")
    print(f"Constructed API URL: {api_url}")

    # Uncomment to test an actual API call
    # try:
    #     messages = [{"role": "user", "content": "Hello, how are you?"}]
    #     response = await ollama.acompletion(messages)
    #     print("\nAPI Response:")
    #     print(response)
    # except Exception as e:
    #     print(f"\nError during API call: {e}")


if __name__ == "__main__":
    asyncio.run(test_ollama_wrapper())

29 changes: 26 additions & 3 deletions metagpt/provider/ollama_api.py
@@ -218,10 +218,31 @@ def __init_ollama(self, config: LLMConfig):
     def get_usage(self, resp: dict) -> dict:
         return {"prompt_tokens": resp.get("prompt_eval_count", 0), "completion_tokens": resp.get("eval_count", 0)}
 
+    def _get_api_url(self, suffix: str) -> str:
+        """
+        Ensure the API URL is correctly formed by handling both direct Ollama URLs and third-party wrappers.
+        For direct Ollama, the URL should be: base_url + /api + suffix.
+        For wrappers, we need to check if /api is already in the base_url.
+        """
+        base_url = self.config.base_url.rstrip('/')
+
+        # If base_url already ends with /api, just append the suffix
+        if base_url.endswith('/api'):
+            return f"{base_url}{suffix}"
+
+        # If base_url contains /api/ somewhere in the middle (like in a wrapper URL),
+        # we should just append the suffix directly
+        if '/api/' in base_url:
+            return f"{base_url}{suffix}"
+
+        # For a standard Ollama URL, insert /api before the suffix
+        return f"{base_url}/api{suffix}"
+
     async def _achat_completion(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> dict:
+        api_url = self._get_api_url(self.ollama_message.api_suffix)
         resp, _, _ = await self.client.arequest(
             method=self.http_method,
-            url=self.ollama_message.api_suffix,
+            url=api_url,
             params=self.ollama_message.apply(messages=messages),
             request_timeout=self.get_timeout(timeout),
         )
@@ -239,9 +260,10 @@ async def acompletion(self, messages: list[dict], timeout=USE_CONFIG_TIMEOUT) ->
         return await self._achat_completion(messages, timeout=self.get_timeout(timeout))
 
     async def _achat_completion_stream(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> str:
+        api_url = self._get_api_url(self.ollama_message.api_suffix)
         resp, _, _ = await self.client.arequest(
             method=self.http_method,
-            url=self.ollama_message.api_suffix,
+            url=api_url,
             params=self.ollama_message.apply(messages=messages),
             request_timeout=self.get_timeout(timeout),
             stream=True,
@@ -305,9 +327,10 @@ def _llama_embedding_key(self) -> str:
return "embedding"

async def _achat_completion(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> dict:
api_url = self._get_api_url(self.ollama_message.api_suffix)
resp, _, _ = await self.client.arequest(
method=self.http_method,
url=self.ollama_message.api_suffix,
url=api_url,
params=self.ollama_message.apply(messages=messages),
request_timeout=self.get_timeout(timeout),
)