6 changes: 6 additions & 0 deletions .env.example
@@ -64,6 +64,12 @@ BEDROCK_API_KEY_1=""
# --- Chutes ---
CHUTES_API_KEY_1="YOUR_CHUTES_API_KEY"

# --- Custom OpenAI-Compatible Providers (vLLM, Ollama, local LLM) ---
# To add a custom provider, define its API base URL and at least one API key.
# Replace 'CUSTOM' with your desired provider name.
# CUSTOM_API_BASE="http://127.0.0.1:8001/v1"
# CUSTOM_API_KEY_1="dummy"


# ------------------------------------------------------------------------------
# | [OAUTH] Provider OAuth 2.0 Credentials |
25 changes: 25 additions & 0 deletions DOCUMENTATION.md
@@ -1363,6 +1363,31 @@ TIMEOUT_POOL=120
---


### 2.19. Custom OpenAI-Compatible Upstreams

The `rotator_library` includes a dynamic provider registration system that allows it to support any OpenAI-compatible API backend without requiring a specific plugin.

#### Dynamic Registration Logic

During initialization, the library scans environment variables for any key ending in `_API_BASE` (excluding built-in providers). For each match, it:
1. Registers a new provider name (derived from the prefix).
2. Creates a dynamic instance of `DynamicOpenAICompatibleProvider`.
3. Maps all requests for this provider to LiteLLM's standard `openai` implementation.
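
A minimal sketch of that scan, with an illustrative exclusion list and helper name (the real logic lives in `src/rotator_library/providers/__init__.py` and populates `PROVIDER_PLUGINS`):

```python
import os

class DynamicOpenAICompatibleProvider:
    """Stand-in for the library's class of the same name."""
    def __init__(self, api_base: str):
        self.api_base = api_base

# Providers that already ship with dedicated plugins are skipped (illustrative list).
BUILTIN_PREFIXES = {"OPENAI", "ANTHROPIC", "GEMINI", "OPENROUTER", "GROQ"}

def discover_custom_providers() -> dict:
    """Scan the environment for *_API_BASE keys and register each match."""
    providers = {}
    for key, value in os.environ.items():
        if not key.endswith("_API_BASE"):
            continue
        prefix = key[: -len("_API_BASE")]
        if prefix in BUILTIN_PREFIXES:
            continue
        # The provider name is derived from the prefix: CUSTOM_API_BASE -> "custom".
        providers[prefix.lower()] = DynamicOpenAICompatibleProvider(api_base=value)
    return providers
```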

#### Implementation Details

- **`src/rotator_library/providers/__init__.py`**: Contains the logic that iterates over `os.environ` and populates `PROVIDER_PLUGINS` with dynamic classes.
- **`src/rotator_library/client.py`**: In `_convert_model_params_for_litellm`, it detects these custom providers and rewrites the `model` parameter to `openai/{model_name}` while injecting the custom `api_base` and setting `custom_llm_provider="openai"` (see the sketch after this list).
- **Model Discovery**: Custom providers delegate model listing to `OpenAICompatibleProvider.get_models()`, which performs an authenticated `GET` request to the upstream's `/v1/models` endpoint.
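
A rough sketch of that rewrite, where `custom_bases` maps each provider name to its configured base URL (the actual function signature in `client.py` may differ):

```python
def convert_model_params(model: str, custom_bases: dict) -> dict:
    """Rewrite 'custom/my-model' into parameters LiteLLM can route."""
    provider, _, model_name = model.partition("/")
    if provider in custom_bases:
        return {
            "model": f"openai/{model_name}",    # reuse LiteLLM's openai implementation
            "api_base": custom_bases[provider],  # inject the custom base URL
            "custom_llm_provider": "openai",
        }
    return {"model": model}  # built-in providers pass through unchanged
```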

#### Usage Requirements

For a custom provider to be active, it must have:
- `{PROVIDER}_API_BASE`: The full URL to the OpenAI-compatible v1 endpoint.
- `{PROVIDER}_API_KEY_1`: At least one API key configured (can be a dummy value if the upstream doesn't require authentication).
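
For example, a local vLLM server could be registered like this (provider name, port, and key value are illustrative):

```
VLLM_API_BASE="http://127.0.0.1:8001/v1"
VLLM_API_KEY_1="dummy"
```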

---

## 3. Provider Specific Implementations
15 changes: 15 additions & 0 deletions README.md
@@ -187,6 +187,21 @@ python -m rotator_library.credential_tool
| **API Keys** | Gemini, OpenAI, Anthropic, OpenRouter, Groq, Mistral, NVIDIA, Cohere, Chutes | Enter key in TUI or add to `.env` |
| **OAuth** | Gemini CLI, Antigravity, Qwen Code, iFlow | Interactive browser login via credential tool |

### 🔌 Custom OpenAI-Compatible Providers (vLLM, Ollama, local LLM)

The proxy supports dynamic registration of any OpenAI-compatible upstream. This allows you to integrate local instances of vLLM, Ollama, or other custom backends without modifying the code.

#### Configuration
Add the following to your `.env` file (replacing `CUSTOM` with your desired provider name):

1. **Base URL:** `CUSTOM_API_BASE="http://127.0.0.1:8001/v1"`
2. **API Key:** `CUSTOM_API_KEY="any-value"` (A non-empty value is required to mark the provider as active)
Contributor Author

I noticed I used `CUSTOM_API_KEY` here, but in `.env.example` I used `CUSTOM_API_KEY_1`. Both forms work because of the key-splitting logic in `main.py`, but we should be consistent. I'll stick with the `_1` suffix to remind users that they can configure multiple keys for rotation.

Suggested change
2. **API Key:** `CUSTOM_API_KEY="any-value"` (A non-empty value is required to mark the provider as active)
2. **API Key:** `CUSTOM_API_KEY_1="any-value"` (A non-empty value is required to mark the provider as active)


#### Usage
Access your custom models using the format: `custom/model-id`.
* **Example:** If using vLLM, set `VLLM_API_BASE` (and at least `VLLM_API_KEY_1`) and call the proxy with model `vllm/llama-3-8b` (see the sketch below).
* **Model Discovery:** The proxy's `/v1/models` endpoint will automatically attempt to fetch and list available models from your custom upstream's `/models` endpoint.
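
For instance, a request through the proxy using the standard `openai` Python client might look like this (a sketch; the proxy's host/port and the model id are assumptions for your deployment):

```python
from openai import OpenAI

# Assumes the proxy is running locally on port 8000 and that VLLM_API_BASE /
# VLLM_API_KEY_1 are set in .env; adjust base_url to match your deployment.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

response = client.chat.completions.create(
    model="vllm/llama-3-8b",  # the provider prefix routes to the custom upstream
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```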

### The `.env` File

Credentials are stored in a `.env` file. You can edit it directly or use the TUI:
11 changes: 10 additions & 1 deletion src/proxy_app/settings_tool.py
@@ -798,7 +798,16 @@ def manage_custom_providers(self):
)
)

self.console.print()
self.console.print(
Panel(
"Register any OpenAI-compatible upstream (vLLM, Ollama, etc.) by defining a base URL.\n"
"Usage: Set [bold]PROVIDER_API_BASE[/bold] here and [bold]PROVIDER_API_KEY[/bold] in credentials.\n"
"Models will be available as [bold]provider/model-id[/bold].",
title="[dim]How it works[/dim]",
border_style="dim",
)
)
Comment on lines +801 to +809
Contributor Author

Added a help panel here so users can see how custom providers work without leaving the settings tool. The panel's wording and styling match the rest of the TUI.


self.console.print("[bold]📋 Configured Custom Providers[/bold]")
self.console.print("━" * 70)
