This repository provides a proxy server that emulates Ollama's REST API but forwards requests to OpenRouter. It uses the sashabaranov/go-openai library under the hood, keeping the Ollama-facing API calls unchanged with minimal code. This lets you use Ollama-compatible tooling and clients while running your requests on OpenRouter-managed models. Currently, it is sufficient for use with the JetBrains AI Assistant.
- Ollama-like API: The server listens on port `8080` and exposes endpoints similar to Ollama's (e.g., `/api/chat`, `/api/tags`).
- Model Listing: Fetch a list of available models from OpenRouter.
- Model Details: Retrieve metadata about a specific model.
- Streaming Chat: Forward streaming responses from OpenRouter in a chunked JSON format that is compatible with Ollama’s expectations.
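
For example, a streaming chat request to the proxy returns newline-delimited JSON chunks shaped like Ollama's `/api/chat` responses. The model ID and output below are illustrative, not verbatim proxy output:

```bash
curl http://localhost:8080/api/chat -d '{
  "model": "openai/gpt-4o",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": true
}'
# Illustrative output, following Ollama's chunk shape:
# {"model":"openai/gpt-4o","created_at":"...","message":{"role":"assistant","content":"Hel"},"done":false}
# {"model":"openai/gpt-4o","created_at":"...","message":{"role":"assistant","content":"lo!"},"done":false}
# {"model":"openai/gpt-4o","created_at":"...","message":{"role":"assistant","content":""},"done":true}
```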
You can provide your OpenRouter (OpenAI-compatible) API key through an environment variable or a command-line argument:
```bash
# Option 1: environment variable
export OPENAI_API_KEY="your-openrouter-api-key"
./ollama-proxy

# Option 2: pass the key as the first command-line argument
./ollama-proxy "your-openrouter-api-key"
```
Once running, the proxy listens on port `8080`. You can make requests to `http://localhost:8080` with your Ollama-compatible tooling.
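
To verify the proxy is up, you can list the models it fetched from OpenRouter:

```bash
# Should return an Ollama-style tags listing of OpenRouter models
curl http://localhost:8080/api/tags
```

Clients such as the JetBrains AI Assistant can then be pointed at `http://localhost:8080` wherever they expect an Ollama server URL.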
- Clone the Repository:

  ```bash
  git clone https://github.com/your-username/ollama-openrouter-proxy.git
  cd ollama-openrouter-proxy
  ```

- Install Dependencies:

  ```bash
  go mod tidy
  ```

- Build:

  ```bash
  go build -o ollama-proxy
  ```
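
After building, start the proxy with your OpenRouter key, as described in the usage section above:

```bash
export OPENAI_API_KEY="your-openrouter-api-key"
./ollama-proxy
```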