miles includes an optional Miles Router used during rollout / data generation. It is a lightweight HTTP router/proxy that sits in front of one or more SGLang worker servers and adds training-oriented capabilities that are not the main goal of serving-focused routers.
Miles Router is a small FastAPI service that:
- Registers workers (SGLang HTTP servers) into a local pool
- Routes requests to a selected worker (simple least-inflight load balancing)
- Proxies arbitrary paths to the selected worker (e.g. `/generate`)
- Runs periodic health checks and quarantines unhealthy workers
- Supports middleware plugins (via `--miles-router-middleware-paths`) to implement rollout-specific processing (e.g. caching, request/response transforms)
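The least-inflight selection above can be sketched in a few lines. This is an illustrative toy, not Miles Router's actual internals; the `Worker` fields and `pick_worker` name are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    url: str
    inflight: int = 0     # requests currently being proxied to this worker
    healthy: bool = True  # flipped by the periodic health checker

def pick_worker(pool):
    # Choose the healthy worker with the fewest in-flight requests;
    # quarantined (unhealthy) workers are skipped entirely.
    candidates = [w for w in pool if w.healthy]
    if not candidates:
        raise RuntimeError("no healthy workers registered")
    return min(candidates, key=lambda w: w.inflight)

pool = [Worker("http://w0:30000", inflight=3),
        Worker("http://w1:30000", inflight=1),
        Worker("http://w2:30000", inflight=0, healthy=False)]
print(pick_worker(pool).url)  # -> http://w1:30000
```

Note that w2 has zero in-flight requests but is quarantined, so the router falls back to the least-loaded healthy worker.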
In miles's architecture, the router is part of the rollout system ("SGLang + router") that generates samples and pushes them into the data buffer.
In distributed training, miles starts a router automatically when `--sglang-router-ip` is not provided:
- If `--use-miles-router` is set, miles starts Miles Router
- Otherwise, miles starts SGLang Model Gateway
Unlike production inference, RL rollout needs to capture additional metadata for training: token-level logprobs, loss masks, and (for MoE models) expert routing decisions. Miles Router provides these capabilities through its middleware system and passthrough proxy design.
Use the radix-tree trajectory cache when your rollout pipeline is text-in/text-out and you cannot reliably persist token IDs; if you already control token-in/token-out (e.g. search r1, multiturn VLM examples), you likely don't need it.
Text-in/text-out interfaces can cause retokenization mismatches: re-tokenizing text at training time may produce a different token sequence than the one sampled at rollout, breaking the per-token alignment needed for PPO/GRPO losses.
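The mismatch is easy to demonstrate with a toy vocabulary: two different token sequences can detokenize to the same text, so a greedy re-tokenization need not reproduce the rollout's tokens. The vocabulary and tokenizer below are made up purely for illustration.

```python
# Toy longest-match-first tokenizer over a tiny made-up vocabulary.
VOCAB = ["ab", "bc", "a", "b", "c"]

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        for piece in VOCAB:
            if text.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            raise ValueError(f"untokenizable input at position {i}")
    return tokens

rollout_tokens = ["a", "bc"]        # what the model actually sampled
text = "".join(rollout_tokens)      # a text-out interface keeps only this
print(tokenize(text))               # -> ['ab', 'c']  != rollout_tokens
```

The detokenized text round-trips fine, but the recovered token sequence differs from the sampled one, so any per-token logprobs or loss masks indexed by the rollout tokens no longer line up.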
The radix-tree cache solves this transparently: it intercepts text-based requests, tokenizes them, and stores trajectories (text, token IDs, logprobs, loss masks) keyed by the text prefix. After rollout finishes, calling `/retrieve_from_text` returns the exact token sequence with aligned metadata, without requiring any changes to your rollout code.
Implementation-wise, the radix-tree cache:
- Accepts text plus tokens/metadata and stores them in a radix tree
- Uses longest-prefix matching to reuse cached token sequences (enabling token-in/token-out downstream)
- Allows insertion of new text continuations as rollout proceeds (multiple trajectories per prompt, e.g. GRPO)
- Periodically cleans up stale nodes to control memory usage
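The cache behavior described above can be sketched with a simplified in-memory store. The real implementation uses a radix tree; here a linear longest-prefix scan stands in for it, and all class and method names are illustrative, not miles's actual API.

```python
class TrajectoryCache:
    """Toy trajectory store: text -> (tokens, logprobs, loss_mask)."""

    def __init__(self):
        self._entries = {}

    def insert(self, text, tokens, logprobs, loss_mask):
        # Multiple trajectories may share a prompt prefix (e.g. GRPO);
        # each full text gets its own entry.
        self._entries[text] = {
            "tokens": tokens, "logprobs": logprobs, "loss_mask": loss_mask,
        }

    def retrieve_from_text(self, text):
        # Longest-prefix match: return the entry whose stored text is the
        # longest prefix of (or equal to) the query text.
        best = None
        for key, entry in self._entries.items():
            if text.startswith(key) and (best is None or len(key) > len(best[0])):
                best = (key, entry)
        return best

cache = TrajectoryCache()
cache.insert("Q: 2+2?\nA: 4",
             tokens=[101, 17, 4],
             logprobs=[-0.1, -0.2, -0.05],
             loss_mask=[0, 1, 1])
key, entry = cache.retrieve_from_text("Q: 2+2?\nA: 4")
print(entry["tokens"])  # the exact rollout token IDs, no retokenization
```

Because the lookup returns the stored token IDs rather than re-tokenizing the text, the per-token logprobs and loss mask stay aligned with what the worker actually generated.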
Use the radix cache when you have text-based rollout code and want token-level precision without rewriting, or when running GRPO with multiple trajectories sharing the same prompt prefix.
For MoE models, miles supports rollout routing replay (R3): record expert routing decisions during rollout and replay them during training to improve stability.
SGLang provides expert routing capture via:
- `--enable-return-routed-experts`: server argument to enable routing capture
- `RoutedExpertsCapturer`: captures `topk_ids` (selected expert IDs) at each MoE layer during the forward pass
- `return_routed_experts`: request parameter to retrieve routing data
- Returns `routed_experts` in response `meta_info`, a `[seq_len - 1, num_layers, top_k]` tensor of expert IDs
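To make the payload shape concrete, here is a small sketch that builds a nested list with the `[seq_len - 1, num_layers, top_k]` layout described above. The expert IDs are random placeholders; a real response carries the IDs selected during the worker's forward pass.

```python
import random

seq_len, num_layers, top_k, num_experts = 5, 2, 2, 8

# One top-k tuple of expert IDs per MoE layer, per next-token position.
routed_experts = [
    [[random.randrange(num_experts) for _ in range(top_k)]
     for _ in range(num_layers)]
    for _ in range(seq_len - 1)  # seq_len - 1: one entry per prediction step
]

# Shape check: [seq_len - 1, num_layers, top_k]
assert len(routed_experts) == seq_len - 1
assert len(routed_experts[0]) == num_layers
assert len(routed_experts[0][0]) == top_k
```

The leading dimension is `seq_len - 1` because routing decisions are recorded for each next-token prediction, not for each token in the final sequence.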
miles consumes the routing data and replays it during training:
- `--use-miles-router --use-rollout-routing-replay`: both flags are required to enable R3
- Rollout sends `return_routed_experts=True` and stores results in `sample.rollout_routed_experts`
- Training calls `fill_routing_replay()` to load routing data into `RoutingReplay` objects
- During the forward pass, recorded routing decisions are replayed instead of recomputed
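The record-then-replay flow can be sketched as follows. This is a minimal illustration of the idea, not miles's actual `RoutingReplay` implementation; the gate here is a plain top-k over scores.

```python
class RoutingReplay:
    """Toy replay buffer: stores top-k expert IDs per routing step."""

    def __init__(self):
        self.recorded = []  # one top-k ID list per routing decision
        self._cursor = 0

    def record(self, topk_ids):
        self.recorded.append(topk_ids)

    def replay(self):
        ids = self.recorded[self._cursor]
        self._cursor += 1
        return ids

def moe_route(gate_scores, replay=None, top_k=2):
    if replay is not None:
        # Training with R3: reuse the rollout's routing decision verbatim,
        # ignoring whatever the (possibly drifted) gate would now compute.
        return replay.replay()
    # Rollout: pick the top-k scoring experts.
    ranked = sorted(range(len(gate_scores)), key=gate_scores.__getitem__,
                    reverse=True)
    return ranked[:top_k]

# Rollout: the gate selects experts, and the selection is recorded.
replay = RoutingReplay()
replay.record(moe_route([0.1, 0.7, 0.2, 0.9]))
# Training: the recorded experts are replayed even though the scores differ.
print(moe_route([0.9, 0.1, 0.1, 0.1], replay=replay))  # -> [3, 1]
```

Replaying the rollout's expert choices keeps the training-time forward pass consistent with the trajectories that produced the rewards, which is the stability benefit R3 targets.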
We need Miles Router because the SGLang worker returns routed experts in the response (`meta_info.routed_experts`) when the request sets `return_routed_experts=true`, and Miles Router preserves this field end-to-end. SGLang Model Gateway may drop this extra metadata when it reconstructs responses with a fixed schema (see section 3).
Miles Router and SGLang Model Gateway can both route requests to workers, but they are optimized for different goals.
Miles Router is a lightweight Python/FastAPI proxy that acts as a passthrough to SGLang workers. This passthrough design enables RL-specific features like radix-tree trajectory caching and R3 (which require preserving raw response metadata like routed_experts).
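The difference can be illustrated with two toy response handlers. The worker response mirrors the fields described above; the "gateway" schema is a simplified stand-in, not the actual SGLang Model Gateway code.

```python
worker_response = {
    "text": "4",
    "meta_info": {
        "completion_tokens": 1,
        "routed_experts": [[[3, 1]]],  # [seq_len - 1, num_layers, top_k]
    },
}

def passthrough_proxy(resp):
    # Miles Router style: forward the worker's JSON body untouched,
    # so any extra metadata (like routed_experts) survives.
    return resp

def fixed_schema_gateway(resp):
    # Gateway style: rebuild the response from a known schema, keeping
    # only the fields the schema declares.
    return {"text": resp["text"],
            "meta_info": {"completion_tokens":
                          resp["meta_info"]["completion_tokens"]}}

print("routed_experts" in passthrough_proxy(worker_response)["meta_info"])     # True
print("routed_experts" in fixed_schema_gateway(worker_response)["meta_info"])  # False
```

A passthrough proxy never needs to know about new metadata fields, which is why Miles Router can support R3 without schema changes.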
SGLang Model Gateway is a high-performance Rust-based router optimized for large-scale inference: async non-blocking routing, advanced fault tolerance (retries, circuit breakers), multiple load balancing policies (including cache-aware routing), and PD disaggregation support. However, it reconstructs responses with a fixed schema, so it does not preserve the metadata needed for miles's R3 flow.
For more details on SGLang Model Gateway, see the official documentation.
- Use Miles Router when you need R3 or radix-tree caching
- Use SGLang Model Gateway for everything else (recommended default)