The Feature
[vLLM](https://github.com/vllm-project/vllm) is an LLM serving framework that exposes an OpenAI-compatible API endpoint. How can I get Helicone working with vLLM?
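Since vLLM speaks the OpenAI chat-completions protocol, one plausible integration path is to send requests through a self-hosted Helicone proxy that forwards them to vLLM. The sketch below builds such a request with only the standard library; the proxy hostname/port, the model name, and the exact Helicone header names (`Helicone-Auth`, `Helicone-Target-Url`) are assumptions for a self-hosted setup and would need to be checked against the Helicone docs.

```python
# Sketch: routing an OpenAI-compatible request through a (hypothetical)
# self-hosted Helicone proxy to a local vLLM server.
import json
import urllib.request

# vLLM's OpenAI-compatible endpoint (default port 8000).
VLLM_URL = "http://localhost:8000/v1/chat/completions"
# Hypothetical address of a self-hosted Helicone proxy.
HELICONE_PROXY = "http://helicone.internal:8787/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a chat-completions request addressed to the Helicone proxy,
    telling it (via header) to forward to the vLLM server."""
    body = json.dumps({
        # Model name is illustrative -- use whatever model vLLM is serving.
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        HELICONE_PROXY,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Helicone-style headers; exact names for a self-hosted
            # gateway are an assumption here.
            "Helicone-Auth": "Bearer <HELICONE_API_KEY>",
            "Helicone-Target-Url": VLLM_URL,
        },
        method="POST",
    )

req = build_request("Hello!")
# urllib.request.urlopen(req) would send it once both services are running.
```

The appeal of this pattern is that application code only changes its base URL: vLLM handles inference, and Helicone transparently logs the request/response pair on the way through.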
Motivation, pitch
For on-prem deployments, vLLM is a great option for secure handling of inference data and full control over the model you use. Given that Helicone can be self-hosted with Docker Compose / K8s, it would be a great complementary service to have on our infra.
Twitter / LinkedIn details
https://www.linkedin.com/in/fpaupier/