Is TLS for the GRPC endpoint supported? #6469
Hey @flomader, can you verify that
This is my config:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phoenix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phoenix
  template:
    metadata:
      labels:
        app: phoenix
    spec:
      containers:
        - name: phoenix
          image: arizephoenix/phoenix:latest
          ports:
            - containerPort: 6006
            - containerPort: 4317
          env:
            - name: PHOENIX_SQL_DATABASE_URL
              value: "postgresql://..."
            - name: PHOENIX_PORT
              value: "6006"
            - name: PHOENIX_GRPC_PORT
              value: "4317"
---
apiVersion: v1
kind: Service
metadata:
  name: phoenix
  labels:
    app: phoenix
spec:
  selector:
    app: phoenix
  ports:
    - name: http
      protocol: TCP
      port: 6006
      targetPort: 6006
    - name: grpc
      protocol: TCP
      port: 4317
      targetPort: 4317
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phoenix
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
    - hosts:
        - phoenixdev.xxxxx.internal
      secretName: phoenixdev-tls-cert
  rules:
    - host: phoenixdev.xxxxx.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: phoenix
                port:
                  number: 6006
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phoenix-grpc
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
    - host: "grpc.phoenixdev.xxxxx.internal"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: phoenix
                port:
                  number: 4317
```

In order to verify that the ingress controller supports gRPC over TLS, I also tested a different application (in the same namespace and with the same ingress controller):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rand
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rand
  template:
    metadata:
      labels:
        app: rand
    spec:
      containers:
        - name: rand
          image: ghcr.io/s1ntaxe770r/randrpc-server:v1.5
          ports:
            - containerPort: 7070
---
apiVersion: v1
kind: Service
metadata:
  name: rand
  labels:
    app: rand
spec:
  selector:
    app: rand
  ports:
    - name: grpc
      protocol: TCP
      port: 7070
      targetPort: 7070
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rand
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
    - hosts:
        - phoenixdev.xxxxx.internal
      secretName: phoenixdev-tls-cert
  rules:
    - host: phoenixdev.xxxxx.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rand
                port:
                  number: 7070
```

grpcurl returned a successful response in this case:
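As a rough way to reproduce part of what grpcurl verifies without installing it — whether the TLS ingress actually negotiates HTTP/2 via ALPN, which gRPC requires — here is a minimal sketch using only the Python standard library (the host name is a placeholder from this thread, and this is an illustrative check, not an official diagnostic):

```python
import socket
import ssl
from typing import Optional


def negotiated_alpn(host: str, port: int = 443) -> Optional[str]:
    """Open a TLS connection and return the ALPN protocol the server selected.

    gRPC runs over HTTP/2, so an ingress that answers TLS but never
    negotiates "h2" will fail gRPC clients with errors like UNAVAILABLE
    even though plain HTTPS requests to the same host succeed.
    """
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()


# Example (placeholder host from the thread; expect "h2" for a working gRPC ingress):
# negotiated_alpn("grpc.phoenixdev.xxxxx.internal")
```

A result of `"http/1.1"` (or `None`) from the gRPC host would point at the ingress rather than at Phoenix.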
Thanks so much for the thorough testing @flomader. We'll take a closer look.

Hey @flomader, it looks like you are trying to deploy Phoenix behind a reverse proxy that terminates TLS. Just want to double-check that that is your goal. It's not currently supported to configure the Phoenix server itself with certs and keys, although I think that is a reasonable feature request.
Hey @flomader, I deployed an instance of Phoenix on GKE and got gRPC traces running with a slightly modified version of your config. Here's what wound up working for me:

Manifest:

Client:

```python
import grpc
import openai
from openinference.instrumentation.openai import OpenAIInstrumentor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# endpoint = "http://phoenixdev.xxxxx.internal/v1/traces"
endpoint = "grpcs://grpc.phoenixdev.xxxxx.internal:443"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(
        OTLPSpanExporter(
            endpoint,
            insecure=False,
            credentials=grpc.ssl_channel_credentials(open("examples/server.crt", "rb").read()),
        )
    )
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)

if __name__ == "__main__":
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")
```

It's difficult to say what the issue might be without having access to your cluster and client to debug. A few things to check:

If I had to guess, I would guess that there is either an issue with the client request or with the gRPC ingress. Going to close out this issue for now because it does not seem to be an issue with Phoenix. Please feel free to re-open or follow up in the thread.
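The client above switches to TLS via the `grpcs://` scheme together with `insecure=False` and explicit channel credentials. As an illustrative sketch of that scheme-to-security convention (a hypothetical helper, not the exporter's actual endpoint parsing):

```python
from urllib.parse import urlparse


def is_secure_endpoint(endpoint: str) -> bool:
    """Treat https/grpcs endpoints as TLS and http/grpc as plaintext.

    Hypothetical helper mirroring the convention used in the client
    snippet above; real OTLP exporters do their own endpoint handling.
    """
    return urlparse(endpoint).scheme.lower() in ("https", "grpcs")


print(is_secure_endpoint("grpcs://grpc.phoenixdev.xxxxx.internal:443"))  # True
print(is_secure_endpoint("http://phoenixdev.xxxxx.internal/v1/traces"))  # False
```

The key point is that the scheme, the `insecure` flag, and the credentials must agree: a plaintext endpoint with TLS credentials (or vice versa) typically surfaces as `UNAVAILABLE`.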
When exposing the gRPC endpoint via an ingress controller (NGINX), I can only access it when TLS is disabled. Mapping the Phoenix port 4317 to 443 on the ingress controller causes the trace export to fail:

```
Transient error StatusCode.UNAVAILABLE encountered while exporting traces to grpc.phoenixdev.xxxxx.internal:443, retrying in 1s.
```

Exposing the gRPC endpoint on port 80, as well as exposing the HTTP endpoint on port 443, works fine. Is TLS for the gRPC endpoint supported?
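The "retrying in 1s" wording in that error suggests the exporter retries with an exponential backoff schedule. A minimal sketch of such a schedule (illustrative only; not the OTLP exporter's actual retry implementation):

```python
def backoff_schedule(max_retries: int = 5, base: float = 1.0, cap: float = 64.0) -> list:
    """Exponential backoff delays: base, 2*base, 4*base, ... capped at `cap` seconds."""
    return [min(cap, base * (2 ** i)) for i in range(max_retries)]


print(backoff_schedule(4))  # [1.0, 2.0, 4.0, 8.0]
```

Because the delays grow quickly, repeated `UNAVAILABLE` logs like the one above usually mean a persistent connection problem (TLS handshake or HTTP/2 negotiation), not transient load.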