
Is TLS for the GRPC endpoint supported? #6469

Closed
flomader opened this issue Feb 20, 2025 · 5 comments
Labels
question Further information is requested

Comments


flomader commented Feb 20, 2025

When exposing the gRPC endpoint via an ingress controller (NGINX), I can only access it when TLS is disabled.
Mapping the Phoenix port 4317 to 443 on the ingress controller causes the trace export to fail:

Transient error StatusCode.UNAVAILABLE encountered while exporting traces to grpc.phoenixdev.xxxxx.internal:443, retrying in 1s.

Exposing the gRPC endpoint on port 80 works fine:

tracer_provider = register(
  project_name="my-llm-app",
  endpoint="http://grpc.phoenixdev.xxxxx.internal:80",
  protocol="grpc",
)

Exposing the HTTP endpoint on port 443 also works fine:

tracer_provider = register(
  project_name="my-llm-app",
  endpoint="https://phoenixdev.xxxxx.internal/v1/traces",
  protocol="http/protobuf",
)

Is TLS for the GRPC endpoint supported?

@github-project-automation github-project-automation bot moved this to 📘 Todo in phoenix Feb 20, 2025
@dosubot dosubot bot added the question Further information is requested label Feb 20, 2025
@axiomofjoy
Contributor

Hey @flomader, can you verify that nginx is configured to allow TLS on gRPC? Feel free to drop your config here.

@flomader
Author

This is my config:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: phoenix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phoenix
  template:
    metadata:
      labels:
        app: phoenix
    spec:
      containers:
      - name: phoenix
        image: arizephoenix/phoenix:latest
        ports:
        - containerPort: 6006
        - containerPort: 4317
        env:
        - name: PHOENIX_SQL_DATABASE_URL
          value: "postgresql://..."
        - name: PHOENIX_PORT
          value: "6006"
        - name: PHOENIX_GRPC_PORT
          value: "4317"
---
apiVersion: v1
kind: Service
metadata:
  name: phoenix
  labels:
    app: phoenix
spec:
  selector:
    app: phoenix
  ports:
   - name: http
     protocol: TCP
     port: 6006
     targetPort: 6006
   - name: grpc
     protocol: TCP
     port: 4317
     targetPort: 4317
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phoenix
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
  - hosts:
    - phoenixdev.xxxxx.internal
    secretName: phoenixdev-tls-cert
  rules:
  - host: phoenixdev.xxxxx.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phoenix
            port:
              number: 6006
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phoenix-grpc
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  rules:
  - host: "grpc.phoenixdev.xxxxx.internal"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phoenix
            port:
              number: 4317

To verify that the ingress controller supports gRPC over TLS, I also tested a different application (in the same namespace and with the same ingress controller):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rand
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rand
  template:
    metadata:
      labels:
        app: rand
    spec:
      containers:
      - name: rand
        image: ghcr.io/s1ntaxe770r/randrpc-server:v1.5
        ports:
        - containerPort: 7070
---
apiVersion: v1
kind: Service
metadata:
  name: rand
  labels:
    app: rand
spec:
  selector:
    app: rand
  ports:
   - name: grpc
     protocol: TCP
     port: 7070
     targetPort: 7070
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rand
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: webapprouting.kubernetes.azure.com
  tls:
  - hosts:
    - phoenixdev.xxxxx.internal
    secretName: phoenixdev-tls-cert
  rules:
  - host: phoenixdev.xxxxx.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: rand
            port:
              number: 7070

grpcurl returned a successful response in this case:

grpcurl -d "{"""min""": 10, """max""": 100}" phoenixdev.xxxxx.internal:443 randrpc.RandService.Rand
{
  "rand": 65
}

@axiomofjoy
Contributor

Thanks so much for the thorough testing @flomader. We'll take a closer look.

@axiomofjoy
Contributor

axiomofjoy commented Feb 23, 2025

Hey @flomader, it looks like you are trying to deploy Phoenix behind a reverse proxy that terminates TLS. Just want to double-check that this is your goal.

It's not currently supported to configure the Phoenix server itself with certs and keys, although I think that is a reasonable feature request.
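For readers landing here: since TLS terminates at the proxy in this kind of setup, the equivalent raw nginx configuration would look roughly like the sketch below. This is hypothetical and not the ingress-nginx annotation form used in this thread; the certificate paths and the `phoenix` upstream name are assumptions.

```
# Sketch: terminate TLS at nginx and forward plaintext gRPC (HTTP/2)
# to the Phoenix collector port. Certificate paths are placeholders.
server {
    listen 443 ssl http2;
    server_name grpc.phoenixdev.xxxxx.internal;

    ssl_certificate     /etc/nginx/tls/tls.crt;
    ssl_certificate_key /etc/nginx/tls/tls.key;

    location / {
        # grpc:// (not grpcs://), because Phoenix itself speaks plaintext
        grpc_pass grpc://phoenix:4317;
    }
}
```

The key point is the asymmetry: the client-facing side is TLS, while the `grpc_pass` upstream stays plaintext, matching the fact that the Phoenix server cannot be configured with certs itself.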

@axiomofjoy
Contributor

Hey @flomader, I deployed an instance of Phoenix on GKE and got gRPC traces running with a slightly modified version of your config. Here's what wound up working for me:

Manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: phoenix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phoenix
  template:
    metadata:
      labels:
        app: phoenix
    spec:
      containers:
      - name: phoenix
        image: arizephoenix/phoenix:latest
        ports:
        - containerPort: 6006
        - containerPort: 4317
        env:
        - name: PHOENIX_PORT
          value: "6006"
        - name: PHOENIX_GRPC_PORT
          value: "4317"
---
apiVersion: v1
kind: Service
metadata:
  name: phoenix
  labels:
    app: phoenix
spec:
  type: NodePort
  selector:
    app: phoenix
  ports:
   - name: http
     protocol: TCP
     port: 6006
     targetPort: 6006
   - name: grpc
     protocol: TCP
     port: 4317
     targetPort: 4317
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phoenix
spec:
  ingressClassName: nginx
  rules:
  - host: phoenixdev.xxxxx.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phoenix
            port:
              number: 6006
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: phoenix-grpc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grpc.phoenixdev.xxxxx.internal
    secretName: phoenix-tls
  rules:
  - host: grpc.phoenixdev.xxxxx.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: phoenix
            port:
              number: 4317

Client:

import grpc
import openai
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

from openinference.instrumentation.openai import OpenAIInstrumentor

# endpoint = "http://phoenixdev.xxxxx.internal/v1/traces"
endpoint = "grpcs://grpc.phoenixdev.xxxxx.internal:443"
tracer_provider = trace_sdk.TracerProvider()
tracer_provider.add_span_processor(
    SimpleSpanProcessor(
        OTLPSpanExporter(
            endpoint,
            insecure=False,
            credentials=grpc.ssl_channel_credentials(open("examples/server.crt", "rb").read()),
        )
    )
)

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)


if __name__ == "__main__":
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Write a haiku."}],
        max_tokens=20,
        stream=True,
        stream_options={"include_usage": True},
    )
    for chunk in response:
        if chunk.choices and (content := chunk.choices[0].delta.content):
            print(content, end="")

It's difficult to say what the issue might be without having access to your cluster and client to debug. A few things to check:

  • Check that you are explicitly specifying port 443 in your client endpoint.
  • Check your ingress controller logs to make sure that the request is making it past the proxy.

If I had to guess, the problem is either with the client request or with the gRPC ingress.
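A couple of concrete checks along those lines (the hostnames are the placeholders from this thread, and the controller namespace and deployment name are assumptions for the Azure app routing add-on; adjust to your cluster):

```
# Confirm the ingress completes a TLS handshake and negotiates HTTP/2
# (ALPN "h2"), which gRPC requires:
openssl s_client -connect grpc.phoenixdev.xxxxx.internal:443 \
  -servername grpc.phoenixdev.xxxxx.internal -alpn h2 </dev/null

# Watch the ingress controller logs while exporting a trace:
kubectl logs -n app-routing-system deployment/nginx --follow
```

If the `s_client` output shows no ALPN protocol selected, the proxy is not speaking HTTP/2 on that host and gRPC over TLS cannot work.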

Going to close out this issue for now because it does not seem to be an issue with Phoenix. Please feel free to re-open or follow up in the thread.

@github-project-automation github-project-automation bot moved this from 📘 Todo to ✅ Done in phoenix Feb 24, 2025