The following recipe provides a walk-through for setting up GKE Ingress with a custom HTTP health check.
GKE Ingresses use proxy-based Google Cloud HTTP(S) Load Balancers that can serve multiple backend services (Kubernetes Services, in the GKE case). Each of those backend services must reference a Google Cloud health check. GKE creates these health checks with parameters that are either explicitly configured, inferred, or left at their default values.
It is a recommended practice to configure health check parameters explicitly for GKE Ingress backend services.
- Explicitly configure health check parameters for your service
GKE creates a Google Cloud health check for each Ingress backend service in one of the following ways:

- If the Service references a `BackendConfig` CRD with `healthCheck` information, GKE uses that to create the health check.
- Otherwise:
  - If the Service's Pods use a Pod template with a container that has a readiness probe, GKE can infer some or all of the health check parameters from that probe. See *Parameters from a readiness probe* for details.
  - If the Service's Pods use a Pod template with no container whose readiness probe attributes can be interpreted as health check parameters, the default values are used.
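As an illustration of the inference case, a Pod template fragment like the following could be the source of inferred health check parameters. This is a hypothetical sketch: the image reference and probe values below are assumptions for illustration, not taken from this recipe.

```yaml
# Hypothetical Pod template fragment. When using container-native load
# balancing, GKE can map readiness probe attributes to health check
# parameters roughly as commented below.
containers:
- name: whereami
  image: gcr.io/google-samples/whereami:v1.2.20  # image tag is an assumption
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz   # can be inferred as the health check requestPath
      port: 8080       # can be inferred as the health check port
    periodSeconds: 10  # can be inferred as checkIntervalSec
    timeoutSeconds: 1  # can be inferred as timeoutSec
```

Explicit configuration through a `BackendConfig`, as this recipe shows, avoids relying on this inference.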
NOTE: Keep in mind the difference in destination port when configuring `healthCheck` parameters in a `BackendConfig` object for Services that use container-native load balancing:

- When using container-native load balancing, the port should match the `containerPort` of a serving Pod.
- For backends based on instance groups, the port should match the `nodePort` exposed by the Service.
In this example, an external Ingress resource sends HTTP traffic to the `whereami` Service at port 80. A public IP is automatically provisioned by the Ingress controller, which listens for internet traffic on port 80.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hc-test
spec:
  rules:
  - http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: whereami
            port:
              number: 80
```
In this recipe, a `BackendConfig` resource is used to configure a custom load balancer health check. The `whereami` Service exposes a `/healthz` endpoint that responds with HTTP 200 if the application is running. This custom health check probes the `/healthz` endpoint every second at the Service `targetPort`.
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: hc-test
spec:
  healthCheck:
    checkIntervalSec: 1
    timeoutSec: 1
    type: HTTP
    requestPath: /healthz
    port: 8080
```
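After the resources are applied (the deployment steps follow below), the health check that GKE creates from this `BackendConfig` can be inspected with `gcloud`. The health check name is generated by GKE and varies per cluster, so the `HC_NAME` placeholder below must be replaced with the actual generated name:

```bash
# List the health checks in the project; GKE-managed ones have
# generated names.
$ gcloud compute health-checks list

# Inspect one to confirm that requestPath, port, and intervals match
# the BackendConfig (replace HC_NAME with the generated name).
$ gcloud compute health-checks describe HC_NAME
```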
The following `whereami` Service selects across the Pods from the `whereami` Deployment. This Deployment consists of three Pods which traffic will be load balanced across. Note the use of the `cloud.google.com/neg: '{"ingress": true}'` annotation. This enables container-native load balancing, which is a best practice. In GKE 1.17+ this annotation is applied by default.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: whereami
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    beta.cloud.google.com/backend-config: '{"default": "hc-test"}'
spec:
  selector:
    app: whereami
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
```
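The `whereami` Deployment itself is not shown above; a minimal sketch of what it might look like follows. The image reference is an assumption for illustration — use the image from the recipe's `custom-http-hc-ingress.yaml` manifest.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whereami
spec:
  replicas: 3               # three Pods, as described above
  selector:
    matchLabels:
      app: whereami         # matched by the Service selector
  template:
    metadata:
      labels:
        app: whereami
    spec:
      containers:
      - name: whereami
        # Image reference is an assumption; take the actual one from
        # the recipe manifest.
        image: gcr.io/google-samples/whereami:v1.2.20
        ports:
        - containerPort: 8080   # the Service targetPort
```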
- GKE cluster up and running (check Cluster Setup)
- Download this repo and navigate to this folder
```bash
$ git clone https://github.com/GoogleCloudPlatform/gke-networking-recipes.git
Cloning into 'gke-networking-recipes'...

$ cd gke-networking-recipes/ingress/single-cluster/ingress-custom-http-health-check
```
- Deploy the Ingress, BackendConfig, Deployment, and Service resources in the `custom-http-hc-ingress.yaml` manifest.
```bash
$ kubectl apply -f custom-http-hc-ingress.yaml
ingress.networking.k8s.io/hc-test created
backendconfig.cloud.google.com/hc-test created
service/whereami created
deployment.apps/whereami created
```
- It will take up to a minute for the Pods to deploy and up to a few minutes for the Ingress resource to be ready. Validate their progress and make sure that no errors are surfaced in the resource events.
```bash
$ kubectl get deploy whereami
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
whereami   3/3     3            3           2m3s

$ kubectl describe ingress hc-test
Name:             hc-test
Labels:           <none>
Namespace:        default
Address:          34.149.233.143
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /   whereami:80 (10.68.0.6:8080,10.68.2.7:8080,10.68.2.8:8080)
Annotations:  ingress.kubernetes.io/backends:
                {"k8s1-48c467ea-default-whereami-80-d9133039":"HEALTHY","k8s1-48c467ea-kube-system-default-http-backend-80-4fecb0c3":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-t65dobq9-default-hc-test-mdufv3cy
              ingress.kubernetes.io/target-proxy: k8s2-tp-t65dobq9-default-hc-test-mdufv3cy
              ingress.kubernetes.io/url-map: k8s2-um-t65dobq9-default-hc-test-mdufv3cy
Events:
  Type     Reason     Age                  From                     Message
  ----     ------     ----                 ----                     -------
  Warning  Translate  7m8s (x5 over 7m9s)  loadbalancer-controller  Translation failed: invalid ingress spec: could not find service "default/whereami"
  Normal   Sync       5m53s                loadbalancer-controller  UrlMap "k8s2-um-t65dobq9-default-hc-test-mdufv3cy" created
  Normal   Sync       5m51s                loadbalancer-controller  TargetProxy "k8s2-tp-t65dobq9-default-hc-test-mdufv3cy" created
  Normal   Sync       5m43s                loadbalancer-controller  ForwardingRule "k8s2-fr-t65dobq9-default-hc-test-mdufv3cy" created
  Normal   IPChanged  5m43s                loadbalancer-controller  IP is now 34.149.233.143
  Normal   Sync       32s (x7 over 7m9s)   loadbalancer-controller  Scheduled for sync
```
- Finally, we can validate the data plane by sending traffic to our Ingress VIP.
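A sketch of such a check follows, assuming the Ingress address from the `kubectl describe` output above; substitute your own VIP, and note the response body depends on the `whereami` application:

```bash
# Send a request through the Ingress VIP on port 80 (replace the
# address with your own Ingress IP).
$ curl http://34.149.233.143/

# The path "/" Prefix rule also routes /healthz through the VIP; the
# load balancer health check itself reaches /healthz on the Pods'
# targetPort (8080) directly, not through the VIP.
$ curl -s -o /dev/null -w "%{http_code}\n" http://34.149.233.143/healthz
```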
When you are done, delete the resources:

```bash
$ kubectl delete -f custom-http-hc-ingress.yaml
```