A console to manage your Kubernetes Cluster

Duration: 10:00

Until now we have been using the Kubernetes CLI (kubectl) to manage our cluster. Wouldn't it be good if we could visualize and manage our cluster through a user interface?

Let's take a look at the OpenShift console.

Note
Don’t worry. This is still a Kubernetes lab! OpenShift runs on top of Kubernetes but adds features like deployments, source-to-image, user management, routes, etc.
Note

Accept the self-signed HTTPS certificate.

Log in with the following credentials:

  • username: openshift-dev

  • password: devel

Click on the OpenShift sample project, and you will see all the Kubernetes objects that you have created with the kubectl command. This works because OpenShift is built on top of Kubernetes.

[Image: OpenShift console]

Take a few minutes to explore this interface.

  • Click on the Service and ReplicationController boxes and see the details.

  • Click on the pod count and see the list of pods. Select one pod to explore.

  • Explore the pod details and the other tabs (Environment, Logs, Terminal, and Events).

  • In the top right corner, click on the gray Actions button and note that you can also edit the YAML file.

Note
The blue bars on top of the pods are "Label filters" that were applied to select only the pods belonging to the ReplicationController that you clicked.
[Image: Pod details]

Go back to the Overview page and note that you can scale the number of pods up and down simply by clicking the gray arrows to the right of the pod count.
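
The same scaling can also be done from the CDK shell. A minimal sketch, assuming the ReplicationController is currently named frontend-ui-2 (as in the rolling update below); adjust the name to whatever the console shows:

[vagrant@rhel-cdk frontend]$ kubectl scale rc frontend-ui-2 --replicas=3
[vagrant@rhel-cdk frontend]$ kubectl scale rc frontend-ui-2 --replicas=2

Watch the Overview page while these commands run and you will see the pod count change.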

Let's perform a rolling update in the CDK shell and see how it behaves visually in OpenShift. Execute the following command and then return to the OpenShift console in the browser:

[vagrant@rhel-cdk frontend]$ kubectl rolling-update frontend-ui-2 frontend-ui --image=rafabene/microservices-frontend:1.0 --update-period=3s
Created frontend-ui
Scaling up frontend-ui from 0 to 2, scaling down frontend-ui-2 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling frontend-ui up to 1
Scaling frontend-ui-2 down to 1
Scaling frontend-ui up to 2
Scaling frontend-ui-2 down to 0
Update succeeded. Deleting frontend-ui-2
replicationcontroller "frontend-ui-2" rolling updated to "frontend-ui"
[Image: Rolling update]
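
If you want to follow the same transition from the CLI as well, you can watch the pods from a second terminal while the rolling update runs. A sketch, assuming the frontend pods carry the app=frontend-ui label (the same label used by the Service selector shown later in this chapter):

[vagrant@rhel-cdk frontend]$ kubectl get pods -l app=frontend-ui --watch

Press Ctrl+C to stop watching.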

Creating a DeploymentConfig

The next chapters require a lot of changes to the application configuration. Instead of running a kubectl rolling-update to replace the ReplicationController each time, we will replace it with a DeploymentConfig.

A DeploymentConfig is an OpenShift object that manages ReplicationControllers based on a user-defined template. The deployment system provides the following (a sketch of a manifest follows the list):

  • A deployment configuration.

  • Triggers that drive automated deployments in response to events or configuration changes.

  • User-customizable strategies to transition from the previous deployment to the new deployment: Rolling, Recreate, and Custom.

  • Rollbacks to a previous deployment.

  • Management of ReplicationController scaling.
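
To make the list above concrete, here is a minimal sketch of a DeploymentConfig manifest. It is illustrative only; the values mirror the oc run command below, and the real object generated by oc run may contain additional fields:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: helloworld-service-vertx
spec:
  replicas: 2                       # managed ReplicationController scaling
  selector:
    app: helloworld-service-vertx
  strategy:
    type: Rolling                   # or Recreate / Custom
  triggers:
  - type: ConfigChange              # redeploy when the configuration changes
  template:                         # the user-defined pod template
    metadata:
      labels:
        app: helloworld-service-vertx
    spec:
      containers:
      - name: helloworld-service-vertx
        image: rafabene/microservices-helloworld-vertx:1.0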

Since a DeploymentConfig manages its own ReplicationController, we will replace the existing ReplicationController.

Execute:

[vagrant@rhel-cdk frontend]$ cd ~/kubernetes-lab/kubernetes/

[vagrant@rhel-cdk kubernetes]$ kubectl delete -f helloworldservice-rc.yaml
replicationcontroller "helloworld-service-vertx" deleted

[vagrant@rhel-cdk kubernetes]$ oc run helloworld-service-vertx --image=rafabene/microservices-helloworld-vertx:1.0 -l app=helloworld-service-vertx -r 2
deploymentconfig "helloworld-service-vertx" created
Note
The -l flag specifies the label (it should match the selector of the existing Kubernetes Service). The -r flag specifies the number of replicas.
Tip

OpenShift is built on top of Kubernetes but adds features to manage applications. Because of that, OpenShift created the DeploymentConfig object, which not only keeps the desired number of replicas in the cluster, but also keeps the same state (image version, environment variables, volumes, etc.) for all pods. This feature has since been incorporated into Kubernetes under the name Deployment. It's part of the OpenShift project roadmap to replace the DeploymentConfig and use only Kubernetes Deployments in a transparent and portable way.
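
For comparison, a sketch of the equivalent Kubernetes Deployment (illustrative; the apiVersion depends on your Kubernetes version, with newer clusters using apps/v1):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-service-vertx
spec:
  replicas: 2
  selector:
    matchLabels:                    # Deployments use matchLabels instead of a plain selector map
      app: helloworld-service-vertx
  template:
    metadata:
      labels:
        app: helloworld-service-vertx
    spec:
      containers:
      - name: helloworld-service-vertx
        image: rafabene/microservices-helloworld-vertx:1.0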

A word about the oc command

Congratulations! You have just been introduced to the oc command. You can think of it as a superset of kubectl: you can use it for almost all of the operations that you did with kubectl. Let's try:

[vagrant@rhel-cdk kubernetes]$ oc get pods
NAME                               READY     STATUS    RESTARTS   AGE
frontend-ui-anbox                  1/1       Running   0          1h
frontend-ui-l7jbo                  1/1       Running   0          1h
guestbook-service-h4fvr            1/1       Running   0          1h
helloworld-service-vertx-1-738ux   1/1       Running   0          5m
mysql-yhl82                        1/1       Running   0          1h

[vagrant@rhel-cdk kubernetes]$ oc get rc
NAME                         DESIRED   CURRENT   AGE
frontend-ui                  2         2         1h
guestbook-service            1         1         1h
helloworld-service-vertx-1   1         1         5m
mysql                        1         1         1h

[vagrant@rhel-cdk kubernetes]$ oc get service
NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
frontend-ui                172.30.212.103                 80/TCP                       9h
guestbook-service          172.30.140.61    <none>        8080/TCP                     9h
helloworld-service-vertx   172.30.7.161     <none>        8080/TCP,8778/TCP,9779/TCP   8h
mysql                      172.30.249.144   <none>        3306/TCP                     9h

[vagrant@rhel-cdk kubernetes]$ oc describe service frontend-ui
Name:			frontend-ui
Namespace:		sample-project
Labels:			app=frontend-ui,lab=kubernetes-lab
Selector:		app=frontend-ui
Type:			LoadBalancer
IP:			172.30.212.103
Port:			http-80	80/TCP
NodePort:		http-80	32186/TCP
Endpoints:		172.17.0.2:8080,172.17.0.5:8080
Session Affinity:	None
No events.

[vagrant@rhel-cdk kubernetes]$ oc delete pod frontend-ui-?????
pod "frontend-ui-?????" deleted

[vagrant@rhel-cdk kubernetes]$ oc logs -f rc/frontend-ui
Found 2 pods, using pod/frontend-ui-?????

> [email protected] start /opt/app-root/src
> node frontend.js

Frontend service running at http://0.0.0.0:8080

Now let’s use oc to:

  • Easily set a ReadinessProbe.

To create a ReadinessProbe with the oc command, execute:

[vagrant@rhel-cdk kubernetes]$ oc set probe dc helloworld-service-vertx --readiness --get-url=http://:8080/api/hello/Kubernetes
deploymentconfig "helloworld-service-vertx" updated

Note that this configuration change caused a new deployment in this project. This was much easier than the previous time, right? You can use oc get dc helloworld-service-vertx -o yaml to see the configuration inside the DeploymentConfig object.
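
Under the covers, oc set probe adds a readinessProbe to the container spec. The relevant excerpt looks roughly like this (abridged; the oc get command above shows the full object):

readinessProbe:
  httpGet:
    path: /api/hello/Kubernetes
    port: 8080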

Tip
You can also remove the ReadinessProbe with the command oc set probe dc helloworld-service-vertx --readiness --remove.
  • Create a route to frontend-ui.

Now let's create a route and expose the service. But first, let's understand what a Route is. Remember that we needed to execute kubectl describe service frontend-ui to get the NodePort? A Route uses port 80 of OpenShift and "routes" requests based on the defined hostname. Let's see how it works. Execute:

[vagrant@rhel-cdk kubernetes]$ oc expose service frontend-ui \
                    --hostname=frontend.10.1.2.2.nip.io
route "frontend-ui" exposed
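
Behind the scenes, oc expose created a Route object. A minimal sketch of its contents (illustrative; run oc get route frontend-ui -o yaml to see the real object):

apiVersion: v1
kind: Route
metadata:
  name: frontend-ui
spec:
  host: frontend.10.1.2.2.nip.io
  to:
    kind: Service                   # the Route forwards requests to this Service
    name: frontend-ui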

Now point your browser to http://frontend.10.1.2.2.nip.io/

Amazing, right?

Tip
You can delete the Route with the command: oc delete route frontend-ui
Note

How did frontend.10.1.2.2.nip.io resolve to 10.1.2.2?

NIP.IO allows you to map any IP address using DNS wildcard entries like the following:

  • 10.0.0.1.nip.io maps to 10.0.0.1

  • app.10.0.0.1.nip.io maps to 10.0.0.1

  • customer1.app.10.0.0.1.nip.io maps to 10.0.0.1

  • customer2.app.10.0.0.1.nip.io maps to 10.0.0.1

  • otherapp.10.0.0.1.nip.io maps to 10.0.0.1

NIP.IO maps <anything>.<IP Address>.nip.io to the corresponding <IP Address>; even 127.0.0.1.nip.io maps to 127.0.0.1.
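
You can verify the resolution yourself from the CDK shell. A quick check, assuming the dig utility (from bind-utils) is installed; host or nslookup work the same way:

[vagrant@rhel-cdk kubernetes]$ dig +short frontend.10.1.2.2.nip.io
10.1.2.2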