
Commit 2ecaa8a

Merge pull request #43 from vwilburn/master
Lab 2 health check changes
2 parents: 1192dcd + 09a7f48

5 files changed (+104, -122 lines)

Lab 2/README.md

Lines changed: 67 additions & 61 deletions
@@ -99,11 +99,19 @@ hello-world-562211614-zsp0j 1/1 Running 0 2m
 Kubernetes allows you to use a rollout to update an app deployment with a new docker image. This allows you to easily update the running image and also allows you to easily undo a rollout if a problem is discovered after deployment.
 
+Before you begin: Ensure that you have the image tagged with `1` and pushed:
+```
+docker build --tag registry.ng.bluemix.net/<namespace>/hello-world:1 .
+
+docker push registry.ng.bluemix.net/<namespace>/hello-world:1
+```
+
+To update and roll back:
 1. First, make a change to your code and build a new docker image with a new tag:
 
    `docker build --tag registry.ng.bluemix.net/<namespace>/hello-world:2 .`
 
-2 .Then push the image to the IBM Cloud Container Registry:
+2. Then push the image to the IBM Cloud Container Registry:
 
    `docker push registry.ng.bluemix.net/<namespace>/hello-world:2`
 
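For reference, the rollout and undo described above can be driven entirely from `kubectl`. A minimal sketch, assuming the `hello-world-deployment` and `hello-world-container` names that appear elsewhere in this lab:

```
kubectl set image deployment/hello-world-deployment hello-world-container=registry.ng.bluemix.net/<namespace>/hello-world:2
kubectl rollout status deployment/hello-world-deployment
kubectl rollout undo deployment/hello-world-deployment
```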
@@ -174,80 +182,78 @@ hello-world-3254495675 10 10 10 1m
 
 # Checking the health of apps
 
-The kubelet uses liveness probes to know when to restart a Container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a Container in such a state can help to make the application more available despite bugs.
+Kubernetes uses availability checks (liveness probes) to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.
 
-The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
+Kubernetes also uses readiness checks to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this check is to control which pods are used as backends for services. When a pod is not ready, it is removed from load balancers.
 
-In this example, we have defined a HTTP liveness probe, to check health of the container every 5 seconds:
-```yml
-...
-livenessProbe:
-  httpGet:
-    path: /healthz
-    port: 8080
-    httpHeaders:
-    - name: x-Custom-Header
-      value: Awesome
-  initialDelaySeconds: 5
-  periodSeconds: 5
-...
-```
+In this example, we have defined an HTTP liveness probe to check the health of the container every 5 seconds. For the first 10-15 seconds, `/healthz` returns a `200` response; it fails afterward, and Kubernetes automatically restarts the service.
 
-For the first 10-15 seconds the `/healthz` return a `200` response and will fail afterward. Kubernetes will automatically restart the service. For reference, the following changes were made to the app.js from the Stage1 file:
-
-```javascript
-....
-var delay = 10000 + Math.floor(Math.random() * 5000)
-....
-app.get('/healthz', function(req, res) {
-  if ((Date.now() - startTime) > delay) {
-    res.status(500).send({
-      error: 'Timeout, Health check error!'
-    })
-  } else {
-    res.send('OK!')
-  }
-})
-....
-```
+1. Open the `<username_home_directory>/container-service-getting-started-wt/Stage2/healthcheck.yml` file with a text editor. This configuration script combines a few steps from the previous lesson to create a deployment and a service at the same time. App developers can use these scripts when updates are made or to troubleshoot issues by re-creating the pods:
 
-To try the HTTP liveness check, first, cd into the Stage2 directory, then create and push the sigex-demo-health image to the IBM Cloud Container Registry:
+   1. Update the details for the image in your private registry namespace:
 
-```
-docker build --tag registry.ng.bluemix.net/<namespace>/health-check-demo .
-docker push registry.ng.bluemix.net/<namespace>/health-check-demo
-```
+      ```
+      image: "registry.<region>.bluemix.net/<namespace>/hello-world:2"
+      ```
 
+   2. Note the HTTP liveness probe that checks the health of the container every 5 seconds:
 
-Replace the correct namespace in the healthcheck.yml file under the image tag:
+      ```
+      livenessProbe:
+        httpGet:
+          path: /healthz
+          port: 8080
+        initialDelaySeconds: 5
+        periodSeconds: 5
+      ```
 
-```yml
-spec:
-  containers:
-    - name: hello-world-container
-      image: "registry.ng.bluemix.net/<namespace>/health-check-demo" # replace here
-      imagePullPolicy: Always
-      livenessProbe:
-        httpGet:
-          path: /healthz
-          port: 8080
-        initialDelaySeconds: 5
-        periodSeconds: 5
-```
+   3. In the **Service** section, note the `NodePort`. Rather than generating a random NodePort like you did in the previous lesson, you can specify a port in the 30000 - 32767 range. This example uses 30072. (A sketch of what this section can look like follows this file's diff.)
+
+2. Run the configuration script in the cluster. When the deployment and the service are created, the app is available for anyone to see:
+
+   ```
+   kubectl apply -f <username_home_directory>/container-service-getting-started-wt/Stage2/healthcheck.yml
+   ```
+
+   Now that all the deployment work is done, check how everything turned out. You might notice that because more instances are running, things might run a bit slower.
+
+3. Open a browser and check out the app. To form the URL, combine the IP with the NodePort that was specified in the configuration script. To get the public IP address for the worker node:
+
+   ```
+   bx cs workers <cluster-name>
+   ```
+
+   In a browser, you'll see a success message. If you do not see this text, don't worry. This app is designed to go up and down.
+
+   For the first 10 - 15 seconds, a 200 message is returned, so you know that the app is running successfully. After those 15 seconds, a timeout message is displayed, as is designed in the app.
+
+4. Launch your Kubernetes dashboard with the default port 8001:
+   1. Set the proxy with the default port number:
+
+      ```
+      kubectl proxy
+      ```
+
+      Output:
 
+      ```
+      Starting to serve on 127.0.0.1:8001
+      ```
 
-Once the yml file is updated, create a Pod:
+   2. Open the following URL in a web browser to see the Kubernetes dashboard:
 
-`kubectl create -f healthcheck.yml`
+      ```
+      http://localhost:8001/ui
+      ```
 
-Run `kubectl get pods` and verify that the image was provisioned to the pod correctly.
+5. In the **Workloads** tab, you can see the resources that you created. From this tab, you can continually refresh and see that the health check is working. In the **Pods** section, you can see how many times the pods are restarted when the containers in them are re-created. You might happen to catch errors in the dashboard, indicating that the health check caught a problem. Give it a few minutes and refresh again. You will see the number of restarts change for each pod. (A CLI sketch for watching the restarts follows this list.)
 
-Get the ip of your cluster by running `bx cs workers <clustername>`, your nodeport will be 30072.
+6. Ready to delete what you created before you continue? This time, you can use the same configuration script to delete both of the resources you created:
 
-After 10 secconds, view the Pod events to confirm health check failed and pod restarted:
+   `kubectl delete -f <username_home_directory>/container-service-getting-started-wt/Stage2/healthcheck.yml`
 
-`kubectl describe pod hello-world-deployment`
+7. When you are done exploring the Kubernetes dashboard, in your CLI, enter CTRL+C to exit the `proxy` command.
 
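For the CLI view mentioned in step 5, two standard `kubectl` commands (not part of the lab's files) show the restart count climbing and the probe events; the pod name prefix below comes from the deployment that `healthcheck.yml` creates:

```
kubectl get pods -w
kubectl describe pod hello-world-deployment
```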
-And finally, open a web browser and naviagate to `<cluster-ip>:30072/healthz` to see the endpoint operational, and `<cluster-ip>:30072` to see that the application tries to work despite having failing nodes.
 
-Thus you have seen the fault tolerance having multiple replicas provides you. Stage 2 of the lab is now complete!
+Congratulations! You deployed the second version of the app. You used fewer commands, learned how health checks work, and edited a deployment, which is great! Lab 2 is now complete.
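The **Service** section that sub-step 3 refers to is not shown in this diff. A minimal sketch of its likely shape, with an assumed service name and selector; only the container port `8080` and the `nodePort` `30072` come from this lab:

```yml
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service          # assumed name, not from the lab's files
spec:
  type: NodePort
  selector:
    run: hello-world-deployment      # assumed label; must match the deployment's pods
  ports:
    - protocol: TCP
      port: 8080                     # the port the app and the liveness probe use
      nodePort: 30072                # the fixed NodePort this lesson uses
```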

Lab 3/README.md

Lines changed: 37 additions & 61 deletions
@@ -1,41 +1,32 @@
-# Lab 3: Deploy an application with IBM Cloud Services
+# Lab 3: Deploy an application with IBM Watson services
 
-In this lab, we walk through setting up an application to leverage the Watson Tone Analyzer service. If you have yet to create a cluster, please refer to stage 1 of this walkthrough.
+In this lab, we walk through setting up an application to leverage the Watson Tone Analyzer service. If you have yet to create a cluster, please refer to lab 1 of this walkthrough.
 
-We will be using the watson folder under the Lab 3 directory for the duration of the application.
+# Deploying the Watson app
 
-# Lab steps
+1. Login to IBM Cloud Container Registry:
+
+   `bx cr login`
 
-Run the following to begin this lab:
+2. Change the directory to `"Lab 3/watson"`.
 
-1. Login to Container Registry
-   - `bx cr login`
+3. Build the `watson` image:
+
+   `docker build -t registry.ng.bluemix.net/<namespace>/watson .`
 
+4. Push the `watson` image to IBM Cloud Container Registry:
+
+   `docker push registry.ng.bluemix.net/<namespace>/watson`
 
-2. Change current directory to `"Lab 3/watson"`
-   - `cd "Lab 3/watson"`
+   Tip: If you run out of registry space, clean up previous labs' images with this example command: `bx cr image-rm registry.ng.bluemix.net/<namespace>/hello-world:2`
 
+5. Change the directory to `"Lab 3/watson-talk"`.
 
-3. Build `watson` image
-   - `docker build -t registry.ng.bluemix.net/<namespace>/watson .`
+6. Build the `watson-talk` image:
+
+   `docker build -t registry.ng.bluemix.net/<namespace>/watson-talk .`
 
-4. Push `watson` image to IBM Cloud Container Registry
-   - `docker push registry.ng.bluemix.net/<namespace>/watson`
 
-5. Change current directory to `"Lab 3/watson-talk"`
-   - `cd ../watson-talk`
+7. Push the `watson-talk` image to IBM Cloud Container Registry:
+
+   `docker push registry.ng.bluemix.net/<namespace>/watson-talk`
 
-6. Build `watson-talk` image
-   - `docker build -t registry.ng.bluemix.net/<namespace>/watson-talk .`
-
-7. Push `watson-talk` image to IBM Cloud Container Registry
-   - `docker push registry.ng.bluemix.net/<namespace>/watson-talk`
-
-In `watson-deployment.yml`, update the image tag with the registry path to the image you created in the following two sections:
+8. In `watson-deployment.yml`, update the image tag with the registry path to the image you created in the following two sections:
 
 ```yml
 spec:
@@ -52,32 +43,25 @@ In `watson-deployment.yml`, update the image tag with the registry path to the i
 ```
 
-# Create an IBM Cloud service via the cli
+# Creating an instance of the IBM Watson service via the CLI
 
-In order to begin using the watson tone analyzer (the IBM Cloud service for this application), we must first request an instance of the analyzer in the org and space we have set up our cluster in. If you need to check what space and org you are currently using, simply run `bx login`. Then use `bx target --cf` to select the space and org you were using for stage 1 and 2 of the lab.
+In order to begin using the Watson Tone Analyzer (the IBM Cloud service for this application), we must first request an instance of the Watson service in the org and space we have set up our cluster in.
 
-Once we have set our space and org, run `bx cf create-service tone_analyzer standard tone`, where `tone` is the name we will use for the watson tone analyzer service.
+1. If you need to check what space and org you are currently using, simply run `bx login`. Then use `bx target --cf` to select the space and org you were using for labs 1 and 2.
 
-Run `bx cf services` to ensure a service named tone was created. You should see output like the following:
+2. Once we have set our space and org, run `bx cf create-service tone_analyzer standard tone`, where `tone` is the name we will use for the Watson Tone Analyzer service.
 
-```
-Invoking 'cf services'...
+   Note: When you add the Tone Analyzer service to your account, a message is displayed that the service is not free. If you [limit your API calls](https://www.ibm.com/watson/developercloud/tone-analyzer.html#pricing-block), this tutorial does not incur charges from the Watson service.
 
-Getting services in org <org> / space <space> as <username>...
-OK
+3. Run `bx cf services` to ensure a service named `tone` was created.
 
-name   service         plan       bound apps   last operation
-tone   tone_analyzer   standard                create succeeded
+# Binding the Watson service to your cluster
 
-```
+1. Run `bx cs cluster-service-bind <name-of-cluster> default tone` to bind the service to your cluster. This command will create a secret for the service.
 
-# Bind a Service to a Cluster
+2. Verify the secret was created by running `kubectl get secrets`.
 
-Run `bx cs cluster-service-bind <name-of-cluster> default tone` to bind the service to your cluster. This command will create a secret for the service.
-
-Verify the secret was created by running `kubectl get secrets`
-
-# Create pods and services
+# Creating pods and services
 
 Now that the service is bound to the cluster, we want to expose the secret to our pod so it can utilize the service. You can do this by creating a secret datastore as a part of your deployment configuration. This has been done for you in watson-deployment.yml:
 
@@ -92,39 +76,31 @@ Now that the service is bound to the cluster, we want to expose the secret to ou
       secretName: 2a5baa4b-a52d-4911-9019-69ac01afbb7f-key0 # from the kubectl get secrets command above
 ```
 
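For orientation, the `secretName` above sits inside a `volumes` entry that pairs with a `volumeMounts` entry in the container spec. A minimal sketch; the container and volume names are assumed, while the mount path and the `binding` file come from the earlier version of these steps, removed below:

```yml
spec:
  containers:
  - name: watson                      # assumed container name
    image: "registry.ng.bluemix.net/<namespace>/watson"
    volumeMounts:
    - name: service-bind              # assumed volume name
      mountPath: /opt/service-bind    # the credentials appear at /opt/service-bind/binding
  volumes:
  - name: service-bind
    secret:
      secretName: 2a5baa4b-a52d-4911-9019-69ac01afbb7f-key0
```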
-Once the YAML configuration is updated, build the application using the yaml:
+1. Build the application using the yaml:
   - `cd "Lab 3"`
   - `kubectl create -f watson-deployment.yml`
 
-Verify the pod has been created:
+2. Verify the pod has been created:
 
 `kubectl get pods`
 
-At this time, verify the secret was created and grab the json secret file to configure your application. Note that for this demo, this has been done for you:
-
-`kubectl exec <pod_name> -it /bin/bash`
-
-`cd /opt/service-bind`
-
-`ls`
-
-If the volume containing the secrets has been mounted, a file named `binding` should be in your CLI output. Cat the file and use it to configure your application to use the service.
-
-`cat binding`
+At this time, your secret was created. Note that for this lab, this has been done for you.
 
 # Putting It All Together - Run the Application and Service
 
-By this time you have created pods, services and volumes for this lab. You can open the dashboard and explore all new objects created or use the following commands:
+By this time you have created pods, services and volumes for this lab.
+
+1. You can open the Kubernetes dashboard and explore all new objects created or use the following commands:
 ```
 kubectl get pods
 kubectl get deployments
 kubectl get services
 ```
 
-You have to find the Public IP for the worker node to access the application. Run the following command and take note of the same:
+2. Get the public IP for the worker node to access the application:
 
 `bx cs workers <name-of-cluster>`
 
-Now that the you got the container IP and port, go to your favorite web browswer and launch the following URL to analyze the text and see output: `http://<public-IP>:30080/analyze/<YourTextHere>`
+3. Now that you have the container IP and port, go to your favorite web browser and launch the following URL to analyze the text and see the output: `http://<public-IP>:30080/analyze/"Today is a beautiful day"`
 
-If you can see JSON output on your screen, congratulations! You are done!
+If you can see JSON output on your screen, congratulations! You are done with lab 3!
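If you prefer the CLI to a browser, the same request works with `curl` (spaces URL-encoded; `<public-IP>` and port as above):

```
curl "http://<public-IP>:30080/analyze/Today%20is%20a%20beautiful%20day"
```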

images/cluster_ha_roadmap.png

224 KB → 39 KB

images/container-vs-vm.jpg

28.3 KB
