Kubernetes allows you to use a rollout to update an app deployment with a new Docker image. This makes it easy to update the running image and also to undo a rollout if a problem is discovered after deployment.
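As a sketch of what this looks like (the deployment and container names here are illustrative — `hello-world-deployment` and `hello-world-container` — adjust them to your own resources and namespace):

```
# Update the running container to image tag 2
kubectl set image deployment/hello-world-deployment hello-world-container=registry.ng.bluemix.net/<namespace>/hello-world:2

# Watch the rollout until it completes
kubectl rollout status deployment/hello-world-deployment

# Roll back to the previous revision if a problem is discovered
kubectl rollout undo deployment/hello-world-deployment
```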
Before you begin: Ensure that you have the image tagged with `1` and pushed:
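For example (a sketch only — substitute your own registry namespace, and note the image name `health-check-demo` used later in `healthcheck.yml`):

```
docker build -t registry.ng.bluemix.net/<namespace>/health-check-demo:1 .
docker push registry.ng.bluemix.net/<namespace>/health-check-demo:1
```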
Kubernetes uses availability checks (liveness probes) to know when to restart a container. For example, a liveness probe can catch a deadlock, where an application is running but unable to make progress. Restarting a container in such a state helps make the application more available despite bugs.
Kubernetes also uses readiness checks (readiness probes) to know when a container is ready to start accepting traffic. A pod is considered ready when all of its containers are ready. One use of this check is to control which pods are used as backends for services: when a pod is not ready, it is removed from service load balancers.
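For comparison, a readiness probe is declared the same way as a liveness probe — this is a sketch only; the `healthcheck.yml` in this lab defines a liveness probe:

```yml
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```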
In this example, we have defined an HTTP liveness probe to check the health of the container every 5 seconds. For the first 10 - 15 seconds, `/healthz` returns a `200` response; afterward it fails, and Kubernetes automatically restarts the container.
For reference, the following changes were made to the app.js from the Stage1 file:

```javascript
....
var delay = 10000 + Math.floor(Math.random() * 5000)
....
app.get('/healthz', function(req, res) {
  if ((Date.now() - startTime) > delay) {
    res.status(500).send({
      error: 'Timeout, Health check error!'
    })
  } else {
    res.send('OK!')
  }
})
....
```
1. Open the `<username_home_directory>/container-service-getting-started-wt/Stage2/healthcheck.yml` file with a text editor. This configuration script combines a few steps from the previous lesson to create a deployment and a service at the same time. App developers can use scripts like this when they make updates or to troubleshoot issues by re-creating the pods:
    1. Update the details for the image in your private registry namespace. Replace `<namespace>` under the `image` tag:

        ```yml
        spec:
          containers:
          - name: hello-world-container
            image: "registry.ng.bluemix.net/<namespace>/health-check-demo" # replace <namespace>
            imagePullPolicy: Always
        ```

    2. Note the HTTP liveness probe that checks the health of the container every 5 seconds:

        ```yml
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        ```
    3. In the **Service** section, note the `NodePort`. Rather than generating a random NodePort like you did in the previous lesson, you can specify a port in the 30000 - 32767 range. This example uses 30072.
2. Run the configuration script in the cluster. When the deployment and the service are created, the app is available for anyone to see:

   `kubectl create -f healthcheck.yml`

   Now that all the deployment work is done, check how everything turned out. You might notice that because more instances are running, things might run a bit slower.
3. Open a browser and check out the app. To form the URL, combine the IP with the NodePort that was specified in the configuration script. To get the public IP address for the worker node:
   ```
   bx cs workers <cluster-name>
   ```
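Forming the URL is just string concatenation. As a sketch (the IP address here is a made-up example; use the Public IP from the `bx cs workers` output):

```shell
# Hypothetical worker public IP; replace with the value from `bx cs workers`.
WORKER_IP=173.193.99.136
# NodePort specified in healthcheck.yml
NODE_PORT=30072
APP_URL="http://${WORKER_IP}:${NODE_PORT}"
echo "$APP_URL"
```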
In a browser, you'll see a success message. If you do not see this text, don't worry. This app is designed to go up and down.
For the first 10 - 15 seconds, a 200 message is returned, so you know that the app is running successfully. After those 15 seconds, a timeout message is displayed, as designed in the app.
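This timing behavior can be sketched as a small function — a hypothetical helper that mirrors the app's health-check logic, not code from the lab itself:

```javascript
// Returns the HTTP status the /healthz endpoint would send, given the
// app start time, the current time, and the randomized failure delay (ms).
function healthStatus(startTime, now, delay) {
  return (now - startTime) > delay ? 500 : 200;
}

console.log(healthStatus(0, 5000, 12000));  // within the delay window -> 200
console.log(healthStatus(0, 20000, 12000)); // past the delay -> 500
```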
4. Launch your Kubernetes dashboard with the default port 8001:
    1. Set the proxy with the default port number.

        ```
        kubectl proxy
        ```

        Output:

        ```
        Starting to serve on 127.0.0.1:8001
        ```
    2. Open the following URL in a web browser to see the Kubernetes dashboard:
        ```
        http://localhost:8001/ui
        ```
5. In the **Workloads** tab, you can see the resources that you created. From this tab, you can continually refresh and see that the health check is working. In the **Pods** section, you can see how many times the pods are restarted when the containers in them are re-created. You might happen to catch errors in the dashboard, indicating that the health check caught a problem. Give it a few minutes and refresh again. You will see the number of restarts change for each pod.
6. Ready to delete what you created before you continue? This time, you can use the same configuration script to delete both of the resources you created: `kubectl delete -f healthcheck.yml`
7. When you are done exploring the Kubernetes dashboard, in your CLI, enter CTRL+C to exit the `proxy` command.
Congratulations! You deployed the second version of the app. You used fewer commands, learned how health checks work, and edited a deployment. Lab 2 is now complete!
# Lab 3: Deploy an application with IBM Watson services
In this lab, we walk through setting up an application to leverage the Watson Tone Analyzer service. If you have yet to create a cluster, please refer to lab 1 of this walkthrough. We will be using the watson folder under the Lab 3 directory throughout this lab.
8. In `watson-deployment.yml`, update the image tag with the registry path to the image you created in the following two sections:
   ```yml
   spec:
   ...
   ```
# Creating an instance of the IBM Watson service via the CLI
To begin using the Watson Tone Analyzer (the IBM Cloud service for this application), we must first request an instance of the Watson service in the org and space where we set up our cluster.
1. If you need to check what space and org you are currently using, simply run `bx login`. Then use `bx target --cf` to select the space and org you were using for labs 1 and 2.
2. Once we have set our space and org, run `bx cf create-service tone_analyzer standard tone`, where `tone` is the name we will use for the Watson Tone Analyzer service.
   Note: When you add the Tone Analyzer service to your account, a message is displayed that the service is not free. If you [limit your API calls](https://www.ibm.com/watson/developercloud/tone-analyzer.html#pricing-block), this tutorial does not incur charges from the Watson service.

3. Run `bx cf services` to ensure a service named `tone` was created. You should see output like the following:

   ```
   Invoking 'cf services'...

   Getting services in org <org> / space <space> as <username>...
   OK

   name   service         plan       bound apps   last operation
   tone   tone_analyzer   standard                create succeeded
   ```

# Binding the Watson service to your cluster
1. Run `bx cs cluster-service-bind <name-of-cluster> default tone` to bind the service to your cluster. This command will create a secret for the service.
2. Verify the secret was created by running `kubectl get secrets`.
# Creating pods and services
Now that the service is bound to the cluster, we want to expose the secret to our pod so that it can utilize the service. You can do this by creating a secret datastore as a part of your deployment configuration. This has been done for you in `watson-deployment.yml`:
```yml
...
      secretName: 2a5baa4b-a52d-4911-9019-69ac01afbb7f-key0 # from the kubectl get secrets command above
...
```
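For context, a secret volume of this kind is typically wired into a deployment as in the following sketch — the container name here is illustrative only; the lab's `watson-deployment.yml` contains the real definition:

```yml
spec:
  volumes:
    - name: service-bind-volume
      secret:
        secretName: 2a5baa4b-a52d-4911-9019-69ac01afbb7f-key0 # from `kubectl get secrets`
  containers:
    - name: watson-container # illustrative name
      volumeMounts:
        - name: service-bind-volume
          mountPath: /opt/service-bind # where the app reads the binding file
          readOnly: true
```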
1. Build the application using the yaml:
   `cd "Lab 3"`

   `kubectl create -f watson-deployment.yml`
2. Verify the pod has been created:
   `kubectl get pods`
At this time, your secret was created. Note that for this lab, this has been done for you. If you want to verify it yourself, open a shell in the pod and inspect the mounted volume: run `kubectl exec -it <pod_name> /bin/bash`, then `cd /opt/service-bind` and `ls`. If the volume containing the secrets has been mounted, a file named `binding` appears in the output; `cat binding` shows the JSON credentials you can use to configure your application to use the service.
# Putting It All Together - Run the Application and Service
By this time you have created pods, services and volumes for this lab.
1. You can open the Kubernetes dashboard and explore all new objects created or use the following commands:
   ```
   kubectl get pods
   kubectl get deployments
   kubectl get services
   ```
2. Get the public IP for the worker node to access the application:
   `bx cs workers <name-of-cluster>`
3. Now that you have the container IP and port, go to your favorite web browser and launch the following URL to analyze the text and see the output: `http://<public-IP>:30080/analyze/"Today is a beautiful day"`
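   Because the text rides in the URL path, spaces and punctuation must be percent-encoded. A small sketch (the IP address is a made-up example; use your worker's public IP):

   ```javascript
   // Build the analyze URL; encodeURIComponent handles the percent-encoding.
   const publicIp = '173.193.99.136'; // hypothetical; from `bx cs workers`
   const text = 'Today is a beautiful day';
   const url = `http://${publicIp}:30080/analyze/${encodeURIComponent(text)}`;
   console.log(url); // spaces become %20
   ```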
If you can see JSON output on your screen, congratulations! You are done with lab 3!