docs/next/modules/en/pages/user/clusterclass.adoc
    matchLabels: {}
----

GCP::
+
To prepare the management Cluster, we are going to install the https://cluster-api-gcp.sigs.k8s.io/[Cluster API Provider GCP], and create a secret with the required credentials to provision a new Cluster on GCP. +
A Service Account is required to create and manage clusters in GCP, and it needs `Editor` permissions. You can follow the official guide from the https://cluster-api-gcp.sigs.k8s.io/quick-start#create-a-service-account[CAPG Book]. +
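+
As a minimal sketch (the project and Service Account names below are placeholders, not values from the guide), the Service Account key can be created and base64-encoded into the `GCP_B64ENCODED_CREDENTIALS` value consumed by the provider:
+
[source,bash]
----
# Placeholders -- substitute your own project and account names.
export GCP_PROJECT=my-project
export SA_NAME=capg-sa

# Create the Service Account and grant it Editor permissions on the project.
gcloud iam service-accounts create "${SA_NAME}" --project "${GCP_PROJECT}"
gcloud projects add-iam-policy-binding "${GCP_PROJECT}" \
  --member "serviceAccount:${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com" \
  --role "roles/editor"

# Download a JSON key and base64-encode it (stripping newlines) so it can be
# pasted into the CAPIProvider variable.
gcloud iam service-accounts keys create capg-sa-key.json \
  --iam-account "${SA_NAME}@${GCP_PROJECT}.iam.gserviceaccount.com"
export GCP_B64ENCODED_CREDENTIALS=$(base64 < capg-sa-key.json | tr -d '\n')
----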
198
+
+
199
+
* Provider installation
200
+
+
201
+
[source,yaml]
202
+
----
203
+
apiVersion: turtles-capi.cattle.io/v1alpha1
204
+
kind: CAPIProvider
205
+
metadata:
206
+
name: gcp
207
+
namespace: capg-system
208
+
spec:
209
+
type: infrastructure
210
+
variables:
211
+
GCP_B64ENCODED_CREDENTIALS: ""
212
+
----
+
* https://github.com/kubernetes-sigs/cluster-api[Bootstrap/Control Plane provider for Kubeadm], example of a Kubeadm installation:
+
[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-bootstrap-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-bootstrap
  namespace: capi-kubeadm-bootstrap-system
spec:
  name: kubeadm
  type: bootstrap
---
apiVersion: v1
kind: Namespace
metadata:
  name: capi-kubeadm-control-plane-system
---
apiVersion: turtles-capi.cattle.io/v1alpha1
kind: CAPIProvider
metadata:
  name: kubeadm-control-plane
  namespace: capi-kubeadm-control-plane-system
spec:
  name: kubeadm
  type: controlPlane
----
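+
Once the providers are applied, you can check that their controllers come up before moving on. A sketch (the resource names and namespaces match the manifests above):
+
[source,bash]
----
# The CAPIProvider resources should eventually report a Ready condition.
kubectl get capiproviders.turtles-capi.cattle.io -A

# The controller pods live in the namespaces created above.
for ns in capg-system capi-kubeadm-bootstrap-system capi-kubeadm-control-plane-system; do
  kubectl get pods -n "${ns}"
done
----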
+
* Network Setup
+
Provisioning a self-managed GCP cluster requires a GCP network configured so that Kubernetes nodes can communicate with the control plane and pull images from container registries, which means the machines need NAT access or a public IP. +
The default provider behavior is to create virtual machines with no public IP attached, so a https://cloud.google.com/nat/docs/overview[Cloud NAT] is required to allow the nodes to establish a connection with the load balancer and the outside world. +
Please refer to the official https://cluster-api-gcp.sigs.k8s.io/prerequisites#configure-network-and-cloud-nat[CAPG Book] guide on how to prepare your GCP network to provision a self-managed GCP cluster. +
+
[NOTE]
====
The following steps are required to prepare the GCP network for Cluster provisioning:

- Create a router.
- Create a NAT associated with the router.
====
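+
The router and NAT steps can be performed with `gcloud` along these lines (a sketch; the project, region, and network names are placeholders):
+
[source,bash]
----
# Placeholders -- substitute your own values.
export GCP_PROJECT=my-project
export GCP_REGION=us-east4
export GCP_NETWORK_NAME=default

# 1. Create a Cloud Router in the network and region the cluster will use.
gcloud compute routers create "${GCP_NETWORK_NAME}-router" \
  --project "${GCP_PROJECT}" --region "${GCP_REGION}" \
  --network "${GCP_NETWORK_NAME}"

# 2. Create a NAT on that router so nodes without public IPs can reach the
#    load balancer and pull images from external registries.
gcloud compute routers nats create "${GCP_NETWORK_NAME}-nat" \
  --project "${GCP_PROJECT}" --region "${GCP_REGION}" \
  --router "${GCP_NETWORK_NAME}-router" \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
----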

Docker::
+
To prepare the management Cluster, we are going to install the Docker Cluster API Provider.
        replicas: 1
----

GCP Kubeadm::
+
[WARNING]
====
Before creating a GCP+Kubeadm workload cluster, you need to either build an image for the Kubernetes version that is going to be installed on the cluster or find an existing one that works for your use case.
You can follow the steps in the https://image-builder.sigs.k8s.io/capi/providers/gcp[Kubernetes GCP Image Builder book].
====
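+
If you build the image yourself, the flow with https://github.com/kubernetes-sigs/image-builder[image-builder] looks roughly like this. A sketch only: the exact make target depends on the OS and Kubernetes version you need, so check the book linked above for the current target names.
+
[source,bash]
----
git clone https://github.com/kubernetes-sigs/image-builder.git
cd image-builder/images/capi

# Credentials for the project the image will be built in (placeholders).
export GCP_PROJECT_ID=my-project
export GOOGLE_APPLICATION_CREDENTIALS=~/capg-sa-key.json

# Install build dependencies (Packer, Ansible) and build a GCE image;
# target names such as build-gce-ubuntu-2204 vary by image-builder release.
make deps-gce
make build-gce-ubuntu-2204
----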
+
* A GCP Kubeadm ClusterClass can be found among the https://github.com/rancher/turtles/tree/main/examples/clusterclasses[Turtles examples].
+
Applications like https://docs.tigera.io/calico/latest/about/[Calico CNI] and https://github.com/kubernetes/cloud-provider-gcp[Cloud Controller Manager GCP] will be installed on downstream Clusters. This is done automatically at Cluster creation by targeting Clusters that carry specific labels, such as `cni: calico` and `cloud-provider: gcp`.
+
[tabs]
=======
CLI::
+
A GCP Kubeadm ClusterClass and associated applications can be applied using the examples tool:
+
[source,bash]
----
go run github.com/rancher/turtles/examples@latest -r gcp-kubeadm | kubectl apply -f -
----
kubectl::
+
* Alternatively, you can apply the GCP Kubeadm ClusterClass directly using kubectl:
* For this example we are also going to install https://docs.tigera.io/calico/latest/about/[Calico] as the default CNI. +
* The https://github.com/kubernetes/cloud-provider-gcp[Cloud Controller Manager GCP] needs to be installed on each downstream Cluster for the nodes to be functional. +
+
We can do this automatically at Cluster creation using a combination of the https://rancher.github.io/cluster-api-addon-provider-fleet/[Cluster API Add-on Provider Fleet] and a https://fleet.rancher.io/bundle-add[Fleet Bundle]. +
The Add-on provider is installed by default with {product_name}. +
The `HelmApps` need to be created first, so that they can be applied to the new Cluster via label selectors. This takes care of deploying Calico. +
A `Bundle` takes care of deploying Cloud Controller Manager GCP. The reason for not using the Add-on Provider Fleet here is that https://github.com/kubernetes/cloud-provider-gcp[Cloud Controller Manager GCP] does not provide a Helm chart, so we opt for creating the Fleet `Bundle` resource directly. +
* Create the GCP Cluster from the example ClusterClass +
+
Note that some variables are left to the user to substitute. +
The default configuration of Cloud Controller Manager GCP targets a single-zone cluster, so the `clusterFailureDomains` variable is set to a single zone. If you need to provision a multi-zone cluster, we recommend inspecting the parameters provided by https://github.com/kubernetes/cloud-provider-gcp/blob/12f93cb23a5af58bfb7fb453bebff3eb2f81755c/providers/gce/gce.go#L120[Cloud Controller Manager GCP] and how https://github.com/kubernetes-sigs/cluster-api-provider-gcp/blob/6518ef9b44cfc4f8c3f7139b2ce4ae71523deff6/test/e2e/data/infrastructure-gcp/cluster-template-ci.yaml#L59[CAPG leverages these variables] to create cluster-specific configurations. +
+
[source,yaml]
----
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    cluster-api.cattle.io/rancher-auto-import: "true"
    cni: calico
    cloud-provider: gcp
  name: gcp-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.168.0.0/16
  topology:
    class: gcp-kubeadm-example
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: "default-worker"
        name: "md-0"
        replicas: 1
    variables:
    - name: gcpProject
      value: <GCP_PROJECT>
    - name: region
      value: <GCP_REGION>
    - name: gcpNetworkName
      value: <GCP_NETWORK_NAME>
    - name: clusterFailureDomains
      value:
      - "<GCP_REGION>-a"
    - name: imageId
      value: <GCP_IMAGE_ID>
    - name: machineType
      value: <GCP_MACHINE_TYPE>
    version: v1.31.7
----
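+
After applying the manifest, you can watch provisioning progress and, once the Cluster is ready, retrieve its kubeconfig. A sketch using `clusterctl`, assuming the `gcp-quickstart` name from the example above:
+
[source,bash]
----
# Watch the Cluster and its machines come up.
kubectl get cluster gcp-quickstart -w
kubectl get machines

# Once provisioned, fetch the workload cluster kubeconfig and check the nodes.
clusterctl get kubeconfig gcp-quickstart > gcp-quickstart.kubeconfig
kubectl --kubeconfig gcp-quickstart.kubeconfig get nodes
----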

Docker Kubeadm::
+
* A Docker Kubeadm ClusterClass can be found among the https://github.com/rancher/turtles/tree/main/examples/clusterclasses[Turtles examples].