src/current/v25.2/migrate-cockroachdb-kubernetes-helm.md (78 additions, 1 deletion)
@@ -130,6 +130,10 @@ For each pod in the StatefulSet, perform the following steps:
Repeat these steps until the StatefulSet has zero replicas.
{{site.data.alerts.callout_danger}}
If there are issues with the migration and you need to revert to the previous deployment, follow the [rollback process](#roll-back-a-migration-in-progress).
{{site.data.alerts.end}}
## Step 4. Update the public service
The Helm chart creates a public Service that exposes both SQL and gRPC connections over a single port. However, the operator uses a different port for gRPC communication. To ensure compatibility, update the public Service to reflect the correct gRPC port used by the operator.
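The exact Service name and port numbers depend on your deployment; as a minimal sketch, assuming the public Service is named `cockroachdb-public` and the operator serves gRPC on port 26258, you could re-apply the Service with the gRPC entry updated:

~~~ shell
# Export the current public Service definition (Service name is an assumption).
kubectl get service cockroachdb-public -o yaml > cockroachdb-public.yaml

# Edit the gRPC entry under spec.ports so its port and targetPort match the
# operator's gRPC port (26258 is assumed here), then re-apply the Service.
kubectl apply -f cockroachdb-public.yaml
~~~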
@@ -162,4 +166,77 @@ Apply the crdbcluster manifest using Helm:
If the migration to the {{ site.data.products.cockroachdb-operator }} fails during the stage where you are applying the generated `crdbnode` manifests, follow the steps below to safely restore the original state using the previously backed-up resources and preserved volumes. This procedure assumes the StatefulSet and PVCs have not been deleted.
1. Delete the applied `crdbnode` resources and scale the StatefulSet back up, one node at a time.
Delete the individual `crdbnode` manifests in the reverse order of their creation (starting with the last one created, e.g., `crdbnode-2.yaml`) and scale the StatefulSet back to its original replica count (e.g., 3). For example, assuming you have applied two `crdbnode` YAML files (`crdbnode-2.yaml` and `crdbnode-1.yaml`):
1. Delete the most recently applied `crdbnode` manifest, starting with `crdbnode-2.yaml`.
1. Scale the StatefulSet replica count up by one (to 2).
1. Verify that data has propagated by waiting for there to be zero under-replicated ranges:
1. Set up port forwarding to access the CockroachDB node's HTTP interface, replacing `cockroachdb-X` with the node name:
This command outputs the number of under-replicated ranges on the node, which should be zero before proceeding with the next node. This may take some time depending on the deployment, but is necessary to ensure that there is no downtime in data availability.
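The commands for this check are elided from the hunk above; a minimal sketch, assuming the pod is named `cockroachdb-X`, the HTTP interface listens on port 8080, and you read the Prometheus-style metrics endpoint:

~~~ shell
# Forward the node's HTTP port to the local machine.
kubectl port-forward pod/cockroachdb-X 8080:8080 &

# Print the node's under-replicated range count; wait until it reports 0.
# For a secure cluster, use https:// and either --cacert with the cluster CA or -k.
curl -s http://localhost:8080/_status/vars | grep "^ranges_underreplicated"
~~~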
1. Repeat steps a through c for each node, deleting `crdbnode-1.yaml`, scaling the replica count to 3, and so on.
Repeat the `kubectl delete -f ...` command for each `crdbnode` manifest you applied during migration, and verify that there are no under-replicated ranges after rolling back each node. A sketch of one such iteration appears below.
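A minimal sketch of one iteration of this loop (the StatefulSet name and manifest paths are assumptions based on the examples above):

~~~ shell
# Remove the most recently applied crdbnode, then restore one StatefulSet replica.
kubectl delete -f crdbnode-2.yaml
kubectl scale statefulset/cockroachdb --replicas=2

# Confirm zero under-replicated ranges before moving on to crdbnode-1.yaml.
~~~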
1. Delete the PriorityClass and RBAC resources created for the CockroachDB operator:
{% include_cached copy-clipboard.html %}
~~~ shell
kubectl delete priorityclass crdb-critical
kubectl delete -f manifests/rbac.yaml
~~~
1. Uninstall the {{ site.data.products.cockroachdb-operator }}:
{% include_cached copy-clipboard.html %}
~~~ shell
helm uninstall crdb-operator
~~~
1. Clean up {{ site.data.products.cockroachdb-operator }} resources and custom resource definitions:
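The cleanup commands are truncated from this diff; a minimal sketch, assuming the operator's custom resource definitions live in the `crdb.cockroachlabs.com` API group, might look like:

~~~ shell
# List the operator's remaining custom resource definitions.
kubectl get crd | grep crdb.cockroachlabs.com

# Delete the CRDs reported above (the names shown are assumptions; confirm against the output first).
kubectl delete crd crdbclusters.crdb.cockroachlabs.com crdbnodes.crdb.cockroachlabs.com
~~~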
src/current/v25.2/migrate-cockroachdb-kubernetes-operator.md (99 additions, 0 deletions)
@@ -180,6 +180,10 @@ For each pod in the StatefulSet, perform the following steps:
Repeat these steps until the StatefulSet has zero replicas.
{{site.data.alerts.callout_danger}}
If there are issues with the migration and you need to revert to the previous deployment, follow the [rollback process](#roll-back-a-migration-in-progress).
{{site.data.alerts.end}}
## Step 5. Update the crdbcluster manifest
The {{ site.data.products.public-operator }} creates a pod disruption budget that conflicts with a pod disruption budget managed by the {{ site.data.products.cockroachdb-operator }}. Before applying the crdbcluster manifest, delete the existing pod disruption budget:
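The delete command itself falls outside this hunk; a minimal sketch, assuming the pod disruption budget carries the cluster's name:

~~~ shell
# Find the pod disruption budget created by the public operator.
kubectl get poddisruptionbudgets

# Delete it by name (the name shown here is an assumption).
kubectl delete poddisruptionbudget cockroachdb
~~~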
@@ -211,3 +215,98 @@ Once the migration is successful, delete the StatefulSet that was created by the
If the migration to the {{ site.data.products.cockroachdb-operator }} fails during the stage where you are applying the generated `crdbnode` manifests, follow the steps below to safely restore the original state using the previously backed-up resources and preserved volumes. This procedure assumes the StatefulSet and PVCs have not been deleted.
1. Delete the applied `crdbnode` resources and scale the StatefulSet back up, one node at a time.
Delete the individual `crdbnode` manifests in the reverse order of their creation (starting with the last one created, e.g., `crdbnode-2.yaml`) and scale the StatefulSet back to its original replica count (e.g., 3). For example, assuming you have applied two `crdbnode` YAML files (`crdbnode-2.yaml` and `crdbnode-1.yaml`):
1. Delete the most recently applied `crdbnode` manifest, starting with `crdbnode-2.yaml`.
1. Scale the StatefulSet replica count up by one (to 2).
1. Verify that data has propagated by waiting for there to be zero under-replicated ranges:
1. Set up port forwarding to access the CockroachDB node's HTTP interface, replacing `cockroachdb-X` with the node name:
This command outputs the number of under-replicated ranges on the node, which should be zero before proceeding with the next node. This may take some time depending on the deployment, but is necessary to ensure that there is no downtime in data availability.
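A minimal sketch of this check (pod name and HTTP port are assumptions; for a secure cluster use https:// with the cluster CA):

~~~ shell
# Forward the node's HTTP port, then wait for zero under-replicated ranges.
kubectl port-forward pod/cockroachdb-X 8080:8080 &
curl -s http://localhost:8080/_status/vars | grep "^ranges_underreplicated"
~~~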
1. Repeat steps a through c for each node, deleting `crdbnode-1.yaml`, scaling the replica count to 3, and so on.
Repeat the `kubectl delete -f ...` command for each `crdbnode` manifest you applied during migration, and verify that there are no under-replicated ranges after rolling back each node.
1. Delete the PriorityClass and RBAC resources created for the CockroachDB operator:
{% include_cached copy-clipboard.html %}
~~~ shell
kubectl delete priorityclass crdb-critical
kubectl delete -f manifests/rbac.yaml
~~~
1. Uninstall the {{ site.data.products.cockroachdb-operator }}:
{% include_cached copy-clipboard.html %}
~~~ shell
helm uninstall crdb-operator
~~~
1. Clean up {{ site.data.products.cockroachdb-operator }} resources and custom resource definitions: