This example demonstrates how to create two Nutanix clusters and set up a protection policy between them. Additionally, it covers the process of creating a VM in one cluster and migrating it to the other. The setup is partially automated using Terraform and partially manual.
Prerequisites:

- Terraform and SSH installed on your local machine
- Equinix Metal account
- Equinix Metal Nutanix Certified Hardware Reservations, or preapproved access to the `nutanix_lts_6_5_poc` image
- Create two Nutanix clusters
1.1. Clone the repository:

```sh
git clone git@github.com:equinix-labs/terraform-equinix-metal-nutanix-cluster.git
cd terraform-equinix-metal-nutanix-cluster
cd examples/cluster-migration
```
1.2. Create the `terraform.tfvars` file:

```hcl
metal_auth_token        = "" # Equinix Metal API token
metal_project_id        = "" # The ID of the Metal project in which to deploy the cluster if `create_project` is false.
metal_organization_id   = "" # The ID of the Metal organization in which to create the project if `create_project` is true.
metal_metro             = "sl"    # The metro to create the cluster in
create_project          = false   # (Optional) To use an existing project matching `metal_project_name`, set this to false.
create_vlan             = false   # Whether to create a new VLAN for this project.
create_vrf              = true
nutanix_node_count      = 1       # The number of Nutanix nodes to create per cluster. Must be an odd number (1, 3, 5, ...).
metal_subnet            = "192.168.96.0/21" # Pick an arbitrary private subnet; we recommend a /21 like "192.168.96.0/21".
nutanix_reservation_ids = { cluster_a = [], cluster_b = [] } # Hardware reservation IDs to use for the Nutanix nodes
metal_nutanix_os        = "nutanix_lts_6_5" # Nutanix OS to deploy. nutanix_lts_6_5 requires reservations; nutanix_lts_6_5_poc may be available on request.
```
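The comments above call out two constraints: `nutanix_node_count` must be odd, and a /21 `metal_subnet` is recommended. A minimal pre-flight sketch (not part of the repository, using the example values from the tfvars above) that checks both before running Terraform:

```shell
# Hypothetical pre-flight check; values copied from the example tfvars above.
nutanix_node_count=1
metal_subnet="192.168.96.0/21"

# nutanix_node_count must be odd (1, 3, 5, ...).
if [ $((nutanix_node_count % 2)) -ne 1 ]; then
  echo "error: nutanix_node_count must be odd" >&2
fi

# A /21 subnet is recommended so it can be split into one /22 per cluster.
case "$metal_subnet" in
  */21) echo "subnet OK" ;;
  *)    echo "warning: a /21 subnet is recommended" >&2 ;;
esac
```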
1.3. Initialize and apply Terraform:

```sh
terraform init
terraform plan
terraform apply
```
1.4. Network topology:

```mermaid
graph TD
    Internet[Internet 🌐]
    A[Common VRF: 192.168.96.0/21]
    subgraph ClusterA["Cluster A"]
        direction TB
        A1[VLAN A]
        A2[VRF IP Reservation: 192.168.96.0/22]
        A3[Gateway A]
        A4[Bastion A <DHCP,NTP,NAT>]
        A5[Nutanix Nodes A]
    end
    subgraph ClusterB["Cluster B"]
        direction TB
        B1[VLAN B]
        B2[VRF IP Reservation: 192.168.100.0/22]
        B3[Gateway B]
        B4[Bastion B <DHCP,NTP,NAT>]
        B5[Nutanix Nodes B]
    end
    A -->|192.168.96.0/22| A1
    A1 --> A2
    A2 --> A3
    A3 --> A4
    A4 --> A5
    A -->|192.168.100.0/22| B1
    B1 --> B2
    B2 --> B3
    B3 --> B4
    B4 --> B5
    Internet --> A4
    Internet --> B4
```
1.5. After a successful run, the expected output looks like:

```sh
Outputs:

nutanix_cluster1_bastion_public_ip = "145.40.91.33"
nutanix_cluster1_cvim_ip_address = "192.168.97.57"
nutanix_cluster1_iscsi_data_services_ip = "192.168.99.253"
nutanix_cluster1_prism_central_ip_address = "192.168.99.252"
nutanix_cluster1_ssh_forward_command = "ssh -L 9440:192.168.97.57:9440 -L 19440:192.168.99.252:9440 -i /Users/username/terraform-equinix-metal-nutanix-cluster/examples/cluster-migration/ssh-key-qh0f2 root@145.40.91.33"
nutanix_cluster1_ssh_private_key = "/Users/example/terraform-equinix-metal-nutanix-cluster/examples/cluster-migration/ssh-key-qh0f2"
nutanix_cluster1_virtual_ip_address = "192.168.99.254"
nutanix_cluster2_bastion_public_ip = "145.40.91.141"
nutanix_cluster2_cvim_ip_address = "192.168.102.176"
nutanix_cluster2_iscsi_data_services_ip = "192.168.103.253"
nutanix_cluster2_prism_central_ip_address = "192.168.103.252"
nutanix_cluster2_ssh_forward_command = "ssh -L 9442:192.168.102.176:9440 -L 19442:192.168.103.252:9440 -i /Users/example/Equinix/terraform-equinix-metal-nutanix-cluster/examples/cluster-migration/ssh-key-lha20 root@145.40.91.141"
nutanix_cluster2_ssh_private_key = "/Users/example/Equinix/terraform-equinix-metal-nutanix-cluster/examples/cluster-migration/ssh-key-lha20"
nutanix_cluster2_virtual_ip_address = "192.168.103.254"
```
- Set up network resources to connect the clusters
Let's start by simplifying how we access the Terraform outputs from the previous step. We'll make heavy use of these outputs as variables in the following steps.
If you didn't reach a successful deployment in the previous steps, you will be missing variables needed below. The two commands here count the outputs Terraform reports and the `output` blocks defined in `outputs.tf`; if they don't print the same number, check the known issues before moving ahead.

```sh
terraform output | wc -l
grep -c 'output "' outputs.tf
```
Now export the outputs to their own shell environment variables. Keep in mind, these variables are only available where you ran Terraform, not within the bastion or Nutanix nodes.
```sh
eval $(terraform output | sed 's/ = /=/')
```
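To see what the `eval`/`sed` pipeline does, here is a small sketch run against a single sample line in the same `name = "value"` format that `terraform output` prints (the value is taken from the example outputs above):

```shell
# One sample line in the format `terraform output` prints.
line='nutanix_cluster1_virtual_ip_address = "192.168.99.254"'

# sed rewrites `name = "value"` as `name="value"`, which eval then executes
# as a shell variable assignment in the current shell.
eval "$(echo "$line" | sed 's/ = /=/')"
echo "$nutanix_cluster1_virtual_ip_address"   # prints 192.168.99.254
```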
1.1. Access Cluster 1:
```sh
ssh -L 9440:$nutanix_cluster1_cvim_ip_address:9440 \
    -L 19440:$nutanix_cluster1_prism_central_ip_address:9440 \
    -i $nutanix_cluster1_ssh_private_key \
    root@$nutanix_cluster1_bastion_public_ip
```
OR
```sh
$(terraform output -raw nutanix_cluster1_ssh_forward_command)
```
1.2. Follow the instructions to change the password of Cluster 1: Nutanix Metal Workshop - Access Prism UI
1.3. Access Cluster 2:
```sh
ssh -L 9440:$nutanix_cluster2_cvim_ip_address:9440 \
    -L 19440:$nutanix_cluster2_prism_central_ip_address:9440 \
    -i $nutanix_cluster2_ssh_private_key \
    root@$nutanix_cluster2_bastion_public_ip
```
OR
```sh
$(terraform output -raw nutanix_cluster2_ssh_forward_command)
```
1.4. Follow the instructions to change the password of Cluster 2: Nutanix Metal Workshop - Access Prism UI
1.5. Add a route to establish connectivity between the two clusters:
1.5.1. On Cluster 1:
```sh
ssh -L 9440:$nutanix_cluster1_cvim_ip_address:9440 \
    -L 19440:$nutanix_cluster1_prism_central_ip_address:9440 \
    -i $nutanix_cluster1_ssh_private_key \
    -J root@$nutanix_cluster1_bastion_public_ip \
    admin@$nutanix_cluster1_cvim_ip_address

# Then, on the CVM:
sudo ip route add 192.168.100.0/22 via 192.168.96.1
```
1.5.2. On Cluster 2:
```sh
ssh -L 9440:$nutanix_cluster2_cvim_ip_address:9440 \
    -L 19440:$nutanix_cluster2_prism_central_ip_address:9440 \
    -i $nutanix_cluster2_ssh_private_key \
    -J root@$nutanix_cluster2_bastion_public_ip \
    admin@$nutanix_cluster2_cvim_ip_address

# Then, on the CVM:
sudo ip route add 192.168.96.0/22 via 192.168.100.1
```
Note: It is recommended to use Cluster 1 in a normal window and Cluster 2 in an incognito window.
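The destinations in the `ip route add` commands above are simply the two /22 halves of the /21 `metal_subnet`. A small sketch, assuming the default `192.168.96.0/21`, that derives them:

```shell
# Derive the two /22 child subnets of the /21 metal_subnet.
parent="192.168.96.0/21"
base="${parent%/*}"                        # 192.168.96.0
prefix="$(echo "$base" | cut -d. -f1-2)"   # 192.168
o3="$(echo "$base" | cut -d. -f3)"         # 96

# A /22 spans 4 values of the third octet, so the second child starts at o3+4.
echo "Cluster 1: ${prefix}.${o3}.0/22"
echo "Cluster 2: ${prefix}.$((o3 + 4)).0/22"
```

Each cluster adds a route to the other cluster's /22 via its own VRF gateway.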
- Update Cluster Details
2.1. Update on Cluster 1: Click the gear icon in the upper right corner of the Prism UI, choose Cluster Details, enter `192.168.99.254` for the Virtual IP and `192.168.99.253` for the ISCSI Data Services IP, and click Save.

2.2. Update on Cluster 2: Click the gear icon in the upper right corner of the Prism UI, choose Cluster Details, enter `192.168.103.254` for the Virtual IP and `192.168.103.253` for the ISCSI Data Services IP, and click Save.

- Set up Remote Site on both Clusters
Navigate to the top right, click `+ Remote Site`, and select `Physical Cluster`. Continue through the pop-up window that follows.
- Create a Virtual Machine on any one Cluster
- Set up a protection policy between the clusters
5.1. Log in to Nutanix Prism Central.
5.2. Navigate to the Data Protection section and create a new Protection Domain.
- Migrate the VM to the other cluster
6.1. Log in to Nutanix Prism Central.
Once migration is initiated, it takes a while to complete. You can monitor progress under Recent Tasks.
Requirements:

| Name | Version |
|---|---|
| terraform | >= 1.0 |
| equinix | >= 1.30 |
| local | >= 2.5 |
| null | >= 3 |
| random | >= 3 |
Providers:

| Name | Version |
|---|---|
| equinix | >= 1.30 |
| random | >= 3 |
Modules:

| Name | Source | Version |
|---|---|---|
| nutanix_cluster1 | equinix-labs/metal-nutanix-cluster/equinix | 0.5.0 |
| nutanix_cluster2 | equinix-labs/metal-nutanix-cluster/equinix | 0.5.0 |
Resources:

| Name | Type |
|---|---|
| equinix_metal_project.nutanix | resource |
| equinix_metal_vrf.nutanix | resource |
| random_string.vrf_name_suffix | resource |
| equinix_metal_project.nutanix | data source |
| equinix_metal_vrf.nutanix | data source |
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| metal_auth_token | Equinix Metal API token. | `string` | n/a | yes |
| metal_metro | The metro to create the cluster in. | `string` | n/a | yes |
| create_project | (Optional) To use an existing project matching `metal_project_name`, set this to false. | `bool` | `true` | no |
| create_vlan | Whether to create a new VLAN for this project. | `bool` | `true` | no |
| create_vrf | Whether to create a new VRF for this project. | `bool` | `true` | no |
| metal_nutanix_os | The Equinix Metal OS to use for the Nutanix nodes. `nutanix_lts_6_5` is available for Nutanix certified hardware reservation instances. `nutanix_lts_6_5_poc` may be available upon request. | `string` | `"nutanix_lts_6_5"` | no |
| metal_organization_id | The ID of the Metal organization in which to create the project if `create_project` is true. | `string` | `null` | no |
| metal_project_id | The ID of the Metal project in which to deploy the cluster. If `create_project` is false and you do not specify a project name, the project will be looked up by ID. One (and only one) of `metal_project_name` or `metal_project_id` must be set. | `string` | `""` | no |
| metal_project_name | The name of the Metal project in which to deploy the cluster. If `create_project` is false and you do not specify a project ID, the project will be looked up by name. One (and only one) of `metal_project_name` or `metal_project_id` must be set. Required if `create_project` is true. | `string` | `""` | no |
| metal_subnet | IP pool for all Nutanix clusters in the example. One bit will be appended to the end and divided between example clusters (192.168.96.0/21 will result in clusters with ranges 192.168.96.0/22 and 192.168.100.0/22). | `string` | `"192.168.96.0/21"` | no |
| metal_vlan_id | ID of the VLAN you wish to use. | `number` | `null` | no |
| nutanix_node_count | The number of Nutanix nodes to create. This must be an odd number. | `number` | `1` | no |
| nutanix_reservation_ids | Hardware reservation IDs to use for the Nutanix nodes. If specified, the length of this list must be the same as `nutanix_node_count` for each cluster. Each item can be a reservation UUID or `next-available`. If you use reservation UUIDs, make sure that they are in the same metro specified in `metal_metro`. | `object({ cluster_a = list(string), cluster_b = list(string) })` | `{ cluster_a = [], cluster_b = [] }` | no |
| vrf_id | ID of the VRF you wish to use. | `string` | `null` | no |
Outputs:

| Name | Description |
|---|---|
| nutanix_cluster1_bastion_public_ip | The public IP address of the bastion host |
| nutanix_cluster1_cluster_gateway | The Nutanix cluster gateway IP |
| nutanix_cluster1_cvim_ip_address | The IP address of the CVM |
| nutanix_cluster1_iscsi_data_services_ip | Reserved IP for cluster ISCSI Data Services IP |
| nutanix_cluster1_prism_central_ip_address | Reserved IP for Prism Central VM |
| nutanix_cluster1_ssh_forward_command | SSH port forward command to use to connect to the Prism GUI |
| nutanix_cluster1_ssh_private_key | The SSH keypair's private key for cluster1 |
| nutanix_cluster1_virtual_ip_address | Reserved IP for cluster virtual IP |
| nutanix_cluster2_bastion_public_ip | The public IP address of the bastion host |
| nutanix_cluster2_cluster_gateway | The Nutanix cluster gateway IP |
| nutanix_cluster2_cvim_ip_address | The IP address of the CVM |
| nutanix_cluster2_iscsi_data_services_ip | Reserved IP for cluster ISCSI Data Services IP |
| nutanix_cluster2_prism_central_ip_address | Reserved IP for Prism Central VM |
| nutanix_cluster2_ssh_forward_command | SSH port forward command to use to connect to the Prism GUI |
| nutanix_cluster2_ssh_private_key | The SSH keypair's private key for cluster2 |
| nutanix_cluster2_virtual_ip_address | Reserved IP for cluster virtual IP |