The Terraform templates aim to provide an easy-to-use YAML configuration file describing the target cloud infrastructure.
Provider | Component | Supported |
---|---|---|
EDB | BigAnimal - AWS | ✅ |
AWS | EC2 - VM | ✅ |
AWS | EC2 - additional EBS vol. | ✅ |
AWS | multi-region VPC peering | ✅ |
AWS | Security (ports) | ✅ |
AWS | RDS | ✅ |
AWS | RDS Aurora | ✅ |
AWS | Elastic Kubernetes Service | ✅ |
GCloud | Compute Engine - VM | ✅ |
GCloud | CloudSQL | ✅ |
GCloud | AlloyDB | ✅ |
GCloud | Google Kubernetes Engine | ✅ |
EDB | BigAnimal - Azure | ✅ |
Azure | VM | ✅ |
Azure | Database - Flexible | ✅ |
Azure | Cosmos DB | ❌ |
Azure | Azure Kubernetes Service | ✅ |
The following components must be installed on the system:
- Python3 >= 3.6
- AWS CLI
- GCloud CLI
- Azure CLI
- BigAnimal token (CLI currently optional)
- Terraform >= 1.3.6
Infrastructure files describing the target cloud can be found inside of the `infrastructure-examples` directory.
```shell
$ sudo apt install python3 python3-pip -y
$ sudo pip3 install pip --upgrade
```
- `access_token`
  - expires in 24 hours
- `refresh_token`
  - changes after every refresh, together with a new `access_token`
  - expires:
    - in 30 days
    - when refreshed
    - when an expired `refresh_token` is reused
```shell
wget https://raw.githubusercontent.com/EnterpriseDB/cloud-utilities/main/api/get-token.sh
bash get-token.sh
# Visit the biganimal link to activate the device
# ex. Please login to https://auth.biganimal.com/activate?user_code=JWPL-RCXL with your BigAnimal account
# Have you finished the login successfully. (y/n)
# Save the refresh token, if needed
export BA_BEARER_TOKEN=<access_token>
```
Refresh the token:

```shell
bash get-token.sh --refresh <refresh_token>
# Save the new refresh token, if needed again
export BA_BEARER_TOKEN=<access_token>
```
The CLI currently requires users to visit a link when using `biganimal reset-credential`.
Fetching the token directly from the API is preferred, to avoid needing to revisit the link.
```shell
$ sudo pip3 install awscli
```

Configure the AWS Access Key and Secret Access Key:

```shell
$ aws configure
```
Initialize GCloud and export the project id:

```shell
$ gcloud init
$ export GOOGLE_PROJECT=<project_id>
```
```shell
$ sudo apt install unzip -y
$ wget https://releases.hashicorp.com/terraform/1.3.6/terraform_1.3.6_linux_amd64.zip
$ unzip terraform_1.3.6_linux_amd64.zip
$ sudo install terraform /usr/bin
```
```shell
$ git clone https://github.com/EnterpriseDB/edb-terraform.git
$ python3 -m pip install edb-terraform/. --upgrade
```
Once the infrastructure file has been created, we can proceed with the creation of the cloud resources:
- We can attempt to set up a compatible version of Terraform with `edb-terraform setup`. The binary directory will be inside of `~/.edb-terraform/bin` and logs can be found inside of `~/.edb-terraform/logs`:

  ```shell
  $ edb-terraform setup
  ```
- A new Terraform project must be created with the help of the `edb-terraform` script. This script is in charge of creating a dedicated directory for the project, generating SSH keys, building the Terraform configuration based on the infrastructure file, and copying the Terraform modules into the project directory.

  a. The first argument is the project path, the second argument is the path to the infrastructure file.
  Use option `-c` to specify the cloud provider: `azure`, `aws`, or `gcloud`. Defaults to `aws` if not used.

  ```shell
  $ edb-terraform generate --project-name aws-terraform \
      --cloud-service-provider aws \
      --infra-file edb-terraform/infrastructure-examples/aws/edb-ra-3.yml \
      --user-templates edb-terraform/infrastructure-examples/templates/inventory.yml.tftpl
  ```

  b. Step 2 can be skipped if option `--validate` is included with `generate`, which provides basic validations and checks through Terraform.
- Terraform initialisation of the project:

  ```shell
  $ cd aws-terraform
  $ terraform init
  ```
- Apply the cloud resources creation:

  ```shell
  $ cd aws-terraform
  $ terraform apply -auto-approve
  ```
Once cloud resources provisioning is completed, the machines' public and private IPs are stored in the `servers.yml` file, located in the project's directory.
These outputs can be used with a list of templates to generate files for other programs such as Ansible.
See the example here, which uses the below outputs.
Example:

```yaml
---
servers:
  machines:
    dbt2-driver:
      additional_volumes: []
      instance_type: "c5.4xlarge"
      operating_system: {"name":"debian-10-amd64","owner":"136693071363","ssh_user":"admin"}
      private_ip: "10.2.20.38"
      public_dns: "ec2-54-197-78-139.compute-1.amazonaws.com"
      public_ip: "54.197.78.139"
      region: "us-east-1"
      tags: {"Name":"dbt2-driver-Demo-Infra-d8d0a932","cluster_name":"Demo-Infra","created_by":"edb-terraform","terraform_hex":"d8d0a932","terraform_id":"2NCpMg","terraform_time":"2023-05-24T21:09:11Z","type":"dbt2-driver"}
      type: null
      zone: "us-east-1b"
    pg1:
      additional_volumes: [{"encrypted":false,"iops":5000,"mount_point":"/opt/pg_data","size_gb":20,"type":"io2"},{"encrypted":false,"iops":5000,"mount_point":"/opt/pg_wal","size_gb":20,"type":"io2"}]
      instance_type: "c5.4xlarge"
      operating_system: {"name":"Rocky-8-ec2-8.6-20220515.0.x86_64","owner":"679593333241","ssh_user":"rocky"}
      private_ip: "10.2.30.197"
      public_dns: "ec2-3-89-238-24.compute-1.amazonaws.com"
      public_ip: "3.89.238.24"
      region: "us-east-1"
      tags: {"Name":"pg1-Demo-Infra-d8d0a932","cluster_name":"Demo-Infra","created_by":"edb-terraform","terraform_hex":"d8d0a932","terraform_id":"2NCpMg","terraform_time":"2023-05-24T21:09:11Z","type":"postgres"}
      type: null
      zone: "us-east-1b"
    [...]
```
You can also use `terraform output` to get a JSON object for use:

```shell
terraform output -json servers | python3 -m json.tool
```
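As a sketch, the JSON emitted by `terraform output -json servers` can be consumed from any scripting language. Assuming the output is shaped like the example above (the inlined JSON here is trimmed from that example, not a live output), this Python snippet collects each machine's public IP:

```python
import json

# Hypothetical, trimmed output of `terraform output -json servers`,
# based on the example shown above.
raw = """
{
  "machines": {
    "dbt2-driver": {"public_ip": "54.197.78.139", "private_ip": "10.2.20.38", "region": "us-east-1"},
    "pg1": {"public_ip": "3.89.238.24", "private_ip": "10.2.30.197", "region": "us-east-1"}
  }
}
"""

servers = json.loads(raw)

# Map machine name -> public IP for every provisioned machine.
public_ips = {name: spec["public_ip"] for name, spec in servers["machines"].items()}
print(public_ips)
```

In practice you would replace `raw` with the command's actual output, e.g. via `subprocess.run(["terraform", "output", "-json", "servers"], capture_output=True)`.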
SSH key files: `ssh-id_rsa` and `ssh-id_rsa.pub`.
Users can further modify their resources after the initial provisioning.
If any output files are needed based on the resources, Terraform templates can be added to the project's template directory; they will be rendered with the resource outputs once all resources are created.
Examples of template files can be found here:

- edb-ansible's included inventory.yml
- sample inventory.yml
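A minimal sketch of such a template, using standard Terraform `templatefile` directives. It assumes the template is rendered with a `servers` value shaped like the example output above; the variable name and the resulting Ansible inventory layout are illustrative assumptions, not the project's documented contract:

```
---
# inventory.yml.tftpl (sketch): emit one Ansible host per machine.
all:
  children:
    servers:
      hosts:
%{ for name, machine in servers.machines ~}
        ${name}:
          ansible_host: ${machine.public_ip}
          private_ip: ${machine.private_ip}
%{ endfor ~}
```

The `%{ for … ~}` / `%{ endfor ~}` directives loop over the machines map, and `${…}` interpolates each machine's attributes into the rendered file.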
```shell
$ cd aws-terraform
$ terraform destroy -auto-approve
```