This repo provides the Terraform configuration to deploy a demo app running on an AWS EKS cluster using best practices. It was created as an educational tool for learning about EKS and Terraform. Do not use this configuration in production without further assessment to ensure it meets your organization's requirements.
This Terraform configuration deploys the following resources:
- AWS EKS Cluster using Amazon EC2 nodes
- Amazon DynamoDB table
- Amazon Elastic Block Store (EBS) volume used as attached storage for the Kubernetes cluster (a `PersistentVolume`)
- Demo "guestbook" application, deployed via containers
- Application Load Balancer (ALB) to access the app
Plus several other supporting resources, as shown in the following diagram:
This cluster does not use EKS Auto Mode. To learn about EKS Auto Mode, see this repo instead: https://github.com/setheliot/eks_auto_mode/
Run all commands from an environment that has:
- Terraform installed
- AWS CLI installed
- AWS credentials configured for the target account
You have two options: use the easy deploy script, or follow the step-by-step instructions.

**Option 1: Easy deploy script**

1. Update the S3 bucket and DynamoDB table used for Terraform backend state here: `backend.tf`. Instructions are in the comments in that file. (A sketch of this backend block appears just after these steps.)
2. Choose one of the `tfvars` configuration files in the `terraform/environment` directory, or create a new one. The environment name `env_name` should be unique to each `tfvars` configuration file. You can also set the AWS Region in the configuration file. (An example `tfvars` sketch appears further below.)
3. Run the following commands:
   ```bash
   cd scripts
   ./ez_cluster_deploy.sh
   ```
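For reference, the backend block in `backend.tf` looks roughly like the following sketch. The bucket, key, table, and Region values here are placeholders, not this repo's actual values; use your own, per the comments in that file:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # placeholder: your S3 bucket
    key            = "eks-demo/terraform.tfstate" # placeholder: state object key
    region         = "us-east-1"                  # placeholder: the bucket's Region
    dynamodb_table = "my-terraform-lock-table"    # placeholder: DynamoDB lock table
    encrypt        = true
  }
}
```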
**Option 2: Step-by-step**

1. Update the S3 bucket and DynamoDB table used for Terraform backend state here: `backend.tf`. Instructions are in the comments in that file; see the backend sketch above.
2. Create the IAM policy to be used by the AWS Load Balancer Controller.
   - This only needs to be done once per AWS account.
   - Create the IAM policy using the Terraform in `terraform/init`.
3. Choose one of the `tfvars` configuration files in the `terraform/environment` directory, or create a new one. The environment name `env_name` should be unique to each `tfvars` configuration file. You can also set the AWS Region in the configuration file. (An example `tfvars` file appears below.)
4. `cd` into the `terraform/deploy` directory.
5. Initialize Terraform:

   ```bash
   terraform init
   ```
6. Set the Terraform workspace to the same value as the environment name `env_name` in the `tfvars` configuration file you are using.
   - If this is your first time running, use `terraform workspace new <env_name>`
   - On subsequent uses, use `terraform workspace select <env_name>`
7. Generate the plan and review it:

   ```bash
   terraform plan -var-file=environment/<selected tfvars file>
   ```
8. Deploy the resources:

   ```bash
   terraform apply -var-file=environment/<selected tfvars file> -auto-approve
   ```
Under `Outputs` there may be a value for `alb_dns_name`. If not, then:

- you can wait a few seconds and re-run the `terraform apply` command, or
- you can look up the value in your EKS cluster by examining the `Ingress` Kubernetes resource.
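The reason the output can be empty at first: the ALB hostname only appears on the `Ingress` status after the AWS Load Balancer Controller finishes provisioning the ALB. A minimal sketch of how such an output is typically wired (the resource name `ingress_alb` is taken from the destroy targets below; the exact attribute path is an assumption about this repo):

```hcl
# Sketch: expose the ALB DNS name from the Ingress status.
# The hostname is unset until the AWS Load Balancer Controller
# finishes provisioning the ALB, so this can be empty on first apply.
output "alb_dns_name" {
  value = try(
    kubernetes_ingress_v1.ingress_alb.status[0].load_balancer[0].ingress[0].hostname,
    null
  )
}
```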
Use this DNS name to access the app, over `http://` (do not use `https`). It may take about a minute after initial deployment for the application to start working.
If you want to experiment and make changes to the Terraform, you should be able to start at step 3 of the step-by-step instructions.
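For reference, a `tfvars` configuration file like the ones chosen in step 3 might look roughly like this sketch. The variable name `env_name` comes from the instructions above; `aws_region` is an assumed name for the Region variable (check the files in `terraform/environment` for the actual names used):

```hcl
# environment/example.tfvars  (hypothetical file name)
env_name   = "demo1"      # must be unique to each tfvars file; also used as the workspace name
aws_region = "us-west-2"  # assumed variable name for the AWS Region
```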
To delete all the resources, you can use the cleanup script:

```bash
cd scripts
./cleanup_cluster.sh \
  -var-file=environment/<selected tfvars file>
```
Or run the cleanup steps manually from the `terraform/deploy` directory (the target with `[0]` is quoted so the shell does not treat the brackets as a glob):

```bash
terraform init
terraform workspace select <env_name>

terraform destroy \
  -auto-approve \
  -target=kubernetes_deployment_v1.guestbook_app_deployment \
  -var-file=environment/<selected tfvars file>

terraform destroy \
  -auto-approve \
  -target=kubernetes_persistent_volume_claim_v1.ebs_pvc \
  -var-file=environment/<selected tfvars file>

terraform destroy \
  -auto-approve \
  -target='module.alb[0].kubernetes_ingress_v1.ingress_alb' \
  -var-file=environment/<selected tfvars file>

terraform destroy \
  -auto-approve \
  -var-file=environment/<selected tfvars file>
```
To understand why this requires these separate `destroy` operations, see this. In short: Kubernetes resources must be destroyed while the EKS cluster (and the Kubernetes provider configured from it) still exists, and the `Ingress` must be deleted first so the AWS Load Balancer Controller can remove the ALB it created.
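A minimal sketch of the pattern behind this constraint, assuming the cluster is created by a module named `eks` with the usual outputs (module and output names here are assumptions, not necessarily this repo's): the Kubernetes provider is configured from the cluster's outputs, so Terraform can only manage, and destroy, Kubernetes resources while the cluster exists.

```hcl
# Sketch: Kubernetes provider configured from EKS cluster outputs.
# Terraform can only destroy Kubernetes resources while the cluster
# (and therefore this provider configuration) is still valid.
data "aws_eks_cluster_auth" "this" {
  name = module.eks.cluster_name # assumed module/output names
}

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```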
I welcome feedback or bug reports (use GitHub issues) and Pull Requests.