Deploys complete EKS cluster including Persistent Storage, Load Balancer, and demo app

setheliot/eks_demo

AWS EKS Terraform demo

This repo provides the Terraform configuration to deploy a demo app running on an AWS EKS cluster, following best practices. It was created as an educational tool for learning about EKS and Terraform. It is not recommended that this configuration be used in production without further assessment to ensure it meets your organization's requirements.

Deployed resources

This Terraform configuration deploys the following resources:

  • AWS EKS Cluster using Amazon EC2 nodes
  • Amazon DynamoDB table
  • Amazon Elastic Block Store (EBS) volume used as attached storage for the Kubernetes cluster (a PersistentVolume)
  • Demo "guestbook" application, deployed via containers
  • Application Load Balancer (ALB) to access the app

Plus several other supporting resources, as shown in the following diagram:

[architecture diagram]

Auto Mode Disabled

This cluster does not use EKS Auto Mode. To learn about EKS Auto Mode, see this repo instead: https://github.com/setheliot/eks_auto_mode/

Deploy EKS cluster and app resources

Run all commands from an environment that has:

  • Terraform installed
  • AWS CLI installed
  • AWS credentials configured for the target account

You have two options:

Option 1. Automatic configuration and execution

  1. Update the S3 bucket and DynamoDB table used for Terraform backend state here: backend.tf. Instructions are in the comments in that file.
  2. Choose one of the tfvars configuration files in the terraform/environment directory, or create a new one. The environment name env_name should be unique to each tfvars configuration file. You can also set the AWS Region in the configuration file.
  3. Run the following commands:
cd scripts
./ez_cluster_deploy.sh
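Creating a new tfvars configuration file for step 2 might look like the following sketch. The variable names here (env_name, aws_region) are assumptions; check the existing files in terraform/environment for the names this repo actually uses.

```shell
# Hypothetical sketch: create a tfvars file for a new environment.
# Variable names are assumptions; copy an existing file in
# terraform/environment as your starting point.
cat > my-demo.tfvars <<'EOF'
env_name   = "my-demo"    # must be unique per tfvars file
aws_region = "us-east-1"
EOF
```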

Option 2. For those familiar with using Terraform

  1. Update the S3 bucket and DynamoDB table used for Terraform backend state here: backend.tf. Instructions are in the comments in that file.

  2. Create the IAM policy to be used by AWS Load Balancer Controller

    1. This only needs to be done once per AWS account
    2. Create the IAM policy using the terraform in terraform/init
  3. Choose one of the tfvars configuration files in the terraform/environment directory, or create a new one. The environment name env_name should be unique to each tfvars configuration file. You can also set the AWS Region in the configuration file.

  4. cd into the terraform/deploy directory

  5. Initialize Terraform

    terraform init
  6. Set the Terraform workspace to the same value as the environment name (env_name) in the tfvars configuration file you are using.

    • If this is your first time running then use
      terraform workspace new <env_name>
    • On subsequent uses, use
      terraform workspace select <env_name>
  7. Generate the plan and review it

    terraform plan -var-file=environment/<selected tfvars file>
  8. Deploy the resources

    terraform apply -var-file=environment/<selected tfvars file> -auto-approve
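The select-or-create choice in step 6 is the only branch in the steps above. It can be sketched as a small helper (hypothetical, not part of this repo's scripts; in practice the one-liner terraform workspace select <env_name> || terraform workspace new <env_name> achieves the same thing):

```shell
# Hypothetical helper: given the output of `terraform workspace list`
# on stdin, print the workspace command to run for the environment
# named in $1 -- select it if it already exists, otherwise create it.
workspace_cmd() {
  if grep -q -w "$1"; then
    echo "terraform workspace select $1"
  else
    echo "terraform workspace new $1"
  fi
}

# First run: the workspace does not exist yet, so "new" is chosen.
printf '  default\n* dev\n' | workspace_cmd prod
# prints: terraform workspace new prod
```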

Under Outputs there may be a value for alb_dns_name. If not, then

  • you can wait a few seconds and re-run the terraform apply command, or
  • you can look up the value in your EKS cluster by examining the Ingress Kubernetes resource

Use this DNS name to access the app. Use http:// (do not use https). It may take about a minute after initial deployment for the application to start working.
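To look up the ALB DNS name from the Ingress Kubernetes resource (the second bullet above), one way is with kubectl. The Ingress name below is an assumption; adjust it to however the guestbook app is actually deployed in your cluster.

```shell
# List Ingress resources across all namespaces; the ALB DNS name appears
# in the ADDRESS column once the AWS Load Balancer Controller has
# provisioned the load balancer.
kubectl get ingress --all-namespaces

# Or extract just the hostname (the Ingress name "guestbook" is an
# assumption -- use the name shown by the command above):
kubectl get ingress guestbook \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```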

If you want to experiment and make changes to the Terraform, you should be able to start at step 3.

Tear-down (clean up) all the resources created

Option 1. Scripted

cd scripts
./cleanup_cluster.sh \
    -var-file=environment/<selected tfvars file>

Option 2. Do it yourself

terraform init
terraform workspace select <env_name>
terraform destroy \
    -auto-approve \
    -target=kubernetes_deployment_v1.guestbook_app_deployment \
    -var-file=environment/<selected tfvars file>

terraform destroy \
    -auto-approve \
    -target=kubernetes_persistent_volume_claim_v1.ebs_pvc \
    -var-file=environment/<selected tfvars file>

terraform destroy \
    -auto-approve \
    -target=module.alb[0].kubernetes_ingress_v1.ingress_alb \
    -var-file=environment/<selected tfvars file>

terraform destroy \
    -auto-approve \
    -var-file=environment/<selected tfvars file>

To understand why this requires these separate destroy operations, see this.

Known issues


I welcome feedback or bug reports (use GitHub issues) and Pull Requests.

MIT License
