- Terraform Beginner Bootcamp 2023 - Week 1
- Fixing Tags
- Root Module Structure
- What happens if we lose our state file?
- Fix using Terraform Refresh
- Terraform Modules
- Considerations when using ChatGPT to write Terraform
- Working with Files in Terraform
- Tag 1.4.1 - 1.4.2
- Tag 1.5.0
- Terraform Data
- Tag 1.5.1
- Provisioners
- Local-exec
- Remote-exec
- Tag 1.6.0
- Tag 1.7.0
Table of contents generated with markdown-toc
How to Delete Local and Remote Tags on Git

Locally delete a tag:

```sh
git tag -d <tag_name>
```

Remotely delete a tag:

```sh
git push --delete origin <tag_name>
```

Checkout the commit that you want to retag. Grab the SHA from your GitHub history.

```sh
git checkout <SHA>
git tag M.M.P
git push --tags
git checkout main
```
Our root module structure is as follows:
```
PROJECT_ROOT
│
├── main.tf                 # everything else
├── variables.tf            # stores the structure of input variables
├── terraform.tfvars        # the data of variables we want to load into our terraform project
├── providers.tf            # defines required providers and their configuration
├── outputs.tf              # stores our outputs
└── README.md               # required for root modules
```
In Terraform we can set two kinds of variables:
- Environment Variables - those you would set in your bash terminal e.g. AWS credentials
- Terraform Variables - those that you would normally set in your tfvars file
We can set Terraform Cloud variables to be sensitive so they are not shown visibly in the UI.

We can use the `-var` flag to set an input variable or override a variable in the tfvars file e.g. `terraform apply -var user_uuid="my-user_id"`
- TODO: document the `-var-file` flag
terraform.tfvars is the default file from which Terraform loads variables in bulk.
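A minimal sketch of how a declaration in variables.tf pairs with a value in terraform.tfvars, using the `user_uuid` variable from this project (the description text is illustrative):

```hcl
# variables.tf: declares the structure and type of the input variable
variable "user_uuid" {
  description = "The UUID of the user" # illustrative description
  type        = string
}
```

```hcl
# terraform.tfvars: the value Terraform loads automatically at plan/apply
user_uuid = "my-user_id"
```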
- TODO: document this functionality for Terraform Cloud
- TODO: document which Terraform variables take precedence.
If you lose your state file, you most likely have to tear down all your cloud infrastructure manually.

You can use `terraform import`, but it won't work for all cloud resources. You need to check the Terraform provider documentation for which resources support import.
```sh
terraform import aws_s3_bucket.bucket bucket-name
```

See the Terraform Import and AWS S3 Bucket Import documentation.
Note: when changing the configuration from a random bucket name to a static one, we lost the S3 bucket created with the random provider settings. We needed to run `terraform apply` twice: the first run destroyed the random provider resources, and the second created our new bucket, even though we ran `terraform import` before these commands. We probably should have removed the resource from the Terraform state first.
To remove a resource from Terraform state, follow these steps:

1. Identify the address of the resource in your Terraform state. You can list the resources and their addresses using the following command:

```sh
terraform state list
```

2. Use the `terraform state rm` command to remove the identified resource from Terraform state. Replace `<RESOURCE_ADDRESS>` with the actual address of the resource.

```sh
terraform state rm <RESOURCE_ADDRESS>
```
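For example, to remove the website bucket used elsewhere in this project (assuming that resource address):

```sh
terraform state rm aws_s3_bucket.website_bucket
```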
3. After removing the resource from the state, update your Terraform configuration to reflect the removal. This involves removing the corresponding resource block from your configuration file.

4. Run `terraform apply` to apply the changes to your infrastructure.

```sh
terraform apply
```

This will update your infrastructure to reflect the changes in your Terraform configuration.
Note: Removing a resource from the state does not automatically destroy the resource in the cloud provider. It only removes the resource from Terraform's tracking. If the resource still exists in the cloud provider, Terraform will not manage it.
Always exercise caution when making changes to your Terraform state and configurations, especially when dealing with live infrastructure.
If someone deletes or modifies a cloud resource manually through ClickOps, running `terraform plan` will attempt to put our infrastructure back into the expected state, fixing the configuration drift.
A common error scenario that can prompt Terraform to refresh the contents of your state file is mistakenly modifying your credentials or provider configuration.
Run `terraform plan -refresh-only` to review how Terraform would update your state file.

```sh
terraform plan -refresh-only
```
It is recommended to place modules in a `modules` directory when locally developing modules, but you can name it whatever you like.

We can pass input variables to our module. The module has to declare these variables in its own variables.tf. We also need to declare them in the root module, along with the providers.
module "terrahouse_aws" {
source = "./modules/terrahouse_aws"
user_uuid = var.user_uuid
bucket_name = var.bucket_name
}
Using the `source` argument we can import the module from various places e.g.:
- locally
- GitHub
- Terraform Registry
module "terrahouse_aws" {
source = "./modules/terrahouse_aws"
}
When we create a module, Terraform compares the previous state with the new configuration, correlating by each module or resource's unique address. Therefore, by default Terraform understands moving or renaming an object as an intent to destroy the object at the old address and to create a new object at the new address (see branch 28-aws-terrahouse-module). To prevent this we need to tell Terraform that we moved our resource into the module.
```hcl
moved {
  from = aws_s3_bucket.website_bucket
  to   = module.terrahouse_aws.aws_s3_bucket.website_bucket
}
```
LLMs such as ChatGPT may not be trained on the latest documentation or information about Terraform.

They will likely produce older examples that could be deprecated, often affecting providers.
This is a built-in Terraform function to check the existence of a file.

```hcl
condition = fileexists(var.error_html_filepath)
```
https://developer.hashicorp.com/terraform/language/functions/fileexists
https://developer.hashicorp.com/terraform/language/functions/filemd5
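A sketch of how such a condition might sit inside a variable validation block, assuming the `error_html_filepath` variable used in this project (the description and error message are illustrative):

```hcl
variable "error_html_filepath" {
  description = "The file path for error.html" # illustrative description
  type        = string

  validation {
    # fileexists fails the plan early if the path does not point to a real file
    condition     = fileexists(var.error_html_filepath)
    error_message = "The provided path for error.html does not exist."
  }
}
```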
In Terraform there is a special variable called `path` that allows us to reference local paths:
- path.module = get the path for the current module
- path.root = get the path for the root module
resource "aws_s3_object" "index_html" { bucket = aws_s3_bucket.website_bucket.bucket key = "index.html" source = "${path.root}/public/index.html" }
From now on, I decided to create paragraphs in relation to the tags in the project.
Locals allow us to define local variables. This can be very useful when we need to transform data into another format and reference it as a variable.
```hcl
locals {
  s3_origin_id = "MyS3Origin"
}
```
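A small self-contained sketch of that transform-and-reference idea; the names here are illustrative and not from this project:

```hcl
variable "environment" {
  type    = string
  default = "dev"
}

locals {
  # transform the input into another format once, then reference it everywhere
  bucket_prefix = "terrahouse-${var.environment}"
}

output "bucket_prefix" {
  value = local.bucket_prefix
}
```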
This allows us to source data from cloud resources.
This is useful when we want to reference cloud resources without importing them.
data "aws_caller_identity" "current" {}
output "account_id" {
value = data.aws_caller_identity.current.account_id
}
We could use `jsonencode` to create the JSON policy inline in the HCL.
```sh
> jsonencode({"hello"="world"})
{"hello":"world"}
```
We can also use the `aws_iam_policy_document` data source.
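A sketch of the inline `jsonencode` approach for an S3 bucket policy; the bucket reference follows this project's website bucket, but the exact statement and resource names are assumptions:

```hcl
resource "aws_s3_bucket_policy" "bucket_policy" {
  bucket = aws_s3_bucket.website_bucket.bucket

  # jsonencode lets us write the JSON policy document as native HCL
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "AllowCloudFrontServicePrincipalReadOnly" # illustrative statement
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "arn:aws:s3:::${aws_s3_bucket.website_bucket.id}/*"
    }]
  })
}
```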
Also, with a CDN we do not need to enable the website hosting settings on the S3 bucket.
The terraform_data resource implements the standard resource lifecycle, but does not directly take any other actions. You can use the terraform_data resource without requiring or configuring a provider. It is useful for storing values which need to follow a managed resource lifecycle, and for triggering provisioners when there is no other logical managed resource in which to place them.
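A minimal sketch, assuming a `content_version` input variable like the one this project uses (the variable's type and default are assumptions):

```hcl
variable "content_version" {
  type    = number
  default = 1
}

# Stores the value under the standard resource lifecycle without needing a provider;
# changing var.content_version shows up as a change to this resource.
resource "terraform_data" "content_version" {
  input = var.content_version
}
```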
Provisioners allow you to execute commands on compute instances e.g. an AWS CLI command.

They are not recommended for use by HashiCorp because Configuration Management tools such as Ansible are a better fit, but the functionality exists.
This will execute a command on the machine running the Terraform commands e.g. plan or apply.
resource "aws_instance" "web" {
# ...
provisioner "local-exec" {
command = "echo The server's IP address is ${self.private_ip}"
}
}
https://developer.hashicorp.com/terraform/language/resources/provisioners/local-exec
This will execute commands on a machine that you target. You will need to provide credentials, such as SSH, to get into the machine.
resource "aws_instance" "web" {
# ...
# Establishes connection to be used by all
# generic remote provisioners (i.e. file/remote-exec)
connection {
type = "ssh"
user = "root"
password = var.root_password
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"puppet apply",
"consul join ${aws_instance.web.private_ip}",
]
}
}
https://developer.hashicorp.com/terraform/language/resources/provisioners/remote-exec
At first I deployed the local-exec provisioner in the S3 object resource, but it is better to implement it as a `terraform_data` resource with a trigger that fires when the content version changes, as in the sketch below.
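A hedged sketch of that pattern; the CloudFront distribution resource name and the reuse of `terraform_data.content_version` from the Terraform Data section are assumptions about this project's layout:

```hcl
resource "terraform_data" "invalidate_cache" {
  # Re-run the provisioner whenever the stored content version changes
  triggers_replace = terraform_data.content_version.output

  provisioner "local-exec" {
    # aws_cloudfront_distribution.s3_distribution is an assumed resource name
    command = "aws cloudfront create-invalidation --distribution-id ${aws_cloudfront_distribution.s3_distribution.id} --paths '/*'"
  }
}
```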
`for_each` allows us to enumerate over complex data types:

```hcl
[for s in var.list : upper(s)]
```
This is mostly useful when you are creating multiples of a cloud resource and you want to reduce the amount of repetitive terraform code.
In our case, on this tag we go through `public/assets/`:

```hcl
for_each = fileset("${var.assets_path}", "*.{jpg,png,gif}")
```
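A sketch of the full resource this fragment could sit in; the resource name `upload_assets` and the `assets/` key prefix are assumptions:

```hcl
resource "aws_s3_object" "upload_assets" {
  # Creates one S3 object per matching file under var.assets_path
  for_each = fileset("${var.assets_path}", "*.{jpg,png,gif}")

  bucket = aws_s3_bucket.website_bucket.bucket
  key    = "assets/${each.key}"                       # each.key is the matched filename
  source = "${var.assets_path}/${each.key}"
  etag   = filemd5("${var.assets_path}/${each.key}")  # re-upload when the file changes
}
```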
Just added another extension, git-graph, to the `.gitpod.yml` file.