- `terraform init` does not create an S3 bucket for the backend
  - The S3 bucket that stores the Terraform and kops state must be created manually (a sketch follows this list)
- AWS credentials are inherited from parent modules
  - Only specify AWS credentials in the top-level `main.tf`
  - Do not specify AWS credentials in any child modules (a quick check is sketched after this list)
- Specify `--state=s3://<bucket>` when using kops to edit or update the cluster
- Must add `kubeAPIServer: authorizationRbacSuperUser: admin` to the cluster spec if `authorization` is set to `rbac` (see the `kops edit` sketch after this list)
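
The state bucket from the first note has to exist before `terraform init` is run against the S3 backend. A minimal sketch using the AWS CLI; `<BUCKET_NAME>` and the region are placeholders, not values taken from this repo:

```sh
# Create the bucket that will hold the Terraform and kops state.
# us-east-1 needs no LocationConstraint; any other region requires
# --create-bucket-configuration LocationConstraint=<region>.
aws s3api create-bucket --bucket <BUCKET_NAME> --region us-east-1

# Versioning is worth enabling so earlier state revisions can be recovered.
aws s3api put-bucket-versioning \
  --bucket <BUCKET_NAME> \
  --versioning-configuration Status=Enabled
```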
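
To keep the credential rule honest, one possible check (an assumption about the layout, not something this repo ships) is to confirm that provider credentials only appear in the top-level `main.tf`:

```sh
# List every *.tf file that sets provider credentials directly.
# Only the top-level main.tf should appear in this output.
grep -RIl --include='*.tf' -e 'access_key' -e 'secret_key' .
```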
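
The RBAC note can be applied with `kops edit cluster`, which also shows the `--state` flag from the note above in use; the bucket and cluster names are placeholders:

```sh
# Open the cluster spec in an editor; kops writes the change back to the
# state bucket when the file is saved.
kops edit cluster --state=s3://<bucket> --name <cluster>

# Lines to add under spec: (assuming authorization is set to rbac):
#   kubeAPIServer:
#     authorizationRbacSuperUser: admin
```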
- Initialize kops-base from the `kops-base` directory (an example `config.conf` is sketched after this list): `terraform init -backend-config config.conf -reconfigure -upgrade .`
- Create the Terraform plan: `terraform plan -var-file config.tfvars -out plan.out .`
- Apply the Terraform plan (create the resources): `terraform apply plan.out`
- Initialize kops-template from the `kops-template` directory: `terraform init`
- Create the Terraform plan from the `kops-template` directory: `terraform plan -out plan.out -var-file config.tfvars .`
- Apply the Terraform plan to create `kops.config`: `terraform apply plan.out`
- Create the cluster using kops: `kops create cluster --zones us-east-1a --state s3://cluster.dev.dappest.co --name <BUCKET_NAME>`
- Remove the default instance groups:
  - `kops delete ig master-us-east-1a --state s3://cluster.dev.dappest.co --name <BUCKET_NAME> --yes`
  - `kops delete ig nodes --state s3://cluster.dev.dappest.co --name <BUCKET_NAME> --yes`
- Update the kops configuration: `kops replace --state s3://cluster.dev.dappest.co --name <BUCKET_NAME> -f dappest-dev-kops.config --force`
- Deploy the Kubernetes cluster (a validation sketch follows this list): `kops update cluster --state s3://cluster.dev.dappest.co --name <BUCKET_NAME> --yes`
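
The first step passes `-backend-config config.conf` to `terraform init`. The contents of that file are not shown here; a sketch assuming the standard S3 partial-backend keys (the real values in this repo will differ):

```sh
# Hypothetical config.conf for the S3 backend (key = value, one per line).
cat > config.conf <<'EOF'
bucket = "<BUCKET_NAME>"
key    = "kops-base/terraform.tfstate"
region = "us-east-1"
EOF
```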
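
Not one of the documented steps, but a possible way to confirm the deployment once `kops update cluster` has run (same placeholders as above):

```sh
# Write a kubeconfig entry for the new cluster.
kops export kubecfg --state s3://cluster.dev.dappest.co --name <BUCKET_NAME>

# Check that masters, nodes, and system pods are reported healthy
# (re-run until it passes; instances take a few minutes to join).
kops validate cluster --state s3://cluster.dev.dappest.co --name <BUCKET_NAME>

# The nodes should also be visible directly.
kubectl get nodes
```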
- Understand why the period at the end of `hosted_zone.name` causes S3 bucket creation to fail (sketched below).
- Running `terraform destroy` on a non-empty S3 bucket fails (sketched below).
- When updating the cluster with kops, `--cloudonly` needs to be specified until the Kubernetes API server configuration is set up properly.
- Why are the kops-base and VPC remote states kept separate?
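
On the `hosted_zone.name` item, a likely explanation (not verified against this repo): Route 53 returns hosted-zone names fully qualified with a trailing dot, while S3 bucket names must begin and end with a letter or number, so the zone name cannot be used as a bucket name as-is. A quick check and a shell-level trim (the zone ID and name are placeholders):

```sh
# The Name field comes back with a trailing dot, e.g. "dev.dappest.co."
aws route53 get-hosted-zone --id <ZONE_ID> --query 'HostedZone.Name' --output text

# Stripping the trailing dot yields a valid bucket name.
zone_name="dev.dappest.co."
echo "${zone_name%.}"   # -> dev.dappest.co
```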
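
On the `terraform destroy` item, one workaround sketch (an assumption about the intended fix, not something this repo documents) is to empty the bucket first, or to set `force_destroy = true` on the `aws_s3_bucket` resource:

```sh
# Delete current objects so the bucket itself can be removed.
aws s3 rm s3://<BUCKET_NAME> --recursive

# Note: if versioning is enabled, old object versions and delete markers
# remain and still block deletion; remove them separately or rely on
# force_destroy = true on the bucket resource.
terraform destroy -var-file config.tfvars
```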