This project demonstrates how to create a scalable local Kubernetes setup using Kind (Kubernetes in Docker), provisioned with Terraform.
- **Automated Cluster Management**
  - Local Kubernetes cluster provisioning using Kind
  - Configurable multi-node cluster setup
  - Automated Ingress controller deployment
- **Application Deployment**
  - Multi-application support (web, API, dashboard)
  - Configurable replica management
  - Resource limit enforcement
  - Health monitoring with probes
  - Environment variable management
  - Volume and ConfigMap support
- **Traffic Management**
  - Path-based routing with Ingress (see the sketch below)
  - Intelligent load distribution
  - Health-based pod selection
  - Multi-application hosting
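To make the path-based routing concrete, here is a minimal sketch of what such a rule could look like when declared through Terraform's `kubernetes_ingress_v1` resource. The resource name, namespace, annotation, and ingress class below are illustrative assumptions, not necessarily what this repo's `ingress` or `web-app` modules actually emit.

```hcl
# Illustrative only: route /web to the web-app Service via an NGINX Ingress.
resource "kubernetes_ingress_v1" "web_app" {
  metadata {
    name      = "web-app"
    namespace = "web-app"
    annotations = {
      # Strip the /web prefix before forwarding to the pod (assumption).
      "nginx.ingress.kubernetes.io/rewrite-target" = "/"
    }
  }

  spec {
    ingress_class_name = "nginx"

    rule {
      http {
        path {
          path      = "/web"
          path_type = "Prefix"

          backend {
            service {
              name = "web-app"
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}
```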
You can use either of the following approaches to set up your development environment:
Before starting, ensure you have the following tools installed:
- Terraform
- kubectl
- Docker
- Kind
- Helm
This project includes a `shell.nix` file that provides a reproducible development environment with all required tools.
- **Install Nix** (if not already installed):

  ```sh
  sh <(curl -L https://nixos.org/nix/install) --daemon
  ```

- **Enter the Development Shell**:

  ```sh
  nix-shell
  ```
The Nix shell includes additional useful tools:

- `jq` and `yq` for JSON/YAML processing
- `kubectx` for Kubernetes context management
- `k9s` for a terminal UI to interact with Kubernetes clusters
```text
.
├── main.tf                 # Main Terraform configuration
├── variables.tf            # Input variables
├── outputs.tf              # Output values
├── modules/
│   ├── kind/               # Kind cluster module
│   │   ├── main.tf         # Kind cluster configuration
│   │   └── variables.tf
│   ├── ingress/            # Ingress controller module
│   │   ├── main.tf         # Ingress controller configuration
│   │   └── variables.tf
│   └── web-app/            # Web application module
│       ├── main.tf         # Application configuration
│       ├── variables.tf
│       └── outputs.tf
└── README.md               # This file
```
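For orientation, here is a rough sketch of how the root `main.tf` might wire these modules together. The module argument names are assumptions based on the variables documented in this README, not the exact interface of the modules in this repo.

```hcl
# Sketch only: how the root module could compose the three child modules.
module "kind_cluster" {
  source       = "./modules/kind"
  cluster_name = var.cluster_name
  worker_nodes = var.worker_nodes
}

module "ingress" {
  source     = "./modules/ingress"
  depends_on = [module.kind_cluster]
}

module "applications" {
  source   = "./modules/web-app"
  for_each = var.applications # one module instance per application

  name          = each.value.name
  namespace     = each.value.namespace
  replica_count = each.value.replica_count
  image         = each.value.image
  ingress_path  = each.value.ingress_path

  depends_on = [module.ingress]
}
```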
The Kind cluster can be configured through variables in `variables.tf`:
variable "cluster_name" {
description = "Name of the Kind cluster"
type = string
default = "terraform-kind"
}
variable "worker_nodes" {
description = "Number of worker nodes to create"
type = number
default = 2
}
variable "node_port" {
description = "Port to expose on the host machine"
type = number
default = 80
}
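To override these defaults without editing `variables.tf`, you can supply values through the usual Terraform mechanisms, for example a `terraform.tfvars` file (the values below are just an example):

```hcl
# terraform.tfvars (example values)
cluster_name = "dev-cluster"
worker_nodes = 3
node_port    = 8080
```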
Applications are configured through the `applications` map variable. Each application supports:
- Resource limits and requests
- Replica count
- Environment variables
- Volume mounts
- Ingress path configuration
Example configuration:
```hcl
applications = {
  web_app = {
    name           = "web-app"
    namespace      = "web-app"
    replica_count  = 3
    container_port = 8080
    service_port   = 80
    resource_limits = {
      cpu    = "200m"
      memory = "256Mi"
    }
    resource_requests = {
      cpu    = "100m"
      memory = "128Mi"
    }
    image        = "gcr.io/google-samples/hello-app:1.0"
    ingress_path = "/web"
    app_type     = "web"
  }
}
```
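To illustrate how such an entry might be turned into Kubernetes objects inside `modules/web-app`, here is a sketch of a `kubernetes_deployment_v1` resource that maps the fields above onto replicas, resource limits/requests, and a liveness probe. The variable names mirror the map keys, and the probe path is an assumption; the module's real code may look different.

```hcl
# Sketch only: one possible deployment shape inside modules/web-app.
resource "kubernetes_deployment_v1" "app" {
  metadata {
    name      = var.name
    namespace = var.namespace
  }

  spec {
    replicas = var.replica_count

    selector {
      match_labels = { app = var.name }
    }

    template {
      metadata {
        labels = { app = var.name }
      }

      spec {
        container {
          name  = var.name
          image = var.image

          port {
            container_port = var.container_port
          }

          # Enforce the per-app limits and requests from the applications map.
          resources {
            limits   = var.resource_limits
            requests = var.resource_requests
          }

          # Basic health monitoring; the path "/" is an assumption.
          liveness_probe {
            http_get {
              path = "/"
              port = var.container_port
            }
          }
        }
      }
    }
  }
}
```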
- **Clone the Repository**

  ```sh
  git clone <repository-url>
  cd <repository-name>
  ```

- **Initialize Terraform**

  ```sh
  terraform init
  ```

- **Review the Plan**

  ```sh
  terraform plan
  ```

- **Deploy the Infrastructure**

  ```sh
  terraform apply
  ```
- **Access Applications**

  Once deployed, use port-forward to access the applications:

  - Web App:

    ```sh
    kubectl port-forward -n web-app svc/web-app 8080:80
    ```

    http://localhost:8080

  - API Service:

    ```sh
    kubectl port-forward -n api svc/api-service 8081:80
    ```

    http://localhost:8081

  - Dashboard:

    ```sh
    kubectl port-forward -n dashboard svc/dashboard 8082:80
    ```

    http://localhost:8082
- **Verify Deployment**

  ```sh
  # Check pod status
  kubectl get pods -A

  # Verify services
  kubectl get svc -A
  ```
At the moment there is a known issue exposing applications through the Ingress controller when running on Kind; I'm working on a fix. Use the port-forward commands above to access the applications in the meantime.
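If you want to experiment with a fix yourself, the standard Kind pattern is to map host ports onto the control-plane node and label it `ingress-ready=true` so the Ingress controller can bind there. A sketch of that configuration, assuming the community `tehcyx/kind` Terraform provider, is shown below; the provider choice and attribute names are assumptions about how `modules/kind` might be written, not the current contents of this repo.

```hcl
# Sketch only: Kind cluster with host port mappings for the Ingress controller.
resource "kind_cluster" "this" {
  name           = var.cluster_name
  wait_for_ready = true

  kind_config {
    kind        = "Cluster"
    api_version = "kind.x-k8s.io/v1alpha4"

    node {
      role = "control-plane"

      # Label the node so ingress-nginx can be scheduled onto it.
      kubeadm_config_patches = [
        <<-EOT
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
        EOT
      ]

      # Expose the controller's HTTP/HTTPS ports on the host.
      extra_port_mappings {
        container_port = 80
        host_port      = var.node_port
      }

      extra_port_mappings {
        container_port = 443
        host_port      = 443
      }
    }

    # The real module would add var.worker_nodes of these.
    node {
      role = "worker"
    }
  }
}
```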
To add a new application, extend the `applications` map in `variables.tf`:
```hcl
applications = {
  # ... existing applications ...

  new_app = {
    name           = "new-app"
    namespace      = "new-app"
    replica_count  = 2
    container_port = 8080
    service_port   = 80
    resource_limits = {
      cpu    = "200m"
      memory = "256Mi"
    }
    resource_requests = {
      cpu    = "100m"
      memory = "128Mi"
    }
    image        = "your-image:tag"
    ingress_path = "/new-app"
    app_type     = "custom"
    app_config = {
      environment_variables = {
        "CUSTOM_VAR" = "value"
      }
    }
  }
}
```