Feature: allow the use of private networks #1567

Open · wants to merge 6 commits into master from feature/private-network

Conversation

@xavierleune (Contributor) commented:

Hi there,

First, thank you for this awesome project! 🙌
This feature allows you to deploy k3s using only private IPs on your servers. It requires a VPN and NAT configuration, which are out of scope for this deployment.

This is my very first time working with Terraform, so I'm open to any feedback on this pull request as it can probably be improved.

Have a nice day!

Cf. #282, #1255

@xavierleune xavierleune force-pushed the feature/private-network branch from 15e7571 to e187c93 Compare November 22, 2024 10:18
@cedric-lovit (Contributor) left a comment:

I'm super excited for this feature, so thank you very much for putting effort into this!
You mentioned this was the first time you used Terraform... you did super well!

Welcome to the Terraform world! :D

autoscaler-agents.tf (resolved)
autoscaler-agents.tf (outdated, resolved)
modules/host/main.tf (outdated, resolved)
@nicolaracco commented:

I attempted to switch to this implementation in an existing cookbook without making any other change to the kube.tf file. I’ve posted the current configuration here. However, I encountered several instances of the following error:

Error: Attempt to get attribute from null value
│ 
│   on .terraform/modules/kube-hetzner/modules/host/out.tf line 10, in output "private_ipv4_address":
│   10:   value = one(hcloud_server.server.network).ip
│     ├────────────────
│     │ hcloud_server.server.network is empty set of object
│ 
│ This value is null, so it does not have any attributes.
╵

Maybe it is expecting to always find the network referenced as an input? If I have some time later today, I'll try to debug the issue and add more info.
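
For reference, one way to make that output null-safe (a sketch only, not necessarily the fix adopted in this PR) would be to guard the attribute lookup, since hcloud_server.server.network is an empty set when no private network is attached:

# modules/host/out.tf (sketch): return null instead of failing when no network is attached
output "private_ipv4_address" {
  value = try(one(hcloud_server.server.network).ip, null)
}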

@xavierleune (Contributor, Author) commented:

@nicolaracco I confirm that the network should always be defined. It must be created before your cluster, because you have to be connected to this network through a non-k8s server.
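
To illustrate that prerequisite, the private network could be created ahead of the cluster with plain hcloud resources along these lines (the names and IP ranges here are assumptions for the example, not values taken from this PR):

# Sketch: a private network that exists before the kube-hetzner cluster is created
resource "hcloud_network" "private" {
  name     = "k3s-private" # hypothetical name
  ip_range = "10.0.0.0/8"  # hypothetical range
}

resource "hcloud_network_subnet" "private" {
  network_id   = hcloud_network.private.id
  type         = "cloud"
  network_zone = "eu-central"
  ip_range     = "10.0.0.0/16" # hypothetical subnet for the gateway/VPN host
}

The VPN/NAT gateway mentioned in the PR description would then be attached to this network outside of the module.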

@xavierleune (Contributor, Author) commented:

@mysticaltech do you have an opinion on this PR? 🙏

@mysticaltech (Collaborator) commented:

@xavierleune Thanks for this, will review it tonight

@4erdenko commented:

> @xavierleune Thanks for this, will review it tonight

Hey, any updates?
Disabling public addresses on nodes is an important security measure.

@mkajander commented:

Tested this today (along with NAT + WireGuard) and was able to successfully deploy a cluster with public IPs disabled for the nodes.
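
For anyone reproducing that setup, the NAT side usually comes down to a default route on the private network pointing at the gateway host, plus IP forwarding/masquerading on that host (which the PR description leaves out of scope). A rough sketch, reusing the hypothetical network above and a hypothetical gateway IP:

# Sketch: send outbound traffic from the private network through a NAT/WireGuard gateway host
resource "hcloud_network_route" "nat_default" {
  network_id  = hcloud_network.private.id
  destination = "0.0.0.0/0"
  gateway     = "10.0.0.2" # hypothetical private IP of the gateway server
}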

mysticaltech previously approved these changes on Jan 20, 2025

@mysticaltech (Collaborator) left a comment:

Looking good! Will merge and try it before releasing.

@mysticaltech (Collaborator) commented:

@xavierleune Here's my test deploy, the one I usually use for everything. Please make sure it deploys; this is important.

locals {
  # You have the choice of setting your Hetzner API token here or defining the TF_VAR_hcloud_token env
  # variable within your shell, like so: export TF_VAR_hcloud_token=xxxxxxxxxxx
  # If you choose to define it in the shell, this can be left as is.

  # Your Hetzner token can be found in your Project > Security > API Token (Read & Write is required).
  hcloud_token = "xxxxxx"
}

module "kube-hetzner" {
  providers = {
    hcloud = hcloud
  }
  hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token

  source = "../kube-hetzner"

  cluster_name = "test12"

  initial_k3s_channel = "v1.30"

  ssh_public_key  = file("/home/karim/.ssh/id_ed25519.pub")
  ssh_private_key = file("/home/karim/.ssh/id_ed25519")

  network_region = "eu-central"

  control_plane_nodepools = [
    {
      name        = "control-plane",
      server_type = "cx22",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1
    }
  ]

  agent_nodepools = [
    {
      name        = "agent-small",
      server_type = "cx22",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 1
    }
  ]

  autoscaler_nodepools = [
    {
      name        = "autoscaled-small"
      server_type = "cx22",
      location    = "fsn1",
      min_nodes   = 1
      max_nodes   = 2
    }
  ]

  load_balancer_type     = "lb11"
  load_balancer_location = "fsn1"

}

provider "hcloud" {
  token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = ">= 1.43.0"
    }
  }
}

output "kubeconfig" {
  value     = module.kube-hetzner.kubeconfig
  sensitive = true
}

variable "hcloud_token" {
  sensitive = true
  default   = ""
}

Currently, I'm getting the following error (see attached screenshot: "Screenshot From 2025-01-20 20-53-29").

@mysticaltech (Collaborator) left a comment:

@xavierleune The ball is in your court. Please, let's make sure this is fully backward compatible. See above. Thanks!

@xavierleune (Contributor, Author) commented:

@mysticaltech thanks for your feedback, I'll have a look!

@Mikopet commented on Jan 22, 2025:

This is something I am also really looking forward to.

I have never run a k8s cluster with publicly available nodes, and that made me concerned. In fact, I was thinking about creating my own module because of that. But having this here would be way better :-)

Big thanks for the implementation!
