multi wildcard subdomains are blocked #4395

@Vadim-Zenin

Description

Bug Description

I can create rules from the AWS Console like:
*.*.example.com
*.*.*.example.com
*.*.*.*.example.com
etc.

aws-load-balancer-controller does NOT allow creating *.<* skipped>.*.example.com rules. This decreases AWS ALB functionality, and I consider it a BUG.
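
For comparison, the equivalent rule can be created directly on the ALB, for example with the AWS CLI, since ALB host-header conditions allow multiple wildcard characters. This is a sketch only; <listener-arn> and <target-group-arn> are placeholders:

# Sketch: create an ALB listener rule with a multi-level wildcard
# host-header condition. AWS accepts "*.*.example.com" here.
aws elbv2 create-rule \
  --listener-arn <listener-arn> \
  --priority 10 \
  --conditions Field=host-header,Values='*.*.example.com' \
  --actions Type=forward,TargetGroupArn=<target-group-arn>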

Steps to Reproduce

  • Step-by-step guide to reproduce the bug:
    Create any rule with *.*.example.com or *.*.*.example.com
  • Manifests applied while reproducing the issue:
  • Controller logs/error messages while reproducing the issue:
The Ingress  is invalid: spec.rules[3].host: Invalid value: "*.*.example.com": a wildcard DNS-1123 subdomain must start with '*.', followed by a valid DNS subdomain, which must consist of lower case alphanumeric characters, '-' or '.' and end with an alphanumeric character (e.g. '*.example.com', regex used for validation is '\*\.[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
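
The regex quoted in the error permits exactly one leading '*.' label, so a multi-level wildcard host can never pass. A quick local check (a sketch of the same regex with grep -E) reproduces the rejection:

# The validation regex quoted in the error above, anchored.
# It allows exactly one leading "*." label, so "*.*." never matches.
pattern='^\*\.[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$'
for host in '*.example.com' '*.*.example.com' '*.*.*.example.com'; do
  if printf '%s\n' "$host" | grep -Eq "$pattern"; then
    echo "$host: valid"
  else
    echo "$host: rejected"
  fi
done

Only *.example.com matches; both multi-level hosts are rejected.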

Expected Behavior

Match AWS ALB functionality and allow creating *.*.example.com and *.<skipped>.*.example.com rules.
I expect to have rules:
api.example.com
*.*.*.example.com
*.*.example.com
*.example.com
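
For reference, a minimal Ingress sketch covering the first two of these hosts; the service names come from the Ingress described below, while the ports are placeholders. Applying it fails validation on the multi-level wildcard host exactly as quoted above:

# Sketch only: ports are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-ingress
  namespace: nspace10
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service-svc
            port:
              number: 80
  - host: "*.*.example.com"   # rejected: fails the wildcard host validation
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 80
EOF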

Actual Behavior

The broken validation filter always blocks the new deployment.
I cannot fix it by editing the AWS ALB manually either, because aws-load-balancer-controller overwrites the change again.
This bug behaviour is business critical.

Current Workarounds

N/A

Environment

  • AWS Load Balancer controller version: v2.13.3
  • Kubernetes version: 1.33
  • Using EKS (yes/no), if so version?: yes, 1.33
  • Using Service or Ingress: Ingress
  • AWS region: eu-west-1
  • How was the aws-load-balancer-controller installed:
    kubectl apply -f ${DOWNLOAD_DIR}/aws-load-balancer-controller/${EKS_LB_CONTROLLER_INSTALL_VERSION_UNDERSCORE}_ingclass.yaml
    I do not use Helm, to minimize the number of wrappers that limit or block functionality or delay the availability of new features.
  • Current state of the Controller configuration:
    • kubectl -n <controllernamespace> describe deployment aws-load-balancer-controller
Name:                   aws-load-balancer-controller
Namespace:              kube-system
CreationTimestamp:      Fri, 26 Sep 2025 20:37:59 +0100
Labels:                 app.kubernetes.io/component=controller
                        app.kubernetes.io/name=aws-load-balancer-controller
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app.kubernetes.io/component=controller,app.kubernetes.io/name=aws-load-balancer-controller
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/component=controller
                    app.kubernetes.io/name=aws-load-balancer-controller
  Service Account:  aws-load-balancer-controller
  Containers:
   controller:
    Image:      public.ecr.aws/eks/aws-load-balancer-controller:v2.13.3
    Port:       9443/TCP
    Host Port:  0/TCP
    Args:
      --cluster-name=<skipped>
      --ingress-class=alb
    Limits:
      cpu:     200m
      memory:  500Mi
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get http://:61779/healthz delay=30s timeout=10s period=10s #success=1 #failure=2
    Environment:  <none>
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   cert:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         aws-load-balancer-webhook-tls
    Optional:           false
  Priority Class Name:  system-cluster-critical
  Node-Selectors:       <none>
  Tolerations:          <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   aws-load-balancer-controller-<skipped> (1/1 replicas created)
Events:          <none>
  • Current state of the Ingress/Service configuration:
kubectl -n nspace10 describe ingress.networking.k8s.io/public-ingress
Name:             public-ingress
Labels:           app=alb1-public
Namespace:        nspace10
Address:          k8s-prod<skipped>7.eu-west-1.elb.amazonaws.com
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host                              Path  Backends
  ----                              ----  --------
  api.example.com                     
                                    /*   api-service-svc:<skipped> (10.1.168.65:<skipped>,10.1.210.121:<skipped>)
  *.example.com                       
                                    /*   app-svc:<skipped> (10.1.150.62:<skipped>,10.1.210.203:<skipped>,10.1.207.121:<skipped> + 30 more...)
Annotations:                        alb.ingress.kubernetes.io/actions.ssl-redirect:
                                      {"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}
                                    alb.ingress.kubernetes.io/auth-session-cookie: example.com-cookie
                                    alb.ingress.kubernetes.io/backend-protocol: HTTP
                                    alb.ingress.kubernetes.io/group.name: public-ingress-group
                                    alb.ingress.kubernetes.io/healthcheck-interval-seconds: 30
                                    alb.ingress.kubernetes.io/healthcheck-path: /actuator/health
                                    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
                                    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: 8
                                    alb.ingress.kubernetes.io/healthy-threshold-count: 2
                                    alb.ingress.kubernetes.io/ip-address-type: ipv4
                                    alb.ingress.kubernetes.io/listen-ports: [{"HTTP": 80}, {"HTTPS": 443}]
                                    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=<skipped>
                                    alb.ingress.kubernetes.io/scheme: internet-facing
                                    alb.ingress.kubernetes.io/security-groups: sg-<skipped>, sg-<skipped>, sg-<skipped>, sg-<skipped>
                                    alb.ingress.kubernetes.io/success-codes: 200-301
                                    alb.ingress.kubernetes.io/tags: Environment=prod,Name=alb1-public,NameSpace=nspace10,time_stamp_iso_number=202510091222
                                    alb.ingress.kubernetes.io/target-group-attributes:
                                      stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60,deregistration_delay.timeout_seconds=30,slow_start.duration_seconds=0
                                    alb.ingress.kubernetes.io/target-type: ip
                                    alb.ingress.kubernetes.io/unhealthy-threshold-count: 2
                                    external-dns.alpha.kubernetes.io/hostname: *.example.com
                                    kubernetes.io/ingress.class: alb
                                    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:123456789012:certificate/f3<skipped>8b
Events:                             <none>

Possible Solution (Optional)

Contribution Intention (Optional)

  • No, I cannot work on a PR at this time

Additional Context

Labels: kind/support (categorizes issue or PR as a support question)