Bug Description
I am using the IngressGroup feature to share one ALB across multiple Ingresses in different namespaces. The rule from my second Ingress is not added to the ALB if I only configure the alb.ingress.kubernetes.io/group.name annotation on that Ingress. It works as expected if I configure exactly the same annotations on both Ingresses.
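For a quick check of group membership (my own hedged sketch, not part of the controller's documented output guarantees): each Ingress that has been merged into the group should report the shared ALB's DNS name in the ADDRESS column of

kubectl get ingress -A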
Steps to Reproduce
- Step-by-step guide to reproduce the bug: deploy two Ingresses in two different namespaces.
- Manifests applied while reproducing the issue:
# First ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-1:xxxx:certificate/xxxx
    alb.ingress.kubernetes.io/group.name: xxx-dev-test
    alb.ingress.kubernetes.io/inbound-cidrs: x.x.x.x/32
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60,client_keep_alive.seconds=60
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/tags: Environment=Dev
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  labels:
    ...
  name: xxx
  namespace: xxx-dev
spec:
  rules:
    - host: dev-xxx.example.com
      http:
        paths:
          - backend:
              service:
                name: xxx
                port:
                  number: 80
            path: /
            pathType: Prefix
---
# Second ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/group.name: xxx-dev-test
  labels:
    ...
  name: xxx
  namespace: xxx-dev3
spec:
  rules:
    - host: dev3-xxx.example.com
      http:
        paths:
          - backend:
              service:
                name: xxx
                port:
                  number: 80
            path: /
            pathType: Prefix
- Controller logs/error messages while reproducing the issue:
Expected Behavior
Rules of both ingresses are added to ALB.
Actual Behavior
Only the rule of the first Ingress is added to the ALB.
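The missing rule can be confirmed against the ALB itself; a hedged sketch (the listener ARN below is a placeholder, not taken from this setup):

aws elbv2 describe-rules \
  --listener-arn <listener-arn-of-the-shared-443-listener> \
  --query 'Rules[].{Priority: Priority, HostHeader: Conditions[0].Values}'

Only a rule for dev-xxx.example.com is listed; no rule for dev3-xxx.example.com appears.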
Regression
Was the functionality working correctly in a previous version? No, this is a new setup.
Current Workarounds
Configure the exact same (full) set of annotations on both Ingresses, as sketched below.
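Concretely, the second Ingress only joins the group with its rule intact when its metadata duplicates the first Ingress's annotation block, roughly like this (assembled from the manifests above; the spec is unchanged):

# Second ingress, carrying the first Ingress's full annotation set
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-1:xxxx:certificate/xxxx
    alb.ingress.kubernetes.io/group.name: xxx-dev-test
    alb.ingress.kubernetes.io/inbound-cidrs: x.x.x.x/32
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=60,client_keep_alive.seconds=60
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/tags: Environment=Dev
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=60
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  name: xxx
  namespace: xxx-dev3
spec:
  # same rules as the original second Ingress (dev3-xxx.example.com)
  ...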
Environment
- AWS Load Balancer controller version: v2.8.1
- Kubernetes version: 1.31
- Using EKS (yes/no), if so version?: yes, 1.31
- Using Service or Ingress: Ingress
- AWS region: ap-northeast-1
- How was the aws-load-balancer-controller installed: helm
- If helm was used then please show output of helm ls -A | grep -i aws-load-balancer-controller:
aws-load-balancer-controller kube-system 1 2024-08-05 02:41:05.933649033 +0000 UTC deployed aws-load-balancer-controller-1.8.1 v2.8.1
- If helm was used then please show output of helm -n <controllernamespace> get values <helmreleasename>:
clusterName: xxx-xxx
enableShield: false
enableWaf: false
enableWafv2: false
region: ap-northeast-1
serviceAccount:
  create: false
  name: aws-load-balancer-controller
vpcId: vpc-xxxx
- Current state of the Controller configuration:
kubectl -n <controllernamespace> describe deployment aws-load-balancer-controller
- Current state of the Ingress/Service configuration:
kubectl describe ingressclasses
Name:         alb
Labels:
Annotations:  ingressclass.kubernetes.io/is-default-class: true
Controller:   ingress.k8s.aws/alb
Events:
kubectl -n <appnamespace> describe ingress <ingressname>
kubectl -n <appnamespace> describe svc <servicename>
Possible Solution (Optional)
Contribution Intention (Optional)
- Yes, I'm willing to submit a PR to fix this issue
- No, I cannot work on a PR at this time
Additional Context