Describe the bug
The service appears to fail to run when using EKS Pod Identity: the entrypoint reports that AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are not present.
To reproduce
Steps to reproduce the behavior:
1. Start the container with suitable minimal arguments and a service account configured for EKS Pod Identity.
2. View the output/logs/configuration on the pod.
3. See the error:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/00-check-for-required-env.sh
Required AWS_ACCESS_KEY_ID environment variable missing
Required AWS_SECRET_ACCESS_KEY environment variable missing
Expected behavior
The pod should successfully proceed to launch and use credentials from EKS Pod Identity.
Your environment
Image ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-20250224, run with EKS Pod Identity.
Additional context
This seems to be caused by a missing branch in common/docker-entrypoint.d/00-check-for-required-env.sh relative to the logic in common/etc/nginx/include/awscredentials.js. In the latter, the environment variable AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE is checked, and if it is present, credentials are (correctly) sourced from EKS Pod Identity. The entrypoint script has no equivalent check, so it still requires the static AWS keys and fails when they are not found.
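To illustrate, here is a minimal sketch of the kind of branch the entrypoint check could gain. This is not the project's actual script; the function name `required_env_ok` is hypothetical, and the assumption is that a container credential provider (such as EKS Pod Identity) is indicated by AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE or AWS_CONTAINER_CREDENTIALS_FULL_URI being set:

```shell
#!/bin/sh
# Hypothetical sketch: skip the static-credential requirement when a
# container credential source is configured, mirroring the check that
# awscredentials.js already performs.

required_env_ok() {
  # Container credential provider (e.g. EKS Pod Identity): no static keys needed
  if [ -n "${AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE}" ] || \
     [ -n "${AWS_CONTAINER_CREDENTIALS_FULL_URI}" ]; then
    return 0
  fi
  # Otherwise fall back to requiring static credentials
  if [ -z "${AWS_ACCESS_KEY_ID}" ]; then
    echo "Required AWS_ACCESS_KEY_ID environment variable missing" >&2
    return 1
  fi
  if [ -z "${AWS_SECRET_ACCESS_KEY}" ]; then
    echo "Required AWS_SECRET_ACCESS_KEY environment variable missing" >&2
    return 1
  fi
  return 0
}

# Example: simulate a pod where EKS Pod Identity is configured
export AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE=/tmp/eks-pod-identity-token
required_env_ok && echo "check passed"   # prints: check passed
```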
A workaround seems to be setting the environment variable AWS_SESSION_TOKEN to an empty value. This short-circuits the branch before it checks for the other two variables, and allows the actual credential sourcing to proceed to (correctly) use the EKS Pod Identity credentials.
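In a Kubernetes manifest, the workaround might look like the fragment below. Only the image reference and the AWS_SESSION_TOKEN variable come from this report; the container name and service account name are illustrative placeholders:

```yaml
# Illustrative pod-spec fragment: an empty AWS_SESSION_TOKEN short-circuits
# the entrypoint's static-credential check (workaround only).
spec:
  serviceAccountName: s3-gateway   # hypothetical SA associated with EKS Pod Identity
  containers:
    - name: nginx-s3-gateway
      image: ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest-20250224
      env:
        - name: AWS_SESSION_TOKEN
          value: ""
```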