Implement integration/deploy tests (GHA) #198

Open — wants to merge 21 commits into master (changes shown from all commits)

Commits (21)
- 8fb6241 — add new file for integration/deploy test, implement custom stacks for… (alexcasalboni, Mar 20, 2023)
- d88b2d6 — GHA - fix newlines (alexcasalboni, Mar 20, 2023)
- 6bdfc2d — GHA - fix newlines (again) (alexcasalboni, Mar 20, 2023)
- ed4de0e — GHA - add permission boundary ARN (alexcasalboni, Mar 20, 2023)
- 599b019 — GHA - run deployments in parallel (matrix) (alexcasalboni, Mar 21, 2023)
- e41b082 — GHA remove sam validate (alexcasalboni, Mar 21, 2023)
- ee76090 — GHA - fix string comparison, merge into one deploy job without artifacts (alexcasalboni, Mar 21, 2023)
- d73e68b — GHA - fix stack name with current branch (alexcasalboni, Mar 21, 2023)
- ac5aefa — GHA - add SAM build cache, simplify if conditions, add stack deletion (alexcasalboni, Mar 21, 2023)
- b873605 — GHA - syntax fix (alexcasalboni, Mar 21, 2023)
- 2a477d3 — GHA - another syntax fix (alexcasalboni, Mar 21, 2023)
- 0a2f935 — GHA - add sample payloads (alexcasalboni, Mar 22, 2023)
- 57d0792 — GHA - update dependencies (alexcasalboni, Mar 22, 2023)
- ec9f5cc — GHA - update cleaner code to re-raise errors and make the execution fail (alexcasalboni, Mar 22, 2023)
- ad0d8ef — GHA - implement integration test logic with mocha, fix action structure (alexcasalboni, Mar 22, 2023)
- 1d0a898 — GHA - fix coverage cmd (alexcasalboni, Mar 22, 2023)
- 4eaa947 — GHA - update cleaner unit tests for re-throw (alexcasalboni, Mar 22, 2023)
- 5b78f9a — GHA - only trigger on PR (alexcasalboni, Mar 22, 2023)
- 92e9c65 — GHA - add new stack for functions (alexcasalboni, Mar 22, 2023)
- f67889f — GHA - use environment to pause stack deletion (alexcasalboni, Mar 22, 2023)
- 60b9a6e — add environment for deployments, add new test file for S3 stack (alexcasalboni, Mar 22, 2023)
.github/workflows/tests-integration.yml (201 additions, 16 deletions)

```diff
@@ -1,36 +1,221 @@
 name: aws-lambda-power-tuning-integration-tests
 run-name: ${{ github.actor }} is running integration tests
-on:
-  push:
-    branches:
-      - 'master'
-  pull_request:
+permissions:
+  id-token: write
+  contents: read
+on: pull_request
+# make sure only one concurrent run per workflow/branch
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+env:
+  # stack names are reused across jobs
+  STACK_NAME_FUNCTIONS: aws-lpt-gh-functions
+  STACK_NAME_DEFAULTS: aws-lpt-gh-defaults
+  STACK_NAME_S3: aws-lpt-gh-s3
+  STACK_NAME_VPC: aws-lpt-gh-vpc
+  STACK_NAME_LAMBDA_RESOURCE: aws-lpt-gh-lambda-resource
+  STACK_NAME_CUSTOM_PARAMS: aws-lpt-gh-custom-params
+
+  # deployment parameters are reused across jobs
+  PowerValues: 512,1024,1536 # shorter list
+  VisualizationURL: https://my-custom-url.io/
+  LambdaResource: 'arn:aws:lambda:eu-west-1:*:function:*' # specific region
+  TotalExecutionTimeout: '900' # max value
+  PermissionsBoundary: arn:aws:iam::aws:policy/AdministratorAccess
+  S3Bucket: lpt-payloads # existing bucket
+  S3Key: "payload.json" # only allow this object
+  LayerSdkName: custom-layer-name
+  LogGroupRetentionInDays: 7
+  SecurityGroupIds: sg-06ad5b959d0ce9f57 # existing SG
+  SubnetIDs: subnet-0126a3daed78354c7,subnet-00e2995006f41811e # existing subnets
+
 jobs:
-  build:
-    permissions:
-      id-token: write
-      contents: read
+  deploy:
     runs-on: ubuntu-latest
+    if: always() # run all stacks even if one fails
+    environment: Deployment # this will require approval before deploying all stacks
     strategy:
+      fail-fast: false
       matrix:
         node-version: [16.x]
+        # this will deploy the 6 stacks in parallel,
+        # without defining each job individually
+        stack: ['functions', 'default', 'vpc', 's3', 'resource', 'custom']
     steps:
       - uses: actions/checkout@v3
-      - name: Use Node.js ${{ matrix.node-version }}
-        uses: actions/setup-node@v3
+      - uses: actions/setup-node@v3
         with:
-          node-version: ${{ matrix.node-version }}
+          node-version: 16.x
           cache: 'npm'
+
+      # install dependencies
       - run: npm ci
+
-      - uses: aws-actions/configure-aws-credentials@master
+      # configure AWS CLI and SAM CLI credentials
+      - uses: aws-actions/configure-aws-credentials@v2
         with:
           audience: sts.amazonaws.com
           role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
           aws-region: ${{ secrets.AWS_REGION }}
+
+      # install SAM CLI
       - uses: aws-actions/setup-sam@v2
         with:
           use-installer: true
+
+      # validate SAM template (nice to have)
-      - run: sam validate
+
+      # sam build takes about 40 seconds, if not cached
+      - name: Cache SAM Build
+        id: cache-sam-build
+        uses: actions/cache@v3
+        with:
+          path: .aws-sam/
+          key: aws-sam-build
+
+      # build app and layer, if cache miss
+      - name: SAM Build
+        if: steps.cache-sam-build.outputs.cache-hit != 'true'
+        run: sam build --use-container
+
+
+      - if: matrix.stack == 'functions'
+        run: |
+          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }} \
+            --template test/integration/functions.yml \
+            --stack-name $STACK_NAME_FUNCTIONS-${GITHUB_REF_NAME/\//-}
+
+      # deploy with default params
+      - if: matrix.stack == 'default'
+        run: |
+          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_DEFAULTS-${GITHUB_REF_NAME/\//-}
+
+      # deploy with VPC params
+      - if: matrix.stack == 'vpc'
+        run: |
+          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_VPC-${GITHUB_REF_NAME/\//-} \
+            --parameter-overrides subnetIds=$SubnetIDs securityGroupIds=$SecurityGroupIds
+
+      # deploy with S3 payload params
+      - if: matrix.stack == 's3'
+        run: |
+          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_S3-${GITHUB_REF_NAME/\//-} \
+            --parameter-overrides payloadS3Bucket=$S3Bucket payloadS3Key=$S3Key
+
+      # deploy with regional limitation (via Lambda Resource)
+      - if: matrix.stack == 'resource'
+        run: |
+          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_LAMBDA_RESOURCE-${GITHUB_REF_NAME/\//-} \
+            --parameter-overrides lambdaResource=$LambdaResource
+
+
+      # deploy with a bunch of custom parameters
+      - if: matrix.stack == 'custom'
+        run: |
+          sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_CUSTOM_PARAMS-${GITHUB_REF_NAME/\//-} \
+            --parameter-overrides \
+              PowerValues=$PowerValues \
+              visualizationURL=$VisualizationURL \
+              totalExecutionTimeout=$TotalExecutionTimeout \
+              layerSdkName=$LayerSdkName \
+              logGroupRetentionInDays=$LogGroupRetentionInDays \
+              permissionsBoundary=$PermissionsBoundary
+
+  test:
+    needs: deploy
+    if: always() # run even if something failed (100% sure?)
+    runs-on: ubuntu-latest
+    env:
+      # sample function for testing (more will come)
+      BRANCH_NAME: ${{ github.ref_name }}
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-node@v3
+        with:
+          node-version: 16.x
+          cache: 'npm'
+      # install dependencies (needed for AWS CLI and a few more utils)
+      - run: npm ci
+
+      # configure AWS CLI credentials
+      - uses: aws-actions/configure-aws-credentials@v2
+        with:
+          audience: sts.amazonaws.com
+          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
+          aws-region: ${{ secrets.AWS_REGION }}
+
+      # define env variable for integration tests (function arns)
+      - run: |
+          FUNCTION_ARNS_DEFAULTS=$(aws cloudformation describe-stacks --stack-name $STACK_NAME_FUNCTIONS-${GITHUB_REF_NAME/\//-} --query 'Stacks[0].Outputs[?OutputKey==`FunctionsDefaults`].OutputValue' --output text)
+          FUNCTION_ARNS_VPC=$(aws cloudformation describe-stacks --stack-name $STACK_NAME_FUNCTIONS-${GITHUB_REF_NAME/\//-} --query 'Stacks[0].Outputs[?OutputKey==`FunctionsVpc`].OutputValue' --output text)
+          FUNCTION_ARNS_S3=$(aws cloudformation describe-stacks --stack-name $STACK_NAME_FUNCTIONS-${GITHUB_REF_NAME/\//-} --query 'Stacks[0].Outputs[?OutputKey==`FunctionsS3`].OutputValue' --output text)
+          echo "FUNCTION_ARNS_DEFAULTS=${FUNCTION_ARNS_DEFAULTS}" >> $GITHUB_ENV
+          echo "FUNCTION_ARNS_VPC=${FUNCTION_ARNS_VPC}" >> $GITHUB_ENV
+          echo "FUNCTION_ARNS_S3=${FUNCTION_ARNS_S3}" >> $GITHUB_ENV
+
+      # run integration tests (all in parallel)
+      - run: npm run test-integration
+
+
+  # delete all Cfn stacks created above
+  # (whether tests have succeeded or not)
+  # note: sam delete doesn't fail if the stack doesn't exist
+  cleanup:
+    needs: test
+    if: always() # run even if something failed
+    runs-on: ubuntu-latest
+    environment: Deletion # this will require approval before deleting all stacks
+    strategy:
+      fail-fast: false
+      matrix:
+        # this will delete the 5 stacks in parallel,
+        # without defining each job individually
+        stack: ['default', 'vpc', 's3', 'resource', 'custom']
+    steps:
+      # configure AWS CLI and SAM CLI credentials
+      - uses: aws-actions/configure-aws-credentials@v2
+        with:
+          audience: sts.amazonaws.com
+          role-to-assume: ${{ secrets.AWS_ROLE_TO_ASSUME }}
+          aws-region: ${{ secrets.AWS_REGION }}
+
+      # install SAM CLI
+      - uses: aws-actions/setup-sam@v2
+        with:
+          use-installer: true
+
-      - run: sam build --use-container
-      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name aws-lambda-power-tuning-gh-${GITHUB_REF_NAME/\//-} --s3-bucket ${{ secrets.AWS_S3_BUCKET }} --capabilities CAPABILITY_IAM --region ${{ secrets.AWS_REGION }}
+      # delete stack with default params
+      - if: matrix.stack == 'default'
+        run: |
+          sam delete --no-prompts --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_DEFAULTS-${GITHUB_REF_NAME/\//-}
+
+      # delete stack with VPC params
+      - if: matrix.stack == 'vpc'
+        run: |
+          sam delete --no-prompts --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_VPC-${GITHUB_REF_NAME/\//-}
+
+      # delete stack with S3 payload params
+      - if: matrix.stack == 's3'
+        run: |
+          sam delete --no-prompts --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_S3-${GITHUB_REF_NAME/\//-}
+
+      # delete stack with regional limitation (via Lambda Resource)
+      - if: matrix.stack == 'resource'
+        run: |
+          sam delete --no-prompts --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_LAMBDA_RESOURCE-${GITHUB_REF_NAME/\//-}
+
+      # delete stack with a bunch of custom parameters
+      - if: matrix.stack == 'custom'
+        run: |
+          sam delete --no-prompts --region ${{ secrets.AWS_REGION }} \
+            --stack-name $STACK_NAME_CUSTOM_PARAMS-${GITHUB_REF_NAME/\//-}
```
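Every stack name in the workflow ends with the suffix `${GITHUB_REF_NAME/\//-}`, which is bash pattern substitution: it replaces the first `/` in the branch name with `-`, so a PR branch like `feature/tests` yields a valid CloudFormation stack-name suffix. A minimal sketch of that transformation (the helper name is ours, not the PR's):

```javascript
// Mirrors the bash substitution ${GITHUB_REF_NAME/\//-} used for stack names.
// The single-slash bash form replaces only the FIRST "/", so a branch with
// two slashes (e.g. "feat/a/b") would keep the second one, which CloudFormation
// stack names reject; ${GITHUB_REF_NAME//\//-} would replace all of them.
function stackSuffix(refName) {
    // String.prototype.replace with a string pattern replaces the first
    // match only, just like the single-slash bash form
    return refName.replace('/', '-');
}

console.log(stackSuffix('feature/integration-tests')); // feature-integration-tests
console.log(stackSuffix('master'));                    // master
```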
.mocharc.js (2 additions, 1 deletion)

```diff
@@ -2,5 +2,6 @@ module.exports = {
     require: [
         './test/setup.spec.js',
     ],
-    spec: './test/unit/**/*.js'
+    //spec: './test/**/*.js',
+    timeout: 60 * 1000 // 1 minute per test
 };
```
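The raised mocha timeout pairs with the `FUNCTION_ARNS_*` variables the workflow's test job exports to `GITHUB_ENV`: CloudFormation returns each output as a single string, so an integration test first has to split it into individual Lambda ARNs. A hypothetical sketch of that step (helper name and comma-separated format are assumptions, not code from this PR):

```javascript
// Hypothetical sketch: splitting a FUNCTION_ARNS_* variable (a single
// comma-separated CloudFormation output) into individual Lambda ARNs
// before a mocha test iterates over them.
function parseFunctionArns(envValue) {
    if (!envValue) return []; // variable missing or empty
    return envValue
        .split(',')
        .map((arn) => arn.trim())
        .filter((arn) => arn.length > 0);
}

// locally this is likely [], in CI one ARN per deployed test function
const arns = parseFunctionArns(process.env.FUNCTION_ARNS_DEFAULTS);
console.log(arns);
```

Inside a `describe` block, each ARN could then drive one `it(...)` that starts and polls a state machine execution, which is why a generous per-test timeout like the 60 seconds above is needed.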
lambda/cleaner.js (5 additions, 0 deletions)

```diff
@@ -19,6 +19,11 @@ module.exports.handler = async(event, context) => {
     // run everything in parallel and wait until completed
     await Promise.all(ops);
 
+    if (event.error) {
+        // re-raise so that the state machine execution fails (correctly)
+        throw new Error(event.error.Cause);
+    }
+
     return 'OK';
 };
```
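The effect of this change is that the cleaner still runs its cleanup, but if the state machine passed it an `error` input, the execution fails instead of silently returning `'OK'`. A stub illustrating just that behavior (the real handler also deletes the Lambda versions and aliases first):

```javascript
// Stub of the re-raise behavior added to lambda/cleaner.js; the real
// handler awaits its cleanup operations before reaching this check.
const handler = async (event) => {
    // ...cleanup work would run (and be awaited) here...
    if (event.error) {
        // re-raise so that the state machine execution fails (correctly)
        throw new Error(event.error.Cause);
    }
    return 'OK';
};

// happy path resolves with 'OK'; error path rejects with the original cause
handler({}).then((result) => console.log(result));
handler({ error: { Cause: 'Lambda timed out' } })
    .catch((err) => console.log(err.message));
```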
