chore: run e2e tests on demand #441

Merged: 45 commits, Jan 14, 2022

Commits
c3b427d
add AWS infra to run E2E tests
flochaz Jan 6, 2022
27bfa90
Fix e2e tests and add missing scripts
flochaz Jan 6, 2022
cd7b015
split workflow
flochaz Jan 6, 2022
7015f64
Fix region and account setting for tracing e2e
flochaz Jan 7, 2022
d1acd31
add log to tracing e2e
flochaz Jan 7, 2022
b367d80
alternative to get account
flochaz Jan 7, 2022
52a20d8
revert increase timeout
flochaz Jan 7, 2022
c0a4301
remove test trigger
flochaz Jan 7, 2022
38fdb75
remove aws-infra
flochaz Jan 7, 2022
ea45fdf
add Contributing
flochaz Jan 7, 2022
bdf800f
fix on merge
flochaz Jan 7, 2022
e3a83ce
chore: added jest group runner dep + config + headers to logger
dreamorosi Jan 7, 2022
6db56fa
chore: added group filter in regular cmd tests for logger
dreamorosi Jan 7, 2022
6046c08
Add manual command for get trace summary
flochaz Jan 7, 2022
08e8050
Merge remote-tracking branch 'origin/main' into chore/cicd/automateIn…
flochaz Jan 7, 2022
ff9a87f
Update CONTRIBUTING.md
flochaz Jan 7, 2022
13103b8
Update CONTRIBUTING.md
flochaz Jan 7, 2022
3a189ba
Update CONTRIBUTING.md
flochaz Jan 7, 2022
5fbdc66
Update CONTRIBUTING.md
flochaz Jan 7, 2022
f86e54f
Update CONTRIBUTING.md
flochaz Jan 7, 2022
065e538
Update CONTRIBUTING.md
flochaz Jan 7, 2022
29ed2e7
Update CONTRIBUTING.md
flochaz Jan 7, 2022
0cfb133
Update CONTRIBUTING.md
flochaz Jan 7, 2022
92f1183
remove useless steps in workflow
flochaz Jan 11, 2022
e196ae1
force test run TO BE REVERTED
flochaz Jan 11, 2022
9c1e339
Revert "force test run TO BE REVERTED"
flochaz Jan 11, 2022
33d87f7
Update CONTRIBUTING.md
flochaz Jan 12, 2022
19c10c4
Update CONTRIBUTING.md
flochaz Jan 12, 2022
f792e5c
Update CONTRIBUTING.md
flochaz Jan 12, 2022
ba91ab7
Update CONTRIBUTING.md
flochaz Jan 12, 2022
51677d8
Update CONTRIBUTING.md
flochaz Jan 12, 2022
9f9b6d6
Update .github/workflows/run-e2e-tests.yml
flochaz Jan 12, 2022
d0f9736
Update CONTRIBUTING.md
flochaz Jan 12, 2022
530d50d
Update .github/workflows/run-e2e-tests.yml
flochaz Jan 12, 2022
df4e51a
Update CONTRIBUTING.md
flochaz Jan 12, 2022
5ff1931
Update CONTRIBUTING.md
flochaz Jan 12, 2022
9d0daa8
Update CONTRIBUTING.md
flochaz Jan 12, 2022
1d22541
Update CONTRIBUTING.md
flochaz Jan 12, 2022
1e4c897
Update CONTRIBUTING.md
flochaz Jan 13, 2022
ee00ffc
Update CONTRIBUTING.md
flochaz Jan 13, 2022
57c3bb0
Update CONTRIBUTING.md
flochaz Jan 13, 2022
0de1d03
Update CONTRIBUTING.md
flochaz Jan 13, 2022
64304ce
homogenize tests run process
flochaz Jan 13, 2022
4f62c1a
fix indentation
flochaz Jan 13, 2022
973a9d5
Update CONTRIBUTING.md
dreamorosi Jan 14, 2022
19 changes: 18 additions & 1 deletion .github/workflows/on-merge-to-main.yml
@@ -7,7 +7,9 @@ on:
jobs:
publish:
runs-on: ubuntu-latest

permissions:
id-token: write # needed to interact with GitHub's OIDC Token endpoint.
contents: read
steps:
- name: "Checkout"
uses: actions/checkout@v2
@@ -32,6 +34,21 @@ jobs:
run: npm run lerna-lint
- name: Run tests
run: npm run lerna-test
- name: "Version and publish"
env:
GH_TOKEN: ${{ secrets.GH_PUBLISH_TOKEN }}
run: |
git config --global user.name 'github-actions[bot]'
git config --global user.email 'github-actions[bot]@users.noreply.github.com'
git remote set-url origin https://x-access-token:${GH_TOKEN}@github.com/$GITHUB_REPOSITORY

# For merge to main we
## don't create github release,
## don't update changelog (--no-changelog)
## bump version as a pre-release (--conventional-prerelease)
## add a custom preid (--preid dev): 0.2.0-dev.1 -> 0.2.0-dev.2
npx lerna version --conventional-commits --conventional-prerelease --preid dev --force-publish=* --yes --no-changelog
git push --delete origin $(git describe --abbrev=0)
- name: update release draft
uses: release-drafter/[email protected]
env:
36 changes: 36 additions & 0 deletions .github/workflows/run-e2e-tests.yml
@@ -0,0 +1,36 @@
name: run-e2e-tests
on:
workflow_dispatch: {}
jobs:
publish:
runs-on: ubuntu-latest
permissions:
id-token: write # needed to interact with GitHub's OIDC Token endpoint.
contents: read
steps:
- name: "Checkout"
uses: actions/checkout@v2
with:
token: ${{ secrets.GH_PUBLISH_TOKEN }}
fetch-depth: 0
#########################
# Release new version
#########################
- name: "Use NodeJS 14"
uses: actions/setup-node@v2
with:
node-version: '14'
- name: "Setup npm"
run: |
npm set "//registry.npmjs.org/:_authToken=${{ secrets.NPM_TOKEN }}"
- name: Install packages
run: |
npm ci
npm run lerna-ci
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@master
with:
role-to-assume: ${{ secrets.AWS_ROLE_ARN_TO_ASSUME }}
aws-region: eu-west-1
- name: Run integration tests
run: npm run lerna-test:e2e
69 changes: 69 additions & 0 deletions CONTRIBUTING.md
@@ -57,6 +57,75 @@ You can build and start a local docs website by running these two commands.
- `npm run docs-buildDockerImage` OR `docker build -t squidfunk/mkdocs-material ./docs/`
- `npm run docs-runLocalDocker` OR `docker run --rm -it -p 8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material`

### Tests

Tests are under the `tests` folder of each module and are split into two categories: unit tests and e2e tests.

This split happens thanks to [jest-runner-groups](https://www.npmjs.com/package/jest-runner-groups).

Unit tests, under the `tests/unit` folder, are standard Jest tests.

End-to-end (e2e) tests, under the `tests/e2e` folder, exercise each module's features by deploying Lambda functions into your AWS account (using the AWS CDK library for TypeScript), invoking them with the AWS SDK, and asserting on the expected behavior, all orchestrated with the standard Jest framework. Since they deploy infrastructure, they require an AWS account.
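For context, jest-runner-groups is enabled through each package's Jest configuration. A minimal sketch of what that configuration typically looks like follows; the actual config files and options in this repo may differ:

```typescript
// jest.config.ts — illustrative sketch only; the real per-package config may differ.
// jest-runner-groups swaps Jest's default runner so that the `@group` docblock tags
// described below can be used to select which tests run (e.g. `npx jest --group=e2e`).
import type { Config } from '@jest/types';

const config: Config.InitialOptions = {
  preset: 'ts-jest',
  runner: 'groups',
  testEnvironment: 'node',
};

export default config;
```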


**Unit testing**

**Write**

As mentioned before, tests are split using [jest-runner-groups](https://www.npmjs.com/package/jest-runner-groups) and therefore need to be tagged properly by adding the following comment at the top of your unit test file:

```
/**
* Tests metrics
*
* @group unit/<YOUR CATEGORY>/<YOUR SUB CATEGORY>
*/
```
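For illustration, a minimal unit test file carrying such a tag might look like the sketch below; the group name, import path, and assertion are hypothetical:

```typescript
/**
 * Tests metrics
 *
 * @group unit/metrics/class
 */
import { Metrics } from '../../src';

describe('Metrics', () => {
  it('creates an instance with the given namespace', () => {
    const metrics = new Metrics({ namespace: 'hello-world' });

    expect(metrics).toBeInstanceOf(Metrics);
  });
});
```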

**Run**

`npm run test`
Contributor @saragerion commented on Jan 12, 2022:

For consistency with the e2e tests, can we change this command everywhere to:

npm run test:unit

So that in the future we can (if we want or need to):

  • create a one-line command that runs all the tests in sequence:
    npm run test would execute the unit tests and then the e2e tests. If we add new kinds of tests to the sequence, we can do it here.
    I used a similar pattern in past teams as a customer and it makes things easy.
  • add new types of tests with a related name if needed (npm run test:acceptance)

Contributor @saragerion commented on Jan 12, 2022:

Note: this is not so much about having a single command, but more a matter of naming consistency.


You can run selective tests by restricting the group to the one you want. For instance `npx jest --group=unit/metrics/all`.

**e2e tests**

**Write**

As mentioned before, unit and e2e tests are split using [jest-runner-groups](https://www.npmjs.com/package/jest-runner-groups), so e2e tests need to be tagged properly by adding the following comment at the top of your e2e test file:

```
/**
* Tests data lake catalog
*
* @group e2e/<YOUR CATEGORY>/<YOUR SUB CATEGORY>
*/
```

and leverage the `aws-cdk` package to programmatically deploy and destroy stacks. See `packages/metrics/tests/e2e/decorator.test.ts` as an example.
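The sketch below shows the overall deploy/invoke/assert/destroy structure, modelled on the pattern used by `decorator.test.ts` in this PR. The `aws-cdk` import paths and the `SdkProvider`/`CloudFormationDeployments` setup are assumptions based on that test and on internal CDK APIs, so they may differ between versions; the stack contents and assertions are placeholders:

```typescript
/**
 * Tests my feature end-to-end
 *
 * @group e2e/myModule/myFeature
 */
import { App, Stack } from '@aws-cdk/core';
// NOTE: internal aws-cdk APIs; these import paths are an assumption and may change between versions.
import { SdkProvider } from 'aws-cdk/lib/api/aws-auth';
import { CloudFormationDeployments } from 'aws-cdk/lib/api/cloudformation-deployments';

const integTestApp = new App();
const stack = new Stack(integTestApp, 'MyFeatureE2EStack');
// ...define the Lambda function under test and any supporting resources on `stack` here...

const stackArtifact = integTestApp.synth().getStackByName(stack.stackName);
let cloudFormation: CloudFormationDeployments;

describe('my feature', () => {
  beforeAll(async () => {
    const sdkProvider = await SdkProvider.withAwsCliCompatibleDefaults({
      profile: process.env.AWS_PROFILE,
    });
    cloudFormation = new CloudFormationDeployments({ sdkProvider });
    // Deploy the synthesized stack before the tests run
    await cloudFormation.deployStack({ stack: stackArtifact, quiet: true });
  }, 200000);

  it('behaves as expected', async () => {
    // Invoke the deployed Lambda with the AWS SDK and assert on its side effects
  });

  afterAll(async () => {
    // Honour DISABLE_TEARDOWN (see the environment variables below) so the stack
    // survives for inspection while iterating locally
    if (!process.env.DISABLE_TEARDOWN) {
      await cloudFormation.destroyStack({ stack: stackArtifact, quiet: true });
    }
  }, 200000);
});
```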


**Run**

To run e2e tests you can either use the npm script
* `npm run test:e2e`, which only runs the Jest e2e tests
* or run Jest directly: `npx jest --group=e2e`

You can run selective tests by restricting the group to the one you want. For instance `npx jest --group=e2e/other/example`.

Two important environment variables can be used:
* `AWS_PROFILE` to use the right credentials
* `DISABLE_TEARDOWN` if you don't want your stack to be destroyed at the end of the test (useful in dev mode when iterating over your code).

Example: `DISABLE_TEARDOWN=true AWS_PROFILE=ara npx jest --group=e2e/other/example`
A contributor commented:

Minor: can we add a real example?


**Automate**

1. Create an AWS role.
As mentioned earlier, we leverage CDK to deploy and clean up resources on AWS. Therefore, to run those tests through GitHub Actions you will need to grant specific permissions to your workflow. To do so you can leverage the [@pahud/cdk-github-oidc](https://constructs.dev/packages/@pahud/cdk-github-oidc) construct, which sets up the right resources to use the [GitHub OpenID Connect](https://github.blog/changelog/2021-10-27-github-actions-secure-cloud-deployments-with-openid-connect/) mechanism (see the sketch after this list).
1. Add the new role to your GitHub fork's secrets under `AWS_ROLE_ARN_TO_ASSUME`.
1. Manually run the `run-e2e-tests` workflow.
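For illustration, a minimal CDK stack that provisions such a role is sketched below. It uses plain `@aws-cdk/aws-iam` constructs rather than the `@pahud/cdk-github-oidc` construct itself (whose exact API is not shown here), and the repository subject filter and attached policy are placeholders you should adapt and scope down:

```typescript
import { App, CfnOutput, Stack } from '@aws-cdk/core';
import * as iam from '@aws-cdk/aws-iam';

const app = new App();
const stack = new Stack(app, 'GithubOidcE2EStack');

// GitHub's OIDC identity provider (created once per AWS account).
const provider = new iam.OpenIdConnectProvider(stack, 'GithubProvider', {
  url: 'https://token.actions.githubusercontent.com',
  clientIds: ['sts.amazonaws.com'],
});

// Role assumed by the run-e2e-tests workflow; restrict the subject to your fork.
const role = new iam.Role(stack, 'E2ETestsRole', {
  assumedBy: new iam.WebIdentityPrincipal(provider.openIdConnectProviderArn, {
    StringLike: {
      'token.actions.githubusercontent.com:sub': 'repo:<your-org>/<your-fork>:*',
    },
  }),
  // The e2e tests deploy and destroy CloudFormation stacks, Lambda functions, IAM roles, etc.
  // Scope this down to the permissions your tests actually need.
  managedPolicies: [iam.ManagedPolicy.fromAwsManagedPolicyName('AdministratorAccess')],
});

// The value to store in the AWS_ROLE_ARN_TO_ASSUME secret of your fork.
new CfnOutput(stack, 'RoleArn', { value: role.roleArn });
```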

### Conventions

Category | Convention
1 change: 1 addition & 0 deletions package.json
@@ -14,6 +14,7 @@
"package": "npm run package",
"lerna-ci": "lerna exec -- npm ci",
"lerna-test": "lerna exec -- npm run test",
"lerna-test:e2e": "lerna exec -- npm run test:e2e",
"lerna-package": "lerna exec -- npm run package",
"lerna-build": "lerna exec -- tsc",
"lerna-lint": "lerna exec -- eslint \"./{src,tests}/**/*.ts ./src/*.ts\"",
1 change: 1 addition & 0 deletions packages/commons/package.json
@@ -12,6 +12,7 @@
"scripts": {
"commit": "commit",
"test": "jest --detectOpenHandles --coverage --verbose",
"test:e2e": "echo 'Not Applicable'",
"watch": "jest --watch",
"build": "tsc",
"lint": "eslint --ext .ts --fix --no-error-on-unmatched-pattern src tests",
1 change: 1 addition & 0 deletions packages/logger/package.json
@@ -12,6 +12,7 @@
"scripts": {
"commit": "commit",
"test": "jest --detectOpenHandles --coverage --verbose",
"test:e2e": "jest --group=e2e",
"watch": "jest --watch",
"build": "tsc",
"lint": "eslint --ext .ts --fix --no-error-on-unmatched-pattern src tests",
2 changes: 1 addition & 1 deletion packages/metrics/tests/e2e/decorator.test.MyFunction.ts
@@ -14,7 +14,7 @@ const singleMetricName = process.env.EXPECTED_SINGLE_METRIC_NAME ?? 'MySingleMet
const singleMetricUnit = (process.env.EXPECTED_SINGLE_METRIC_UNIT as MetricUnits) ?? MetricUnits.Percent;
const singleMetricValue = process.env.EXPECTED_SINGLE_METRIC_VALUE ?? '2';

const metrics = new Metrics({ namespace: namespace, service: serviceName });
const metrics = new Metrics({ namespace: namespace, serviceName: serviceName });

class Lambda implements LambdaInterface {

78 changes: 31 additions & 47 deletions packages/metrics/tests/e2e/decorator.test.ts
@@ -23,6 +23,7 @@ const integTestApp = new App();
const stack = new Stack(integTestApp, 'MetricsE2EDecoratorStack');

// GIVEN
const invocationCount = 2;
const startTime = new Date();
const expectedNamespace = randomUUID(); // to easily find metrics back at assert phase
const expectedServiceName = 'decoratorService';
Expand Down Expand Up @@ -67,28 +68,29 @@ describe('happy cases', () => {
// lambda function is deployed
await cloudFormation.deployStack({
stack: stackArtifact,
quiet: true,
});
}, 200000);

it('capture ColdStart Metric', async () => {
// WHEN
// invoked
await lambdaClient
.invoke({
FunctionName: functionName,
})
.promise();
// twice
await lambdaClient
.invoke({
FunctionName: functionName,
})
.promise();
// and invoked
for (let i = 0; i < invocationCount; i++) {
await lambdaClient
.invoke({
FunctionName: functionName,
})
.promise();
}

// THEN
// sleep to allow metrics to be collected
await new Promise((resolve) => setTimeout(resolve, 10000));
await new Promise((resolve) => setTimeout(resolve, 15000));
}, 200000);

it('capture ColdStart Metric', async () => {
const expectedDimensions = [
{ Name: 'service', Value: expectedServiceName },
{ Name: 'function_name', Value: functionName },
{ Name: Object.keys(expectedDefaultDimensions)[0], Value: expectedDefaultDimensions.MyDimension },
];
// Check coldstart metric dimensions
const coldStartMetrics = await cloudwatchClient
.listMetrics({
@@ -98,24 +100,19 @@
.promise();
expect(coldStartMetrics.Metrics?.length).toBe(1);
const coldStartMetric = coldStartMetrics.Metrics?.[0];
expect(coldStartMetric?.Dimensions).toStrictEqual([
{ Name: 'service', Value: expectedServiceName },
{ Name: 'function_name', Value: functionName },
{ Name: Object.keys(expectedDefaultDimensions)[0], Value: expectedDefaultDimensions.MyDimension },
]);
expect(coldStartMetric?.Dimensions).toStrictEqual(expectedDimensions);

// Check coldstart metric value
const adjustedStartTime = new Date(startTime.getTime() - 60 * 1000);
const endTime = new Date(new Date().getTime() + 60 * 1000);
console.log(`Manual command: aws cloudwatch get-metric-statistics --namespace ${expectedNamespace} --metric-name ColdStart --start-time ${Math.floor(adjustedStartTime.getTime()/1000)} --end-time ${Math.floor(endTime.getTime()/1000)} --statistics 'Sum' --period 60 --dimensions '${JSON.stringify(expectedDimensions)}'`);
const coldStartMetricStat = await cloudwatchClient
.getMetricStatistics(
{
Namespace: expectedNamespace,
StartTime: new Date(startTime.getTime() - 60 * 1000), // minus 1 minute,
Dimensions: [
{ Name: 'service', Value: expectedServiceName },
{ Name: 'function_name', Value: functionName },
{ Name: Object.keys(expectedDefaultDimensions)[0], Value: expectedDefaultDimensions.MyDimension },
],
EndTime: new Date(new Date().getTime() + 60 * 1000),
StartTime: adjustedStartTime,
Dimensions: expectedDimensions,
EndTime: endTime,
Period: 60,
MetricName: 'ColdStart',
Statistics: ['Sum'],
@@ -126,27 +123,10 @@

// Despite the Lambda function having been invoked twice, the ColdStart metric sum should only be 1
const singleDataPoint = coldStartMetricStat.Datapoints ? coldStartMetricStat.Datapoints[0] : {};
expect(singleDataPoint.Sum).toBe(1);
expect(singleDataPoint?.Sum).toBe(1);
}, 15000);

it('produce added Metric with the default and extra one dimensions', async () => {
// GIVEN
const invocationCount = 2;

// WHEN
// invoked
for (let i = 0; i < invocationCount; i++) {
await lambdaClient
.invoke({
FunctionName: functionName,
})
.promise();
}

// THEN
// sleep to allow metrics to be collected
await new Promise((resolve) => setTimeout(resolve, 10000));

// Check metric dimensions
const metrics = await cloudwatchClient
.listMetrics({
@@ -164,6 +144,9 @@
expect(metric?.Dimensions).toStrictEqual(expectedDimensions);

// Check coldstart metric value
const adjustedStartTime = new Date(startTime.getTime() - 60 * 1000);
const endTime = new Date(new Date().getTime() + 60 * 1000);
console.log(`Manual command: aws cloudwatch get-metric-statistics --namespace ${expectedNamespace} --metric-name ${expectedMetricName} --start-time ${Math.floor(adjustedStartTime.getTime()/1000)} --end-time ${Math.floor(endTime.getTime()/1000)} --statistics 'Sum' --period 60 --dimensions '${JSON.stringify(expectedDimensions)}'`);
const metricStat = await cloudwatchClient
.getMetricStatistics(
{
@@ -181,7 +164,7 @@

// Since the Lambda function has been invoked twice in this test (and potentially more in others), the metric sum should be at least expectedMetricValue * invocationCount
const singleDataPoint = metricStat.Datapoints ? metricStat.Datapoints[0] : {};
expect(singleDataPoint.Sum).toBeGreaterThanOrEqual(parseInt(expectedMetricValue) * invocationCount);
expect(singleDataPoint?.Sum).toBeGreaterThanOrEqual(parseInt(expectedMetricValue) * invocationCount);
}, 15000);

afterAll(async () => {
@@ -195,6 +178,7 @@

await cloudFormation.destroyStack({
stack: stackArtifact,
quiet: true,
});
}
}, 200000);