Commit 7b23b5b

test: add benchmark on AWS Lambda (#261)
* tests: add benchmark on AWS Lambda
* changing from average to p50
* tests: add README to benchmark
* test: use S3_BUCKET environment variable for benchmark
* docs: core utilities instead of main utilities
* docs: share fixed ETA on data propagation
* docs: note on systems we expect it to run plus charges

Co-authored-by: Heitor Lessa <[email protected]>
1 parent dd7e987 commit 7b23b5b

File tree

8 files changed (+181 −0 lines)


benchmark/.gitignore (+1)

```
.aws-sam
```

benchmark/README.md (+26)

# Cold Start Benchmark

The [benchmark.sh script](./benchmark.sh) is a bash script that compares, in a semi-automated way, the cold-start time of Lambda functions with and without the AWS Lambda Powertools. It deploys two Lambda functions, both of which have the aws-lambda-powertools module installed. One function imports and initializes the three core utilities (`Metrics`, `Logger`, `Tracer`), while the other does not.

Please note that this requires the [SAM CLI](https://github.com/aws/aws-sam-cli) version 1.2.0 or later.

## Usage

> **NOTE**: This script is expected to run on Unix-based systems only, and can incur charges on your AWS account.

To use the script, move into the benchmark folder and run the benchmark script:

```
export S3_BUCKET=code-artifact-s3-bucket

cd benchmark
./benchmark.sh
```

This will:

* Deploy a CloudFormation stack using guided SAM deployment (*you will need to answer a few questions*).
* Run loops that update the memory setting of each function to force a cold start, then invoke it. This process is repeated a number of times to get more consistent results.
* Wait 2.5 minutes to ensure data propagates from CloudWatch Logs to CloudWatch Logs Insights.
* Run a query on CloudWatch Logs Insights, looking at the **REPORT** line from the logs.
* Delete the CloudFormation stack.
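The Logs Insights query that the script runs over the **REPORT** lines can also be pasted into the CloudWatch console to inspect the results manually. It computes the p50 of the init duration and of the total duration:

```
filter @type = "REPORT"
| stats pct(@initDuration, 50) as init_duration, pct(@duration, 50) as duration
```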

benchmark/benchmark.sh (+83)

```bash
#!/bin/bash

set -e
trap cleanup EXIT

if [ -z "$S3_BUCKET" ]; then
  echo "Missing S3_BUCKET environment variable"
  exit 1
fi

export BENCHMARK_STACK_NAME=${BENCHMARK_STACK_NAME:-"powertools-benchmark"}

function cleanup {
  echo "Cleaning up stack..."
  aws cloudformation delete-stack --stack-name $BENCHMARK_STACK_NAME
}

function run_function {
  # Toggle the function's memory setting to force a cold start on the next invoke
  aws lambda update-function-configuration --function-name $1 --memory-size 256 >/dev/null
  aws lambda update-function-configuration --function-name $1 --memory-size 128 >/dev/null
  # Cold-start invoke
  aws lambda invoke --function-name $1 --payload '{}' /dev/null >/dev/null && echo -n . || echo -n e
}

# Retrieve statistics
function get_stats {
  # Gather results from CloudWatch Logs Insights
  query_id=$(aws logs start-query --log-group-name $1 --query-string 'filter @type = "REPORT" | stats pct(@initDuration, 50) as init_duration, pct(@duration, 50) as duration' --start-time $(expr $(date +%s) - 86400) --end-time $(expr $(date +%s) + 0) --query 'queryId' --output text)
  while true; do
    result=$(aws logs get-query-results --query-id $query_id --query 'status' --output text)
    if [ $result == "Complete" ]; then
      break
    fi
    sleep 1
  done

  # Print the p50 values as "init_duration,duration"
  init_duration=$(aws logs get-query-results --query-id $query_id --query 'results[0][?field==`init_duration`].value' --output text)
  duration=$(aws logs get-query-results --query-id $query_id --query 'results[0][?field==`duration`].value' --output text)
  echo "$init_duration,$duration"
}

# Build and deploy the benchmark stack
echo "Building and deploying..."
sam build
sam deploy --stack-name $BENCHMARK_STACK_NAME --s3-bucket $S3_BUCKET --capabilities CAPABILITY_IAM

# Retrieve output values
echo "Retrieving values..."
export INSTRUMENTED_FUNCTION=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`InstrumentedFunction`].OutputValue' --output text)
export REFERENCE_FUNCTION=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`ReferenceFunction`].OutputValue' --output text)
export INSTRUMENTED_LOG_GROUP=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`InstrumentedLogGroup`].OutputValue' --output text)
export REFERENCE_LOG_GROUP=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`ReferenceLogGroup`].OutputValue' --output text)

echo INSTRUMENTED_FUNCTION=$INSTRUMENTED_FUNCTION
echo REFERENCE_FUNCTION=$REFERENCE_FUNCTION
echo INSTRUMENTED_LOG_GROUP=$INSTRUMENTED_LOG_GROUP
echo REFERENCE_LOG_GROUP=$REFERENCE_LOG_GROUP

# Run cold starts
echo "Running functions..."
for i in {0..20}; do
  run_function $INSTRUMENTED_FUNCTION
done &
process_id=$!
for i in {0..20}; do
  run_function $REFERENCE_FUNCTION
done &
wait $process_id
wait $!
echo

# Gather statistics
# Wait 2.5 minutes to make sure the data propagates from CloudWatch Logs
# into CloudWatch Logs Insights.
echo "Waiting for data to propagate in CloudWatch Logs Insights..."
sleep 150
return_code=0
echo "INSTRUMENTED=$(get_stats $INSTRUMENTED_LOG_GROUP)"
echo "REFERENCE=$(get_stats $REFERENCE_LOG_GROUP)"

exit $return_code
```
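The script ends by printing two comma-separated lines of the form `INSTRUMENTED=init_duration,duration` and `REFERENCE=init_duration,duration`. A minimal Python sketch for turning that output into a single cold-start overhead figure, assuming that format (the `parse_results` and `overhead_ms` helpers are hypothetical and not part of the repository):

```python
def parse_results(output: str) -> dict:
    """Parse 'NAME=init_duration,duration' lines into {name: (init_ms, dur_ms)}."""
    results = {}
    for line in output.strip().splitlines():
        name, values = line.split("=", 1)
        init_duration, duration = (float(v) for v in values.split(","))
        results[name] = (init_duration, duration)
    return results


def overhead_ms(results: dict) -> float:
    """Cold-start overhead: difference in p50 init duration between the two functions."""
    return results["INSTRUMENTED"][0] - results["REFERENCE"][0]


# Example with made-up numbers in the script's output format
output = "INSTRUMENTED=750.0,3.1\nREFERENCE=250.0,1.4"
print(overhead_ms(parse_results(output)))  # → 500.0
```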

benchmark/src/instrumented/main.py (+17)

```python
from aws_lambda_powertools import Logger, Metrics, Tracer

# Initialize core utilities
logger = Logger()
metrics = Metrics()
tracer = Tracer()


# Instrument Lambda function
@logger.inject_lambda_context
@metrics.log_metrics
@tracer.capture_lambda_handler
def handler(event, context):
    return {
        "message": "success"
    }
```
benchmark/src/instrumented/requirements.txt (+1)

```
aws-lambda-powertools
```

benchmark/src/reference/main.py (+4)

```python
def handler(event, context):
    return {
        "message": "success"
    }
```
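Both handlers return the same payload, so the reference handler can be exercised locally without any AWS dependency. A minimal sketch, where the empty event and `None` context are stand-ins (Lambda normally passes richer objects):

```python
def handler(event, context):
    return {
        "message": "success"
    }


# Invoke locally with stand-in arguments; neither is used by this handler.
result = handler({}, None)
print(result)  # → {'message': 'success'}
```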
benchmark/src/reference/requirements.txt (+1)

```
aws-lambda-powertools
```

benchmark/template.yaml (+48)

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Handler: main.handler
    Runtime: python3.8
    MemorySize: 128
    Tracing: Active
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: benchmark
        POWERTOOLS_METRICS_NAMESPACE: LambdaPowertools
        POWERTOOLS_LOGGER_LOG_EVENT: "true"
        LOG_LEVEL: INFO

Resources:
  InstrumentedFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/instrumented/

  ReferenceFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/reference/

  InstrumentedLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${InstrumentedFunction}"
      RetentionInDays: 7

  ReferenceLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${ReferenceFunction}"
      RetentionInDays: 7

Outputs:
  InstrumentedFunction:
    Value: !Ref InstrumentedFunction
  ReferenceFunction:
    Value: !Ref ReferenceFunction
  InstrumentedLogGroup:
    Value: !Ref InstrumentedLogGroup
  ReferenceLogGroup:
    Value: !Ref ReferenceLogGroup
```
