test: add benchmark on AWS Lambda #261


Merged 7 commits on Feb 9, 2021
1 change: 1 addition & 0 deletions benchmark/.gitignore
@@ -0,0 +1 @@
.aws-sam
26 changes: 26 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,26 @@
# Cold Start Benchmark

The [benchmark.sh script](./benchmark.sh) is a bash script that compares, in a semi-automated way, the cold-start time of Lambda functions with and without AWS Lambda Powertools. It deploys two Lambda functions, both with the aws-lambda-powertools module installed: one imports and initializes the three core utilities (`Metrics`, `Logger`, `Tracer`), while the other does not.

Please note that this requires the [SAM CLI](https://github.com/aws/aws-sam-cli) version 1.2.0 or later.

## Usage

> **NOTE**: This script is expected to run on Unix-based systems only, and can incur charges on your AWS account.

To use the script, change into the benchmark folder and run it:

```bash
export S3_BUCKET=code-artifact-s3-bucket

cd benchmark
./benchmark.sh
```

This will:

* Deploy a CloudFormation stack using guided SAM deployment (*you will need to answer a few questions*).
* Run loops that update each function's memory setting to force a cold start, then invoke it. This process is repeated several times to get more consistent results.
* Wait 2.5 minutes to ensure data propagates from CloudWatch Logs to CloudWatch Logs Insights.
* Run a query on CloudWatch Logs Insights, looking at the **REPORT** line from the logs.
* Delete the CloudFormation stack.
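The Logs Insights query in the steps above aggregates Lambda's **REPORT** log lines, which carry the invocation's duration and, on cold starts, an init duration. As a rough illustration of what that query extracts (a sketch, not part of the benchmark; the parsing function and sample line are illustrative):

```python
import re

def parse_report_line(line):
    """Extract duration metrics (in ms) from a Lambda REPORT log line."""
    metrics = {}
    # "Init Duration" is matched first; the plain "Duration" field appears
    # before "Billed Duration" in REPORT lines, so the first match is correct.
    for key, label in (("init_duration", "Init Duration"), ("duration", "Duration")):
        match = re.search(rf"{label}: ([\d.]+) ms", line)
        if match:
            metrics[key] = float(match.group(1))
    return metrics

sample = (
    "REPORT RequestId: abc123 Duration: 12.34 ms Billed Duration: 13 ms "
    "Memory Size: 128 MB Max Memory Used: 64 MB Init Duration: 456.78 ms"
)
print(parse_report_line(sample))  # → {'init_duration': 456.78, 'duration': 12.34}
```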
83 changes: 83 additions & 0 deletions benchmark/benchmark.sh
@@ -0,0 +1,83 @@
#!/bin/bash

set -e
trap cleanup EXIT

if [ -z "$S3_BUCKET" ]; then
  echo "Missing S3_BUCKET environment variable"
  exit 1
fi

export BENCHMARK_STACK_NAME=${BENCHMARK_STACK_NAME:-"powertools-benchmark"}

function cleanup {
  echo "Cleaning up stack..."
  aws cloudformation delete-stack --stack-name "$BENCHMARK_STACK_NAME"
}

function run_function {
  # Update the function's memory setting to force a cold start on the next invoke
  aws lambda update-function-configuration --function-name "$1" --memory-size 256 >/dev/null
  aws lambda update-function-configuration --function-name "$1" --memory-size 128 >/dev/null
  # Cold-start invoke
  aws lambda invoke --function-name "$1" --payload '{}' /dev/null >/dev/null && echo -n . || echo -n e
}

# Retrieve statistics
function get_stats {
  # Gather results from CloudWatch Logs Insights
  query_id=$(aws logs start-query --log-group-name "$1" --query-string 'filter @type = "REPORT" | stats pct(@initDuration, 50) as init_duration, pct(@duration, 50) as duration' --start-time $(expr $(date +%s) - 86400) --end-time $(date +%s) --query 'queryId' --output text)
  while true; do
    result=$(aws logs get-query-results --query-id "$query_id" --query 'status' --output text)
    if [ "$result" == "Complete" ]; then
      break
    fi
    sleep 1
  done

  # Print the median init duration and duration as CSV
  init_duration=$(aws logs get-query-results --query-id "$query_id" --query 'results[0][?field==`init_duration`].value' --output text)
  duration=$(aws logs get-query-results --query-id "$query_id" --query 'results[0][?field==`duration`].value' --output text)
  echo "$init_duration,$duration"
}

# Build and deploy the benchmark stack
echo "Building and deploying..."
sam build
sam deploy --stack-name $BENCHMARK_STACK_NAME --s3-bucket $S3_BUCKET --capabilities CAPABILITY_IAM

# Retrieve output values
echo "Retrieving output values..."
export INSTRUMENTED_FUNCTION=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`InstrumentedFunction`].OutputValue' --output text)
export REFERENCE_FUNCTION=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`ReferenceFunction`].OutputValue' --output text)
export INSTRUMENTED_LOG_GROUP=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`InstrumentedLogGroup`].OutputValue' --output text)
export REFERENCE_LOG_GROUP=$(aws cloudformation describe-stacks --stack-name $BENCHMARK_STACK_NAME --query 'Stacks[0].Outputs[?OutputKey==`ReferenceLogGroup`].OutputValue' --output text)

echo INSTRUMENTED_FUNCTION=$INSTRUMENTED_FUNCTION
echo REFERENCE_FUNCTION=$REFERENCE_FUNCTION
echo INSTRUMENTED_LOG_GROUP=$INSTRUMENTED_LOG_GROUP
echo REFERENCE_LOG_GROUP=$REFERENCE_LOG_GROUP

# Run cold starts for both functions in parallel
echo "Running functions..."
for i in {0..20}; do
  run_function "$INSTRUMENTED_FUNCTION"
done &
process_id=$!
for i in {0..20}; do
  run_function "$REFERENCE_FUNCTION"
done &
wait $process_id
wait $!
echo

# Gather statistics
# Waiting 2.5 minutes to make sure the data propagates from CloudWatch Logs
# into CloudWatch Logs Insights.
echo "Waiting for data to propagate in CloudWatch Logs Insights..."
sleep 150
return_code=0
echo "INSTRUMENTED=$(get_stats $INSTRUMENTED_LOG_GROUP)"
echo "REFERENCE=$(get_stats $REFERENCE_LOG_GROUP)"

exit $return_code
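The `pct(field, 50)` calls in the script's Logs Insights query return the median of each field across all REPORT lines. A minimal sketch of nearest-rank percentile selection (an approximation; Logs Insights' exact interpolation may differ, and the sample durations are hypothetical):

```python
import math

def pct(values, p):
    """Nearest-rank percentile, approximating Logs Insights' pct(field, p)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical cold-start durations (ms) from five invocations
durations = [110.2, 98.7, 105.3, 120.1, 101.9]
print(pct(durations, 50))  # → 105.3 (the median)
```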
17 changes: 17 additions & 0 deletions benchmark/src/instrumented/main.py
@@ -0,0 +1,17 @@
from aws_lambda_powertools import Logger, Metrics, Tracer


# Initialize core utilities
logger = Logger()
metrics = Metrics()
tracer = Tracer()


# Instrument Lambda function
@logger.inject_lambda_context
@metrics.log_metrics
@tracer.capture_lambda_handler
def handler(event, context):
    return {
        "message": "success"
    }
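Stacked decorators apply bottom-up: `tracer.capture_lambda_handler` wraps `handler` first, then `metrics.log_metrics`, with `logger.inject_lambda_context` outermost. A generic sketch of that composition order (using a hypothetical `label` decorator, not the Powertools APIs):

```python
def label(name):
    """Decorator factory that prepends its name to the wrapped function's result."""
    def deco(fn):
        def wrapper(*args, **kwargs):
            return [name] + fn(*args, **kwargs)
        return wrapper
    return deco

# The bottom decorator wraps first, so "tracer" is closest to the handler
@label("logger")
@label("metrics")
@label("tracer")
def handler():
    return ["handler"]

print(handler())  # → ['logger', 'metrics', 'tracer', 'handler']
```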
1 change: 1 addition & 0 deletions benchmark/src/instrumented/requirements.txt
@@ -0,0 +1 @@
aws-lambda-powertools
4 changes: 4 additions & 0 deletions benchmark/src/reference/main.py
@@ -0,0 +1,4 @@
def handler(event, context):
    return {
        "message": "success"
    }
1 change: 1 addition & 0 deletions benchmark/src/reference/requirements.txt
@@ -0,0 +1 @@
aws-lambda-powertools
48 changes: 48 additions & 0 deletions benchmark/template.yaml
@@ -0,0 +1,48 @@
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Handler: main.handler
    Runtime: python3.8
    MemorySize: 128
    Tracing: Active
    Environment:
      Variables:
        POWERTOOLS_SERVICE_NAME: benchmark
        POWERTOOLS_METRICS_NAMESPACE: LambdaPowertools
        POWERTOOLS_LOGGER_LOG_EVENT: "true"
        LOG_LEVEL: INFO

Resources:
  InstrumentedFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/instrumented/

  ReferenceFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./src/reference/

  InstrumentedLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${InstrumentedFunction}"
      RetentionInDays: 7

  ReferenceLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub "/aws/lambda/${ReferenceFunction}"
      RetentionInDays: 7

Outputs:
  InstrumentedFunction:
    Value: !Ref InstrumentedFunction
  ReferenceFunction:
    Value: !Ref ReferenceFunction
  InstrumentedLogGroup:
    Value: !Ref InstrumentedLogGroup
  ReferenceLogGroup:
    Value: !Ref ReferenceLogGroup