diff --git a/docs/core/event_handler/api_gateway.md b/docs/core/event_handler/api_gateway.md index a87daa3299a..e1bc9399eb3 100644 --- a/docs/core/event_handler/api_gateway.md +++ b/docs/core/event_handler/api_gateway.md @@ -23,65 +23,65 @@ This is the sample infrastructure for API Gateway we are using for the examples === "template.yml" - ```yaml - AWSTemplateFormatVersion: '2010-09-09' - Transform: AWS::Serverless-2016-10-31 - Description: Hello world event handler API Gateway - - Globals: - Api: - TracingEnabled: true - Cors: # see CORS section - AllowOrigin: "'https://example.com'" - AllowHeaders: "'Content-Type,Authorization,X-Amz-Date'" - MaxAge: "'300'" - BinaryMediaTypes: # see Binary responses section - - '*~1*' # converts to */* for any binary type - Function: + ```yaml + AWSTemplateFormatVersion: '2010-09-09' + Transform: AWS::Serverless-2016-10-31 + Description: Hello world event handler API Gateway + + Globals: + Api: + TracingEnabled: true + Cors: # see CORS section + AllowOrigin: "'https://example.com'" + AllowHeaders: "'Content-Type,Authorization,X-Amz-Date'" + MaxAge: "'300'" + BinaryMediaTypes: # see Binary responses section + - '*~1*' # converts to */* for any binary type + Function: Timeout: 5 Runtime: python3.8 Tracing: Active - Environment: + Environment: Variables: - LOG_LEVEL: INFO - POWERTOOLS_LOGGER_SAMPLE_RATE: 0.1 - POWERTOOLS_LOGGER_LOG_EVENT: true - POWERTOOLS_METRICS_NAMESPACE: MyServerlessApplication - POWERTOOLS_SERVICE_NAME: hello - - Resources: - HelloWorldFunction: + LOG_LEVEL: INFO + POWERTOOLS_LOGGER_SAMPLE_RATE: 0.1 + POWERTOOLS_LOGGER_LOG_EVENT: true + POWERTOOLS_METRICS_NAMESPACE: MyServerlessApplication + POWERTOOLS_SERVICE_NAME: hello + + Resources: + HelloWorldFunction: Type: AWS::Serverless::Function Properties: - Handler: app.lambda_handler - CodeUri: hello_world - Description: Hello World function - Events: - HelloUniverse: - Type: Api - Properties: - Path: /hello - Method: GET - HelloYou: - Type: Api - Properties: - Path: /hello/{name} # see Dynamic routes section - Method: GET - CustomMessage: - Type: Api - Properties: - Path: /{message}/{name} # see Dynamic routes section - Method: GET - - Outputs: - HelloWorldApigwURL: + Handler: app.lambda_handler + CodeUri: hello_world + Description: Hello World function + Events: + HelloUniverse: + Type: Api + Properties: + Path: /hello + Method: GET + HelloYou: + Type: Api + Properties: + Path: /hello/{name} # see Dynamic routes section + Method: GET + CustomMessage: + Type: Api + Properties: + Path: /{message}/{name} # see Dynamic routes section + Method: GET + + Outputs: + HelloWorldApigwURL: Description: "API Gateway endpoint URL for Prod environment for Hello World Function" Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello" HelloWorldFunction: Description: "Hello World Lambda Function ARN" Value: !GetAtt HelloWorldFunction.Arn - ``` + ``` ### API Gateway decorator @@ -93,107 +93,107 @@ Here's an example where we have two separate functions to resolve two paths: `/h === "app.py" - ```python hl_lines="3 7 9 12 18" - from aws_lambda_powertools import Logger, Tracer - from aws_lambda_powertools.logging import correlation_paths - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - - tracer = Tracer() - logger = Logger() - app = ApiGatewayResolver() # by default API Gateway REST API (v1) - - @app.get("/hello") - @tracer.capture_method - def get_hello_universe(): - return {"message": "hello universe"} - - # You can continue to use 
other utilities just as before - @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) - @tracer.capture_lambda_handler - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + ```python hl_lines="3 7 9 12 18" + from aws_lambda_powertools import Logger, Tracer + from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + + tracer = Tracer() + logger = Logger() + app = ApiGatewayResolver() # by default API Gateway REST API (v1) + + @app.get("/hello") + @tracer.capture_method + def get_hello_universe(): + return {"message": "hello universe"} + + # You can continue to use other utilities just as before + @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) + @tracer.capture_lambda_handler + def lambda_handler(event, context): + return app.resolve(event, context) + ``` === "hello_event.json" - This utility uses `path` and `httpMethod` to route to the right function. This helps make unit tests and local invocation easier too. - - ```json hl_lines="4-5" - { - "body": "hello", - "resource": "/hello", - "path": "/hello", - "httpMethod": "GET", - "isBase64Encoded": false, - "queryStringParameters": { - "foo": "bar" - }, - "multiValueQueryStringParameters": {}, - "pathParameters": { - "hello": "/hello" - }, - "stageVariables": {}, - "headers": { - "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", - "Accept-Encoding": "gzip, deflate, sdch", - "Accept-Language": "en-US,en;q=0.8", - "Cache-Control": "max-age=0", - "CloudFront-Forwarded-Proto": "https", - "CloudFront-Is-Desktop-Viewer": "true", - "CloudFront-Is-Mobile-Viewer": "false", - "CloudFront-Is-SmartTV-Viewer": "false", - "CloudFront-Is-Tablet-Viewer": "false", - "CloudFront-Viewer-Country": "US", - "Host": "1234567890.execute-api.us-east-1.amazonaws.com", - "Upgrade-Insecure-Requests": "1", - "User-Agent": "Custom User Agent String", - "Via": "1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)", - "X-Amz-Cf-Id": "cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA==", - "X-Forwarded-For": "127.0.0.1, 127.0.0.2", - "X-Forwarded-Port": "443", - "X-Forwarded-Proto": "https" - }, - "multiValueHeaders": {}, - "requestContext": { - "accountId": "123456789012", - "resourceId": "123456", - "stage": "Prod", - "requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef", - "requestTime": "25/Jul/2020:12:34:56 +0000", - "requestTimeEpoch": 1428582896000, - "identity": { - "cognitoIdentityPoolId": null, - "accountId": null, - "cognitoIdentityId": null, - "caller": null, - "accessKey": null, - "sourceIp": "127.0.0.1", - "cognitoAuthenticationType": null, - "cognitoAuthenticationProvider": null, - "userArn": null, - "userAgent": "Custom User Agent String", - "user": null - }, - "path": "/Prod/hello", - "resourcePath": "/hello", - "httpMethod": "POST", - "apiId": "1234567890", - "protocol": "HTTP/1.1" - } - } - ``` + This utility uses `path` and `httpMethod` to route to the right function. This helps make unit tests and local invocation easier too. 
+ + ```json hl_lines="4-5" + { + "body": "hello", + "resource": "/hello", + "path": "/hello", + "httpMethod": "GET", + "isBase64Encoded": false, + "queryStringParameters": { + "foo": "bar" + }, + "multiValueQueryStringParameters": {}, + "pathParameters": { + "hello": "/hello" + }, + "stageVariables": {}, + "headers": { + "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", + "Accept-Encoding": "gzip, deflate, sdch", + "Accept-Language": "en-US,en;q=0.8", + "Cache-Control": "max-age=0", + "CloudFront-Forwarded-Proto": "https", + "CloudFront-Is-Desktop-Viewer": "true", + "CloudFront-Is-Mobile-Viewer": "false", + "CloudFront-Is-SmartTV-Viewer": "false", + "CloudFront-Is-Tablet-Viewer": "false", + "CloudFront-Viewer-Country": "US", + "Host": "1234567890.execute-api.us-east-1.amazonaws.com", + "Upgrade-Insecure-Requests": "1", + "User-Agent": "Custom User Agent String", + "Via": "1.1 08f323deadbeefa7af34d5feb414ce27.cloudfront.net (CloudFront)", + "X-Amz-Cf-Id": "cDehVQoZnx43VYQb9j2-nvCh-9z396Uhbp027Y2JvkCPNLmGJHqlaA==", + "X-Forwarded-For": "127.0.0.1, 127.0.0.2", + "X-Forwarded-Port": "443", + "X-Forwarded-Proto": "https" + }, + "multiValueHeaders": {}, + "requestContext": { + "accountId": "123456789012", + "resourceId": "123456", + "stage": "Prod", + "requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef", + "requestTime": "25/Jul/2020:12:34:56 +0000", + "requestTimeEpoch": 1428582896000, + "identity": { + "cognitoIdentityPoolId": null, + "accountId": null, + "cognitoIdentityId": null, + "caller": null, + "accessKey": null, + "sourceIp": "127.0.0.1", + "cognitoAuthenticationType": null, + "cognitoAuthenticationProvider": null, + "userArn": null, + "userAgent": "Custom User Agent String", + "user": null + }, + "path": "/Prod/hello", + "resourcePath": "/hello", + "httpMethod": "POST", + "apiId": "1234567890", + "protocol": "HTTP/1.1" + } + } + ``` === "response.json" - ```json - { - "statusCode": 200, - "headers": { - "Content-Type": "application/json" - }, - "body": "{\"message\":\"hello universe\"}", - "isBase64Encoded": false - } - ``` + ```json + { + "statusCode": 200, + "headers": { + "Content-Type": "application/json" + }, + "body": "{\"message\":\"hello universe\"}", + "isBase64Encoded": false + } + ``` #### HTTP API @@ -201,26 +201,26 @@ When using API Gateway HTTP API to front your Lambda functions, you can instruct === "app.py" - ```python hl_lines="3 7" - from aws_lambda_powertools import Logger, Tracer - from aws_lambda_powertools.logging import correlation_paths - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, ProxyEventType + ```python hl_lines="3 7" + from aws_lambda_powertools import Logger, Tracer + from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, ProxyEventType - tracer = Tracer() - logger = Logger() - app = ApiGatewayResolver(proxy_type=ProxyEventType.APIGatewayProxyEventV2) + tracer = Tracer() + logger = Logger() + app = ApiGatewayResolver(proxy_type=ProxyEventType.APIGatewayProxyEventV2) - @app.get("/hello") - @tracer.capture_method - def get_hello_universe(): - return {"message": "hello universe"} + @app.get("/hello") + @tracer.capture_method + def get_hello_universe(): + return {"message": "hello universe"} - # You can continue to use other utilities just as before - @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_HTTP) - @tracer.capture_lambda_handler - def lambda_handler(event, 
context):
-        return app.resolve(event, context)
-    ```
+    # You can continue to use other utilities just as before
+    @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_HTTP)
+    @tracer.capture_lambda_handler
+    def lambda_handler(event, context):
+        return app.resolve(event, context)
+    ```

#### ALB

@@ -228,26 +228,26 @@ When using ALB to front your Lambda functions, you can instruct `ApiGatewayResol

=== "app.py"

-    ```python hl_lines="3 7"
-    from aws_lambda_powertools import Logger, Tracer
-    from aws_lambda_powertools.logging import correlation_paths
-    from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, ProxyEventType
+    ```python hl_lines="3 7"
+    from aws_lambda_powertools import Logger, Tracer
+    from aws_lambda_powertools.logging import correlation_paths
+    from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, ProxyEventType

-    tracer = Tracer()
-    logger = Logger()
-    app = ApiGatewayResolver(proxy_type=ProxyEventType.ALBEvent)
+    tracer = Tracer()
+    logger = Logger()
+    app = ApiGatewayResolver(proxy_type=ProxyEventType.ALBEvent)

-    @app.get("/hello")
-    @tracer.capture_method
-    def get_hello_universe():
-        return {"message": "hello universe"}
+    @app.get("/hello")
+    @tracer.capture_method
+    def get_hello_universe():
+        return {"message": "hello universe"}

-    # You can continue to use other utilities just as before
-    @logger.inject_lambda_context(correlation_id_path=correlation_paths.APPLICATION_LOAD_BALANCER)
-    @tracer.capture_lambda_handler
-    def lambda_handler(event, context):
-        return app.resolve(event, context)
-    ```
+    # You can continue to use other utilities just as before
+    @logger.inject_lambda_context(correlation_id_path=correlation_paths.APPLICATION_LOAD_BALANCER)
+    @tracer.capture_lambda_handler
+    def lambda_handler(event, context):
+        return app.resolve(event, context)
+    ```

### Dynamic routes

@@ -255,35 +255,35 @@ You can use `/path/{dynamic_value}` when configuring dynamic URL paths. This all

=== "app.py"

-    ```python hl_lines="9 11"
-    from aws_lambda_powertools import Logger, Tracer
-    from aws_lambda_powertools.logging import correlation_paths
-    from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver
+    ```python hl_lines="9 11"
+    from aws_lambda_powertools import Logger, Tracer
+    from aws_lambda_powertools.logging import correlation_paths
+    from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver

-    tracer = Tracer()
-    logger = Logger()
-    app = ApiGatewayResolver()
+    tracer = Tracer()
+    logger = Logger()
+    app = ApiGatewayResolver()

-    @app.get("/hello/<name>")
-    @tracer.capture_method
-    def get_hello_you(name):
-        return {"message": f"hello {name}"}
+    @app.get("/hello/<name>")
+    @tracer.capture_method
+    def get_hello_you(name):
+        return {"message": f"hello {name}"}

-    # You can continue to use other utilities just as before
-    @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
-    @tracer.capture_lambda_handler
-    def lambda_handler(event, context):
-        return app.resolve(event, context)
-    ```
+    # You can continue to use other utilities just as before
+    @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
+    @tracer.capture_lambda_handler
+    def lambda_handler(event, context):
+        return app.resolve(event, context)
+    ```

=== "sample_request.json"

    ```json
    {
-        "resource": "/hello/{name}",
-        "path": "/hello/lessa",
-        "httpMethod": "GET",
-        ...
+        "resource": "/hello/{name}",
+        "path": "/hello/lessa",
+        "httpMethod": "GET",
+        ...
    }
    ```
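+
+A quick way to exercise this route before deploying is a small unit test that feeds `path` and `httpMethod` directly. This is a minimal sketch, assuming the example above is saved as `app.py`; the fake context mirrors the one used in [Testing your code](#testing-your-code) below. Since the example uses `Tracer`, you may need `POWERTOOLS_TRACE_DISABLED=1` when running outside Lambda.
+
+```python
+from dataclasses import dataclass
+
+import app
+
+@dataclass
+class LambdaContext:
+    function_name: str = "test"
+    memory_limit_in_mb: int = 128
+    invoked_function_arn: str = "arn:aws:lambda:eu-west-1:123456789012:function:test"
+    aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72"
+
+def test_get_hello_you():
+    # Routing only inspects path and httpMethod
+    minimal_event = {"path": "/hello/lessa", "httpMethod": "GET"}
+    response = app.lambda_handler(minimal_event, LambdaContext())
+    assert response["statusCode"] == 200
+    assert "hello lessa" in response["body"]
+```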
+ "resource": "/hello/{name}", + "path": "/hello/lessa", + "httpMethod": "GET", + ... } ``` @@ -291,35 +291,35 @@ You can also nest paths as configured earlier in [our sample infrastructure](#re === "app.py" - ```python hl_lines="9 11" - from aws_lambda_powertools import Logger, Tracer - from aws_lambda_powertools.logging import correlation_paths - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + ```python hl_lines="9 11" + from aws_lambda_powertools import Logger, Tracer + from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - tracer = Tracer() - logger = Logger() - app = ApiGatewayResolver() + tracer = Tracer() + logger = Logger() + app = ApiGatewayResolver() - @app.get("//") - @tracer.capture_method - def get_message(message, name): - return {"message": f"{message}, {name}}"} + @app.get("//") + @tracer.capture_method + def get_message(message, name): + return {"message": f"{message}, {name}}"} - # You can continue to use other utilities just as before - @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) - @tracer.capture_lambda_handler - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + # You can continue to use other utilities just as before + @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) + @tracer.capture_lambda_handler + def lambda_handler(event, context): + return app.resolve(event, context) + ``` === "sample_request.json" - ```json + ```json { - "resource": "/{message}/{name}", - "path": "/hi/michael", - "httpMethod": "GET", - ... + "resource": "/{message}/{name}", + "path": "/hi/michael", + "httpMethod": "GET", + ... } ``` @@ -337,23 +337,23 @@ You can access the raw payload via `body` property, or if it's a JSON string you === "app.py" - ```python hl_lines="7-9 11" - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + ```python hl_lines="7-9 11" + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - app = ApiGatewayResolver() + app = ApiGatewayResolver() - @app.get("/hello") - def get_hello_you(): - query_strings_as_dict = app.current_event.query_string_parameters - json_payload = app.current_event.json_body - payload = app.current_event.body + @app.get("/hello") + def get_hello_you(): + query_strings_as_dict = app.current_event.query_string_parameters + json_payload = app.current_event.json_body + payload = app.current_event.body - name = app.current_event.get_query_string_value(name="name", default_value="") - return {"message": f"hello {name}}"} + name = app.current_event.get_query_string_value(name="name", default_value="") + return {"message": f"hello {name}}"} - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + def lambda_handler(event, context): + return app.resolve(event, context) + ``` #### Headers @@ -361,21 +361,21 @@ Similarly to [Query strings](#query-strings-and-payload), you can access headers === "app.py" - ```python hl_lines="7-8" - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + ```python hl_lines="7-8" + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - app = ApiGatewayResolver() + app = ApiGatewayResolver() - @app.get("/hello") - def get_hello_you(): - headers_as_dict = app.current_event.headers - name = app.current_event.get_header_value(name="X-Name", default_value="") + 
@app.get("/hello") + def get_hello_you(): + headers_as_dict = app.current_event.headers + name = app.current_event.get_header_value(name="X-Name", default_value="") - return {"message": f"hello {name}}"} + return {"message": f"hello {name}}"} - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + def lambda_handler(event, context): + return app.resolve(event, context) + ``` ### Raising HTTP errors @@ -385,13 +385,12 @@ You can easily raise any HTTP Error back to the client using `ServiceError` exce Additionally, we provide pre-defined errors for the most popular ones such as HTTP 400, 401, 404, 500. - === "app.py" - ```python hl_lines="4-10 20 25 30 35 39" - from aws_lambda_powertools import Logger, Tracer - from aws_lambda_powertools.logging import correlation_paths - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + ```python hl_lines="4-10 20 25 30 35 39" + from aws_lambda_powertools import Logger, Tracer + from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver from aws_lambda_powertools.event_handler.exceptions import ( BadRequestError, InternalServerError, @@ -400,42 +399,41 @@ Additionally, we provide pre-defined errors for the most popular ones such as HT UnauthorizedError, ) - tracer = Tracer() - logger = Logger() + tracer = Tracer() + logger = Logger() - app = ApiGatewayResolver() + app = ApiGatewayResolver() @app.get(rule="/bad-request-error") def bad_request_error(): - # HTTP 400 + # HTTP 400 raise BadRequestError("Missing required parameter") @app.get(rule="/unauthorized-error") def unauthorized_error(): - # HTTP 401 + # HTTP 401 raise UnauthorizedError("Unauthorized") @app.get(rule="/not-found-error") def not_found_error(): - # HTTP 404 + # HTTP 404 raise NotFoundError @app.get(rule="/internal-server-error") def internal_server_error(): - # HTTP 500 + # HTTP 500 raise InternalServerError("Internal server error") @app.get(rule="/service-error", cors=True) def service_error(): raise ServiceError(502, "Something went wrong!") - # alternatively - # from http import HTTPStatus - # raise ServiceError(HTTPStatus.BAD_GATEWAY.value, "Something went wrong) + # alternatively + # from http import HTTPStatus + # raise ServiceError(HTTPStatus.BAD_GATEWAY.value, "Something went wrong) def handler(event, context): return app.resolve(event, context) - ``` - + ``` ## Advanced @@ -447,62 +445,61 @@ This will ensure that CORS headers are always returned as part of the response w === "app.py" - ```python hl_lines="9 11" - from aws_lambda_powertools import Logger, Tracer - from aws_lambda_powertools.logging import correlation_paths - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, CORSConfig + ```python hl_lines="9 11" + from aws_lambda_powertools import Logger, Tracer + from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, CORSConfig - tracer = Tracer() - logger = Logger() + tracer = Tracer() + logger = Logger() - cors_config = CORSConfig(allow_origin="https://example.com", max_age=300) - app = ApiGatewayResolver(cors=cors_config) + cors_config = CORSConfig(allow_origin="https://example.com", max_age=300) + app = ApiGatewayResolver(cors=cors_config) - @app.get("/hello/") - @tracer.capture_method - def get_hello_you(name): - return {"message": f"hello {name}"} + @app.get("/hello/") + @tracer.capture_method + def get_hello_you(name): + 
return {"message": f"hello {name}"} - @app.get("/hello", cors=False) # optionally exclude CORS from response, if needed - @tracer.capture_method - def get_hello_no_cors_needed(): - return {"message": "hello, no CORS needed for this path ;)"} + @app.get("/hello", cors=False) # optionally exclude CORS from response, if needed + @tracer.capture_method + def get_hello_no_cors_needed(): + return {"message": "hello, no CORS needed for this path ;)"} - # You can continue to use other utilities just as before - @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) - @tracer.capture_lambda_handler - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + # You can continue to use other utilities just as before + @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) + @tracer.capture_lambda_handler + def lambda_handler(event, context): + return app.resolve(event, context) + ``` === "response.json" - ```json - { - "statusCode": 200, - "headers": { - "Content-Type": "application/json", - "Access-Control-Allow-Origin": "https://www.example.com", - "Access-Control-Allow-Headers": "Authorization,Content-Type,X-Amz-Date,X-Amz-Security-Token,X-Api-Key" - }, - "body": "{\"message\":\"hello lessa\"}", - "isBase64Encoded": false - } - ``` + ```json + { + "statusCode": 200, + "headers": { + "Content-Type": "application/json", + "Access-Control-Allow-Origin": "https://www.example.com", + "Access-Control-Allow-Headers": "Authorization,Content-Type,X-Amz-Date,X-Amz-Security-Token,X-Api-Key" + }, + "body": "{\"message\":\"hello lessa\"}", + "isBase64Encoded": false + } + ``` === "response_no_cors.json" - ```json - { - "statusCode": 200, - "headers": { - "Content-Type": "application/json" - }, - "body": "{\"message\":\"hello lessa\"}", - "isBase64Encoded": false - } - ``` - + ```json + { + "statusCode": 200, + "headers": { + "Content-Type": "application/json" + }, + "body": "{\"message\":\"hello lessa\"}", + "isBase64Encoded": false + } + ``` !!! 
tip "Optionally disable class on a per path basis with `cors=False` parameter" @@ -532,25 +529,25 @@ You can use the `Response` class to have full control over the response, for exa === "app.py" - ```python hl_lines="10-14" - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, Response + ```python hl_lines="10-14" + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, Response - app = ApiGatewayResolver() + app = ApiGatewayResolver() - @app.get("/hello") - def get_hello_you(): - payload = json.dumps({"message": "I'm a teapot"}) - custom_headers = {"X-Custom": "X-Value"} + @app.get("/hello") + def get_hello_you(): + payload = json.dumps({"message": "I'm a teapot"}) + custom_headers = {"X-Custom": "X-Value"} - return Response(status_code=418, - content_type="application/json", - body=payload, - headers=custom_headers - ) + return Response(status_code=418, + content_type="application/json", + body=payload, + headers=custom_headers + ) - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + def lambda_handler(event, context): + return app.resolve(event, context) + ``` === "response.json" @@ -559,7 +556,7 @@ You can use the `Response` class to have full control over the response, for exa "body": "{\"message\":\"I\'m a teapot\"}", "headers": { "Content-Type": "application/json", - "X-Custom": "X-Value" + "X-Custom": "X-Value" }, "isBase64Encoded": false, "statusCode": 418 @@ -573,29 +570,29 @@ You can compress with gzip and base64 encode your responses via `compress` param === "app.py" - ```python hl_lines="5 7" - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + ```python hl_lines="5 7" + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - app = ApiGatewayResolver() + app = ApiGatewayResolver() - @app.get("/hello", compress=True) - def get_hello_you(): - return {"message": "hello universe"} + @app.get("/hello", compress=True) + def get_hello_you(): + return {"message": "hello universe"} - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + def lambda_handler(event, context): + return app.resolve(event, context) + ``` === "sample_request.json" - ```json + ```json { "headers": { "Accept-Encoding": "gzip" }, "httpMethod": "GET", "path": "/hello", - ... + ... 
@@ -623,74 +620,75 @@ Like `compress` feature, the client must send the `Accept` header with the corre

=== "app.py"

-    ```python hl_lines="4 7 11"
-    import os
-    from pathlib import Path
+    ```python hl_lines="4 7 11"
+    import os
+    from pathlib import Path

-    from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, Response
+    from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver, Response

-    app = ApiGatewayResolver()
-    logo_file: bytes = Path(os.getenv("LAMBDA_TASK_ROOT") + "/logo.svg").read_bytes()
+    app = ApiGatewayResolver()
+    logo_file: bytes = Path(os.getenv("LAMBDA_TASK_ROOT") + "/logo.svg").read_bytes()

-    @app.get("/logo")
-    def get_logo():
-        return Response(status_code=200, content_type="image/svg+xml", body=logo_file)
+    @app.get("/logo")
+    def get_logo():
+        return Response(status_code=200, content_type="image/svg+xml", body=logo_file)

-    def lambda_handler(event, context):
-        return app.resolve(event, context)
-    ```
+    def lambda_handler(event, context):
+        return app.resolve(event, context)
+    ```

=== "logo.svg"

-    ```xml
-    <!-- SVG markup omitted -->
-    ```
+
+    ```xml
+    <!-- SVG markup omitted -->
+    ```

=== "sample_request.json"

    ```json
    {
        "headers": {
            "Accept": "image/svg+xml"
        },
        "httpMethod": "GET",
        "path": "/logo",
-        ...
+        ...
    }
    ```
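+
+The matching response follows the same contract as the `compress` example above: the SVG bytes are base64-encoded and flagged accordingly. An illustrative sketch, with a placeholder body:
+
+```json
+{
+    "statusCode": 200,
+    "headers": {
+        "Content-Type": "image/svg+xml"
+    },
+    "body": "PHN2ZyB4bWxucz0...",
+    "isBase64Encoded": true
+}
+```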
=== "custom_serializer.py" - ```python hl_lines="19-20 24" - import json - from enum import Enum - from json import JSONEncoder - from typing import Dict - - class CustomEncoder(JSONEncoder): - """Your customer json encoder""" - def default(self, obj): - if isinstance(obj, Enum): - return obj.value - try: - iterable = iter(obj) - except TypeError: - pass - else: - return sorted(iterable) - return JSONEncoder.default(self, obj) - - def custom_serializer(obj) -> str: - """Your custom serializer function ApiGatewayResolver will use""" - return json.dumps(obj, cls=CustomEncoder) - - # Assigning your custom serializer - app = ApiGatewayResolver(serializer=custom_serializer) - - class Color(Enum): - RED = 1 - BLUE = 2 - - @app.get("/colors") - def get_color() -> Dict: - return { - # Color.RED will be serialized to 1 as expected now - "color": Color.RED, - "variations": {"light", "dark"}, - } - ``` + ```python hl_lines="19-20 24" + import json + from enum import Enum + from json import JSONEncoder + from typing import Dict + + class CustomEncoder(JSONEncoder): + """Your customer json encoder""" + def default(self, obj): + if isinstance(obj, Enum): + return obj.value + try: + iterable = iter(obj) + except TypeError: + pass + else: + return sorted(iterable) + return JSONEncoder.default(self, obj) + + def custom_serializer(obj) -> str: + """Your custom serializer function ApiGatewayResolver will use""" + return json.dumps(obj, cls=CustomEncoder) + + # Assigning your custom serializer + app = ApiGatewayResolver(serializer=custom_serializer) + + class Color(Enum): + RED = 1 + BLUE = 2 + + @app.get("/colors") + def get_color() -> Dict: + return { + # Color.RED will be serialized to 1 as expected now + "color": Color.RED, + "variations": {"light", "dark"}, + } + ``` ## Testing your code @@ -780,54 +778,54 @@ You can test your routes by passing a proxy event request where `path` and `http === "test_app.py" - ```python hl_lines="18-24" - from dataclasses import dataclass + ```python hl_lines="18-24" + from dataclasses import dataclass - import pytest - import app + import pytest + import app @pytest.fixture def lambda_context(): - @dataclass - class LambdaContext: - function_name: str = "test" - memory_limit_in_mb: int = 128 - invoked_function_arn: str = "arn:aws:lambda:eu-west-1:809313241:function:test" - aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72" + @dataclass + class LambdaContext: + function_name: str = "test" + memory_limit_in_mb: int = 128 + invoked_function_arn: str = "arn:aws:lambda:eu-west-1:809313241:function:test" + aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72" return LambdaContext() def test_lambda_handler(lambda_context): - minimal_event = { - "path": "/hello", - "httpMethod": "GET" - "requestContext": { # correlation ID - "requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef" - } - } + minimal_event = { + "path": "/hello", + "httpMethod": "GET" + "requestContext": { # correlation ID + "requestId": "c6af9ac6-7b61-11e6-9a41-93e8deadbeef" + } + } app.lambda_handler(minimal_event, lambda_context) - ``` + ``` === "app.py" - ```python - from aws_lambda_powertools import Logger - from aws_lambda_powertools.logging import correlation_paths - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + ```python + from aws_lambda_powertools import Logger + from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - logger = Logger() - app = ApiGatewayResolver() # by 
default API Gateway REST API (v1) + logger = Logger() + app = ApiGatewayResolver() # by default API Gateway REST API (v1) - @app.get("/hello") - def get_hello_universe(): - return {"message": "hello universe"} + @app.get("/hello") + def get_hello_universe(): + return {"message": "hello universe"} - # You can continue to use other utilities just as before - @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) - def lambda_handler(event, context): - return app.resolve(event, context) - ``` + # You can continue to use other utilities just as before + @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) + def lambda_handler(event, context): + return app.resolve(event, context) + ``` ## FAQ diff --git a/docs/core/event_handler/appsync.md b/docs/core/event_handler/appsync.md index a47b8a4c641..93bb7bf69a5 100644 --- a/docs/core/event_handler/appsync.md +++ b/docs/core/event_handler/appsync.md @@ -170,7 +170,6 @@ This is the sample infrastructure we are using for the initial examples with a A Value: !GetAtt HelloWorldApi.Arn ``` - ### Resolver decorator You can define your functions to match GraphQL types and fields with the `app.resolver()` decorator. @@ -238,6 +237,7 @@ Here's an example where we have two separate functions to resolve `getTodo` and ``` === "getTodo_event.json" + ```json { "arguments": { @@ -288,6 +288,7 @@ Here's an example where we have two separate functions to resolve `getTodo` and ``` === "listTodos_event.json" + ```json { "arguments": {}, @@ -395,6 +396,7 @@ You can nest `app.resolver()` decorator multiple times when resolving fields wit For Lambda Python3.8+ runtime, this utility supports async functions when you use in conjunction with `asyncio.run`. === "async_resolver.py" + ```python hl_lines="4 8 10-12 20" from aws_lambda_powertools import Logger, Tracer @@ -602,7 +604,6 @@ Use the following code for `merchantInfo` and `searchMerchant` functions respect You can subclass `AppSyncResolverEvent` to bring your own set of methods to handle incoming events, by using `data_model` param in the `resolve` method. - === "custom_model.py" ```python hl_lines="11-14 19 26" @@ -662,8 +663,8 @@ You can subclass `AppSyncResolverEvent` to bring your own set of methods to hand === "listLocations_event.json" - ```json - { + ```json + { "arguments": {}, "identity": null, "source": null, @@ -707,8 +708,8 @@ You can subclass `AppSyncResolverEvent` to bring your own set of methods to hand "variables": {} }, "stash": {} - } - ``` + } + ``` ## Testing your code @@ -719,6 +720,7 @@ You can use either `app.resolve(event, context)` or simply `app(event, context)` Here's an example from our internal functional test. === "test_direct_resolver.py" + ```python def test_direct_resolver(): # Check whether we can handle an example appsync direct resolver @@ -739,6 +741,7 @@ Here's an example from our internal functional test. 
``` === "appSyncDirectResolver.json" + ```json --8<-- "tests/events/appSyncDirectResolver.json" ``` diff --git a/docs/core/logger.md b/docs/core/logger.md index 53818bada51..833d5a5c721 100644 --- a/docs/core/logger.md +++ b/docs/core/logger.md @@ -24,7 +24,8 @@ Setting | Description | Environment variable | Constructor parameter > Example using AWS Serverless Application Model (SAM) === "template.yaml" - ```yaml hl_lines="9 10" + + ```yaml hl_lines="9 10" Resources: HelloWorldFunction: Type: AWS::Serverless::Function @@ -34,13 +35,14 @@ Setting | Description | Environment variable | Constructor parameter Variables: LOG_LEVEL: INFO POWERTOOLS_SERVICE_NAME: example - ``` + ``` === "app.py" - ```python hl_lines="2 4" - from aws_lambda_powertools import Logger - logger = Logger() # Sets service via env var - # OR logger = Logger(service="example") - ``` + + ```python hl_lines="2 4" + from aws_lambda_powertools import Logger + logger = Logger() # Sets service via env var + # OR logger = Logger(service="example") + ``` ### Standard structured keys @@ -71,46 +73,46 @@ You can enrich your structured logs with key Lambda context information via `inj @logger.inject_lambda_context def handler(event, context): - logger.info("Collecting payment") + logger.info("Collecting payment") - # You can log entire objects too - logger.info({ + # You can log entire objects too + logger.info({ "operation": "collect_payment", "charge_id": event['charge_id'] - }) - ... + }) + ... ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="7-11 16-19" - { - "level": "INFO", - "location": "collect.handler:7", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" - }, - { - "level": "INFO", - "location": "collect.handler:10", - "message": { - "operation": "collect_payment", - "charge_id": "ch_AZFlk2345C0" - }, - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" - } + { + "level": "INFO", + "location": "collect.handler:7", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" + }, + { + "level": "INFO", + "location": "collect.handler:10", + "message": { + "operation": "collect_payment", + "charge_id": "ch_AZFlk2345C0" + }, + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" + } ``` When used, this will include the following keys: @@ -151,42 +153,42 @@ You can set a Correlation ID using `correlation_id_path` param by passing a [JME === "collect.py" ```python hl_lines="5" - from aws_lambda_powertools import Logger + from aws_lambda_powertools import Logger - 
logger = Logger(service="payment") + logger = Logger(service="payment") - @logger.inject_lambda_context(correlation_id_path="headers.my_request_id_header") - def handler(event, context): - logger.debug(f"Correlation ID => {logger.get_correlation_id()}") - logger.info("Collecting payment") + @logger.inject_lambda_context(correlation_id_path="headers.my_request_id_header") + def handler(event, context): + logger.debug(f"Correlation ID => {logger.get_correlation_id()}") + logger.info("Collecting payment") ``` === "Example Event" - ```json hl_lines="3" - { - "headers": { - "my_request_id_header": "correlation_id_value" - } - } - ``` + ```json hl_lines="3" + { + "headers": { + "my_request_id_header": "correlation_id_value" + } + } + ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="12" - { - "level": "INFO", - "location": "collect.handler:7", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", - "correlation_id": "correlation_id_value" - } + { + "level": "INFO", + "location": "collect.handler:7", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", + "correlation_id": "correlation_id_value" + } ``` We provide [built-in JMESPath expressions](#built-in-correlation-id-expressions) for known event sources, where either a request ID or X-Ray Trace ID are present. 
@@ -194,50 +196,49 @@ We provide [built-in JMESPath expressions](#built-in-correlation-id-expressions) === "collect.py" ```python hl_lines="2 6" - from aws_lambda_powertools import Logger - from aws_lambda_powertools.logging import correlation_paths + from aws_lambda_powertools import Logger + from aws_lambda_powertools.logging import correlation_paths - logger = Logger(service="payment") + logger = Logger(service="payment") - @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) - def handler(event, context): - logger.debug(f"Correlation ID => {logger.get_correlation_id()}") - logger.info("Collecting payment") + @logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST) + def handler(event, context): + logger.debug(f"Correlation ID => {logger.get_correlation_id()}") + logger.info("Collecting payment") ``` === "Example Event" - ```json hl_lines="3" - { - "requestContext": { - "requestId": "correlation_id_value" - } - } - ``` + ```json hl_lines="3" + { + "requestContext": { + "requestId": "correlation_id_value" + } + } + ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="12" - { - "level": "INFO", - "location": "collect.handler:8", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", - "correlation_id": "correlation_id_value" - } + { + "level": "INFO", + "location": "collect.handler:8", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", + "correlation_id": "correlation_id_value" + } ``` ### Appending additional keys !!! info "Custom keys are persisted across warm invocations" - Always set additional keys as part of your handler to ensure they have the latest value, or explicitly clear them with [`clear_state=True`](#clearing-all-state). - + Always set additional keys as part of your handler to ensure they have the latest value, or explicitly clear them with [`clear_state=True`](#clearing-all-state). You can append additional keys using either mechanism: @@ -258,30 +259,30 @@ You can append your own keys to your existing Logger via `append_keys(**addition logger = Logger(service="payment") def handler(event, context): - order_id = event.get("order_id") + order_id = event.get("order_id") - # this will ensure order_id key always has the latest value before logging - logger.append_keys(order_id=order_id) + # this will ensure order_id key always has the latest value before logging + logger.append_keys(order_id=order_id) - logger.info("Collecting payment") + logger.info("Collecting payment") ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="7" - { - "level": "INFO", - "location": "collect.handler:11", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "order_id": "order_id_value" - } + { + "level": "INFO", + "location": "collect.handler:11", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "order_id": "order_id_value" + } ``` !!! 
tip "Logger will automatically reject any key with a None value" - If you conditionally add keys depending on the payload, you can follow the example above. + If you conditionally add keys depending on the payload, you can follow the example above. - This example will add `order_id` if its value is not empty, and in subsequent invocations where `order_id` might not be present it'll remove it from the Logger. + This example will add `order_id` if its value is not empty, and in subsequent invocations where `order_id` might not be present it'll remove it from the Logger. #### extra parameter @@ -304,14 +305,14 @@ It accepts any dictionary, and all keyword arguments will be added as part of th === "Example CloudWatch Logs excerpt" ```json hl_lines="7" - { - "level": "INFO", - "location": "collect.handler:6", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "request_id": "1123" - } + { + "level": "INFO", + "location": "collect.handler:6", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "request_id": "1123" + } ``` #### set_correlation_id method @@ -321,36 +322,36 @@ You can set a correlation_id to your existing Logger via `set_correlation_id(val === "collect.py" ```python hl_lines="6" - from aws_lambda_powertools import Logger + from aws_lambda_powertools import Logger - logger = Logger(service="payment") + logger = Logger(service="payment") - def handler(event, context): - logger.set_correlation_id(event["requestContext"]["requestId"]) - logger.info("Collecting payment") + def handler(event, context): + logger.set_correlation_id(event["requestContext"]["requestId"]) + logger.info("Collecting payment") ``` === "Example Event" - ```json hl_lines="3" - { - "requestContext": { - "requestId": "correlation_id_value" - } - } - ``` + ```json hl_lines="3" + { + "requestContext": { + "requestId": "correlation_id_value" + } + } + ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="7" - { - "level": "INFO", - "location": "collect.handler:7", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "correlation_id": "correlation_id_value" - } + { + "level": "INFO", + "location": "collect.handler:7", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "correlation_id": "correlation_id_value" + } ``` Alternatively, you can combine [Data Classes utility](../utilities/data_classes.md) with Logger to use dot notation object: @@ -358,38 +359,38 @@ Alternatively, you can combine [Data Classes utility](../utilities/data_classes. 
=== "collect.py" ```python hl_lines="2 7-8" - from aws_lambda_powertools import Logger - from aws_lambda_powertools.utilities.data_classes import APIGatewayProxyEvent + from aws_lambda_powertools import Logger + from aws_lambda_powertools.utilities.data_classes import APIGatewayProxyEvent - logger = Logger(service="payment") + logger = Logger(service="payment") - def handler(event, context): - event = APIGatewayProxyEvent(event) - logger.set_correlation_id(event.request_context.request_id) - logger.info("Collecting payment") + def handler(event, context): + event = APIGatewayProxyEvent(event) + logger.set_correlation_id(event.request_context.request_id) + logger.info("Collecting payment") ``` === "Example Event" - ```json hl_lines="3" - { - "requestContext": { - "requestId": "correlation_id_value" - } - } - ``` + ```json hl_lines="3" + { + "requestContext": { + "requestId": "correlation_id_value" + } + } + ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="7" - { - "timestamp": "2020-05-24 18:17:33,774", - "level": "INFO", - "location": "collect.handler:9", - "service": "payment", - "sampling_rate": 0.0, - "correlation_id": "correlation_id_value", - "message": "Collecting payment" - } + { + "timestamp": "2020-05-24 18:17:33,774", + "level": "INFO", + "location": "collect.handler:9", + "service": "payment", + "sampling_rate": 0.0, + "correlation_id": "correlation_id_value", + "message": "Collecting payment" + } ``` ### Removing additional keys @@ -405,30 +406,30 @@ You can remove any additional key from Logger state using `remove_keys`. def handler(event, context): logger.append_keys(sample_key="value") - logger.info("Collecting payment") + logger.info("Collecting payment") - logger.remove_keys(["sample_key"]) - logger.info("Collecting payment without sample key") + logger.remove_keys(["sample_key"]) + logger.info("Collecting payment without sample key") ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="7" - { - "level": "INFO", - "location": "collect.handler:7", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "sample_key": "value" - }, - { - "level": "INFO", - "location": "collect.handler:10", - "message": "Collecting payment without sample key", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment" - } + { + "level": "INFO", + "location": "collect.handler:7", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "sample_key": "value" + }, + { + "level": "INFO", + "location": "collect.handler:10", + "message": "Collecting payment without sample key", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment" + } ``` #### Clearing all state @@ -436,14 +437,14 @@ You can remove any additional key from Logger state using `remove_keys`. Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html), this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use `clear_state=True` param in `inject_lambda_context` decorator. !!! info - This is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. Any key with `None` value is automatically removed by Logger. + This is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. 
Any key with `None` value is automatically removed by Logger.
+    This is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. Any key with `None` value is automatically removed by Logger.

!!! danger "This can have unintended side effects if you use Layers"
-    Lambda Layers code is imported before the Lambda handler.
+    Lambda Layers code is imported before the Lambda handler.

-    This means that `clear_state=True` will instruct Logger to remove any keys previously added before Lambda handler execution proceeds.
+    This means that `clear_state=True` will instruct Logger to remove any keys previously added before Lambda handler execution proceeds.

-    You can either avoid running any code as part of Lambda Layers global scope, or override keys with their latest value as part of handler's execution.
+    You can either avoid running any code as part of Lambda Layers global scope, or override keys with their latest value as part of handler's execution.

=== "collect.py"

    ```python hl_lines="5 8"
    from aws_lambda_powertools import Logger

    logger = Logger(service="payment")

    @logger.inject_lambda_context(clear_state=True)
    def handler(event, context):
-        if event.get("special_key"):
-            # Should only be available in the first request log
-            # as the second request doesn't contain `special_key`
-            logger.append_keys(debugging_key="value")
+        if event.get("special_key"):
+            # Should only be available in the first request log
+            # as the second request doesn't contain `special_key`
+            logger.append_keys(debugging_key="value")

-        logger.info("Collecting payment")
+        logger.info("Collecting payment")
    ```

=== "#1 request"

    ```json hl_lines="7"
-    {
-        "level": "INFO",
-        "location": "collect.handler:10",
-        "message": "Collecting payment",
-        "timestamp": "2021-05-03 11:47:12,494+0200",
-        "service": "payment",
-        "debugging_key": "value",
-        "cold_start": true,
-        "lambda_function_name": "test",
-        "lambda_function_memory_size": 128,
-        "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
-        "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72"
-    }
+    {
+        "level": "INFO",
+        "location": "collect.handler:10",
+        "message": "Collecting payment",
+        "timestamp": "2021-05-03 11:47:12,494+0200",
+        "service": "payment",
+        "debugging_key": "value",
+        "cold_start": true,
+        "lambda_function_name": "test",
+        "lambda_function_memory_size": 128,
+        "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
+        "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72"
+    }
    ```

=== "#2 request"

-    ```json hl_lines="7"
-    {
-        "level": "INFO",
-        "location": "collect.handler:10",
-        "message": "Collecting payment",
-        "timestamp": "2021-05-03 11:47:12,494+0200",
-        "service": "payment",
-        "cold_start": false,
-        "lambda_function_name": "test",
-        "lambda_function_memory_size": 128,
-        "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
-        "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72"
-    }
+    ```json hl_lines="7"
+    {
+        "level": "INFO",
+        "location": "collect.handler:10",
+        "message": "Collecting payment",
+        "timestamp": "2021-05-03 11:47:12,494+0200",
+        "service": "payment",
+        "cold_start": false,
+        "lambda_function_name": "test",
+        "lambda_function_memory_size": 128,
+        "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
+        "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72"
+    }
    ```
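+
+To see the effect locally, you can invoke the handler twice and compare the two log lines. A minimal sketch, assuming the example above is saved as `collect.py`:
+
+```python
+from dataclasses import dataclass
+
+import collect
+
+@dataclass
+class LambdaContext:
+    function_name: str = "test"
+    memory_limit_in_mb: int = 128
+    invoked_function_arn: str = "arn:aws:lambda:eu-west-1:123456789012:function:test"
+    aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72"
+
+# First invocation: special_key is present, so debugging_key is appended and logged
+collect.handler({"special_key": True}, LambdaContext())
+
+# Second invocation: clear_state=True wipes debugging_key before the handler runs
+collect.handler({}, LambdaContext())
+```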
-

### Logging exceptions

Use `logger.exception` method to log contextual information about exceptions. Logger will include `exception_name` and `exception` keys to aid troubleshooting and error enumeration.

!!! tip
-    You can use your preferred Log Analytics tool to enumerate and visualize exceptions across all your services using `exception_name` key.
+    You can use your preferred Log Analytics tool to enumerate and visualize exceptions across all your services using `exception_name` key.

=== "collect.py"

    ```python hl_lines="8"
    from aws_lambda_powertools import Logger

-    logger = Logger(service="payment")
+    logger = Logger(service="payment")

    try:
-        raise ValueError("something went wrong")
+        raise ValueError("something went wrong")
    except Exception:
-        logger.exception("Received an exception")
+        logger.exception("Received an exception")
    ```

=== "Example CloudWatch Logs excerpt"

    ```json hl_lines="7-8"
    {
-        "level": "ERROR",
-        "location": "collect.handler:5",
-        "message": "Received an exception",
-        "timestamp": "2021-05-03 11:47:12,494+0200",
-        "service": "payment",
-        "exception_name": "ValueError",
-        "exception": "Traceback (most recent call last):\n  File \"<input>\", line 2, in <module>\nValueError: something went wrong"
+        "level": "ERROR",
+        "location": "collect.handler:5",
+        "message": "Received an exception",
+        "timestamp": "2021-05-03 11:47:12,494+0200",
+        "service": "payment",
+        "exception_name": "ValueError",
+        "exception": "Traceback (most recent call last):\n  File \"<input>\", line 2, in <module>\nValueError: something went wrong"
    }
    ```

@@ -547,8 +547,8 @@ Logger supports inheritance via `child` parameter. This allows you to create mul

=== "collect.py"

    ```python hl_lines="1 7"
    import shared # Creates a child logger named "payment.shared"
    from aws_lambda_powertools import Logger

    logger = Logger() # POWERTOOLS_SERVICE_NAME: "payment"

    def handler(event, context):
-        shared.inject_payment_id(event)
-        ...
+        shared.inject_payment_id(event)
+        ...
    ```

=== "shared.py"

@@ -565,7 +565,7 @@ Logger supports inheritance via `child` parameter. This allows you to create mul

In this example, `Logger` will create a parent logger named `payment` and a child logger named `payment.shared`. Changes in either parent or child logger will be propagated bi-directionally.

!!! info "Child loggers will be named after the following convention `{service}.{filename}`"
-    If you forget to use `child` param but the `service` name is the same as the parent's, we will return the existing parent `Logger` instead.
+    If you forget to use `child` param but the `service` name is the same as the parent's, we will return the existing parent `Logger` instead.

### Sampling debug logs

@@ -574,9 +574,9 @@ Use sampling when you want to dynamically change your log level to **DEBUG** bas

You can use values ranging from `0.0` to `1` (100%) when setting `POWERTOOLS_LOGGER_SAMPLE_RATE` env var or `sample_rate` parameter in Logger.

!!! tip "When is this useful?"
-    Let's imagine a sudden spike increase in concurrency triggered a transient issue downstream. When looking into the logs you might not have enough information, and while you can adjust log levels it might not happen again.
+    Let's imagine a sudden spike increase in concurrency triggered a transient issue downstream. When looking into the logs you might not have enough information, and while you can adjust log levels it might not happen again.

-    This feature takes into account transient issues where additional debugging information can be useful.
+    This feature takes into account transient issues where additional debugging information can be useful.

Sampling decision happens at the Logger initialization. This means sampling may happen significantly more or less often than expected, depending on your traffic patterns, for example a steady low number of invocations and thus few cold starts.

=== "collect.py"

    ```python hl_lines="4 7"
    from aws_lambda_powertools import Logger

    # Sample 10% of debug logs e.g. 0.1
@@ -592,38 +592,38 @@ Sampling decision happens at the Logger initialization. 
This means sampling may logger = Logger(service="payment", sample_rate=0.1) def handler(event, context): - logger.debug("Verifying whether order_id is present") - logger.info("Collecting payment") + logger.debug("Verifying whether order_id is present") + logger.info("Collecting payment") ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="2 4 12 15 25" { - "level": "DEBUG", - "location": "collect.handler:7", - "message": "Verifying whether order_id is present", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", - "sampling_rate": 0.1 + "level": "DEBUG", + "location": "collect.handler:7", + "message": "Verifying whether order_id is present", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", + "sampling_rate": 0.1 }, { - "level": "INFO", - "location": "collect.handler:7", - "message": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494+0200", - "service": "payment", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", - "sampling_rate": 0.1 + "level": "INFO", + "location": "collect.handler:7", + "message": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494+0200", + "service": "payment", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72", + "sampling_rate": 0.1 } ``` @@ -647,9 +647,9 @@ Parameter | Description | Default ```python hl_lines="2 4-5" from aws_lambda_powertools import Logger - from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter + from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter - formatter = LambdaPowertoolsFormatter(utc=True, log_record_order=["message"]) + formatter = LambdaPowertoolsFormatter(utc=True, log_record_order=["message"]) logger = Logger(service="example", logger_formatter=formatter) ``` @@ -672,7 +672,7 @@ For inheritance, Logger uses a `child=True` parameter along with `service` being For child Loggers, we introspect the name of your module where `Logger(child=True, service="name")` is called, and we name your Logger as **{service}.{filename}**. !!! danger - A common issue when migrating from other Loggers is that `service` might be defined in the parent Logger (no child param), and not defined in the child Logger: + A common issue when migrating from other Loggers is that `service` might be defined in the parent Logger (no child param), and not defined in the child Logger: === "incorrect_logger_inheritance.py" @@ -707,7 +707,7 @@ For child Loggers, we introspect the name of your module where `Logger(child=Tru In this case, Logger will register a Logger named `payment`, and a Logger named `service_undefined`. 
The latter isn't inheriting from the parent, and will have no handler, resulting in no message being logged to standard output.

!!! tip
-    This can be fixed by either ensuring both has the `service` value as `payment`, or simply use the environment variable `POWERTOOLS_SERVICE_NAME` to ensure service value will be the same across all Loggers when not explicitly set.
+    This can be fixed by either ensuring both have the `service` value set to `payment`, or simply using the environment variable `POWERTOOLS_SERVICE_NAME` to ensure the service value will be the same across all Loggers when not explicitly set.

#### Overriding Log records

@@ -716,13 +716,13 @@ You might want to continue to use the same date formatting style, or override `l

Logger allows you to either change the format or suppress the following keys altogether at the initialization: `location`, `timestamp`, `level`, `xray_trace_id`.

=== "lambda_handler.py"
-    > We honour standard [logging library string formats](https://docs.python.org/3/howto/logging.html#displaying-the-date-time-in-messages){target="_blank"}.
+    > We honour standard [logging library string formats](https://docs.python.org/3/howto/logging.html#displaying-the-date-time-in-messages){target="_blank"}.

    ```python hl_lines="7 10"
    from aws_lambda_powertools import Logger

-    date_format = "%m/%d/%Y %I:%M:%S %p"
-    location_format = "[%(funcName)s] %(module)s"
+    date_format = "%m/%d/%Y %I:%M:%S %p"
+    location_format = "[%(funcName)s] %(module)s"

    # override location and timestamp format
    logger = Logger(service="payment", location=location_format, datefmt=date_format)

    # suppress the location key with a None value
    logger_two = Logger(service="payment", location=None)

-    logger.info("Collecting payment")
+    logger.info("Collecting payment")
    ```

=== "Example CloudWatch Logs excerpt"
-    ```json hl_lines="3 5"
-    {
-        "level": "INFO",
-        "location": "[] lambda_handler",
-        "message": "Collecting payment",
-        "timestamp": "02/09/2021 09:25:17 AM",
-        "service": "payment"
-    }
-    ```
+
+    ```json hl_lines="3 5"
+    {
+        "level": "INFO",
+        "location": "[<module>] lambda_handler",
+        "message": "Collecting payment",
+        "timestamp": "02/09/2021 09:25:17 AM",
+        "service": "payment"
+    }
+    ```

#### Reordering log keys position

@@ -755,23 +756,24 @@ You can change the order of [standard Logger keys](#standard-structured-keys) or

    # make message the first key
    logger = Logger(service="payment", log_record_order=["message"])

-    # make request_id that will be added later as the first key
-    # Logger(service="payment", log_record_order=["request_id"])
+    # make request_id, which will be added later, the first key
+    # Logger(service="payment", log_record_order=["request_id"])

    # Default key sorting order when omitted
    # Logger(service="payment", log_record_order=["level","location","message","timestamp"])
    ```

=== "Example CloudWatch Logs excerpt"
-    ```json hl_lines="3 5"
-    {
-        "message": "hello world",
-        "level": "INFO",
-        "location": "[]:6",
-        "timestamp": "2021-02-09 09:36:12,280",
-        "service": "service_undefined",
-        "sampling_rate": 0.0
-    }
-    ```
+
+    ```json hl_lines="3 5"
+    {
+        "message": "hello world",
+        "level": "INFO",
+        "location": "[<module>]:6",
+        "timestamp": "2021-02-09 09:36:12,280",
+        "service": "service_undefined",
+        "sampling_rate": 0.0
+    }
+    ```

#### Setting timestamp to UTC

@@ -798,27 +800,28 @@ By default, Logger uses `str` to handle values non-serializable by JSON.
You can
    ```python hl_lines="3-4 9 12"
    from aws_lambda_powertools import Logger

-    def custom_json_default(value):
-        return f""
+    def custom_json_default(value):
+        return f"<non-serializable: {type(value).__name__}>"

-    class Unserializable:
-        pass
+    class Unserializable:
+        pass

    logger = Logger(service="payment", json_default=custom_json_default)

-    def handler(event, context):
-        logger.info(Unserializable())
+    def handler(event, context):
+        logger.info(Unserializable())
    ```

=== "Example CloudWatch Logs excerpt"
-    ```json hl_lines="4"
-    {
-        "level": "INFO",
-        "location": "collect.handler:8",
-        "message": """",
-        "timestamp": "2021-05-03 15:17:23,632+0200",
-        "service": "payment"
-    }
-    ```
+
+    ```json hl_lines="4"
+    {
+        "level": "INFO",
+        "location": "collect.handler:8",
+        "message": "<non-serializable: Unserializable>",
+        "timestamp": "2021-05-03 15:17:23,632+0200",
+        "service": "payment"
+    }
+    ```

#### Bring your own handler

@@ -827,16 +830,16 @@ By default, Logger uses StreamHandler and logs to standard output. You can overr

=== "collect.py"

    ```python hl_lines="3-4 9 12"
-    import logging
-    from pathlib import Path
+    import logging
+    from pathlib import Path

-    from aws_lambda_powertools import Logger
+    from aws_lambda_powertools import Logger

-    log_file = Path("/tmp/log.json")
-    log_file_handler = logging.FileHandler(filename=log_file)
+    log_file = Path("/tmp/log.json")
+    log_file_handler = logging.FileHandler(filename=log_file)

    logger = Logger(service="payment", logger_handler=log_file_handler)
-    logger.info("Collecting payment")
+    logger.info("Collecting payment")
    ```

#### Bring your own formatter

@@ -847,48 +850,48 @@ For **minor changes like remapping keys** after all log record processing has co

=== "custom_formatter.py"

-    ```python
-    from aws_lambda_powertools import Logger
-    from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter
+    ```python
+    from aws_lambda_powertools import Logger
+    from aws_lambda_powertools.logging.formatter import LambdaPowertoolsFormatter

-    from typing import Dict
+    from typing import Dict

-    class CustomFormatter(LambdaPowertoolsFormatter):
-        def serialize(self, log: Dict) -> str:
-            """Serialize final structured log dict to JSON str"""
-            log["event"] = log.pop("message") # rename message key to event
-            return self.json_serializer(log) # use configured json serializer
+    class CustomFormatter(LambdaPowertoolsFormatter):
+        def serialize(self, log: Dict) -> str:
+            """Serialize final structured log dict to JSON str"""
+            log["event"] = log.pop("message")  # rename message key to event
+            return self.json_serializer(log)  # use configured json serializer

-    my_formatter = CustomFormatter()
-    logger = Logger(service="example", logger_formatter=my_formatter)
-    logger.info("hello")
-    ```
+    my_formatter = CustomFormatter()
+    logger = Logger(service="example", logger_formatter=my_formatter)
+    logger.info("hello")
+    ```

For **replacing the formatter entirely**, you can subclass `BasePowertoolsFormatter`, implement `append_keys` method, and override `format` standard logging method. This ensures the current feature set of Logger like [injecting Lambda context](#capturing-lambda-context-info) and [sampling](#sampling-debug-logs) will continue to work.

!!! info
-    You might need to implement `remove_keys` method if you make use of the feature too.
+    You might need to implement the `remove_keys` method if you make use of that feature too.
=== "collect.py" ```python hl_lines="2 4 7 12 16 27" - from aws_lambda_powertools import Logger - from aws_lambda_powertools.logging.formatter import BasePowertoolsFormatter + from aws_lambda_powertools import Logger + from aws_lambda_powertools.logging.formatter import BasePowertoolsFormatter class CustomFormatter(BasePowertoolsFormatter): custom_format = {} # arbitrary dict to hold our structured keys def append_keys(self, **additional_keys): - # also used by `inject_lambda_context` decorator + # also used by `inject_lambda_context` decorator self.custom_format.update(additional_keys) - # Optional unless you make use of this Logger feature + # Optional unless you make use of this Logger feature def remove_keys(self, keys: Iterable[str]): for key in keys: self.custom_format.pop(key, None) def format(self, record: logging.LogRecord) -> str: # noqa: A003 - """Format logging record as structured JSON str""" + """Format logging record as structured JSON str""" return json.dumps( { "event": super().format(record), @@ -902,20 +905,20 @@ For **replacing the formatter entirely**, you can subclass `BasePowertoolsFormat @logger.inject_lambda_context def handler(event, context): - logger.info("Collecting payment") + logger.info("Collecting payment") ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="2-4" { - "event": "Collecting payment", - "timestamp": "2021-05-03 11:47:12,494", - "my_default_key": "test", - "cold_start": true, - "lambda_function_name": "test", - "lambda_function_memory_size": 128, - "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", - "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" + "event": "Collecting payment", + "timestamp": "2021-05-03 11:47:12,494", + "my_default_key": "test", + "cold_start": true, + "lambda_function_name": "test", + "lambda_function_memory_size": 128, + "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test", + "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72" } ``` @@ -928,20 +931,20 @@ As parameters don't always translate well between them, you can pass any callabl === "collect.py" ```python hl_lines="1 5-6 9-10" - import orjson + import orjson - from aws_lambda_powertools import Logger + from aws_lambda_powertools import Logger - custom_serializer = orjson.dumps - custom_deserializer = orjson.loads + custom_serializer = orjson.dumps + custom_deserializer = orjson.loads logger = Logger(service="payment", json_serializer=custom_serializer, json_deserializer=custom_deserializer - ) + ) - # when using parameters, you can pass a partial - # custom_serializer=functools.partial(orjson.dumps, option=orjson.OPT_SERIALIZE_NUMPY) + # when using parameters, you can pass a partial + # custom_serializer=functools.partial(orjson.dumps, option=orjson.OPT_SERIALIZE_NUMPY) ``` ## Built-in Correlation ID expressions @@ -949,7 +952,7 @@ As parameters don't always translate well between them, you can pass any callabl You can use any of the following built-in JMESPath expressions as part of [inject_lambda_context decorator](#setting-a-correlation-id). !!! note "Escaping necessary for the `-` character" - Any object key named with `-` must be escaped, for example **`request.headers."x-amzn-trace-id"`**. + Any object key named with `-` must be escaped, for example **`request.headers."x-amzn-trace-id"`**. 
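As a quick illustration of this escaping rule, here's a minimal sketch passing such an expression yourself; it assumes an API Gateway REST event where `headers` sits at the top level of the event, and the header name is only illustrative:

```python
from aws_lambda_powertools import Logger

logger = Logger(service="payment")

# the inner double quotes escape the `-` characters in the header name
@logger.inject_lambda_context(correlation_id_path='headers."x-amzn-trace-id"')
def handler(event, context):
    logger.info("Collecting payment")
```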
Name | Expression | Description ------------------------------------------------- | ------------------------------------------------- | --------------------------------------------------------------------------------- @@ -968,21 +971,21 @@ When unit testing your code that makes use of `inject_lambda_context` decorator, This is a Pytest sample that provides the minimum information necessary for Logger to succeed: === "fake_lambda_context_for_logger.py" - Note that dataclasses are available in Python 3.7+ only. + Note that dataclasses are available in Python 3.7+ only. ```python - from dataclasses import dataclass + from dataclasses import dataclass - import pytest + import pytest @pytest.fixture def lambda_context(): - @dataclass - class LambdaContext: - function_name: str = "test" - memory_limit_in_mb: int = 128 - invoked_function_arn: str = "arn:aws:lambda:eu-west-1:809313241:function:test" - aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72" + @dataclass + class LambdaContext: + function_name: str = "test" + memory_limit_in_mb: int = 128 + invoked_function_arn: str = "arn:aws:lambda:eu-west-1:809313241:function:test" + aws_request_id: str = "52fdfc07-2182-154f-163f-5f0f9a621d72" return LambdaContext() @@ -991,10 +994,11 @@ This is a Pytest sample that provides the minimum information necessary for Logg your_lambda_handler(test_event, lambda_context) # this will now have a Context object populated ``` === "fake_lambda_context_for_logger_py36.py" + ```python - from collections import namedtuple + from collections import namedtuple - import pytest + import pytest @pytest.fixture def lambda_context(): @@ -1010,20 +1014,22 @@ This is a Pytest sample that provides the minimum information necessary for Logg def test_lambda_handler(lambda_context): test_event = {'test': 'event'} - # this will now have a Context object populated - your_lambda_handler(test_event, lambda_context) + # this will now have a Context object populated + your_lambda_handler(test_event, lambda_context) ``` !!! tip - If you're using pytest and are looking to assert plain log messages, do check out the built-in [caplog fixture](https://docs.pytest.org/en/latest/how-to/logging.html){target="_blank"}. + If you're using pytest and are looking to assert plain log messages, do check out the built-in [caplog fixture](https://docs.pytest.org/en/latest/how-to/logging.html){target="_blank"}. ### Pytest live log feature Pytest Live Log feature duplicates emitted log messages in order to style log statements according to their levels, for this to work use `POWERTOOLS_LOG_DEDUPLICATION_DISABLED` env var. -```bash -POWERTOOLS_LOG_DEDUPLICATION_DISABLED="1" pytest -o log_cli=1 -``` +=== "shell" + + ```bash + POWERTOOLS_LOG_DEDUPLICATION_DISABLED="1" pytest -o log_cli=1 + ``` !!! warning This feature should be used with care, as it explicitly disables our ability to filter propagated messages to the root logger (if configured). @@ -1069,40 +1075,40 @@ Here's an example where we persist `payment_id` not `request_id`. Note that `pay logger = Logger(service="payment") - def handler(event, context): - logger.append_keys(payment_id="123456789") + def handler(event, context): + logger.append_keys(payment_id="123456789") - try: - booking_id = book_flight() - logger.info("Flight booked successfully", extra={ "booking_id": booking_id}) - except BookingReservationError: - ... + try: + booking_id = book_flight() + logger.info("Flight booked successfully", extra={ "booking_id": booking_id}) + except BookingReservationError: + ... 
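+        # payment_id (set via append_keys) persists into the "goodbye" entry below,
+        # while booking_id (passed via extra) appears only in the entry above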
- logger.info("goodbye") + logger.info("goodbye") ``` === "Example CloudWatch Logs excerpt" - ```json hl_lines="8-9 18" - { - "level": "INFO", - "location": ":10", - "message": "Flight booked successfully", - "timestamp": "2021-01-12 14:09:10,859", - "service": "payment", - "sampling_rate": 0.0, - "payment_id": "123456789", - "booking_id": "75edbad0-0857-4fc9-b547-6180e2f7959b" - }, - { - "level": "INFO", - "location": ":14", - "message": "goodbye", - "timestamp": "2021-01-12 14:09:10,860", - "service": "payment", - "sampling_rate": 0.0, - "payment_id": "123456789" - } - ``` + ```json hl_lines="8-9 18" + { + "level": "INFO", + "location": ":10", + "message": "Flight booked successfully", + "timestamp": "2021-01-12 14:09:10,859", + "service": "payment", + "sampling_rate": 0.0, + "payment_id": "123456789", + "booking_id": "75edbad0-0857-4fc9-b547-6180e2f7959b" + }, + { + "level": "INFO", + "location": ":14", + "message": "goodbye", + "timestamp": "2021-01-12 14:09:10,860", + "service": "payment", + "sampling_rate": 0.0, + "payment_id": "123456789" + } + ``` **How do I aggregate and search Powertools logs across accounts?** diff --git a/docs/core/metrics.md b/docs/core/metrics.md index b556dce2a9e..d4bd9a0727e 100644 --- a/docs/core/metrics.md +++ b/docs/core/metrics.md @@ -26,7 +26,6 @@ If you're new to Amazon CloudWatch, there are two terminologies you must be awar
Metric terminology, visually explained
-
## Getting started

Metric has two global settings that will be used across all metrics emitted:

@@ -54,7 +53,6 @@ Setting | Description | Environment variable | Constructor parameter
       POWERTOOLS_METRICS_NAMESPACE: ServerlessAirline
    ```

-
=== "app.py"

    ```python hl_lines="4 6"
@@ -62,7 +60,7 @@ Setting | Description | Environment variable | Constructor parameter
    from aws_lambda_powertools import Metrics
    from aws_lambda_powertools.metrics import MetricUnit

    metrics = Metrics() # Sets metric namespace and service via env var
-    # OR
+    # OR
    metrics = Metrics(namespace="ServerlessAirline", service="orders") # Sets metric namespace, and service as a metric dimension
    ```

@@ -80,9 +78,9 @@ You can create metrics using `add_metric`, and you can create dimensions for all

    metrics = Metrics(namespace="ExampleApplication", service="booking")

-    @metrics.log_metrics
-    def lambda_handler(evt, ctx):
-        metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
+    @metrics.log_metrics
+    def lambda_handler(evt, ctx):
+        metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    ```

=== "Metrics with custom dimensions"

@@ -92,20 +90,20 @@ You can create metrics using `add_metric`, and you can create dimensions for all

    metrics = Metrics(namespace="ExampleApplication", service="booking")

-    @metrics.log_metrics
-    def lambda_handler(evt, ctx):
-        metrics.add_dimension(name="environment", value="prod")
-        metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
+    @metrics.log_metrics
+    def lambda_handler(evt, ctx):
+        metrics.add_dimension(name="environment", value="prod")
+        metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    ```

!!! tip "Autocomplete Metric Units"
-    `MetricUnit` enum facilitate finding a supported metric unit by CloudWatch. Alternatively, you can pass the value as a string if you already know them e.g. "Count".
+    The `MetricUnit` enum facilitates finding a metric unit supported by CloudWatch. Alternatively, you can pass the value as a string if you already know it, e.g. "Count".

!!! note "Metrics overflow"
-    CloudWatch EMF supports a max of 100 metrics per batch. Metrics utility will flush all metrics when adding the 100th metric. Subsequent metrics, e.g. 101th, will be aggregated into a new EMF object, for your convenience.
+    CloudWatch EMF supports a max of 100 metrics per batch. The Metrics utility will flush all metrics when adding the 100th metric. Subsequent metrics, e.g. the 101st, will be aggregated into a new EMF object for your convenience.

!!! warning "Do not create metrics or dimensions outside the handler"
-    Metrics or dimensions added in the global scope will only be added during cold start. Disregard if you that's the intended behaviour.
+    Metrics or dimensions added in the global scope will only be added during cold start. Disregard if that's the intended behaviour.
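To make the cold start caveat above concrete, here's a minimal sketch contrasting both placements; the metric and dimension names are illustrative:

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")

# Anti-pattern: module scope runs once per execution environment,
# so this dimension would only be recorded at cold start
# metrics.add_dimension(name="environment", value="prod")

@metrics.log_metrics
def lambda_handler(evt, ctx):
    # Correct: metrics and dimensions added here are recorded on every invocation
    metrics.add_dimension(name="environment", value="prod")
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
```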
### Adding default dimensions @@ -116,29 +114,29 @@ If you'd like to remove them at some point, you can use `clear_default_dimension === "set_default_dimensions method" ```python hl_lines="5" - from aws_lambda_powertools import Metrics - from aws_lambda_powertools.metrics import MetricUnit + from aws_lambda_powertools import Metrics + from aws_lambda_powertools.metrics import MetricUnit - metrics = Metrics(namespace="ExampleApplication", service="booking") - metrics.set_default_dimensions(environment="prod", another="one") + metrics = Metrics(namespace="ExampleApplication", service="booking") + metrics.set_default_dimensions(environment="prod", another="one") - @metrics.log_metrics - def lambda_handler(evt, ctx): - metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) - ``` + @metrics.log_metrics + def lambda_handler(evt, ctx): + metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) + ``` === "with log_metrics decorator" ```python hl_lines="5 7" - from aws_lambda_powertools import Metrics - from aws_lambda_powertools.metrics import MetricUnit + from aws_lambda_powertools import Metrics + from aws_lambda_powertools.metrics import MetricUnit - metrics = Metrics(namespace="ExampleApplication", service="booking") - DEFAULT_DIMENSIONS = {"environment": "prod", "another": "one"} + metrics = Metrics(namespace="ExampleApplication", service="booking") + DEFAULT_DIMENSIONS = {"environment": "prod", "another": "one"} - @metrics.log_metrics(default_dimensions=DEFAULT_DIMENSIONS) - def lambda_handler(evt, ctx): - metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) - ``` + @metrics.log_metrics(default_dimensions=DEFAULT_DIMENSIONS) + def lambda_handler(evt, ctx): + metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) + ``` ### Flushing metrics @@ -161,37 +159,37 @@ This decorator also **validates**, **serializes**, and **flushes** all your metr === "Example CloudWatch Logs excerpt" ```json hl_lines="2 7 10 15 22" - { - "BookingConfirmation": 1.0, - "_aws": { - "Timestamp": 1592234975665, - "CloudWatchMetrics": [ - { - "Namespace": "ExampleApplication", - "Dimensions": [ - [ - "service" - ] - ], - "Metrics": [ - { - "Name": "BookingConfirmation", - "Unit": "Count" - } - ] - } - ] - }, - "service": "ExampleService" - } + { + "BookingConfirmation": 1.0, + "_aws": { + "Timestamp": 1592234975665, + "CloudWatchMetrics": [ + { + "Namespace": "ExampleApplication", + "Dimensions": [ + [ + "service" + ] + ], + "Metrics": [ + { + "Name": "BookingConfirmation", + "Unit": "Count" + } + ] + } + ] + }, + "service": "ExampleService" + } ``` !!! tip "Metric validation" - If metrics are provided, and any of the following criteria are not met, **`SchemaValidationError`** exception will be raised: + If metrics are provided, and any of the following criteria are not met, **`SchemaValidationError`** exception will be raised: - * Maximum of 9 dimensions - * Namespace is set, and no more than one - * Metric units must be [supported by CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) + * Maximum of 9 dimensions + * Namespace is set, and no more than one + * Metric units must be [supported by CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) #### Raising SchemaValidationError on empty metrics @@ -210,7 +208,7 @@ If you want to ensure that at least one metric is emitted, you can pass `raise_o ``` !!! 
tip "Suppressing warning messages on empty metrics" - If you expect your function to execute without publishing metrics every time, you can suppress the warning with **`warnings.filterwarnings("ignore", "No metrics to publish*")`**. + If you expect your function to execute without publishing metrics every time, you can suppress the warning with **`warnings.filterwarnings("ignore", "No metrics to publish*")`**. #### Nesting multiple middlewares @@ -222,7 +220,7 @@ When using multiple middlewares, use `log_metrics` as your **last decorator** wr from aws_lambda_powertools import Metrics, Tracer from aws_lambda_powertools.metrics import MetricUnit - tracer = Tracer(service="booking") + tracer = Tracer(service="booking") metrics = Metrics(namespace="ExampleApplication", service="booking") @metrics.log_metrics @@ -273,39 +271,39 @@ You can add high-cardinality data as part of your Metrics log with `add_metadata metrics = Metrics(namespace="ExampleApplication", service="booking") - @metrics.log_metrics - def lambda_handler(evt, ctx): - metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) - metrics.add_metadata(key="booking_id", value="booking_uuid") + @metrics.log_metrics + def lambda_handler(evt, ctx): + metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) + metrics.add_metadata(key="booking_id", value="booking_uuid") ``` === "Example CloudWatch Logs excerpt" ```json hl_lines="23" - { - "SuccessfulBooking": 1.0, - "_aws": { - "Timestamp": 1592234975665, - "CloudWatchMetrics": [ - { - "Namespace": "ExampleApplication", - "Dimensions": [ - [ - "service" - ] - ], - "Metrics": [ - { - "Name": "SuccessfulBooking", - "Unit": "Count" - } - ] - } - ] - }, - "service": "booking", - "booking_id": "booking_uuid" - } + { + "SuccessfulBooking": 1.0, + "_aws": { + "Timestamp": 1592234975665, + "CloudWatchMetrics": [ + { + "Namespace": "ExampleApplication", + "Dimensions": [ + [ + "service" + ] + ], + "Metrics": [ + { + "Name": "SuccessfulBooking", + "Unit": "Count" + } + ] + } + ] + }, + "service": "booking", + "booking_id": "booking_uuid" + } ``` ### Single metric with a different dimension @@ -324,10 +322,10 @@ CloudWatch EMF uses the same dimensions across all your metrics. Use `single_met from aws_lambda_powertools.metrics import MetricUnit - def lambda_handler(evt, ctx): - with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ExampleApplication") as metric: - metric.add_dimension(name="function_context", value="$LATEST") - ... + def lambda_handler(evt, ctx): + with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ExampleApplication") as metric: + metric.add_dimension(name="function_context", value="$LATEST") + ... 
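+            # the ColdStart metric above is serialized to EMF and flushed to
+            # standard output as soon as this context manager exits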
``` ### Flushing metrics manually @@ -346,11 +344,11 @@ If you prefer not to use `log_metrics` because you might want to encapsulate add metrics = Metrics(namespace="ExampleApplication", service="booking") - def lambda_handler(evt, ctx): - metrics.add_metric(name="ColdStart", unit=MetricUnit.Count, value=1) - your_metrics_object = metrics.serialize_metric_set() - metrics.clear_metrics() - print(json.dumps(your_metrics_object)) + def lambda_handler(evt, ctx): + metrics.add_metric(name="ColdStart", unit=MetricUnit.Count, value=1) + your_metrics_object = metrics.serialize_metric_set() + metrics.clear_metrics() + print(json.dumps(your_metrics_object)) ``` ## Testing your code @@ -359,9 +357,11 @@ If you prefer not to use `log_metrics` because you might want to encapsulate add Use `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation. -```bash -POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest -``` +=== "shell" + + ```bash + POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest + ``` If you prefer setting environment variable for specific tests, and are using Pytest, you can use [monkeypatch](https://docs.pytest.org/en/latest/monkeypatch.html) fixture: @@ -401,68 +401,68 @@ As metrics are logged to standard output, you can read standard output and asser === "Assert single EMF blob with pytest.py" - ```python hl_lines="6 9-10 23-34" - from aws_lambda_powertools import Metrics - from aws_lambda_powertools.metrics import MetricUnit - - import json - - def test_log_metrics(capsys): - # GIVEN Metrics is initialized - metrics = Metrics(namespace="ServerlessAirline") - - # WHEN we utilize log_metrics to serialize - # and flush all metrics at the end of a function execution - @metrics.log_metrics - def lambda_handler(evt, ctx): - metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) - metrics.add_dimension(name="environment", value="prod") + ```python hl_lines="6 9-10 23-34" + from aws_lambda_powertools import Metrics + from aws_lambda_powertools.metrics import MetricUnit - lambda_handler({}, {}) - log = capsys.readouterr().out.strip() # remove any extra line - metrics_output = json.loads(log) # deserialize JSON str + import json - # THEN we should have no exceptions - # and a valid EMF object should be flushed correctly - assert "SuccessfulBooking" in log # basic string assertion in JSON str - assert "SuccessfulBooking" in metrics_output["_aws"]["CloudWatchMetrics"][0]["Metrics"][0]["Name"] - ``` + def test_log_metrics(capsys): + # GIVEN Metrics is initialized + metrics = Metrics(namespace="ServerlessAirline") + + # WHEN we utilize log_metrics to serialize + # and flush all metrics at the end of a function execution + @metrics.log_metrics + def lambda_handler(evt, ctx): + metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1) + metrics.add_dimension(name="environment", value="prod") + + lambda_handler({}, {}) + log = capsys.readouterr().out.strip() # remove any extra line + metrics_output = json.loads(log) # deserialize JSON str + + # THEN we should have no exceptions + # and a valid EMF object should be flushed correctly + assert "SuccessfulBooking" in log # basic string assertion in JSON str + assert "SuccessfulBooking" in metrics_output["_aws"]["CloudWatchMetrics"][0]["Metrics"][0]["Name"] + ``` === "Assert multiple EMF blobs with 
pytest"

-    ```python hl_lines="8-9 11 21-23 25 29-30 32"
-    from aws_lambda_powertools import Metrics
-    from aws_lambda_powertools.metrics import MetricUnit
+    ```python hl_lines="8-9 11 21-23 25 29-30 32"
+    from aws_lambda_powertools import Metrics
+    from aws_lambda_powertools.metrics import MetricUnit

-    from collections import namedtuple
+    from collections import namedtuple

-    import json
+    import json

-    def capture_metrics_output_multiple_emf_objects(capsys):
-        return [json.loads(line.strip()) for line in capsys.readouterr().out.split("\n") if line]
+    def capture_metrics_output_multiple_emf_objects(capsys):
+        return [json.loads(line.strip()) for line in capsys.readouterr().out.split("\n") if line]

-    def test_log_metrics(capsys):
-        # GIVEN Metrics is initialized
-        metrics = Metrics(namespace="ServerlessAirline")
+    def test_log_metrics(capsys):
+        # GIVEN Metrics is initialized
+        metrics = Metrics(namespace="ServerlessAirline")

-        # WHEN log_metrics is used with capture_cold_start_metric
-        @metrics.log_metrics(capture_cold_start_metric=True)
-        def lambda_handler(evt, ctx):
-            metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
-            metrics.add_dimension(name="environment", value="prod")
+        # WHEN log_metrics is used with capture_cold_start_metric
+        @metrics.log_metrics(capture_cold_start_metric=True)
+        def lambda_handler(evt, ctx):
+            metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
+            metrics.add_dimension(name="environment", value="prod")

-        # log_metrics uses function_name property from context to add as a dimension for cold start metric
-        LambdaContext = namedtuple("LambdaContext", "function_name")
-        lambda_handler({}, LambdaContext("example_fn")
+        # log_metrics uses function_name property from context to add as a dimension for cold start metric
+        LambdaContext = namedtuple("LambdaContext", "function_name")
+        lambda_handler({}, LambdaContext("example_fn"))

-        cold_start_blob, custom_metrics_blob = capture_metrics_output_multiple_emf_objects(capsys)
+        cold_start_blob, custom_metrics_blob = capture_metrics_output_multiple_emf_objects(capsys)

-        # THEN ColdStart metric and function_name dimension should be logged
-        # in a separate EMF blob than the application metrics
-        assert cold_start_blob["ColdStart"] == [1.0]
-        assert cold_start_blob["function_name"] == "example_fn"
+        # THEN ColdStart metric and function_name dimension should be logged
+        # in a separate EMF blob from the application metrics
+        assert cold_start_blob["ColdStart"] == [1.0]
+        assert cold_start_blob["function_name"] == "example_fn"

-        assert "SuccessfulBooking" in custom_metrics_blob # as per previous example
-    ```
+        assert "SuccessfulBooking" in custom_metrics_blob  # as per previous example
+    ```

!!!
tip "For more elaborate assertions and comparisons, check out [our functional testing for Metrics utility](https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/tests/functional/test_metrics.py)" diff --git a/docs/core/tracer.md b/docs/core/tracer.md index c6f2baa59fd..e2e2df52e18 100644 --- a/docs/core/tracer.md +++ b/docs/core/tracer.md @@ -23,6 +23,7 @@ Before your use this utility, your AWS Lambda function [must have permissions](h > Example using AWS Serverless Application Model (SAM) === "template.yml" + ```yaml hl_lines="6 9" Resources: HelloWorldFunction: @@ -40,18 +41,19 @@ Before your use this utility, your AWS Lambda function [must have permissions](h You can quickly start by importing the `Tracer` class, initialize it outside the Lambda handler, and use `capture_lambda_handler` decorator. === "app.py" - ```python hl_lines="1 3 6" - from aws_lambda_powertools import Tracer - tracer = Tracer() # Sets service via env var - # OR tracer = Tracer(service="example") + ```python hl_lines="1 3 6" + from aws_lambda_powertools import Tracer + + tracer = Tracer() # Sets service via env var + # OR tracer = Tracer(service="example") - @tracer.capture_lambda_handler - def handler(event, context): - charge_id = event.get('charge_id') - payment = collect_payment(charge_id) - ... - ``` + @tracer.capture_lambda_handler + def handler(event, context): + charge_id = event.get('charge_id') + payment = collect_payment(charge_id) + ... + ``` When using this `capture_lambda_handler` decorator, Tracer performs these additional tasks to ease operations: @@ -65,7 +67,7 @@ When using this `capture_lambda_handler` decorator, Tracer performs these additi **Metadata** are key-values also associated with traces but not indexed by AWS X-Ray. You can use them to add additional context for an operation using any native object. === "Annotations" - You can add annotations using `put_annotation` method. + You can add annotations using `put_annotation` method. ```python hl_lines="7" from aws_lambda_powertools import Tracer @@ -77,7 +79,7 @@ When using this `capture_lambda_handler` decorator, Tracer performs these additi tracer.put_annotation(key="PaymentStatus", value="SUCCESS") ``` === "Metadata" - You can add metadata using `put_metadata` method. + You can add metadata using `put_metadata` method. ```python hl_lines="8" from aws_lambda_powertools import Tracer @@ -101,13 +103,13 @@ You can trace synchronous functions using the `capture_method` decorator. unintended consequences if there are side effects to recursively reading the returned value, for example if the decorated function response contains a file-like object or a `StreamingBody` for S3 objects. -```python hl_lines="7 13" -@tracer.capture_method -def collect_payment(charge_id): - ret = requests.post(PAYMENT_ENDPOINT) # logic - tracer.put_annotation("PAYMENT_STATUS", "SUCCESS") # custom annotation - return ret -``` + ```python hl_lines="7 13" + @tracer.capture_method + def collect_payment(charge_id): + ret = requests.post(PAYMENT_ENDPOINT) # logic + tracer.put_annotation("PAYMENT_STATUS", "SUCCESS") # custom annotation + return ret + ``` ### Asynchronous and generator functions @@ -116,7 +118,6 @@ def collect_payment(charge_id): You can trace asynchronous functions and generator functions (including context managers) using `capture_method`. 
- === "Async" ```python hl_lines="7" @@ -164,17 +165,18 @@ You can trace asynchronous functions and generator functions (including context The decorator will detect whether your function is asynchronous, a generator, or a context manager and adapt its behaviour accordingly. -```python -@tracer.capture_lambda_handler -def handler(evt, ctx): - asyncio.run(collect_payment()) +=== "app.py" - with collect_payment_ctxman as result: - do_something_with(result) + ```python + @tracer.capture_lambda_handler + def handler(evt, ctx): + asyncio.run(collect_payment()) - another_result = list(collect_payment_gen()) -``` + with collect_payment_ctxman as result: + do_something_with(result) + another_result = list(collect_payment_gen()) + ``` ## Advanced @@ -184,15 +186,17 @@ Tracer automatically patches all [supported libraries by X-Ray](https://docs.aws If you're looking to shave a few microseconds, or milliseconds depending on your function memory configuration, you can patch specific modules using `patch_modules` param: -```python hl_lines="7" -import boto3 -import requests +=== "app.py" -from aws_lambda_powertools import Tracer + ```python hl_lines="7" + import boto3 + import requests -modules_to_be_patched = ["boto3", "requests"] -tracer = Tracer(patch_modules=modules_to_be_patched) -``` + from aws_lambda_powertools import Tracer + + modules_to_be_patched = ["boto3", "requests"] + tracer = Tracer(patch_modules=modules_to_be_patched) + ``` ### Disabling response auto-capture @@ -202,32 +206,34 @@ Use **`capture_response=False`** parameter in both `capture_lambda_handler` and !!! info "This is commonly useful in three scenarios" - 1. You might **return sensitive** information you don't want it to be added to your traces - 2. You might manipulate **streaming objects that can be read only once**; this prevents subsequent calls from being empty - 3. You might return **more than 64K** of data _e.g., `message too long` error_ + 1. You might **return sensitive** information you don't want it to be added to your traces + 2. You might manipulate **streaming objects that can be read only once**; this prevents subsequent calls from being empty + 3. 
You might return **more than 64K** of data _e.g., `message too long` error_

=== "sensitive_data_scenario.py"
-    ```python hl_lines="3 7"
-    from aws_lambda_powertools import Tracer
-    @tracer.capture_method(capture_response=False)
-    def fetch_sensitive_information():
-        return "sensitive_information"
-
-    @tracer.capture_lambda_handler(capture_response=False)
-    def handler(event, context):
-        sensitive_information = fetch_sensitive_information()
-    ```
+    ```python hl_lines="5 9"
+    from aws_lambda_powertools import Tracer
+
+    tracer = Tracer()
+
+    @tracer.capture_method(capture_response=False)
+    def fetch_sensitive_information():
+        return "sensitive_information"
+
+    @tracer.capture_lambda_handler(capture_response=False)
+    def handler(event, context):
+        sensitive_information = fetch_sensitive_information()
+    ```

=== "streaming_object_scenario.py"
-    ```python hl_lines="3"
-    from aws_lambda_powertools import Tracer
-    @tracer.capture_method(capture_response=False)
-    def get_s3_object(bucket_name, object_key):
-        s3 = boto3.client("s3")
-        s3_object = get_object(Bucket=bucket_name, Key=object_key)
-        return s3_object
-    ```
+    ```python hl_lines="7"
+    import boto3
+
+    from aws_lambda_powertools import Tracer
+
+    tracer = Tracer()
+
+    @tracer.capture_method(capture_response=False)
+    def get_s3_object(bucket_name, object_key):
+        s3 = boto3.client("s3")
+        s3_object = s3.get_object(Bucket=bucket_name, Key=object_key)
+        return s3_object
+    ```

### Disabling exception auto-capture

@@ -237,16 +243,17 @@ Use **`capture_error=False`** parameter in both `capture_lambda_handler` and `ca

!!! info "Commonly useful in one scenario"

-    1. You might **return sensitive** information from exceptions, stack traces you might not control
+    1. You might **return sensitive** information from exceptions or stack traces you might not control

=== "sensitive_data_exception.py"
-    ```python hl_lines="3 5"
-    from aws_lambda_powertools import Tracer
-    @tracer.capture_lambda_handler(capture_error=False)
-    def handler(event, context):
-        raise ValueError("some sensitive info in the stack trace...")
-    ```
+    ```python hl_lines="5 7"
+    from aws_lambda_powertools import Tracer
+
+    tracer = Tracer()
+
+    @tracer.capture_lambda_handler(capture_error=False)
+    def handler(event, context):
+        raise ValueError("some sensitive info in the stack trace...")
+    ```

### Tracing aiohttp requests

@@ -256,21 +263,22 @@

You can use the `aiohttp_trace_config` function to create a valid [aiohttp trace_config object](https://docs.aiohttp.org/en/stable/tracing_reference.html). This is necessary since X-Ray utilizes aiohttp trace hooks to capture requests end-to-end.
=== "aiohttp_example.py" - ```python hl_lines="5 10" - import asyncio - import aiohttp - from aws_lambda_powertools import Tracer - from aws_lambda_powertools.tracing import aiohttp_trace_config + ```python hl_lines="5 10" + import asyncio + import aiohttp - tracer = Tracer() + from aws_lambda_powertools import Tracer + from aws_lambda_powertools.tracing import aiohttp_trace_config - async def aiohttp_task(): - async with aiohttp.ClientSession(trace_configs=[aiohttp_trace_config()]) as session: - async with session.get("https://httpbin.org/json") as resp: - resp = await resp.json() - return resp - ``` + tracer = Tracer() + + async def aiohttp_task(): + async with aiohttp.ClientSession(trace_configs=[aiohttp_trace_config()]) as session: + async with session.get("https://httpbin.org/json") as resp: + resp = await resp.json() + return resp + ``` ### Escape hatch mechanism @@ -279,17 +287,18 @@ You can use `tracer.provider` attribute to access all methods provided by AWS X- This is useful when you need a feature available in X-Ray that is not available in the Tracer utility, for example [thread-safe](https://github.com/aws/aws-xray-sdk-python/#user-content-trace-threadpoolexecutor), or [context managers](https://github.com/aws/aws-xray-sdk-python/#user-content-start-a-custom-segmentsubsegment). === "escape_hatch_context_manager_example.py" - ```python hl_lines="7" - from aws_lambda_powertools import Tracer - tracer = Tracer() + ```python hl_lines="7" + from aws_lambda_powertools import Tracer + + tracer = Tracer() - @tracer.capture_lambda_handler - def handler(event, context): - with tracer.provider.in_subsegment('## custom subsegment') as subsegment: - ret = some_work() - subsegment.put_metadata('response', ret) - ``` + @tracer.capture_lambda_handler + def handler(event, context): + with tracer.provider.in_subsegment('## custom subsegment') as subsegment: + ret = some_work() + subsegment.put_metadata('response', ret) + ``` ### Concurrent asynchronous functions @@ -299,26 +308,27 @@ This is useful when you need a feature available in X-Ray that is not available A safe workaround mechanism is to use `in_subsegment_async` available via Tracer escape hatch (`tracer.provider`). === "concurrent_async_workaround.py" - ```python hl_lines="6 7 12 15 17" - import asyncio - from aws_lambda_powertools import Tracer - tracer = Tracer() + ```python hl_lines="6 7 12 15 17" + import asyncio + + from aws_lambda_powertools import Tracer + tracer = Tracer() - async def another_async_task(): - async with tracer.provider.in_subsegment_async("## another_async_task") as subsegment: - subsegment.put_annotation(key="key", value="value") - subsegment.put_metadata(key="key", value="value", namespace="namespace") - ... + async def another_async_task(): + async with tracer.provider.in_subsegment_async("## another_async_task") as subsegment: + subsegment.put_annotation(key="key", value="value") + subsegment.put_metadata(key="key", value="value", namespace="namespace") + ... - async def another_async_task_2(): - ... + async def another_async_task_2(): + ... - @tracer.capture_method - async def collect_payment(charge_id): - asyncio.gather(another_async_task(), another_async_task_2()) - ... - ``` + @tracer.capture_method + async def collect_payment(charge_id): + asyncio.gather(another_async_task(), another_async_task_2()) + ... + ``` ### Reusing Tracer across your code @@ -330,29 +340,30 @@ Tracer keeps a copy of its configuration after the first initialization. 
This is This can result in the first Tracer config being inherited by new instances, and their modules not being patched. === "handler.py" - ```python hl_lines="2 4 9" - from aws_lambda_powertools import Tracer - from payment import collect_payment - tracer = Tracer(service="payment") + ```python hl_lines="2 4 9" + from aws_lambda_powertools import Tracer + from payment import collect_payment + + tracer = Tracer(service="payment") - @tracer.capture_lambda_handler - def handler(event, context): - charge_id = event.get('charge_id') - payment = collect_payment(charge_id) - ``` + @tracer.capture_lambda_handler + def handler(event, context): + charge_id = event.get('charge_id') + payment = collect_payment(charge_id) + ``` === "payment.py" - A new instance of Tracer will be created but will reuse the previous Tracer instance configuration, similar to a Singleton. + A new instance of Tracer will be created but will reuse the previous Tracer instance configuration, similar to a Singleton. - ```python hl_lines="3 5" - from aws_lambda_powertools import Tracer + ```python hl_lines="3 5" + from aws_lambda_powertools import Tracer - tracer = Tracer(service="payment") + tracer = Tracer(service="payment") - @tracer.capture_method + @tracer.capture_method def collect_payment(charge_id: str): ... - ``` + ``` ## Testing your code diff --git a/docs/index.md b/docs/index.md index bd9d7875ece..781a96e2eb3 100644 --- a/docs/index.md +++ b/docs/index.md @@ -6,7 +6,7 @@ description: AWS Lambda Powertools Python A suite of utilities for AWS Lambda functions to ease adopting best practices such as tracing, structured logging, custom metrics, and more. !!! tip "Looking for a quick read through how the core features are used?" - Check out [this detailed blog post](https://aws.amazon.com/blogs/opensource/simplifying-serverless-best-practices-with-lambda-powertools/) with a practical example. + Check out [this detailed blog post](https://aws.amazon.com/blogs/opensource/simplifying-serverless-best-practices-with-lambda-powertools/) with a practical example. ## Tenets @@ -28,9 +28,11 @@ Powertools is available in PyPi. You can use your favourite dependency managemen **Quick hello world example using SAM CLI** -```bash -sam init --location https://github.com/aws-samples/cookiecutter-aws-sam-python -``` +=== "shell" + + ```bash + sam init --location https://github.com/aws-samples/cookiecutter-aws-sam-python + ``` ### Lambda Layer @@ -44,62 +46,61 @@ Powertools is also available as a Lambda Layer, and it is distributed via the [A !!! warning **Layer-extras** does not support Python 3.6 runtime. This layer also includes all extra dependencies: `22.4MB zipped`, `~155MB unzipped`. - If using SAM, you can include this SAR App as part of your shared Layers stack, and lock to a specific semantic version. Once deployed, it'll be available across the account this is deployed to. 
=== "SAM" - ```yaml hl_lines="5-6 12-13" - AwsLambdaPowertoolsPythonLayer: - Type: AWS::Serverless::Application - Properties: - Location: - ApplicationId: arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer - SemanticVersion: 1.17.0 # change to latest semantic version available in SAR - - MyLambdaFunction: - Type: AWS::Serverless::Function - Properties: - Layers: - # fetch Layer ARN from SAR App stack output - - !GetAtt AwsLambdaPowertoolsPythonLayer.Outputs.LayerVersionArn - ``` + ```yaml hl_lines="5-6 12-13" + AwsLambdaPowertoolsPythonLayer: + Type: AWS::Serverless::Application + Properties: + Location: + ApplicationId: arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer + SemanticVersion: 1.17.0 # change to latest semantic version available in SAR + + MyLambdaFunction: + Type: AWS::Serverless::Function + Properties: + Layers: + # fetch Layer ARN from SAR App stack output + - !GetAtt AwsLambdaPowertoolsPythonLayer.Outputs.LayerVersionArn + ``` === "Serverless framework" - ```yaml hl_lines="5 8 10-11" - functions: - main: - handler: lambda_function.lambda_handler - layers: - - !GetAtt AwsLambdaPowertoolsPythonLayer.Outputs.LayerVersionArn - - resources: - Transform: AWS::Serverless-2016-10-31 - Resources: - AwsLambdaPowertoolsPythonLayer: - Type: AWS::Serverless::Application - Properties: - Location: - ApplicationId: arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer - # Find latest from github.com/awslabs/aws-lambda-powertools-python/releases - SemanticVersion: 1.17.0 - ``` + ```yaml hl_lines="5 8 10-11" + functions: + main: + handler: lambda_function.lambda_handler + layers: + - !GetAtt AwsLambdaPowertoolsPythonLayer.Outputs.LayerVersionArn + + resources: + Transform: AWS::Serverless-2016-10-31 + Resources: + AwsLambdaPowertoolsPythonLayer: + Type: AWS::Serverless::Application + Properties: + Location: + ApplicationId: arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer + # Find latest from github.com/awslabs/aws-lambda-powertools-python/releases + SemanticVersion: 1.17.0 + ``` === "CDK" - ```python hl_lines="14 22-23 31" - from aws_cdk import core, aws_sam as sam, aws_lambda + ```python hl_lines="14 22-23 31" + from aws_cdk import core, aws_sam as sam, aws_lambda POWERTOOLS_BASE_NAME = 'AWSLambdaPowertools' # Find latest from github.com/awslabs/aws-lambda-powertools-python/releases POWERTOOLS_VER = '1.17.0' POWERTOOLS_ARN = 'arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer' - class SampleApp(core.Construct): + class SampleApp(core.Construct): - def __init__(self, scope: core.Construct, id_: str) -> None: - super().__init__(scope, id_) + def __init__(self, scope: core.Construct, id_: str) -> None: + super().__init__(scope, id_) # Launches SAR App as CloudFormation nested stack and return Lambda Layer powertools_app = sam.CfnApplication(self, @@ -114,86 +115,88 @@ If using SAM, you can include this SAR App as part of your shared Layers stack, powertools_layer_version = aws_lambda.LayerVersion.from_layer_version_arn(self, f'{POWERTOOLS_BASE_NAME}', powertools_layer_arn) aws_lambda.Function(self, - 'sample-app-lambda', + 'sample-app-lambda', runtime=aws_lambda.Runtime.PYTHON_3_8, function_name='sample-lambda', code=aws_lambda.Code.asset('./src'), handler='app.handler', layers: [powertools_layer_version] ) - ``` + ``` ??? 
tip "Example of least-privileged IAM permissions to deploy Layer" - > Credits to [mwarkentin](https://github.com/mwarkentin) for providing the scoped down IAM permissions. - - The region and the account id for `CloudFormationTransform` and `GetCfnTemplate` are fixed. - - === "template.yml" - - ```yaml hl_lines="21-52" - AWSTemplateFormatVersion: "2010-09-09" - Resources: - PowertoolsLayerIamRole: - Type: "AWS::IAM::Role" - Properties: - AssumeRolePolicyDocument: - Version: "2012-10-17" - Statement: - - Effect: "Allow" - Principal: - Service: - - "cloudformation.amazonaws.com" - Action: - - "sts:AssumeRole" - Path: "/" - PowertoolsLayerIamPolicy: - Type: "AWS::IAM::Policy" - Properties: - PolicyName: PowertoolsLambdaLayerPolicy - PolicyDocument: - Version: "2012-10-17" - Statement: - - Sid: CloudFormationTransform - Effect: Allow - Action: cloudformation:CreateChangeSet - Resource: - - arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31 - - Sid: GetCfnTemplate - Effect: Allow - Action: - - serverlessrepo:CreateCloudFormationTemplate - - serverlessrepo:GetCloudFormationTemplate - Resource: - # this is arn of the powertools SAR app - - arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer - - Sid: S3AccessLayer - Effect: Allow - Action: - - s3:GetObject - Resource: - # AWS publishes to an external S3 bucket locked down to your account ID - # The below example is us publishing lambda powertools - # Bucket: awsserverlessrepo-changesets-plntc6bfnfj - # Key: *****/arn:aws:serverlessrepo:eu-west-1:057560766410:applications-aws-lambda-powertools-python-layer-versions-1.10.2/aeeccf50-****-****-****-********* - - arn:aws:s3:::awsserverlessrepo-changesets-*/* - - Sid: GetLayerVersion - Effect: Allow - Action: - - lambda:PublishLayerVersion - - lambda:GetLayerVersion - Resource: - - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:aws-lambda-powertools-python-layer* - Roles: - - Ref: "PowertoolsLayerIamRole" - ``` + > Credits to [mwarkentin](https://github.com/mwarkentin) for providing the scoped down IAM permissions. + + The region and the account id for `CloudFormationTransform` and `GetCfnTemplate` are fixed. 
+ + === "template.yml" + + ```yaml hl_lines="21-52" + AWSTemplateFormatVersion: "2010-09-09" + Resources: + PowertoolsLayerIamRole: + Type: "AWS::IAM::Role" + Properties: + AssumeRolePolicyDocument: + Version: "2012-10-17" + Statement: + - Effect: "Allow" + Principal: + Service: + - "cloudformation.amazonaws.com" + Action: + - "sts:AssumeRole" + Path: "/" + PowertoolsLayerIamPolicy: + Type: "AWS::IAM::Policy" + Properties: + PolicyName: PowertoolsLambdaLayerPolicy + PolicyDocument: + Version: "2012-10-17" + Statement: + - Sid: CloudFormationTransform + Effect: Allow + Action: cloudformation:CreateChangeSet + Resource: + - arn:aws:cloudformation:us-east-1:aws:transform/Serverless-2016-10-31 + - Sid: GetCfnTemplate + Effect: Allow + Action: + - serverlessrepo:CreateCloudFormationTemplate + - serverlessrepo:GetCloudFormationTemplate + Resource: + # this is arn of the powertools SAR app + - arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer + - Sid: S3AccessLayer + Effect: Allow + Action: + - s3:GetObject + Resource: + # AWS publishes to an external S3 bucket locked down to your account ID + # The below example is us publishing lambda powertools + # Bucket: awsserverlessrepo-changesets-plntc6bfnfj + # Key: *****/arn:aws:serverlessrepo:eu-west-1:057560766410:applications-aws-lambda-powertools-python-layer-versions-1.10.2/aeeccf50-****-****-****-********* + - arn:aws:s3:::awsserverlessrepo-changesets-*/* + - Sid: GetLayerVersion + Effect: Allow + Action: + - lambda:PublishLayerVersion + - lambda:GetLayerVersion + Resource: + - !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:layer:aws-lambda-powertools-python-layer* + Roles: + - Ref: "PowertoolsLayerIamRole" + ``` You can fetch available versions via SAR API with: -```bash -aws serverlessrepo list-application-versions \ - --application-id arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer -``` +=== "shell" + + ```bash + aws serverlessrepo list-application-versions \ + --application-id arn:aws:serverlessrepo:eu-west-1:057560766410:applications/aws-lambda-powertools-python-layer + ``` ## Features @@ -238,6 +241,7 @@ aws serverlessrepo list-application-versions \ As a best practice, AWS Lambda Powertools logging statements are suppressed. If necessary, you can enable debugging using `set_package_logger`: === "app.py" + ```python from aws_lambda_powertools.logging.logger import set_package_logger diff --git a/docs/utilities/batch.md b/docs/utilities/batch.md index 26006427a14..96770fb1849 100644 --- a/docs/utilities/batch.md +++ b/docs/utilities/batch.md @@ -34,22 +34,23 @@ Before your use this utility, your AWS Lambda function must have `sqs:DeleteMess > Example using AWS Serverless Application Model (SAM) === "template.yml" + ```yaml hl_lines="2-3 12-15" Resources: - MyQueue: - Type: AWS::SQS::Queue + MyQueue: + Type: AWS::SQS::Queue - HelloWorldFunction: + HelloWorldFunction: Type: AWS::Serverless::Function Properties: - Runtime: python3.8 - Environment: + Runtime: python3.8 + Environment: Variables: - POWERTOOLS_SERVICE_NAME: example - Policies: - - SQSPollerPolicy: - QueueName: - !GetAtt MyQueue.QueueName + POWERTOOLS_SERVICE_NAME: example + Policies: + - SQSPollerPolicy: + QueueName: + !GetAtt MyQueue.QueueName ``` ### Processing messages from SQS @@ -90,9 +91,9 @@ You need to create a function to handle each record from the batch - We call it ``` !!! 
tip
-    **Any non-exception/successful return from your record handler function** will instruct both decorator and context manager to queue up each individual message for deletion.
+    **Any non-exception/successful return from your record handler function** will instruct both the decorator and the context manager to queue up each individual message for deletion.

-    If the entire batch succeeds, we let Lambda to proceed in deleting the records from the queue for cost reasons.
+    If the entire batch succeeds, we let Lambda proceed with deleting the records from the queue for cost reasons.

### Partial failure mechanics

@@ -104,7 +105,7 @@ All records in the batch will be passed to this handler for processing, even if

!!! warning
    You will not have access to the **processed messages** within the Lambda handler.

-    All processing logic will and should be performed by the `record_handler` function.
+    All processing logic will and should be performed by the `record_handler` function.

## Advanced

@@ -114,8 +115,8 @@ They have nearly the same behaviour when it comes to processing messages from th

* **Entire batch has been successfully processed**, where your Lambda handler returned successfully, we will let SQS delete the batch to optimize your cost
* **Entire batch has been partially processed successfully**, where exceptions were raised within your `record_handler`, we will:
-    - **1)** Delete successfully processed messages from the queue by directly calling `sqs:DeleteMessageBatch`
-    - **2)** Raise `SQSBatchProcessingError` to ensure failed messages return to your SQS queue
+    * **1)** Delete successfully processed messages from the queue by directly calling `sqs:DeleteMessageBatch`
+    * **2)** Raise `SQSBatchProcessingError` to ensure failed messages return to your SQS queue

The only difference is that **PartialSQSProcessor** will give you access to processed messages if you need them.

@@ -192,7 +193,6 @@ the `sqs_batch_processor` decorator:

        return result
    ```

-
### Suppressing exceptions

If you want to disable the default behavior where `SQSBatchProcessingError` is raised if there are any errors, you can pass the `suppress_exception` boolean argument.
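As a rough sketch of both styles (the `record_handler` body here is a stand-in for your own processing logic):

```python
from aws_lambda_powertools.utilities.batch import PartialSQSProcessor, sqs_batch_processor

def record_handler(record):
    return record["body"]  # stand-in: do your own processing here

# decorator: failed records remain in the queue, but no SQSBatchProcessingError is raised
@sqs_batch_processor(record_handler=record_handler, suppress_exception=True)
def lambda_handler(event, context):
    return {"statusCode": 200}

# the context manager accepts the same flag
processor = PartialSQSProcessor(suppress_exception=True)
```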
@@ -300,15 +300,15 @@ When using Sentry.io for error monitoring, you can override `failure_handler` to === "sentry_integration.py" - ```python hl_lines="4 7-8" - from typing import Tuple + ```python hl_lines="4 7-8" + from typing import Tuple - from aws_lambda_powertools.utilities.batch import PartialSQSProcessor - from sentry_sdk import capture_exception + from aws_lambda_powertools.utilities.batch import PartialSQSProcessor + from sentry_sdk import capture_exception - class SQSProcessor(PartialSQSProcessor): - def failure_handler(self, record: Event, exception: Tuple) -> Tuple: # type: ignore - capture_exception() # send exception to Sentry - logger.exception("got exception while processing SQS message") - return super().failure_handler(record, exception) # type: ignore - ``` + class SQSProcessor(PartialSQSProcessor): + def failure_handler(self, record: Event, exception: Tuple) -> Tuple: # type: ignore + capture_exception() # send exception to Sentry + logger.exception("got exception while processing SQS message") + return super().failure_handler(record, exception) # type: ignore + ``` diff --git a/docs/utilities/data_classes.md b/docs/utilities/data_classes.md index 3217c5364d3..9616b2b75fd 100644 --- a/docs/utilities/data_classes.md +++ b/docs/utilities/data_classes.md @@ -29,34 +29,32 @@ For example, if your Lambda function is being triggered by an API Gateway proxy === "app.py" -```python hl_lines="1 4" -from aws_lambda_powertools.utilities.data_classes import APIGatewayProxyEvent + ```python hl_lines="1 4" + from aws_lambda_powertools.utilities.data_classes import APIGatewayProxyEvent -def lambda_handler(event: dict, context): - event = APIGatewayProxyEvent(event) - if 'helloworld' in event.path and event.http_method == 'GET': - do_something_with(event.body, user) -``` + def lambda_handler(event: dict, context): + event = APIGatewayProxyEvent(event) + if 'helloworld' in event.path and event.http_method == 'GET': + do_something_with(event.body, user) + ``` Same example as above, but using the `event_source` decorator === "app.py" -```python hl_lines="1 3" -from aws_lambda_powertools.utilities.data_classes import event_source, APIGatewayProxyEvent + ```python hl_lines="1 3" + from aws_lambda_powertools.utilities.data_classes import event_source, APIGatewayProxyEvent -@event_source(data_class=APIGatewayProxyEvent) -def lambda_handler(event: APIGatewayProxyEvent, context): - if 'helloworld' in event.path and event.http_method == 'GET': - do_something_with(event.body, user) -``` + @event_source(data_class=APIGatewayProxyEvent) + def lambda_handler(event: APIGatewayProxyEvent, context): + if 'helloworld' in event.path and event.http_method == 'GET': + do_something_with(event.body, user) + ``` **Autocomplete with self-documented properties and methods** - ![Utilities Data Classes](../media/utilities_data_classes.png) - ## Supported event sources Event Source | Data_class @@ -78,29 +76,27 @@ Event Source | Data_class [SNS](#sns) | `SNSEvent` [SQS](#sqs) | `SQSEvent` - !!! info The examples provided below are far from exhaustive - the data classes themselves are designed to provide a form of documentation inherently (via autocompletion, types and docstrings). - ### API Gateway Proxy It is used for either API Gateway REST API or HTTP API using v1 proxy event. 
=== "app.py"

-```python
-from aws_lambda_powertools.utilities.data_classes import event_source, APIGatewayProxyEvent
-
-@event_source(data_class=APIGatewayProxyEvent)
-def lambda_handler(event: APIGatewayProxyEvent, context):
-    if "helloworld" in event.path and event.http_method == "GET":
-        request_context = event.request_context
-        identity = request_context.identity
-        user = identity.user
-        do_something_with(event.json_body, user)
-```
+    ```python
+    from aws_lambda_powertools.utilities.data_classes import event_source, APIGatewayProxyEvent
+
+    @event_source(data_class=APIGatewayProxyEvent)
+    def lambda_handler(event: APIGatewayProxyEvent, context):
+        if "helloworld" in event.path and event.http_method == "GET":
+            request_context = event.request_context
+            identity = request_context.identity
+            user = identity.user
+            do_something_with(event.json_body, user)
+    ```

### API Gateway Proxy V2

@@ -108,14 +104,14 @@ It is used for HTTP API using v2 proxy event.

=== "app.py"

-```python
-from aws_lambda_powertools.utilities.data_classes import event_source, APIGatewayProxyEventV2
+    ```python
+    from aws_lambda_powertools.utilities.data_classes import event_source, APIGatewayProxyEventV2

-@event_source(data_class=APIGatewayProxyEventV2)
-def lambda_handler(event: APIGatewayProxyEventV2, context):
-    if "helloworld" in event.path and event.http_method == "POST":
-        do_something_with(event.json_body, event.query_string_parameters)
-```
+    @event_source(data_class=APIGatewayProxyEventV2)
+    def lambda_handler(event: APIGatewayProxyEventV2, context):
+        if "helloworld" in event.path and event.http_method == "POST":
+            do_something_with(event.json_body, event.query_string_parameters)
+    ```

### Application Load Balancer

@@ -123,14 +119,14 @@ It is used for Application Load Balancer events.

=== "app.py"

-```python
-from aws_lambda_powertools.utilities.data_classes import event_source, ALBEvent
+    ```python
+    from aws_lambda_powertools.utilities.data_classes import event_source, ALBEvent

-@event_source(data_class=ALBEvent)
-def lambda_handler(event: ALBEvent, context):
-    if "helloworld" in event.path and event.http_method == "POST":
-        do_something_with(event.json_body, event.query_string_parameters)
-```
+    @event_source(data_class=ALBEvent)
+    def lambda_handler(event: ALBEvent, context):
+        if "helloworld" in event.path and event.http_method == "POST":
+            do_something_with(event.json_body, event.query_string_parameters)
+    ```

### AppSync Resolver

@@ -178,6 +174,7 @@ In this example, we also use the new Logger `correlation_id` and built-in `corre
        raise ValueError(f"Unsupported field resolver: {event.field_name}")
    ```

+
=== "Example AppSync Event"

    ```json hl_lines="2-8 14 19 20"
@@ -237,17 +234,17 @@ decompress and parse json data from the event.
=== "app.py"

-```python
-from aws_lambda_powertools.utilities.data_classes import event_source, CloudWatchLogsEvent
-from aws_lambda_powertools.utilities.data_classes.cloud_watch_logs_event import CloudWatchLogsDecodedData
-
-@event_source(data_class=CloudWatchLogsEvent)
-def lambda_handler(event: CloudWatchLogsEvent, context):
-    decompressed_log: CloudWatchLogsDecodedData = event.parse_logs_data
-    log_events = decompressed_log.log_events
-    for event in log_events:
-        do_something_with(event.timestamp, event.message)
-```
+    ```python
+    from aws_lambda_powertools.utilities.data_classes import event_source, CloudWatchLogsEvent
+    from aws_lambda_powertools.utilities.data_classes.cloud_watch_logs_event import CloudWatchLogsDecodedData
+
+    @event_source(data_class=CloudWatchLogsEvent)
+    def lambda_handler(event: CloudWatchLogsEvent, context):
+        decompressed_log: CloudWatchLogsDecodedData = event.parse_logs_data
+        log_events = decompressed_log.log_events
+        for log_event in log_events:
+            do_something_with(log_event.timestamp, log_event.message)
+    ```

### CodePipeline Job

@@ -255,50 +252,50 @@ Data classes and utility functions to help create continuous delivery pipelines

=== "app.py"

-```python
-from aws_lambda_powertools import Logger
-from aws_lambda_powertools.utilities.data_classes import event_source, CodePipelineJobEvent
-
-logger = Logger()
-
-@event_source(data_class=CodePipelineJobEvent)
-def lambda_handler(event, context):
-    """The Lambda function handler
-
-    If a continuing job then checks the CloudFormation stack status
-    and updates the job accordingly.
-
-    If a new job then kick of an update or creation of the target
-    CloudFormation stack.
-    """
-
-    # Extract the Job ID
-    job_id = event.get_id
+    ```python
+    from aws_lambda_powertools import Logger
+    from aws_lambda_powertools.utilities.data_classes import event_source, CodePipelineJobEvent

-    # Extract the params
-    params: dict = event.decoded_user_parameters
-    stack = params["stack"]
-    artifact_name = params["artifact"]
-    template_file = params["file"]
+    logger = Logger()

-    try:
-        if event.data.continuation_token:
-            # If we're continuing then the create/update has already been triggered
-            # we just need to check if it has finished.
-            check_stack_update_status(job_id, stack)
-        else:
-            template = event.get_artifact(artifact_name, template_file)
-            # Kick off a stack update or create
-            start_update_or_create(job_id, stack, template)
-    except Exception as e:
-        # If any other exceptions which we didn't expect are raised
-        # then fail the job and log the exception message.
-        logger.exception("Function failed due to exception.")
-        put_job_failure(job_id, "Function exception: " + str(e))
-
-    logger.debug("Function complete.")
-    return "Complete."
-```
+    @event_source(data_class=CodePipelineJobEvent)
+    def lambda_handler(event, context):
+        """The Lambda function handler
+
+        If this is a continuing job, check the CloudFormation stack status
+        and update the job accordingly.
+
+        If this is a new job, kick off an update or creation of the target
+        CloudFormation stack.
+        """
+
+        # Extract the Job ID
+        job_id = event.get_id
+
+        # Extract the params
+        params: dict = event.decoded_user_parameters
+        stack = params["stack"]
+        artifact_name = params["artifact"]
+        template_file = params["file"]
+
+        try:
+            if event.data.continuation_token:
+                # If we're continuing then the create/update has already been triggered
+                # we just need to check if it has finished.
+ check_stack_update_status(job_id, stack) + else: + template = event.get_artifact(artifact_name, template_file) + # Kick off a stack update or create + start_update_or_create(job_id, stack, template) + except Exception as e: + # If any other exceptions which we didn't expect are raised + # then fail the job and log the exception message. + logger.exception("Function failed due to exception.") + put_job_failure(job_id, "Function exception: " + str(e)) + + logger.debug("Function complete.") + return "Complete." + ``` ### Cognito User Pool @@ -322,15 +319,15 @@ Verify Auth Challenge | `data_classes.cognito_user_pool_event.VerifyAuthChalleng === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes.cognito_user_pool_event import PostConfirmationTriggerEvent + ```python + from aws_lambda_powertools.utilities.data_classes.cognito_user_pool_event import PostConfirmationTriggerEvent -def lambda_handler(event, context): - event: PostConfirmationTriggerEvent = PostConfirmationTriggerEvent(event) + def lambda_handler(event, context): + event: PostConfirmationTriggerEvent = PostConfirmationTriggerEvent(event) - user_attributes = event.request.user_attributes - do_something_with(user_attributes) -``` + user_attributes = event.request.user_attributes + do_something_with(user_attributes) + ``` #### Define Auth Challenge Example @@ -495,18 +492,18 @@ This example is based on the AWS Cognito docs for [Create Auth Challenge Lambda === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source -from aws_lambda_powertools.utilities.data_classes.cognito_user_pool_event import CreateAuthChallengeTriggerEvent - -@event_source(data_class=CreateAuthChallengeTriggerEvent) -def handler(event: CreateAuthChallengeTriggerEvent, context) -> dict: - if event.request.challenge_name == "CUSTOM_CHALLENGE": - event.response.public_challenge_parameters = {"captchaUrl": "url/123.jpg"} - event.response.private_challenge_parameters = {"answer": "5"} - event.response.challenge_metadata = "CAPTCHA_CHALLENGE" - return event.raw_event -``` + ```python + from aws_lambda_powertools.utilities.data_classes import event_source + from aws_lambda_powertools.utilities.data_classes.cognito_user_pool_event import CreateAuthChallengeTriggerEvent + + @event_source(data_class=CreateAuthChallengeTriggerEvent) + def handler(event: CreateAuthChallengeTriggerEvent, context) -> dict: + if event.request.challenge_name == "CUSTOM_CHALLENGE": + event.response.public_challenge_parameters = {"captchaUrl": "url/123.jpg"} + event.response.private_challenge_parameters = {"answer": "5"} + event.response.challenge_metadata = "CAPTCHA_CHALLENGE" + return event.raw_event + ``` #### Verify Auth Challenge Response Example @@ -514,17 +511,17 @@ This example is based on the AWS Cognito docs for [Verify Auth Challenge Respons === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source -from aws_lambda_powertools.utilities.data_classes.cognito_user_pool_event import VerifyAuthChallengeResponseTriggerEvent - -@event_source(data_class=VerifyAuthChallengeResponseTriggerEvent) -def handler(event: VerifyAuthChallengeResponseTriggerEvent, context) -> dict: - event.response.answer_correct = ( - event.request.private_challenge_parameters.get("answer") == event.request.challenge_answer - ) - return event.raw_event -``` + ```python + from aws_lambda_powertools.utilities.data_classes import event_source + from aws_lambda_powertools.utilities.data_classes.cognito_user_pool_event import 
VerifyAuthChallengeResponseTriggerEvent + + @event_source(data_class=VerifyAuthChallengeResponseTriggerEvent) + def handler(event: VerifyAuthChallengeResponseTriggerEvent, context) -> dict: + event.response.answer_correct = ( + event.request.private_challenge_parameters.get("answer") == event.request.challenge_answer + ) + return event.raw_event + ``` ### Connect Contact Flow @@ -532,21 +529,21 @@ def handler(event: VerifyAuthChallengeResponseTriggerEvent, context) -> dict: === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes.connect_contact_flow_event import ( - ConnectContactFlowChannel, - ConnectContactFlowEndpointType, - ConnectContactFlowEvent, - ConnectContactFlowInitiationMethod, -) - -def lambda_handler(event, context): - event: ConnectContactFlowEvent = ConnectContactFlowEvent(event) - assert event.contact_data.attributes == {"Language": "en-US"} - assert event.contact_data.channel == ConnectContactFlowChannel.VOICE - assert event.contact_data.customer_endpoint.endpoint_type == ConnectContactFlowEndpointType.TELEPHONE_NUMBER - assert event.contact_data.initiation_method == ConnectContactFlowInitiationMethod.API -``` + ```python + from aws_lambda_powertools.utilities.data_classes.connect_contact_flow_event import ( + ConnectContactFlowChannel, + ConnectContactFlowEndpointType, + ConnectContactFlowEvent, + ConnectContactFlowInitiationMethod, + ) + + def lambda_handler(event, context): + event: ConnectContactFlowEvent = ConnectContactFlowEvent(event) + assert event.contact_data.attributes == {"Language": "en-US"} + assert event.contact_data.channel == ConnectContactFlowChannel.VOICE + assert event.contact_data.customer_endpoint.endpoint_type == ConnectContactFlowEndpointType.TELEPHONE_NUMBER + assert event.contact_data.initiation_method == ConnectContactFlowInitiationMethod.API + ``` ### DynamoDB Streams @@ -556,55 +553,55 @@ attributes values (`AttributeValue`), as well as enums for stream view type (`St === "app.py" - ```python - from aws_lambda_powertools.utilities.data_classes.dynamo_db_stream_event import ( - DynamoDBStreamEvent, - DynamoDBRecordEventName - ) + ```python + from aws_lambda_powertools.utilities.data_classes.dynamo_db_stream_event import ( + DynamoDBStreamEvent, + DynamoDBRecordEventName + ) - def lambda_handler(event, context): - event: DynamoDBStreamEvent = DynamoDBStreamEvent(event) + def lambda_handler(event, context): + event: DynamoDBStreamEvent = DynamoDBStreamEvent(event) - # Multiple records can be delivered in a single event - for record in event.records: - if record.event_name == DynamoDBRecordEventName.MODIFY: - do_something_with(record.dynamodb.new_image) - do_something_with(record.dynamodb.old_image) - ``` + # Multiple records can be delivered in a single event + for record in event.records: + if record.event_name == DynamoDBRecordEventName.MODIFY: + do_something_with(record.dynamodb.new_image) + do_something_with(record.dynamodb.old_image) + ``` === "multiple_records_types.py" - ```python - from aws_lambda_powertools.utilities.data_classes import event_source, DynamoDBStreamEvent - from aws_lambda_powertools.utilities.data_classes.dynamo_db_stream_event import AttributeValueType, AttributeValue - from aws_lambda_powertools.utilities.typing import LambdaContext - - - @event_source(data_class=DynamoDBStreamEvent) - def lambda_handler(event: DynamoDBStreamEvent, context: LambdaContext): - for record in event.records: - key: AttributeValue = record.dynamodb.keys["id"] - if key == AttributeValueType.Number: - # {"N": "123.45"} => 
"123.45" - assert key.get_value == key.n_value - print(key.get_value) - elif key == AttributeValueType.Map: - assert key.get_value == key.map_value - print(key.get_value) - ``` + ```python + from aws_lambda_powertools.utilities.data_classes import event_source, DynamoDBStreamEvent + from aws_lambda_powertools.utilities.data_classes.dynamo_db_stream_event import AttributeValueType, AttributeValue + from aws_lambda_powertools.utilities.typing import LambdaContext + + + @event_source(data_class=DynamoDBStreamEvent) + def lambda_handler(event: DynamoDBStreamEvent, context: LambdaContext): + for record in event.records: + key: AttributeValue = record.dynamodb.keys["id"] + if key == AttributeValueType.Number: + # {"N": "123.45"} => "123.45" + assert key.get_value == key.n_value + print(key.get_value) + elif key == AttributeValueType.Map: + assert key.get_value == key.map_value + print(key.get_value) + ``` ### EventBridge === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source, EventBridgeEvent + ```python + from aws_lambda_powertools.utilities.data_classes import event_source, EventBridgeEvent -@event_source(data_class=EventBridgeEvent) -def lambda_handler(event: EventBridgeEvent, context): - do_something_with(event.detail) + @event_source(data_class=EventBridgeEvent) + def lambda_handler(event: EventBridgeEvent, context): + do_something_with(event.detail) -``` + ``` ### Kinesis streams @@ -613,40 +610,40 @@ or plain text, depending on the original payload. === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source, KinesisStreamEvent + ```python + from aws_lambda_powertools.utilities.data_classes import event_source, KinesisStreamEvent -@event_source(data_class=KinesisStreamEvent) -def lambda_handler(event: KinesisStreamEvent, context): - kinesis_record = next(event.records).kinesis + @event_source(data_class=KinesisStreamEvent) + def lambda_handler(event: KinesisStreamEvent, context): + kinesis_record = next(event.records).kinesis - # if data was delivered as text - data = kinesis_record.data_as_text() + # if data was delivered as text + data = kinesis_record.data_as_text() - # if data was delivered as json - data = kinesis_record.data_as_json() + # if data was delivered as json + data = kinesis_record.data_as_json() - do_something_with(data) -``` + do_something_with(data) + ``` ### S3 === "app.py" -```python -from urllib.parse import unquote_plus -from aws_lambda_powertools.utilities.data_classes import event_source, S3Event + ```python + from urllib.parse import unquote_plus + from aws_lambda_powertools.utilities.data_classes import event_source, S3Event -@event_source(data_class=S3Event) -def lambda_handler(event: S3Event, context): - bucket_name = event.bucket_name + @event_source(data_class=S3Event) + def lambda_handler(event: S3Event, context): + bucket_name = event.bucket_name - # Multiple records can be delivered in a single event - for record in event.records: - object_key = unquote_plus(record.s3.get_object.key) + # Multiple records can be delivered in a single event + for record in event.records: + object_key = unquote_plus(record.s3.get_object.key) - do_something_with(f"{bucket_name}/{object_key}") -``` + do_something_with(f"{bucket_name}/{object_key}") + ``` ### S3 Object Lambda @@ -654,81 +651,81 @@ This example is based on the AWS Blog post [Introducing Amazon S3 Object Lambda === "app.py" -```python hl_lines="5-6 12 14" -import boto3 -import requests + ```python hl_lines="5-6 12 14" + import boto3 + 
import requests -from aws_lambda_powertools import Logger -from aws_lambda_powertools.logging.correlation_paths import S3_OBJECT_LAMBDA -from aws_lambda_powertools.utilities.data_classes.s3_object_event import S3ObjectLambdaEvent + from aws_lambda_powertools import Logger + from aws_lambda_powertools.logging.correlation_paths import S3_OBJECT_LAMBDA + from aws_lambda_powertools.utilities.data_classes.s3_object_event import S3ObjectLambdaEvent -logger = Logger() -session = boto3.Session() -s3 = session.client("s3") + logger = Logger() + session = boto3.Session() + s3 = session.client("s3") -@logger.inject_lambda_context(correlation_id_path=S3_OBJECT_LAMBDA, log_event=True) -def lambda_handler(event, context): - event = S3ObjectLambdaEvent(event) + @logger.inject_lambda_context(correlation_id_path=S3_OBJECT_LAMBDA, log_event=True) + def lambda_handler(event, context): + event = S3ObjectLambdaEvent(event) - # Get object from S3 - response = requests.get(event.input_s3_url) - original_object = response.content.decode("utf-8") + # Get object from S3 + response = requests.get(event.input_s3_url) + original_object = response.content.decode("utf-8") - # Make changes to the object about to be returned - transformed_object = original_object.upper() + # Make changes to the object about to be returned + transformed_object = original_object.upper() - # Write object back to S3 Object Lambda - s3.write_get_object_response( - Body=transformed_object, RequestRoute=event.request_route, RequestToken=event.request_token - ) + # Write object back to S3 Object Lambda + s3.write_get_object_response( + Body=transformed_object, RequestRoute=event.request_route, RequestToken=event.request_token + ) - return {"status_code": 200} -``` + return {"status_code": 200} + ``` ### SES === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source, SESEvent + ```python + from aws_lambda_powertools.utilities.data_classes import event_source, SESEvent -@event_source(data_class=SESEvent) -def lambda_handler(event: SESEvent, context): - # Multiple records can be delivered in a single event - for record in event.records: - mail = record.ses.mail - common_headers = mail.common_headers + @event_source(data_class=SESEvent) + def lambda_handler(event: SESEvent, context): + # Multiple records can be delivered in a single event + for record in event.records: + mail = record.ses.mail + common_headers = mail.common_headers - do_something_with(common_headers.to, common_headers.subject) -``` + do_something_with(common_headers.to, common_headers.subject) + ``` ### SNS === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source, SNSEvent + ```python + from aws_lambda_powertools.utilities.data_classes import event_source, SNSEvent -@event_source(data_class=SNSEvent) -def lambda_handler(event: SNSEvent, context): - # Multiple records can be delivered in a single event - for record in event.records: - message = record.sns.message - subject = record.sns.subject + @event_source(data_class=SNSEvent) + def lambda_handler(event: SNSEvent, context): + # Multiple records can be delivered in a single event + for record in event.records: + message = record.sns.message + subject = record.sns.subject - do_something_with(subject, message) -``` + do_something_with(subject, message) + ``` ### SQS === "app.py" -```python -from aws_lambda_powertools.utilities.data_classes import event_source, SQSEvent + ```python + from aws_lambda_powertools.utilities.data_classes import event_source, SQSEvent 
-@event_source(data_class=SQSEvent) -def lambda_handler(event: SQSEvent, context): - # Multiple records can be delivered in a single event - for record in event.records: - do_something_with(record.body) -``` + @event_source(data_class=SQSEvent) + def lambda_handler(event: SQSEvent, context): + # Multiple records can be delivered in a single event + for record in event.records: + do_something_with(record.body) + ``` diff --git a/docs/utilities/feature_flags.md b/docs/utilities/feature_flags.md index 5651542d451..d22f9c03296 100644 --- a/docs/utilities/feature_flags.md +++ b/docs/utilities/feature_flags.md @@ -110,67 +110,67 @@ The following sample infrastructure will be used throughout this documentation: === "CDK" - ```python hl_lines="11-22 24 29 35 42 50" - import json - - import aws_cdk.aws_appconfig as appconfig - from aws_cdk import core - - - class SampleFeatureFlagStore(core.Construct): - def __init__(self, scope: core.Construct, id_: str) -> None: - super().__init__(scope, id_) - - features_config = { - "premium_features": { - "default": False, - "rules": { - "customer tier equals premium": { - "when_match": True, - "conditions": [{"action": "EQUALS", "key": "tier", "value": "premium"}], - } - }, - }, - "ten_percent_off_campaign": {"default": True}, - } - - self.config_app = appconfig.CfnApplication( - self, - id="app", - name="product-catalogue", - ) - self.config_env = appconfig.CfnEnvironment( - self, - id="env", - application_id=self.config_app.ref, - name="dev-env", - ) - self.config_profile = appconfig.CfnConfigurationProfile( - self, - id="profile", - application_id=self.config_app.ref, - location_uri="hosted", - name="features", - ) - self.hosted_cfg_version = appconfig.CfnHostedConfigurationVersion( - self, - "version", - application_id=self.config_app.ref, - configuration_profile_id=self.config_profile.ref, - content=json.dumps(features_config), - content_type="application/json", - ) - self.app_config_deployment = appconfig.CfnDeployment( - self, - id="deploy", - application_id=self.config_app.ref, - configuration_profile_id=self.config_profile.ref, - configuration_version=self.hosted_cfg_version.ref, - deployment_strategy_id="AppConfig.AllAtOnce", - environment_id=self.config_env.ref, - ) - - ``` + ```python hl_lines="11-22 24 29 35 42 50" + import json + + import aws_cdk.aws_appconfig as appconfig + from aws_cdk import core + + + class SampleFeatureFlagStore(core.Construct): + def __init__(self, scope: core.Construct, id_: str) -> None: + super().__init__(scope, id_) + + features_config = { + "premium_features": { + "default": False, + "rules": { + "customer tier equals premium": { + "when_match": True, + "conditions": [{"action": "EQUALS", "key": "tier", "value": "premium"}], + } + }, + }, + "ten_percent_off_campaign": {"default": True}, + } + + self.config_app = appconfig.CfnApplication( + self, + id="app", + name="product-catalogue", + ) + self.config_env = appconfig.CfnEnvironment( + self, + id="env", + application_id=self.config_app.ref, + name="dev-env", + ) + self.config_profile = appconfig.CfnConfigurationProfile( + self, + id="profile", + application_id=self.config_app.ref, + location_uri="hosted", + name="features", + ) + self.hosted_cfg_version = appconfig.CfnHostedConfigurationVersion( + self, + "version", + application_id=self.config_app.ref, + configuration_profile_id=self.config_profile.ref, + content=json.dumps(features_config), + content_type="application/json", + ) + self.app_config_deployment = appconfig.CfnDeployment( + self, + id="deploy", + 
application_id=self.config_app.ref, + configuration_profile_id=self.config_profile.ref, + configuration_version=self.hosted_cfg_version.ref, + deployment_strategy_id="AppConfig.AllAtOnce", + environment_id=self.config_env.ref, + ) + + ``` ### Evaluating a single feature flag @@ -184,7 +184,7 @@ The `evaluate` method supports two optional parameters: === "app.py" ```python hl_lines="3 9 13 17-19" - from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore + from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore app_config = AppConfigStore( environment="dev", @@ -194,50 +194,50 @@ The `evaluate` method supports two optional parameters: feature_flags = FeatureFlags(store=app_config) - def lambda_handler(event, context): - # Get customer's tier from incoming request - ctx = { "tier": event.get("tier", "standard") } + def lambda_handler(event, context): + # Get customer's tier from incoming request + ctx = { "tier": event.get("tier", "standard") } - # Evaluate whether customer's tier has access to premium features - # based on `has_premium_features` rules - has_premium_features: bool = feature_flags.evaluate(name="premium_features", + # Evaluate whether customer's tier has access to premium features + # based on `has_premium_features` rules + has_premium_features: bool = feature_flags.evaluate(name="premium_features", context=ctx, default=False) - if has_premium_features: - # enable premium features - ... + if has_premium_features: + # enable premium features + ... ``` === "event.json" - ```json hl_lines="3" - { - "username": "lessa", - "tier": "premium", - "basked_id": "random_id" - } - ``` + ```json hl_lines="3" + { + "username": "lessa", + "tier": "premium", + "basked_id": "random_id" + } + ``` === "features.json" ```json hl_lines="2 6 9-11" - { - "premium_features": { - "default": false, - "rules": { - "customer tier equals premium": { - "when_match": true, - "conditions": [ - { - "action": "EQUALS", - "key": "tier", - "value": "premium" - } - ] - } - } - }, - "ten_percent_off_campaign": { - "default": false - } + { + "premium_features": { + "default": false, + "rules": { + "customer tier equals premium": { + "when_match": true, + "conditions": [ + { + "action": "EQUALS", + "key": "tier", + "value": "premium" + } + ] + } + } + }, + "ten_percent_off_campaign": { + "default": false + } } ``` @@ -250,7 +250,7 @@ In this case, we could omit the `context` parameter and simply evaluate whether === "app.py" ```python hl_lines="12-13" - from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore + from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore app_config = AppConfigStore( environment="dev", @@ -260,22 +260,22 @@ In this case, we could omit the `context` parameter and simply evaluate whether feature_flags = FeatureFlags(store=app_config) - def lambda_handler(event, context): - apply_discount: bool = feature_flags.evaluate(name="ten_percent_off_campaign", - default=False) + def lambda_handler(event, context): + apply_discount: bool = feature_flags.evaluate(name="ten_percent_off_campaign", + default=False) - if apply_discount: - # apply 10% discount to product - ... + if apply_discount: + # apply 10% discount to product + ... 
``` === "features.json" ```json hl_lines="2-3" - { - "ten_percent_off_campaign": { - "default": false - } + { + "ten_percent_off_campaign": { + "default": false + } } ``` @@ -288,10 +288,10 @@ You can use `get_enabled_features` method for scenarios where you need a list of === "app.py" ```python hl_lines="17-20 23" - from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver - from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore + from aws_lambda_powertools.event_handler.api_gateway import ApiGatewayResolver + from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore - app = ApiGatewayResolver() + app = ApiGatewayResolver() app_config = AppConfigStore( environment="dev", @@ -301,85 +301,82 @@ You can use `get_enabled_features` method for scenarios where you need a list of feature_flags = FeatureFlags(store=app_config) + @app.get("/products") + def list_products(): + ctx = { + **app.current_event.headers, + **app.current_event.json_body + } - @app.get("/products") - def list_products(): - ctx = { - **app.current_event.headers, - **app.current_event.json_body - } + # all_features is evaluated to ["geo_customer_campaign", "ten_percent_off_campaign"] + all_features: list[str] = feature_flags.get_enabled_features(context=ctx) - # all_features is evaluated to ["geo_customer_campaign", "ten_percent_off_campaign"] - all_features: list[str] = feature_flags.get_enabled_features(context=ctx) + if "geo_customer_campaign" in all_features: + # apply discounts based on geo + ... - if "geo_customer_campaign" in all_features: - # apply discounts based on geo - ... + if "ten_percent_off_campaign" in all_features: + # apply additional 10% for all customers + ... - if "ten_percent_off_campaign" in all_features: - # apply additional 10% for all customers - ... 
- - def lambda_handler(event, context): - return app.resolve(event, context) + def lambda_handler(event, context): + return app.resolve(event, context) ``` === "event.json" - ```json hl_lines="2 8" - { - "body": "{\"username\": \"lessa\", \"tier\": \"premium\", \"basked_id\": \"random_id\"}", - "resource": "/products", - "path": "/products", - "httpMethod": "GET", - "isBase64Encoded": false, - "headers": { - "CloudFront-Viewer-Country": "NL", - } - } - ``` - + ```json hl_lines="2 8" + { + "body": "{\"username\": \"lessa\", \"tier\": \"premium\", \"basked_id\": \"random_id\"}", + "resource": "/products", + "path": "/products", + "httpMethod": "GET", + "isBase64Encoded": false, + "headers": { + "CloudFront-Viewer-Country": "NL" + } + } + ``` === "features.json" ```json hl_lines="17-18 20 27-29" - { - "premium_features": { - "default": false, - "rules": { - "customer tier equals premium": { - "when_match": true, - "conditions": [ - { - "action": "EQUALS", - "key": "tier", - "value": "premium" - } - ] - } - } - }, - "ten_percent_off_campaign": { - "default": true - }, - "geo_customer_campaign": { - "default": false, - "rules": { - "customer in temporary discount geo": { - "when_match": true, - "conditions": [ - { - "action": "IN", - "key": "CloudFront-Viewer-Country", - "value": ["NL", "IE", "UK", "PL", "PT"], - } - ] - } - } - } + { + "premium_features": { + "default": false, + "rules": { + "customer tier equals premium": { + "when_match": true, + "conditions": [ + { + "action": "EQUALS", + "key": "tier", + "value": "premium" + } + ] + } + } + }, + "ten_percent_off_campaign": { + "default": true + }, + "geo_customer_campaign": { + "default": false, + "rules": { + "customer in temporary discount geo": { + "when_match": true, + "conditions": [ + { + "action": "IN", + "key": "CloudFront-Viewer-Country", + "value": ["NL", "IE", "UK", "PL", "PT"] + } + ] + } + } + } } ``` - ## Advanced ### Schema @@ -391,13 +388,14 @@ This utility expects a certain schema to be stored as JSON within AWS AppConfig. A feature can simply have its name and a `default` value. This is either on or off, also known as a [static flag](#static-flags). === "minimal_schema.json" - ```json hl_lines="2-3" - { - "global_feature": { - "default": true - } - } - ``` + + ```json hl_lines="2-3" + { + "global_feature": { + "default": true + } + } + ``` If you need more control and want to provide context such as user group, permissions, location, etc., you need to add rules to your feature flag configuration. @@ -411,25 +409,25 @@ When adding `rules` to a feature, they must contain: === "feature_with_rules.json" - ```json hl_lines="4-11" - { - "premium_feature": { - "default": false, - "rules": { - "customer tier equals premium": { - "when_match": true, - "conditions": [ - { - "action": "EQUALS", - "key": "tier", - "value": "premium" - } - ] - } - } - } - } - ``` + ```json hl_lines="4-11" + { + "premium_feature": { + "default": false, + "rules": { + "customer tier equals premium": { + "when_match": true, + "conditions": [ + { + "action": "EQUALS", + "key": "tier", + "value": "premium" + } + ] + } + } + } + } + ``` You can have multiple rules with different names. The rule engine will return the first result `when_match` of the matching rule configuration, or `default` value when none of the rules apply. @@ -438,16 +436,17 @@ You can have multiple rules with different names. 
The rule engine will return th The `conditions` block is a list of conditions that contain `action`, `key`, and `value` keys: === "conditions.json" - ```json hl_lines="8-11" + + ```json hl_lines="5-7" { - ... - "conditions": [ - { - "action": "EQUALS", - "key": "tier", - "value": "premium" - } - ] + ... + "conditions": [ + { + "action": "EQUALS", + "key": "tier", + "value": "premium" + } + ] } ``` @@ -469,16 +468,16 @@ By default, we cache configuration retrieved from the Store for 5 seconds for pe You can override `max_age` parameter when instantiating the store. -```python hl_lines="7" -from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore + ```python hl_lines="7" + from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore -app_config = AppConfigStore( - environment="dev", - application="product-catalogue", - name="features", - max_age=300 -) -``` + app_config = AppConfigStore( + environment="dev", + application="product-catalogue", + name="features", + max_age=300 + ) + ``` ### Envelope @@ -488,47 +487,47 @@ For this to work, you need to use a JMESPath expression via the `envelope` param === "app.py" - ```python hl_lines="7" - from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore + ```python hl_lines="7" + from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore - app_config = AppConfigStore( - environment="dev", - application="product-catalogue", - name="configuration", - envelope = "feature_flags" - ) - ``` + app_config = AppConfigStore( + environment="dev", + application="product-catalogue", + name="configuration", + envelope = "feature_flags" + ) + ``` === "configuration.json" - ```json hl_lines="6" - { - "logging": { - "level": "INFO", - "sampling_rate": 0.1 - }, - "feature_flags": { - "premium_feature": { - "default": false, - "rules": { - "customer tier equals premium": { - "when_match": true, - "conditions": [ - { - "action": "EQUALS", - "key": "tier", - "value": "premium" - } - ] - } - } - }, - "feature2": { - "default": false - } - } - } - ``` + ```json hl_lines="6" + { + "logging": { + "level": "INFO", + "sampling_rate": 0.1 + }, + "feature_flags": { + "premium_feature": { + "default": false, + "rules": { + "customer tier equals premium": { + "when_match": true, + "conditions": [ + { + "action": "EQUALS", + "key": "tier", + "value": "premium" + } + ] + } + } + }, + "feature2": { + "default": false + } + } + } + ``` ### Built-in store provider @@ -552,35 +551,34 @@ Parameter | Default | Description === "appconfig_store_example.py" -```python hl_lines="19-25" -from botocore.config import Config + ```python hl_lines="19-25" + from botocore.config import Config -import jmespath + import jmespath -boto_config = Config(read_timeout=10, retries={"total_max_attempts": 2}) + boto_config = Config(read_timeout=10, retries={"total_max_attempts": 2}) -# Custom JMESPath functions -class CustomFunctions(jmespath.functions.Functions): + # Custom JMESPath functions + class CustomFunctions(jmespath.functions.Functions): - @jmespath.functions.signature({'types': ['string']}) - def _func_special_decoder(self, s): - return my_custom_decoder_logic(s) + @jmespath.functions.signature({'types': ['string']}) + def _func_special_decoder(self, s): + return my_custom_decoder_logic(s) -custom_jmespath_options = {"custom_functions": CustomFunctions()} + custom_jmespath_options = {"custom_functions": CustomFunctions()} -app_config = AppConfigStore( - environment="dev", - 
application="product-catalogue", - name="configuration", - max_age=120, - envelope = "features", - sdk_config=boto_config, - jmespath_options=custom_jmespath_options -) -``` - + app_config = AppConfigStore( + environment="dev", + application="product-catalogue", + name="configuration", + max_age=120, + envelope = "features", + sdk_config=boto_config, + jmespath_options=custom_jmespath_options + ) + ``` ## Testing your code @@ -593,56 +591,56 @@ You can unit test your feature flags locally and independently without setting u === "test_feature_flags_independently.py" ```python hl_lines="9-11" - from typing import Dict, List, Optional - - from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore, RuleAction - - - def init_feature_flags(mocker, mock_schema, envelope="") -> FeatureFlags: - """Mock AppConfig Store get_configuration method to use mock schema instead""" - - method_to_mock = "aws_lambda_powertools.utilities.feature_flags.AppConfigStore.get_configuration" - mocked_get_conf = mocker.patch(method_to_mock) - mocked_get_conf.return_value = mock_schema - - app_conf_store = AppConfigStore( - environment="test_env", - application="test_app", - name="test_conf_name", - envelope=envelope, - ) - - return FeatureFlags(store=app_conf_store) - - - def test_flags_condition_match(mocker): - # GIVEN - expected_value = True - mocked_app_config_schema = { - "my_feature": { - "default": expected_value, - "rules": { - "tenant id equals 12345": { - "when_match": True, - "conditions": [ - { - "action": RuleAction.EQUALS.value, - "key": "tenant_id", - "value": "12345", - } - ], - } - }, - } - } - - # WHEN - ctx = {"tenant_id": "12345", "username": "a"} - feature_flags = init_feature_flags(mocker=mocker, mock_schema=mocked_app_config_schema) - flag = feature_flags.evaluate(name="my_feature", context=ctx, default=False) - - # THEN - assert flag == expected_value + from typing import Dict, List, Optional + + from aws_lambda_powertools.utilities.feature_flags import FeatureFlags, AppConfigStore, RuleAction + + + def init_feature_flags(mocker, mock_schema, envelope="") -> FeatureFlags: + """Mock AppConfig Store get_configuration method to use mock schema instead""" + + method_to_mock = "aws_lambda_powertools.utilities.feature_flags.AppConfigStore.get_configuration" + mocked_get_conf = mocker.patch(method_to_mock) + mocked_get_conf.return_value = mock_schema + + app_conf_store = AppConfigStore( + environment="test_env", + application="test_app", + name="test_conf_name", + envelope=envelope, + ) + + return FeatureFlags(store=app_conf_store) + + + def test_flags_condition_match(mocker): + # GIVEN + expected_value = True + mocked_app_config_schema = { + "my_feature": { + "default": expected_value, + "rules": { + "tenant id equals 12345": { + "when_match": True, + "conditions": [ + { + "action": RuleAction.EQUALS.value, + "key": "tenant_id", + "value": "12345", + } + ], + } + }, + } + } + + # WHEN + ctx = {"tenant_id": "12345", "username": "a"} + feature_flags = init_feature_flags(mocker=mocker, mock_schema=mocked_app_config_schema) + flag = feature_flags.evaluate(name="my_feature", context=ctx, default=False) + + # THEN + assert flag == expected_value ``` ## Feature flags vs Parameters vs env vars diff --git a/docs/utilities/idempotency.md b/docs/utilities/idempotency.md index a684695b36c..8a0d1c81d5a 100644 --- a/docs/utilities/idempotency.md +++ b/docs/utilities/idempotency.md @@ -77,7 +77,7 @@ TTL attribute name | `expiration` | This can only be configured after your table !!! 
warning "Large responses with DynamoDB persistence layer"
    When using this utility with DynamoDB, your function's responses must be [smaller than 400KB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items).

-	Larger items cannot be written to DynamoDB and will cause exceptions.
+    Larger items cannot be written to DynamoDB and will cause exceptions.

!!! info "DynamoDB"
    Each function invocation will generally make 2 requests to DynamoDB. If the
@@ -306,6 +306,7 @@ You can enable in-memory caching with the **`use_local_cache`** parameter:
    ```

When enabled, the default is to cache a maximum of 256 records in each Lambda execution environment. You can change this with the **`local_cache_max_items`** parameter.
+
### Expiring idempotency records

!!! note
diff --git a/docs/utilities/middleware_factory.md b/docs/utilities/middleware_factory.md
index b0f5d4a1ccd..366ae7eda66 100644
--- a/docs/utilities/middleware_factory.md
+++ b/docs/utilities/middleware_factory.md
@@ -107,6 +107,8 @@ For advanced use cases, you can instantiate [Tracer](../core/tracer.md) inside y
When unit testing middlewares with the `trace_execution` option enabled, use the `POWERTOOLS_TRACE_DISABLED` env var to safely disable Tracer.

-```bash
-POWERTOOLS_TRACE_DISABLED=1 python -m pytest
-```
+=== "shell"
+
+    ```bash
+    POWERTOOLS_TRACE_DISABLED=1 python -m pytest
+    ```
diff --git a/docs/utilities/parameters.md b/docs/utilities/parameters.md
index 871ea199e5a..081d22817ab 100644
--- a/docs/utilities/parameters.md
+++ b/docs/utilities/parameters.md
@@ -201,11 +201,12 @@ The DynamoDB Provider does not have any high-level functions, as it needs to kno
You can initialize the DynamoDB provider pointing to [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html) using the **`endpoint_url`** parameter:

=== "dynamodb_local.py"
-    ```python hl_lines="3"
-    from aws_lambda_powertools.utilities import parameters

-    dynamodb_provider = parameters.DynamoDBProvider(table_name="my-table", endpoint_url="http://localhost:8000")
-    ```
+
+    ```python hl_lines="3"
+    from aws_lambda_powertools.utilities import parameters
+
+    dynamodb_provider = parameters.DynamoDBProvider(table_name="my-table", endpoint_url="http://localhost:8000")
+    ```

**DynamoDB table structure for single parameters**

@@ -218,7 +219,7 @@ For single parameters, you must use `id` as the [partition key](https://docs.aws
> **Example**

=== "app.py"
-	With this table, the return value of `dynamodb_provider.get("my-param")` call will be `my-value`.
+    With this table, the return value of the `dynamodb_provider.get("my-param")` call will be `my-value`.

    ```python hl_lines="3 7"
    from aws_lambda_powertools.utilities import parameters
@@ -242,7 +243,6 @@ For example, if you want to retrieve multiple parameters having `my-hash-key` as
| my-hash-key | param-b | my-value-b |
| my-hash-key | param-c | my-value-c |

-
With this table, the return value of the `dynamodb_provider.get_multiple("my-hash-key")` call will be a dictionary like:

```json
diff --git a/docs/utilities/parser.md b/docs/utilities/parser.md
index 11dbaca48a8..47f87e355bb 100644
--- a/docs/utilities/parser.md
+++ b/docs/utilities/parser.md
@@ -230,7 +230,6 @@ You can extend them to include your own models, and yet have all other known fie
3. Defined how part of our EventBridge event should look by overriding the `detail` key within our `OrderEventModel`
4. 
Parser parsed the original event against `OrderEventModel` - ## Envelopes When trying to parse your payloads wrapped in a known structure, you might encounter the following situations: @@ -291,7 +290,6 @@ Here's an example of parsing a model found in an event coming from EventBridge, 3. Parser parsed the original event against the EventBridge model 4. Parser then parsed the `detail` key using `UserModel` - ### Built-in envelopes Parser comes with the following built-in envelopes, where `Model` in the return section is your given model. @@ -307,6 +305,7 @@ Parser comes with the following built-in envelopes, where `Model` in the return | **SnsSqsEnvelope** | 1. Parses data using `SqsModel`.
2. Parses SNS records in `body` key using `SnsNotificationModel`.
3. Parses data in `Message` key using your model and return them in a list. | `List[Model]` | | **ApiGatewayEnvelope** | 1. Parses data using `APIGatewayProxyEventModel`.
2. Parses `body` key using your model and returns it. | `Model` | | **ApiGatewayV2Envelope** | 1. Parses data using `APIGatewayProxyEventV2Model`.
2. Parses `body` key using your model and returns it. | `Model` | + ### Bringing your own envelope You can create your own Envelope model and logic by inheriting from `BaseEnvelope`, and implementing the `parse` method. @@ -475,7 +474,6 @@ Alternatively, you can pass `'*'` as an argument for the decorator so that you c !!! info You can read more about validating list items, reusing validators, validating raw inputs, and a lot more in Pydantic's documentation. - ## Advanced use cases !!! info @@ -557,19 +555,19 @@ Artillery load test sample against a [hello world sample](https://github.com/aws ``` Summary report @ 14:36:07(+0200) 2020-10-23 - Scenarios launched: 10 - Scenarios completed: 10 - Requests completed: 2000 - Mean response/sec: 114.81 - Response time (msec): +Scenarios launched: 10 +Scenarios completed: 10 +Requests completed: 2000 +Mean response/sec: 114.81 +Response time (msec): min: 54.9 max: 1684.9 median: 68 p95: 109.1 p99: 180.3 - Scenario counts: +Scenario counts: 0: 10 (100%) - Codes: +Codes: 200: 2000 ``` @@ -579,18 +577,18 @@ Summary report @ 14:36:07(+0200) 2020-10-23 ``` Summary report @ 14:29:23(+0200) 2020-10-23 - Scenarios launched: 10 - Scenarios completed: 10 - Requests completed: 2000 - Mean response/sec: 111.67 - Response time (msec): +Scenarios launched: 10 +Scenarios completed: 10 +Requests completed: 2000 +Mean response/sec: 111.67 +Response time (msec): min: 54.3 max: 1887.2 median: 66.1 p95: 113.3 p99: 193.1 - Scenario counts: +Scenario counts: 0: 10 (100%) - Codes: +Codes: 200: 2000 ``` diff --git a/docs/utilities/validation.md b/docs/utilities/validation.md index 3a32500f122..7df339b7503 100644 --- a/docs/utilities/validation.md +++ b/docs/utilities/validation.md @@ -134,7 +134,6 @@ Here is a sample custom EventBridge event, where we only validate what's inside --8<-- "docs/shared/validation_basic_jsonschema.py" ``` - This is quite powerful because you can use JMESPath Query language to extract records from [arrays, slice and dice](https://jmespath.org/tutorial.html#list-and-slice-projections), to [pipe expressions](https://jmespath.org/tutorial.html#pipe-expressions) and [function expressions](https://jmespath.org/tutorial.html#functions), where you'd extract what you need before validating the actual payload. ### Built-in envelopes @@ -165,7 +164,6 @@ This utility comes with built-in envelopes to easily extract the payload from po --8<-- "docs/shared/validation_basic_jsonschema.py" ``` - Here is a handy table with built-in envelopes along with their JMESPath expressions in case you want to build your own. Envelope name | JMESPath expression @@ -189,12 +187,13 @@ Envelope name | JMESPath expression JSON Schemas with custom formats like `int64` will fail validation. 
If you have these, you can pass them using the `formats` parameter:

=== "custom_json_schema_type_format.json"
+
    ```json
    {
-        "lastModifiedTime": {
-            "format": "int64",
-            "type": "integer"
-        }
+      "lastModifiedTime": {
+        "format": "int64",
+        "type": "integer"
+      }
    }
    ```

@@ -209,7 +208,7 @@ For each format defined in a dictionary key, you must use a regex, or a function
    custom_format = {
        "int64": True, # simply ignore it,
-        "positive": lambda x: False if x < 0 else True
+        "positive": lambda x: x >= 0
    }

    validate(event=event, schema=schemas.INPUT, formats=custom_format)
@@ -352,6 +351,7 @@ For each format defined in a dictionary key, you must use a regex, or a function
    ```

=== "event.json"
+
    ```json
    {
        "account": "123456789012",
@@ -460,7 +460,6 @@ This sample will decode the value within the `data` key into valid JSON before
--8<-- "docs/shared/validation_basic_jsonschema.py"
```

-
#### powertools_base64 function

Use the `powertools_base64` function to decode any base64 data.
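For example, here is a minimal sketch that decodes base64-encoded JSON under a `data` key before validating it, combining `powertools_base64` with the built-in `powertools_json` function; the inline `INPUT_SCHEMA` and sample payload are illustrative only:

=== "app.py"

    ```python hl_lines="18"
    import base64
    import json

    from aws_lambda_powertools.utilities.validation import validate

    INPUT_SCHEMA = {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {"message": {"type": "string"}},
        "required": ["message"],
    }

    # Simulate an event carrying base64-encoded JSON under the "data" key
    event = {"data": base64.b64encode(json.dumps({"message": "hello"}).encode()).decode()}

    # powertools_base64 decodes the value, powertools_json parses the result,
    # and the parsed payload is then validated against INPUT_SCHEMA
    validate(event=event, schema=INPUT_SCHEMA, envelope="powertools_json(powertools_base64(data))")
    ```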