---
title: Logger
description: Core utility
---

import Note from "../../src/components/Note"

Logger provides an opinionated logger with output structured as JSON.

Key features

  • Capture key fields from Lambda context and cold start, and structure logging output as JSON
  • Log Lambda event when instructed (disabled by default)
    • Enable via POWERTOOLS_LOGGER_LOG_EVENT="true" or explicitly via decorator param
  • Log sampling enables DEBUG log level for a percentage of requests (disabled by default)
    • Enable via POWERTOOLS_LOGGER_SAMPLE_RATE=0.1, ranges from 0 to 1, where 0.1 is 10% and 1 is 100%
  • Append additional keys to structured log at any point in time
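Each of these features is covered in detail below. As a quick preview, here is a sketch combining them (parameter values are illustrative):

from aws_lambda_powertools import Logger

# JSON-structured logs for the "payment" service,
# with DEBUG level enabled for 10% of requests
logger = Logger(service="payment", sample_rate=0.1)

@logger.inject_lambda_context(log_event=True)  # also log the incoming event
def handler(event, context):
   logger.info("Collecting payment")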

Initialization

Set the LOG_LEVEL env var as a start. Here is an example using AWS Serverless Application Model (SAM):

Resources:
    HelloWorldFunction:
        Type: AWS::Serverless::Function
        Properties:
            ...
            Runtime: python3.8
            Environment:
                Variables:
                    LOG_LEVEL: INFO # highlight-line

By default, Logger uses the INFO log level. You can change it either via the level param or the LOG_LEVEL env var.

You can also explicitly set a service name via the service param or the POWERTOOLS_SERVICE_NAME env var. This sets the service key that will be present across all log statements.

from aws_lambda_powertools import Logger
# POWERTOOLS_SERVICE_NAME defined
logger = Logger() # highlight-line

# Explicit definition
Logger(service="payment", level="INFO")

Standard structured keys

Logger will always include the following keys in your structured logging output:

| Key | Type | Example | Description |
| --- | --- | --- | --- |
| timestamp | str | "2020-05-24 18:17:33,774" | Timestamp of actual log statement |
| level | str | "INFO" | Logging level |
| location | str | "collect.handler:1" | Source code location where statement was executed |
| service | str | "payment" | Service name defined; "service_undefined" will be used if unknown |
| sampling_rate | float | 0.1 | Debug logging sampling rate, expressed as a ratio e.g. 0.1 is 10% |
| message | any | "Collecting payment" | Log statement value. Unserializable JSON values will be cast to string |
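Putting these together, a single statement like logger.info("Collecting payment") would produce a record along these lines (illustrative values, before any Lambda context keys are added):

{
   "timestamp": "2020-05-24 18:17:33,774",
   "level": "INFO",
   "location": "collect.handler:1",
   "service": "payment",
   "sampling_rate": 0.0,
   "message": "Collecting payment"
}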

Capturing Lambda context info

You can enrich your structured logs with key Lambda context information via the inject_lambda_context decorator.

from aws_lambda_powertools import Logger

logger = Logger()

@logger.inject_lambda_context
def handler(event, context):
   logger.info("Collecting payment")
   ...
   # You can log entire objects too
   logger.info({
      "operation": "collect_payment",
      "charge_id": event['charge_id']
   })
   ...

When used, this will include the following keys:

| Key | Type | Example |
| --- | --- | --- |
| cold_start | bool | false |
| function_name | str | "example-powertools-HelloWorldFunction-1P1Z6B39FLU73" |
| function_memory_size | int | 128 |
| function_arn | str | "arn:aws:lambda:eu-west-1:012345678910:function:example-powertools-HelloWorldFunction-1P1Z6B39FLU73" |
| function_request_id | str | "899856cb-83d1-40d7-8611-9e78f15f32f4" |
Excerpt output in CloudWatch Logs
{
   "timestamp":"2020-05-24 18:17:33,774",
   "level":"INFO",
   "location":"collect.handler:1",
   "service":"payment",
// highlight-start
   "lambda_function_name":"test",
   "lambda_function_memory_size": 128,
   "lambda_function_arn":"arn:aws:lambda:eu-west-1:12345678910:function:test",
   "lambda_request_id":"52fdfc07-2182-154f-163f-5f0f9a621d72",
   "cold_start": true,
// highlight-end
   "sampling_rate": 0.0,
   "message": "Collecting payment"
}

{
   "timestamp":"2020-05-24 18:17:33,774",
   "level":"INFO",
   "location":"collect.handler:15",
   "service":"payment",
   "lambda_function_name":"test",
   "lambda_function_memory_size": 128,
   "lambda_function_arn":"arn:aws:lambda:eu-west-1:12345678910:function:test",
   "lambda_request_id":"52fdfc07-2182-154f-163f-5f0f9a621d72",
   "cold_start": true,
   "sampling_rate": 0.0,
// highlight-start
   "message":{
      "operation":"collect_payment",
      "charge_id": "ch_AZFlk2345C0"
   }
// highlight-end
}

You can also explicitly log any incoming event using the log_event param or via the POWERTOOLS_LOGGER_LOG_EVENT env var.

This is disabled by default to prevent sensitive info from being logged.

from aws_lambda_powertools import Logger

logger = Logger()

@logger.inject_lambda_context(log_event=True) # highlight-line
def handler(event, context):
   ...

Appending additional keys

You can append your own keys to your existing Logger via structure_logs with the append param.

from aws_lambda_powertools import Logger

logger = Logger()

def handler(event, context):
   if "order_id" in event:
      logger.structure_logs(append=True, order_id=event["order_id"]) # highlight-line
      logger.info("Collecting payment")
      ...
Excerpt output in CloudWatch Logs
{
   "timestamp": "2020-05-24 18:17:33,774",
   "level": "INFO",
   "location": "collect.handler:1",
   "service": "payment",
   "lambda_function_name": "test",
   "lambda_function_memory_size": 128,
   "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
   "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
   "cold_start": true,
   "sampling_rate": 0.0,
   "order_id": "order_id_value", // highlight-line
   "message": "Collecting payment"
}

Reusing Logger across your code

Logger supports inheritance via the child parameter. This allows you to create multiple Loggers across your code base and propagate changes, such as new keys, to all of them.

# POWERTOOLS_SERVICE_NAME: "payment"
import shared # Creates a child logger named "payment.shared"
from aws_lambda_powertools import Logger

logger = Logger()

def handler(event, context):
  shared.inject_payment_id(event) # highlight-line
  logger.structure_logs(append=True, order_id=event["order_id"]) # highlight-line
  ...

# POWERTOOLS_SERVICE_NAME: "payment"
from aws_lambda_powertools import Logger

logger = Logger(child=True) # highlight-line

def inject_payment_id(event):
    logger.structure_logs(append=True, payment_id=event["payment_id"])

In this example, Logger will create a parent logger named payment and a child logger named payment.shared. Changes made in either Logger will be propagated between them.

If you ever forget to use the child param, we will return an existing Logger with the same service name.

Child Loggers are named following the convention service.filename.
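As a sketch of that fallback, assume a shared module accidentally omits the child param while using the same service name:

# shared.py - child param accidentally omitted
from aws_lambda_powertools import Logger

# The existing Logger registered for the "payment" service is returned,
# rather than a brand new one being created
logger = Logger(service="payment")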

Sampling debug logs

You can dynamically set a percentage of your logs to DEBUG level using the sample_rate param or via the POWERTOOLS_LOGGER_SAMPLE_RATE env var.

Sampling calculation happens at Logger class initialization. This means that, when configured, sampling is more likely to happen during concurrent requests or infrequent invocations, as new Lambda execution contexts are created rather than reused.

If you want this logic to happen on every invocation regardless of whether Lambda reuses the execution environment, create your Logger inside your Lambda handler (see the sketch after the excerpt below).

from aws_lambda_powertools import Logger

# Enable debug logs for 10% of requests (sample_rate=0.1)
logger = Logger(sample_rate=0.1) # highlight-line

def handler(event, context):
   if "order_id" in event:
      logger.info("Collecting payment")
      ...
Excerpt output in CloudWatch Logs
{
   "timestamp": "2020-05-24 18:17:33,774",
   "level": "INFO",
   "location": "collect.handler:1",
   "service": "payment",
   "lambda_function_name": "test",
   "lambda_function_memory_size": 128,
   "lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
   "lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
   "cold_start": true,
   "sampling_rate": 0.1, // highlight-line
   "message": "Collecting payment"
}
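If you need the sampling decision re-evaluated per invocation, here is a minimal sketch of the in-handler approach mentioned above:

from aws_lambda_powertools import Logger

def handler(event, context):
   # Sampling is calculated at initialization, so constructing the Logger
   # here re-runs the 10% sampling decision on every invocation,
   # regardless of execution context reuse
   logger = Logger(service="payment", sample_rate=0.1)
   logger.info("Collecting payment")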

Migrating from other Loggers

If you're migrating from other Loggers, there are a few key points to be aware of: Service parameter, Inheriting Loggers, Overriding Log records, and Logging exceptions.

The service parameter

Service defines what the function is responsible for, or is part of (e.g. the payment service), and determines the name of the Logger.

For Logger, the service is the logging key customers can use to search log operations for one or more functions - For example, search for all errors, or messages like X, where service is payment.
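For instance, two functions that are both part of the payment service can share the same service value, so their records can be searched together:

from aws_lambda_powertools import Logger

# Every record carries "service": "payment", so errors from all functions
# sharing this value can be found with a single search
logger = Logger(service="payment")
logger.error("Unable to collect payment")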

Inheriting Loggers

Python Logging hierarchy happens via the dot notation: service, service.child, service.child_2.

For inheritance, Logger uses the child=True parameter along with the same service value across Loggers.

For child Loggers, we introspect the name of your module where Logger(child=True, service="name") is called, and we name your Logger as {service}.{filename}.
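As an illustration, assuming a module named payment_processing.py (a hypothetical file name), the Logger below would be named payment.payment_processing:

# payment_processing.py
from aws_lambda_powertools import Logger

# Named "{service}.{filename}", i.e. "payment.payment_processing",
# inheriting from the parent "payment" Logger
logger = Logger(child=True, service="payment")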

A common issue when migrating from other Loggers is that service might be defined in the parent Logger (no child param), and not defined in the child Logger:

# app.py
import my_module
from aws_lambda_powertools import Logger

logger = Logger(service="payment") # highlight-line
...

# my_module.py
from aws_lambda_powertools import Logger

logger = Logger(child=True) # highlight-line

In this case, Logger will register a Logger named payment, and a Logger named service_undefined. The latter doesn't inherit from the parent and has no handler, so no message is logged to standard output.

This can be fixed either by ensuring both have the service value payment, or simply by using the POWERTOOLS_SERVICE_NAME environment variable so the service value is the same across all Loggers when not explicitly set.
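A minimal sketch of the environment variable fix, assuming POWERTOOLS_SERVICE_NAME is set to payment in the function's configuration:

# app.py - POWERTOOLS_SERVICE_NAME: "payment"
import my_module
from aws_lambda_powertools import Logger

logger = Logger()  # parent Logger named "payment"

# my_module.py - same environment variable applies
from aws_lambda_powertools import Logger

logger = Logger(child=True)  # child Logger named "payment.my_module"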

Overriding Log records

You might want to continue to use the same date formatting style, or override location to display the package.function_name:line_number as you previously had.

Logger allows you to either change the format or suppress the following keys altogether at initialization: location, timestamp, level, and datefmt.

from sys import stdout

from aws_lambda_powertools import Logger

# override default values for location and timestamp format
logger = Logger(stream=stdout, location="[%(funcName)s] %(module)s", datefmt="fake-datefmt") # highlight-line

# suppress location key
logger = Logger(stream=stdout, location=None) # highlight-line

Alternatively, you can also change the order of the following log record keys via the log_record_order parameter: level, location, message, and timestamp.

from sys import stdout

from aws_lambda_powertools import Logger

# make message the first key
logger = Logger(stream=stdout, log_record_order=["message"]) # highlight-line

# Default key sorting order
logger = Logger(stream=stdout, log_record_order=["level","location","message","timestamp"]) # highlight-line

Logging exceptions

When logging exceptions, Logger will add a new key named exception, and will serialize the full traceback as a string.

from aws_lambda_powertools import Logger
logger = Logger()

try:
    raise ValueError("something went wrong")
except Exception:
    logger.exception("Received an exception") # highlight-line
Excerpt output in CloudWatch Logs
{
   "level": "ERROR",
   "location": "<module>:4",
   "message": "Received an exception",
   "timestamp": "2020-08-28 18:11:38,886",
   "service": "service_undefined",
   "sampling_rate": 0.0,
   "exception": "Traceback (most recent call last):\n  File \"<input>\", line 2, in <module>\nValueError: something went wrong"
}