---
title: Metrics
description: Core utility
---

import Note from "../../src/components/Note"

Metrics creates custom metrics asynchronously by logging them to standard output, following the Amazon CloudWatch Embedded Metric Format (EMF).

## Key features

* Aggregate up to 100 metrics using a single CloudWatch EMF object (large JSON blob)
* Validate against common metric definition mistakes (metric unit, values, max dimensions, max metrics, etc.)
* Metrics are created asynchronously by the CloudWatch service; no custom stacks needed
* Context manager to create a one-off metric with a different dimension

## Initialization

Set the `POWERTOOLS_SERVICE_NAME` env var as a start. Here is an example using AWS Serverless Application Model (SAM):

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Runtime: python3.8
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: ServerlessAirline # highlight-line
```

We recommend you use your application or main service as the metric namespace. You can explicitly set a namespace name via the `service` param or via the `POWERTOOLS_SERVICE_NAME` env var. This sets the namespace key that will be used for all metrics.

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

# POWERTOOLS_SERVICE_NAME defined
metrics = Metrics() # highlight-line

# Explicit definition
Metrics(service="ServerlessAirline")  # sets namespace to "ServerlessAirline"
```

You can initialize Metrics anywhere in your code, as many times as you need; it keeps track of your aggregate metrics in memory.
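As an illustrative sketch of that behavior (the module layout and metric names are assumptions, and `POWERTOOLS_SERVICE_NAME` is assumed to be defined as shown above), two separate initializations contribute to the same in-memory aggregate set:

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

def record_payment():
    # A separate initialization elsewhere in your codebase...
    metrics = Metrics()
    metrics.add_metric(name="PaymentAttempt", unit=MetricUnit.Count, value=1)

metrics = Metrics()
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
record_payment()

# ...both metrics are tracked in the same aggregate set flushed later
```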

## Creating metrics

You can create metrics using `add_metric`, and set dimensions for all your aggregate metrics using `add_dimension`.

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(service="ExampleService")
# highlight-start
metrics.add_metric(name="ColdStart", unit=MetricUnit.Count, value=1)
metrics.add_dimension(name="service", value="booking")
# highlight-end
```

The `MetricUnit` enum facilitates finding a metric unit supported by CloudWatch. Alternatively, you can pass the unit as a string if you already know it, e.g. `"Count"`.

<Note type="warning">
  CloudWatch EMF supports a maximum of 100 metrics per EMF object. Metrics will automatically flush all metrics when adding the 100th metric; subsequent metrics are aggregated into a new EMF object.
</Note>
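A small sketch of that rollover behavior, grounded in the note above (the loop and generated metric names are assumptions for demonstration only):

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(service="ExampleService")
metrics.add_dimension(name="service", value="booking")

# Adding the 100th metric triggers an automatic flush to standard output;
# metrics 101-150 are aggregated into a fresh EMF object
for i in range(150):
    metrics.add_metric(name=f"Operation{i}", unit=MetricUnit.Count, value=1)
```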

## Creating a metric with a different dimension

CloudWatch EMF uses the same dimensions across all your metrics. Use `single_metric` if you have a metric that should have different dimensions.

Generally, this would be an edge case, since you pay for each unique metric. Keep the following formula in mind:

`unique metric = (metric_name + dimension_name + dimension_value)`
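For example, applying that formula, emitting `ColdStart` against the dimension `service=booking` and again against `service=payment` counts as two unique metrics for billing purposes.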
```python
from aws_lambda_powertools.metrics import MetricUnit, single_metric

with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, service="ExampleService") as metric: # highlight-line
    metric.add_dimension(name="function_context", value="$LATEST")
    ...
```

## Flushing metrics

As you finish adding all your metrics, you need to serialize and flush them to standard output. You can do that right before you return your response to the caller via the `log_metrics` decorator.

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(service="ExampleService")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

@metrics.log_metrics # highlight-line
def lambda_handler(evt, ctx):
    metrics.add_dimension(name="service", value="booking")
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    ...
```

The `log_metrics` decorator validates, serializes, and flushes all your metrics. During metrics validation, if any of the following criteria is not met, a `SchemaValidationError` exception will be raised (see the sketch after this list):

* At least one metric and one dimension
* Maximum of 9 dimensions
* Namespace is set, and no more than one
* Metric units must be supported by CloudWatch
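A minimal sketch of handling that validation failure, assuming `SchemaValidationError` is exported from `aws_lambda_powertools.metrics` (the empty handler is a contrived assumption to force the failure):

```python
from aws_lambda_powertools.metrics import Metrics, SchemaValidationError

metrics = Metrics(service="ExampleService")

@metrics.log_metrics
def lambda_handler(evt, ctx):
    # Intentionally adds no metrics: flushing an empty set fails validation
    return "ok"

try:
    lambda_handler({}, None)
except SchemaValidationError as e:
    # React, log, or re-raise depending on how strict you want to be
    print(f"Metric validation failed: {e}")
```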
<Note type="warning">
  When nesting multiple middlewares, you should use <strong>log_metrics as your last decorator wrapping all subsequent ones</strong>.
</Note>
```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(service="ExampleService")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

# highlight-start
@metrics.log_metrics
@tracer.capture_lambda_handler
# highlight-end
def lambda_handler(evt, ctx):
    metrics.add_dimension(name="service", value="booking")
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    ...
```

## Flushing metrics manually

If you prefer not to use `log_metrics` because you might want to encapsulate additional logic when doing so, you can manually flush and clear metrics as follows:

```python
import json
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(service="ExampleService")
metrics.add_metric(name="ColdStart", unit="Count", value=1)
metrics.add_dimension(name="service", value="booking")

# highlight-start
your_metrics_object = metrics.serialize_metric_set()
metrics.clear_metrics()
print(json.dumps(your_metrics_object))
# highlight-end
```
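Inside a handler, a minimal sketch of the same manual flush (the handler body and metric names are illustrative) would serialize, clear, and print right before returning:

```python
import json
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(service="ExampleService")

def lambda_handler(evt, ctx):
    metrics.add_metric(name="BookingConfirmation", unit=MetricUnit.Count, value=1)
    metrics.add_dimension(name="service", value="booking")

    # Serialize the aggregated metric set, reset in-memory state, and emit
    # the EMF blob to standard output for CloudWatch to process
    emf_object = metrics.serialize_metric_set()
    metrics.clear_metrics()
    print(json.dumps(emf_object))

    return {"statusCode": 200}
```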

## Testing your code

Use the `POWERTOOLS_SERVICE_NAME` env var when unit testing your code to ensure a metric namespace object is created, and your code doesn't fail validation.

```bash
POWERTOOLS_SERVICE_NAME="Example" python -m pytest
```
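For instance, a hedged pytest sketch (the `app` module, handler name, and metric name are assumptions about your project layout) could assert the EMF blob reaches standard output:

```python
# test_metrics.py - run with the POWERTOOLS_SERVICE_NAME env var set as above
from app import lambda_handler  # your handler module; name is an assumption

def test_emf_blob_is_printed(capsys):
    lambda_handler({}, None)

    # log_metrics flushes the serialized EMF object to stdout,
    # which pytest captures via the capsys fixture
    output = capsys.readouterr().out
    assert "BookingConfirmation" in output
```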

You can ignore this if you are explicitly setting the namespace by passing a service name when initializing Metrics: `metrics = Metrics(service="ServiceName")`.