---
title: Metrics
description: Core utility
---

import Note from "../../src/components/Note"

Metrics creates custom metrics asynchronously by logging metrics to standard output following the Amazon CloudWatch Embedded Metric Format (EMF).

**Key features**

* Aggregate up to 100 metrics using a single CloudWatch EMF object (large JSON blob)
* Validate against common metric definition mistakes (metric unit, values, max dimensions, max metrics, etc.)
* Metrics are created asynchronously by the CloudWatch service, no custom stacks needed
* Context manager to create a one-off metric with a different dimension

## Initialization

Set the `POWERTOOLS_SERVICE_NAME` and `POWERTOOLS_METRICS_NAMESPACE` env vars as a start. Here is an example using AWS Serverless Application Model (SAM):

```yaml
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Runtime: python3.8
      Environment:
        Variables:
          POWERTOOLS_SERVICE_NAME: payment # highlight-line
          POWERTOOLS_METRICS_NAMESPACE: ServerlessAirline # highlight-line
```

We recommend you use your application or main service as a metric namespace. You can explicitly set a namespace via the `namespace` parameter or the `POWERTOOLS_METRICS_NAMESPACE` env var. This sets the namespace key that will be used for all metrics. You can also pass a service name via the `service` parameter or the `POWERTOOLS_SERVICE_NAME` env var. This will create a dimension with the service name.

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

# POWERTOOLS_METRICS_NAMESPACE and POWERTOOLS_SERVICE_NAME defined
metrics = Metrics() # highlight-line

# Explicit definition
Metrics(namespace="ServerlessAirline", service="orders")  # creates a default dimension {"service": "orders"} under the namespace "ServerlessAirline"
```

You can initialize Metrics anywhere in your code, as many times as you need; it will keep track of your aggregate metrics in memory.
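As a minimal sketch of that behavior, two modules can each call `Metrics()` and contribute to the same in-memory metric set (module and function names below are illustrative, and the env vars from Initialization are assumed to be set):

```python
# payment.py — hypothetical helper module
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics()  # namespace/service picked up from env vars

def record_payment():
    metrics.add_metric(name="PaymentAccepted", unit=MetricUnit.Count, value=1)


# app.py — handler module; both Metrics() instances share the aggregate metric set
from aws_lambda_powertools.metrics import Metrics
from payment import record_payment

metrics = Metrics()

@metrics.log_metrics
def lambda_handler(evt, ctx):
    record_payment()  # the metric added in payment.py is flushed by log_metrics here
    ...
```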

## Creating metrics

You can create metrics using `add_metric`, and manually create dimensions for all your aggregate metrics using `add_dimension`.

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")
# highlight-start
metrics.add_dimension(name="environment", value="prod")
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
# highlight-end
```

The `MetricUnit` enum facilitates finding a metric unit supported by CloudWatch. Alternatively, you can pass the unit as a string if you already know it, e.g. `"Count"`.
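As a quick sketch, both calls below record the same unit; the string form must match a CloudWatch unit name exactly:

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")

# Equivalent ways to specify a unit: the enum member or its string name
metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
metrics.add_metric(name="SuccessfulBooking", unit="Count", value=1)
```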

CloudWatch EMF supports a maximum of 100 metrics per EMF object. Metrics will automatically flush all metrics when you add the 100th metric; subsequent metrics will be aggregated into a new EMF object.
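As a rough sketch of that behavior (the metric name and loop are illustrative):

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")

for record_id in range(150):
    # On the 100th call, the aggregated EMF object is serialized and flushed
    # to standard output; the remaining 50 metrics start a new EMF object.
    metrics.add_metric(name="RecordsProcessed", unit=MetricUnit.Count, value=1)
```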

## Creating a metric with a different dimension

CloudWatch EMF uses the same dimensions across all your metrics. Use `single_metric` if you have a metric that should have different dimensions.

Generally, this would be an edge case, since you pay per unique metric. Keep the following formula in mind:

**unique metric = (metric_name + dimension_name + dimension_value)**

For example, `SuccessfulBooking` recorded with dimension `service=booking` and again with `service=payment` counts as two unique metrics.
```python
from aws_lambda_powertools.metrics import MetricUnit, single_metric

with single_metric(name="ColdStart", unit=MetricUnit.Count, value=1, namespace="ExampleApplication") as metric: # highlight-line
    metric.add_dimension(name="function_context", value="$LATEST")
    ...
```
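When the `with` block exits, `single_metric` serializes and flushes that one metric to standard output, independently of your aggregate metric set.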

## Flushing metrics

As you finish adding all your metrics, you need to serialize and flush them to standard output. You can do that right before you return your response to the caller via `log_metrics`.

```python
from aws_lambda_powertools.metrics import Metrics, MetricUnit

# POWERTOOLS_METRICS_NAMESPACE is assumed to be set, since no namespace is passed
metrics = Metrics(service="ExampleService")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

@metrics.log_metrics # highlight-line
def lambda_handler(evt, ctx):
    metrics.add_dimension(name="service", value="booking")
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    ...
```

The `log_metrics` decorator validates, serializes, and flushes all your metrics. During metrics validation, a `SchemaValidationError` exception will be raised if any of the following criteria is not met (see the sketch after this list):

* At least one metric and dimension
* Maximum of 9 dimensions
* Namespace is set, and no more than one
* Metric units must be supported by CloudWatch
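As a minimal sketch of a validation failure — assuming `SchemaValidationError` is importable from `aws_lambda_powertools.metrics` (the import path is an assumption):

```python
from aws_lambda_powertools.metrics import Metrics, SchemaValidationError  # import path is an assumption

metrics = Metrics(namespace="ExampleApplication", service="booking")

@metrics.log_metrics
def lambda_handler(evt, ctx):
    return "ok"  # no metric was added, so flushing fails the "at least one metric" validation

try:
    lambda_handler({}, None)
except SchemaValidationError as exc:
    print(f"Metric validation failed: {exc}")
```
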
<Note type="warning">
  When nesting multiple middlewares, you should use <code>log_metrics</code> as your last decorator wrapping all subsequent ones.
</Note>
```python
from aws_lambda_powertools import Tracer
from aws_lambda_powertools.metrics import Metrics, MetricUnit

tracer = Tracer()
metrics = Metrics(namespace="ExampleApplication", service="booking")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

# highlight-start
@metrics.log_metrics
@tracer.capture_lambda_handler
# highlight-end
def lambda_handler(evt, ctx):
    metrics.add_metric(name="BookingConfirmation", unit="Count", value=1)
    ...
```
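Since Python decorators apply bottom-up, `tracer.capture_lambda_handler` wraps the handler first and `log_metrics` wraps the result; metrics are therefore validated and flushed only after the traced handler returns.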

## Flushing metrics manually

If you prefer not to use `log_metrics` because you might want to encapsulate additional logic when doing so, you can manually flush and clear metrics as follows:

```python
import json
from aws_lambda_powertools.metrics import Metrics, MetricUnit

metrics = Metrics(namespace="ExampleApplication", service="booking")
metrics.add_metric(name="ColdStart", unit="Count", value=1)

# highlight-start
your_metrics_object = metrics.serialize_metric_set()
metrics.clear_metrics()
print(json.dumps(your_metrics_object))
# highlight-end
```
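For orientation, the serialized metric set is a dict following the EMF schema. A rough sketch of its shape for the example above — the values are illustrative, and the real output contains additional detail:

```python
# Illustrative shape only; values below are made up for this sketch
serialized = {
    "_aws": {
        "Timestamp": 1592237875494,  # epoch milliseconds
        "CloudWatchMetrics": [
            {
                "Namespace": "ExampleApplication",
                "Dimensions": [["service"]],
                "Metrics": [{"Name": "ColdStart", "Unit": "Count"}],
            }
        ],
    },
    "service": "booking",
    "ColdStart": [1.0],  # metric values are aggregated per EMF object
}
```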

## Testing your code

Use the `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.

```bash
POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest
```
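Alternatively, a minimal sketch of setting these env vars inside a test using pytest's `monkeypatch` and `capsys` fixtures (the test and handler names are hypothetical):

```python
import json

from aws_lambda_powertools.metrics import Metrics, MetricUnit


def test_successful_booking_metric(monkeypatch, capsys):
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "Application")
    monkeypatch.setenv("POWERTOOLS_SERVICE_NAME", "Example")

    metrics = Metrics()
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)

    @metrics.log_metrics
    def handler(evt, ctx):
        return "ok"

    handler({}, None)

    # log_metrics prints the EMF object to standard output
    emf_blob = json.loads(capsys.readouterr().out)
    assert "SuccessfulBooking" in emf_blob
```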

You can ignore this if you are explicitly setting the namespace and default dimension by passing the `namespace` and `service` parameters when initializing Metrics: `metrics = Metrics(namespace="ApplicationName", service="ServiceName")`.