docs(navigation): standardize link targets to enhance customer experience #2420

Merged: 6 commits, Jun 8, 2023

8 changes: 4 additions & 4 deletions docs/core/event_handler/api_gateway.md
@@ -102,7 +102,7 @@ When using Amazon Application Load Balancer (ALB) to front your Lambda functions

#### Lambda Function URL

When using [AWS Lambda Function URL](https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html), you can use `LambdaFunctionUrlResolver`.
When using [AWS Lambda Function URL](https://docs.aws.amazon.com/lambda/latest/dg/urls-configuration.html){target="_blank"}, you can use `LambdaFunctionUrlResolver`.

=== "getting_started_lambda_function_url_resolver.py"

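As a quick illustration of the resolver (separate from the snippet the docs include above), a minimal sketch might look like the following; the `/todos` route is an assumption made for this example.

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.event_handler import LambdaFunctionUrlResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()
app = LambdaFunctionUrlResolver()


@app.get("/todos")
def get_todos() -> dict:
    # Return values are serialized to JSON and wrapped in a Function URL response
    return {"todos": ["write docs", "review PR"]}


def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
```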
@@ -253,7 +253,7 @@ When using [Custom Domain API Mappings feature](https://docs.aws.amazon.com/apig

**Scenario**: You have a custom domain `api.mydomain.dev`. Then you set `/payment` API Mapping to forward any payment requests to your Payments API.

**Challenge**: This means your `path` value for any API requests will always contain `/payment/<actual_request>`, leading to HTTP 404 as Event Handler is trying to match what's after `payment/`. This gets further complicated with an [arbitrary level of nesting](https://github.com/awslabs/aws-lambda-powertools-roadmap/issues/34).
**Challenge**: This means your `path` value for any API requests will always contain `/payment/<actual_request>`, leading to HTTP 404 as Event Handler is trying to match what's after `payment/`. This gets further complicated with an [arbitrary level of nesting](https://github.com/awslabs/aws-lambda-powertools-roadmap/issues/34){target="_blank"}.

To address this API Gateway behavior, we use the `strip_prefixes` parameter to account for these prefixes, which are now injected into the path regardless of which type of API Gateway you're using.

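For illustration only (the `/subscriptions` route is made up), a sketch of `strip_prefixes` with the `/payment` mapping described above could look like this:

```python
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

# A request arriving as "/payment/subscriptions" is matched against "/subscriptions"
app = APIGatewayRestResolver(strip_prefixes=["/payment"])


@app.get("/subscriptions")
def get_subscriptions() -> dict:
    return {"subscriptions": []}


def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
```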
@@ -344,7 +344,7 @@ You can use the `Response` class to have full control over the response. For exa
Some event sources require headers and cookies to be encoded as `multiValueHeaders`.

???+ warning "Using multiple values for HTTP headers in ALB?"
Make sure you [enable the multi value headers feature](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html#multi-value-headers) to serialize response headers correctly.
Make sure you [enable the multi value headers feature](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/lambda-functions.html#multi-value-headers){target="_blank"} to serialize response headers correctly.

=== "fine_grained_responses.py"

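As a rough sketch of the `Response` class referenced in this hunk (route, header, and body values are illustrative, not taken from the docs' example file):

```python
from aws_lambda_powertools.event_handler import (
    APIGatewayRestResolver,
    Response,
    content_types,
)

app = APIGatewayRestResolver()


@app.get("/ping")
def ping() -> Response:
    # Full control over status code, content type, headers, and body
    return Response(
        status_code=200,
        content_type=content_types.APPLICATION_JSON,
        body='{"status": "ok"}',
        headers={"Cache-Control": "max-age=60"},
    )
```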
@@ -420,7 +420,7 @@ Like `compress` feature, the client must send the `Accept` header with the corre

### Debug mode

You can enable debug mode via `debug` param, or via `POWERTOOLS_DEV` [environment variable](../../index.md#environment-variables).
You can enable debug mode via `debug` param, or via `POWERTOOLS_DEV` [environment variable](../../index.md#environment-variables){target="_blank"}.

This will enable full traceback errors in the response, print requests and responses, and set CORS in development mode.

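A minimal sketch of what enabling it looks like, using the `debug` parameter documented above:

```python
from aws_lambda_powertools.event_handler import APIGatewayRestResolver

# Alternatively, set POWERTOOLS_DEV=true in the environment and leave debug unset
app = APIGatewayRestResolver(debug=True)
```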
10 changes: 5 additions & 5 deletions docs/core/logger.md
@@ -108,7 +108,7 @@ You can set a Correlation ID using `correlation_id_path` param by passing a [JME

#### set_correlation_id method

You can also use the `set_correlation_id` method to inject it anywhere else in your code. The example below uses the [Event Source Data Classes utility](../utilities/data_classes.md) to easily access event properties.
You can also use the `set_correlation_id` method to inject it anywhere else in your code. The example below uses the [Event Source Data Classes utility](../utilities/data_classes.md){target="_blank"} to easily access event properties.

=== "set_correlation_id_method.py"

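A minimal sketch of the idea (separate from the docs' own tab above), assuming an API Gateway proxy event as input:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.data_classes import APIGatewayProxyEvent
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger()


def lambda_handler(event: dict, context: LambdaContext) -> dict:
    # Wrap the raw event to access its properties with type hints
    api_event = APIGatewayProxyEvent(event)
    logger.set_correlation_id(api_event.request_context.request_id)

    logger.info("Collecting payment")
    return {"statusCode": 200}
```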
@@ -242,7 +242,7 @@ You can remove any additional key from Logger state using `remove_keys`.

#### Clearing all state

Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html), this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use `clear_state=True` param in `inject_lambda_context` decorator.
Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html){target="_blank"}, this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use `clear_state=True` param in `inject_lambda_context` decorator.

???+ tip "Tip: When is this useful?"
It is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. Any key with `None` value is automatically removed by Logger.
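A short sketch of the pattern (the `order_id` key is illustrative):

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.utilities.typing import LambdaContext

logger = Logger(service="payment")


@logger.inject_lambda_context(clear_state=True)
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    if event.get("order_id"):
        # Dropped at the start of the next invocation thanks to clear_state=True
        logger.append_keys(order_id=event["order_id"])

    logger.info("Processing order")
    return {"statusCode": 200}
```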
@@ -360,7 +360,7 @@ You can use any of the following built-in JMESPath expressions as part of [injec

### Reusing Logger across your code

Similar to [Tracer](./tracer.md#reusing-tracer-across-your-code), a new instance that uses the same `service` name - env var or explicit parameter - will reuse a previous Logger instance. Just like `logging.getLogger("logger_name")` would in the standard library if called with the same logger name.
Similar to [Tracer](./tracer.md#reusing-tracer-across-your-code){target="_blank"}, a new instance that uses the same `service` name - env var or explicit parameter - will reuse a previous Logger instance. Just like `logging.getLogger("logger_name")` would in the standard library if called with the same logger name.

Notice in the CloudWatch Logs output how `payment_id` appeared as expected when logging in `collect.py`.

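To make the sharing behavior concrete, here is a sketch across two hypothetical modules (file names and keys are illustrative):

```python
# payment.py
from aws_lambda_powertools import Logger

logger = Logger(service="payment")
logger.append_keys(payment_id="1234")
```

```python
# collect.py
from aws_lambda_powertools import Logger

# Same service name, so the previously configured Logger instance is reused
logger = Logger(service="payment")
logger.info("Charging card")  # payment_id appears in this log record too
```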
@@ -407,7 +407,7 @@ You can use values ranging from `0.0` to `1` (100%) when setting `POWERTOOLS_LOG
The sampling decision happens at Logger initialization. This means sampling may happen significantly more or less often than expected, depending on your traffic patterns, for example a steady low number of invocations and thus few cold starts.

???+ note
Open a [feature request](https://github.com/awslabs/aws-lambda-powertools-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=) if you want Logger to calculate sampling for every invocation
Open a [feature request](https://github.com/awslabs/aws-lambda-powertools-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=){target="_blank"} if you want Logger to calculate sampling for every invocation

=== "sampling_debug_logs.py"

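A sketch of the equivalent configuration in code, using the `sample_rate` parameter instead of the environment variable:

```python
from aws_lambda_powertools import Logger

# Equivalent to POWERTOOLS_LOGGER_SAMPLE_RATE=0.1: roughly 10% of Logger
# initializations (and therefore cold starts) switch the log level to DEBUG
logger = Logger(service="payment", sample_rate=0.1)

logger.debug("Emitted only when this invocation was sampled")
logger.info("Always emitted")
```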
@@ -744,4 +744,4 @@ Here's an example where we persist `payment_id` not `request_id`. Note that `pay
<!-- markdownlint-disable MD013 -->
### How do I aggregate and search Powertools for AWS Lambda (Python) logs across accounts?

As of now, ElasticSearch (ELK) or 3rd party solutions are best suited to this task. Please refer to this [discussion for more details](https://github.com/awslabs/aws-lambda-powertools-python/issues/460)
As of now, ElasticSearch (ELK) or 3rd party solutions are best suited to this task. Please refer to this [discussion for more details](https://github.com/awslabs/aws-lambda-powertools-python/issues/460){target="_blank"}
20 changes: 10 additions & 10 deletions docs/core/metrics.md
@@ -3,9 +3,9 @@ title: Metrics
description: Core utility
---

Metrics creates custom metrics asynchronously by logging metrics to standard output following [Amazon CloudWatch Embedded Metric Format (EMF)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format.html).
Metrics creates custom metrics asynchronously by logging metrics to standard output following [Amazon CloudWatch Embedded Metric Format (EMF)](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Embedded_Metric_Format.html){target="_blank"}.

These metrics can be visualized through [Amazon CloudWatch Console](https://console.aws.amazon.com/cloudwatch/).
These metrics can be visualized through [Amazon CloudWatch Console](https://console.aws.amazon.com/cloudwatch/){target="_blank"}.

## Key features

@@ -22,7 +22,7 @@ If you're new to Amazon CloudWatch, there are two terminologies you must be awar
* **Dimensions**. Metrics metadata in key-value format. They help you slice and dice metrics visualization, for example `ColdStart` metric by Payment `service`.
* **Metric**. It's the name of the metric, for example: `SuccessfulBooking` or `UpdatedBooking`.
* **Unit**. It's a value representing the unit of measure for the corresponding metric, for example: `Count` or `Seconds`.
* **Resolution**. It's a value representing the storage resolution for the corresponding metric. Metrics can be either Standard or High resolution. Read more [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html#high-resolution-metrics).
* **Resolution**. It's a value representing the storage resolution for the corresponding metric. Metrics can be either Standard or High resolution. Read more [here](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html#high-resolution-metrics){target="_blank"}.

<figure>
<img src="../../media/metrics_terminology.png" />
@@ -83,7 +83,7 @@ You can create metrics using `add_metric`, and you can create dimensions for all

### Adding high-resolution metrics

You can create [high-resolution metrics](https://aws.amazon.com/about-aws/whats-new/2023/02/amazon-cloudwatch-high-resolution-metric-extraction-structured-logs/) by passing the `resolution` parameter to `add_metric`.
You can create [high-resolution metrics](https://aws.amazon.com/about-aws/whats-new/2023/02/amazon-cloudwatch-high-resolution-metric-extraction-structured-logs/){target="_blank"} by passing the `resolution` parameter to `add_metric`.

???+ tip "When is it useful?"
High-resolution metrics are data with a granularity of one second and are very useful in several situations such as telemetry, time series, real-time incident management, and others.
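As a rough sketch of the parameter described above (the metric name and value are made up):

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricResolution, MetricUnit

metrics = Metrics(namespace="ServerlessAirline", service="booking")

# Stored at one-second granularity instead of the standard one-minute resolution
metrics.add_metric(
    name="BookingLatency",
    unit=MetricUnit.Milliseconds,
    value=125,
    resolution=MetricResolution.High,
)
```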
@@ -154,7 +154,7 @@ This decorator also **validates**, **serializes**, and **flushes** all your metr

* Maximum of 29 user-defined dimensions
* Namespace is set, and no more than one
* Metric units must be [supported by CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html)
* Metric units must be [supported by CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html){target="_blank"}

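A sketch of the decorator that performs the validation and flushing described above (the metric is illustrative):

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics(namespace="ServerlessAirline", service="booking")


# Validates against the rules above, serializes to EMF, and flushes to stdout on return
@metrics.log_metrics
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}
```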
#### Raising SchemaValidationError on empty metrics

@@ -191,7 +191,7 @@ If it's a cold start invocation, this feature will:
This has the advantage of keeping cold start metric separate from your application metrics, where you might have unrelated dimensions.

???+ info
We do not emit 0 as a value for ColdStart metric for cost reasons. [Let us know](https://github.com/awslabs/aws-lambda-powertools-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=) if you'd prefer a flag to override it.
We do not emit 0 as a value for ColdStart metric for cost reasons. [Let us know](https://github.com/awslabs/aws-lambda-powertools-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=){target="_blank"} if you'd prefer a flag to override it.

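A minimal sketch of the flag described in this section (the application metric is illustrative):

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
from aws_lambda_powertools.utilities.typing import LambdaContext

metrics = Metrics(namespace="ServerlessAirline", service="booking")


# Emits a ColdStart metric on cold invocations only, kept apart from application metrics
@metrics.log_metrics(capture_cold_start_metric=True)
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}
```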
## Advanced

@@ -219,7 +219,7 @@ You can add high-cardinality data as part of your Metrics log with `add_metadata
CloudWatch EMF uses the same dimensions across all your metrics. Use `single_metric` if you have a metric that should have different dimensions.

???+ info
Generally, this would be an edge case since you [pay for unique metric](https://aws.amazon.com/cloudwatch/pricing). Keep the following formula in mind:
Generally, this would be an edge case since you [pay for unique metric](https://aws.amazon.com/cloudwatch/pricing){target="_blank"}. Keep the following formula in mind:

**unique metric = (metric_name + dimension_name + dimension_value)**

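A sketch of the `single_metric` context manager (names and values are illustrative):

```python
from aws_lambda_powertools.metrics import MetricUnit, single_metric

# This metric gets its own dimension set, independent of the shared EMF dimensions
with single_metric(
    name="ColdStartMemory",
    unit=MetricUnit.Megabytes,
    value=128,
    namespace="ServerlessAirline",
) as metric:
    metric.add_dimension(name="function_version", value="$LATEST")
```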
@@ -292,7 +292,7 @@ The former creates metrics asynchronously via CloudWatch Logs, and the latter us
!!! important "Key concept"
CloudWatch [considers a metric unique](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html#Metric){target="_blank"} by a combination of metric **name**, metric **namespace**, and zero or more metric **dimensions**.

With EMF, metric dimensions are shared with any metrics you define. With `PutMetricData` API, you can set a [list](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) defining one or more metrics with distinct dimensions.
With EMF, metric dimensions are shared with any metrics you define. With `PutMetricData` API, you can set a [list](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html){target="_blank"} defining one or more metrics with distinct dimensions.

This is a subtle yet important distinction. Imagine you had the following metrics to emit:

@@ -329,7 +329,7 @@ That is why `Metrics` shares data across instances by default, as that covers 80

For example, `Metrics(namespace="ServerlessAirline", service="booking")`

Make sure to set `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` before running your tests to prevent failing with a `SchemaValidation` exception. You can set them before you run tests or via pytest plugins like [dotenv](https://pypi.org/project/pytest-dotenv/).
Make sure to set `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` before running your tests to prevent failing with a `SchemaValidation` exception. You can set them before you run tests or via pytest plugins like [dotenv](https://pypi.org/project/pytest-dotenv/){target="_blank"}.

```bash title="Injecting dummy Metric Namespace before running tests"
--8<-- "examples/metrics/src/run_tests_env_var.sh"
@@ -374,4 +374,4 @@ You can read standard output and assert whether metrics have been flushed. Here'
```

???+ tip
For more elaborate assertions and comparisons, check out [our functional testing for Metrics utility.](https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/tests/functional/test_metrics.py)
For more elaborate assertions and comparisons, check out [our functional testing for Metrics utility.](https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/tests/functional/test_metrics.py){target="_blank"}
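As a rough sketch of the stdout-assertion approach mentioned above (the handler, metric name, and `lambda_context` fixture are assumptions, and the EMF keys should be checked against your own output rather than taken as definitive):

```python
import json

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics()  # namespace/service come from the environment variables set above


@metrics.log_metrics
def lambda_handler(event, context):
    metrics.add_metric(name="SuccessfulBooking", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}


def test_metric_is_flushed(capsys, lambda_context):
    lambda_handler({}, lambda_context)

    emf_blob = json.loads(capsys.readouterr().out.strip())
    metric_names = [m["Name"] for m in emf_blob["_aws"]["CloudWatchMetrics"][0]["Metrics"]]
    assert "SuccessfulBooking" in metric_names
```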
14 changes: 7 additions & 7 deletions docs/core/tracer.md
@@ -3,7 +3,7 @@ title: Tracer
description: Core utility
---

Tracer is an opinionated thin wrapper for [AWS X-Ray Python SDK](https://github.com/aws/aws-xray-sdk-python/).
Tracer is an opinionated thin wrapper for [AWS X-Ray Python SDK](https://github.com/aws/aws-xray-sdk-python/){target="_blank"}.

![Tracer showcase](../media/tracer_utility_showcase.png)

@@ -29,7 +29,7 @@ Add `aws-lambda-powertools[tracer]` as a dependency in your preferred tool: _e.g

### Permissions

Before you use this utility, your AWS Lambda function [must have permissions](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html#services-xray-permissions) to send traces to AWS X-Ray.
Before you use this utility, your AWS Lambda function [must have permissions](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html#services-xray-permissions){target="_blank"} to send traces to AWS X-Ray.

```yaml hl_lines="9 12" title="AWS Serverless Application Model (SAM) example"
--8<-- "examples/tracer/sam/template.yaml"
@@ -51,7 +51,7 @@ You can quickly start by initializing `Tracer` and use `capture_lambda_handler`

### Annotations & Metadata

**Annotations** are key-values associated with traces and indexed by AWS X-Ray. You can use them to filter traces and to create [Trace Groups](https://aws.amazon.com/about-aws/whats-new/2018/11/aws-xray-adds-the-ability-to-group-traces/) to slice and dice your transactions.
**Annotations** are key-values associated with traces and indexed by AWS X-Ray. You can use them to filter traces and to create [Trace Groups](https://aws.amazon.com/about-aws/whats-new/2018/11/aws-xray-adds-the-ability-to-group-traces/){target="_blank"} to slice and dice your transactions.

```python hl_lines="8" title="Adding annotations with put_annotation method"
--8<-- "examples/tracer/src/put_trace_annotations.py"
@@ -107,7 +107,7 @@ You can trace asynchronous functions and generator functions (including context

### Patching modules

Tracer automatically patches all [libraries supported by X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python-patching.html) during initialization, by default. Under the hood, the AWS X-Ray SDK checks whether a supported library has been imported before patching.
Tracer automatically patches all [libraries supported by X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-python-patching.html){target="_blank"} during initialization, by default. Under the hood, the AWS X-Ray SDK checks whether a supported library has been imported before patching.

If you're looking to shave a few microseconds, or milliseconds depending on your function memory configuration, you can patch specific modules using `patch_modules` param:

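A minimal sketch, assuming `patch_modules` accepts the module names the X-Ray SDK knows how to patch:

```python
from aws_lambda_powertools import Tracer

# Patch only boto3 instead of every supported library found in the environment
tracer = Tracer(patch_modules=["boto3"])
```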
@@ -172,7 +172,7 @@ You can use `aiohttp_trace_config` function to create a valid [aiohttp trace_con

You can use `tracer.provider` attribute to access all methods provided by AWS X-Ray `xray_recorder` object.

This is useful when you need a feature available in X-Ray that is not available in the Tracer utility, for example [thread-safe](https://github.com/aws/aws-xray-sdk-python/#user-content-trace-threadpoolexecutor), or [context managers](https://github.com/aws/aws-xray-sdk-python/#user-content-start-a-custom-segmentsubsegment).
This is useful when you need a feature available in X-Ray that is not available in the Tracer utility, for example [thread-safe](https://github.com/aws/aws-xray-sdk-python/#user-content-trace-threadpoolexecutor){target="_blank"}, or [context managers](https://github.com/aws/aws-xray-sdk-python/#user-content-start-a-custom-segmentsubsegment){target="_blank"}.

```python hl_lines="14" title="Tracing a code block with in_subsegment escape hatch"
--8<-- "examples/tracer/src/sdk_escape_hatch.py"
@@ -181,7 +181,7 @@ This is useful when you need a feature available in X-Ray that is not available
### Concurrent asynchronous functions

???+ warning
[X-Ray SDK will raise an exception](https://github.com/aws/aws-xray-sdk-python/issues/164) when async functions are run and traced concurrently
[X-Ray SDK will raise an exception](https://github.com/aws/aws-xray-sdk-python/issues/164){target="_blank"} when async functions are run and traced concurrently

A safe workaround mechanism is to use `in_subsegment_async` available via Tracer escape hatch (`tracer.provider`).

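A sketch of the workaround (function and annotation names are illustrative):

```python
import asyncio

from aws_lambda_powertools import Tracer

tracer = Tracer(service="booking")


async def collect_payment(charge_id: str) -> str:
    # Each coroutine opens its own subsegment via the escape hatch, avoiding
    # @tracer.capture_method on concurrently running async functions
    async with tracer.provider.in_subsegment_async("## collect_payment") as subsegment:
        subsegment.put_annotation(key="ChargeId", value=charge_id)
        await asyncio.sleep(0.1)
        return charge_id
```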
@@ -221,4 +221,4 @@ Tracer is disabled by default when not running in the AWS Lambda environment - T

* Use annotations on key operations to slice and dice traces, create unique views, and create metrics from it via Trace Groups
* Use a namespace when adding metadata to group data more easily
* Annotations and metadata are added to the current subsegment opened. If you want them in a specific subsegment, use a [context manager](https://github.com/aws/aws-xray-sdk-python/#start-a-custom-segmentsubsegment) via the escape hatch mechanism
* Annotations and metadata are added to the current subsegment opened. If you want them in a specific subsegment, use a [context manager](https://github.com/aws/aws-xray-sdk-python/#start-a-custom-segmentsubsegment){target="_blank"} via the escape hatch mechanism