---
title: Logging
description: Core utility
---
Logging provides an opinionated logger with output structured as JSON.
- Leverages standard logging libraries: SLF4J{target="_blank"} as the API, and log4j2{target="_blank"} or logback{target="_blank"} for the implementation
- Captures key fields from the Lambda context and cold start, and structures logging output as JSON
- Optionally logs Lambda request
- Optionally logs Lambda response
- Optionally supports log sampling by including a configurable percentage of DEBUG logs in logging output
- Allows additional keys to be appended to the structured log at any point in time
???+ tip
    You can find complete examples in the project repository{target="_blank"}.
Depending on your preference, you can use either log4j2 or logback as your log provider. In both cases, you need to configure AspectJ to weave the code and ensure the annotation is processed.
=== "log4j2"
```xml hl_lines="3-7 24-27"
<dependencies>
...
<dependency>
<groupId>software.amazon.lambda</groupId>
<artifactId>powertools-logging-log4j</artifactId>
<version>{{ powertools.version }}</version>
</dependency>
...
</dependencies>
...
<!-- configure the aspectj-maven-plugin to compile-time weave (CTW) the aws-lambda-powertools-java aspects into your project -->
<build>
<plugins>
...
<plugin>
<groupId>dev.aspectj</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.13.1</version>
<configuration>
<source>11</source> <!-- or higher -->
<target>11</target> <!-- or higher -->
<complianceLevel>11</complianceLevel> <!-- or higher -->
<aspectLibraries>
<aspectLibrary>
<groupId>software.amazon.lambda</groupId>
<artifactId>powertools-logging</artifactId>
</aspectLibrary>
</aspectLibraries>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
```
=== "logback"
```xml hl_lines="3-7 24-27"
<dependencies>
...
<dependency>
<groupId>software.amazon.lambda</groupId>
<artifactId>powertools-logging-logback</artifactId>
<version>{{ powertools.version }}</version>
</dependency>
...
</dependencies>
...
<!-- configure the aspectj-maven-plugin to compile-time weave (CTW) the aws-lambda-powertools-java aspects into your project -->
<build>
<plugins>
...
<plugin>
<groupId>dev.aspectj</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.13.1</version>
<configuration>
<source>11</source> <!-- or higher -->
<target>11</target> <!-- or higher -->
<complianceLevel>11</complianceLevel> <!-- or higher -->
<aspectLibraries>
<aspectLibrary>
<groupId>software.amazon.lambda</groupId>
<artifactId>powertools-logging</artifactId>
</aspectLibrary>
</aspectLibraries>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
```
=== "log4j2"
```groovy hl_lines="3 11"
plugins {
id 'java'
id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0'
}
repositories {
mavenCentral()
}
dependencies {
aspect 'software.amazon.lambda:powertools-logging-log4j:{{ powertools.version }}'
}
sourceCompatibility = 11
targetCompatibility = 11
```
=== "logback"
```groovy hl_lines="3 11"
plugins {
id 'java'
id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0'
}
repositories {
mavenCentral()
}
dependencies {
aspect 'software.amazon.lambda:powertools-logging-logback:{{ powertools.version }}'
}
sourceCompatibility = 11
targetCompatibility = 11
```
The logging module requires two settings:
Environment variable | Setting | Description |
---|---|---|
`POWERTOOLS_LOG_LEVEL` | Logging level | Sets how verbose the logger should be. If not set, the logging configuration file is used |
`POWERTOOLS_SERVICE_NAME` | Service | Sets the `service` key that will be included in all log statements (default: `service_undefined`) |
Here is an example using AWS Serverless Application Model (SAM):
=== "template.yaml"
Resources:
PaymentFunction:
Type: AWS::Serverless::Function
Properties:
MemorySize: 512
Timeout: 20
Runtime: java17
Environment:
Variables:
POWERTOOLS_LOG_LEVEL: WARN
POWERTOOLS_SERVICE_NAME: payment
There are some other environment variables which can be set to modify Logging's settings at a global scope:
Environment variable | Type | Description |
---|---|---|
`POWERTOOLS_LOGGER_SAMPLE_RATE` | float | Configures the sampling rate at which `DEBUG` logs should be included. See sampling rate |
`POWERTOOLS_LOG_EVENT` | boolean | Specifies if the incoming Lambda event should be logged. See logging event |
`POWERTOOLS_LOG_RESPONSE` | boolean | Specifies if the Lambda response should be logged. See logging response |
`POWERTOOLS_LOG_ERROR` | boolean | Specifies if a Lambda uncaught exception should be logged. See logging exception |
Powertools for AWS Lambda (Java) simply extends the functionality of the underlying library you choose (log4j2 or logback). You can leverage the standard configuration files (log4j2.xml or logback.xml):
=== "log4j2.xml"
With log4j2, we leverage the [`JsonTemplateLayout`](https://logging.apache.org/log4j/2.x/manual/json-template-layout.html){target="_blank"}
to provide structured logging. A default template is provided in powertools ([_LambdaJsonLayout.json_](https://github.com/aws-powertools/powertools-lambda-java/tree/v2/powertools-logging/powertools-logging-log4j/src/main/resources/LambdaJsonLayout.json){target="_blank"}):
```xml hl_lines="5"
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Appenders>
<Console name="JsonAppender" target="SYSTEM_OUT">
<JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json" />
</Console>
</Appenders>
<Loggers>
<Logger name="com.example" level="debug" additivity="false">
<AppenderRef ref="JsonAppender"/>
</Logger>
<Root level="info">
<AppenderRef ref="JsonAppender"/>
</Root>
</Loggers>
</Configuration>
```
=== "logback.xml"
With logback, we leverage a custom [Encoder](https://logback.qos.ch/manual/encoders.html){target="_blank"}
to provide structured logging:
```xml hl_lines="4 5"
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
</encoder>
</appender>
<logger name="com.example" level="DEBUG" additivity="false">
<appender-ref ref="console" />
</logger>
<root level="INFO">
<appender-ref ref="console" />
</root>
</configuration>
```
The log level is generally configured in `log4j2.xml` or `logback.xml`, but this configuration is static: changing it requires redeploying the function.
Powertools for AWS Lambda lets you change this level dynamically through the `POWERTOOLS_LOG_LEVEL` environment variable.

We support the following (SLF4J) log levels: `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`.
If the level is set to `CRITICAL` (supported in log4j but not logback), we revert it to `ERROR`.
If the level is set to any other value, we fall back to the default (`INFO`).
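The fallback rules above amount to a small normalization step. Here is a plain-Java sketch of that behavior (illustrative only; the class and method names are hypothetical, not the library's actual implementation):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the level-fallback rules described above;
// not Powertools' actual code.
public class LevelFallback {
    private static final List<String> SLF4J_LEVELS =
            Arrays.asList("TRACE", "DEBUG", "INFO", "WARN", "ERROR");

    static String effectiveLevel(String requested) {
        if (requested == null) {
            return "INFO";                  // default when nothing is set
        }
        String level = requested.toUpperCase();
        if ("CRITICAL".equals(level)) {
            return "ERROR";                 // CRITICAL is reverted to ERROR
        }
        return SLF4J_LEVELS.contains(level) ? level : "INFO";
    }
}
```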
!!! question "When is it useful?"
    When you want to set a logging policy to drop informational or verbose logs for one or all AWS Lambda functions, regardless of runtime and logger used.
With AWS Lambda Advanced Logging Controls (ALC){target="_blank"}, you can enforce a minimum log level that Lambda will accept from your application code.
When enabled, you should keep Powertools and ALC log level in sync to avoid data loss.
Here's a sequence diagram to demonstrate how ALC drops both `INFO` and `DEBUG` logs emitted from `Logger`, when the ALC log level is stricter than the `Logger` level:
```mermaid
sequenceDiagram
    participant Lambda service
    participant Lambda function
    participant Application Logger
    Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN"
    Note over Application Logger: POWERTOOLS_LOG_LEVEL="DEBUG"
    Lambda service->>Lambda function: Invoke (event)
    Lambda function->>Lambda function: Calls handler
    Lambda function->>Application Logger: logger.error("Something happened")
    Lambda function-->>Application Logger: logger.debug("Something happened")
    Lambda function-->>Application Logger: logger.info("Something happened")
    Lambda service--xLambda service: DROP INFO and DEBUG logs
    Lambda service->>CloudWatch Logs: Ingest error logs
```
We prioritise log level settings in this order:

1. `AWS_LAMBDA_LOG_LEVEL` environment variable
2. `POWERTOOLS_LOG_LEVEL` environment variable
3. Level defined in the `log4j2.xml` or `logback.xml` files
If you set the Powertools level lower than the ALC level, we emit a warning informing you that your messages will be discarded by Lambda.
???+ note
    With ALC enabled, we are unable to increase the minimum log level below the `AWS_LAMBDA_LOG_LEVEL` environment variable value; see the AWS Lambda service documentation{target="_blank"} for more details.
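The precedence order can be sketched as a simple resolver. This is a hypothetical helper for illustration only; the library reads these settings internally:

```java
import java.util.Map;

public class LogLevelResolver {
    // Resolves the effective log level from the settings, in the
    // documented priority order. Illustrative only, not library code.
    static String resolve(Map<String, String> env, String configFileLevel) {
        String alcLevel = env.get("AWS_LAMBDA_LOG_LEVEL");
        if (alcLevel != null) {
            return alcLevel;                // 1. ALC wins
        }
        String powertoolsLevel = env.get("POWERTOOLS_LOG_LEVEL");
        if (powertoolsLevel != null) {
            return powertoolsLevel;         // 2. then the Powertools variable
        }
        return configFileLevel;             // 3. then log4j2.xml / logback.xml
    }
}
```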
To use Powertools for AWS Lambda Logging, add the `@Logging` annotation to your handler and use the standard SLF4J logger:
=== "PaymentFunction.java"
```java hl_lines="8 10 12 14"
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.lambda.powertools.logging.Logging;
// ... other imports
public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);
@Logging
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
LOGGER.info("Collecting payment");
// ...
LOGGER.debug("order={}, amount={}", order.getId(), order.getAmount());
// ...
}
}
```
Your structured logs will always include the following keys:
Key | Type | Example | Description |
---|---|---|---|
timestamp | String | "2023-12-01T14:49:19.293Z" | Timestamp of the log statement; by default uses the AWS Lambda timezone (UTC) |
level | String | "INFO" | Logging level (any level supported by SLF4J: `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`) |
service | String | "payment" | Service name; defaults to `service_undefined` |
sampling_rate | float | 0.1 | Debug logging sampling rate as a fraction (0.1 = 10% in this case); logged if not 0 |
message | String | "Collecting payment" | Log statement value. Unserializable JSON values are cast to string |
xray_trace_id | String | "1-5759e988-bd862e3fe1be46a994272793" | X-Ray trace ID when Tracing is enabled{target="_blank"} |
error | Map | { "name": "InvalidAmountException", "message": "Amount must be superior to 0", "stack": "at..." } | Exception, if any (e.g. when calling `logger.error("Error", new InvalidAmountException("Amount must be superior to 0"));`) |
The following keys will also be added to all your structured logs (unless configured otherwise):
Key | Type | Example | Description |
---|---|---|---|
cold_start | Boolean | false | Cold start value |
function_name | String | "example-PaymentFunction-1P1Z6B39FLU73" | Name of the function |
function_version | String | "12" | Version of the function |
function_memory_size | String | "512" | Memory configured for the function |
function_arn | String | "arn:aws:lambda:eu-west-1:012345678910:function:example-PaymentFunction-1P1Z6B39FLU73" | ARN of the function |
function_request_id | String | "899856cb-83d1-40d7-8611-9e78f15f32f4" | AWS request ID from the Lambda context |
You can set a correlation ID using the `correlationIdPath` attribute of the `@Logging` annotation, by passing a JMESPath expression{target="_blank"}, including our custom JMESPath functions.
=== "AppCorrelationIdPath.java"
```java hl_lines="5"
public class AppCorrelationIdPath implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(AppCorrelationIdPath.class);
@Logging(correlationIdPath = "headers.my_request_id_header")
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
LOGGER.info("Collecting payment");
// ...
}
}
```
=== "Example HTTP Event"
```json hl_lines="3"
{
"headers": {
"my_request_id_header": "correlation_id_value"
}
}
```
=== "CloudWatch Logs"
```json hl_lines="6"
{
"level": "INFO",
"message": "Collecting payment",
"timestamp": "2023-12-01T14:49:19.293Z",
"service": "payment",
"correlation_id": "correlation_id_value"
}
```
Known correlation IDs
To ease routine tasks like extracting correlation ID from popular event sources, we provide built-in JMESPath expressions.
=== "AppCorrelationId.java"
```java hl_lines="1 7"
import software.amazon.lambda.powertools.logging.CorrelationIdPaths;
public class AppCorrelationId implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(AppCorrelationId.class);
@Logging(correlationIdPath = CorrelationIdPaths.API_GATEWAY_REST)
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
LOGGER.info("Collecting payment");
// ...
}
}
```
=== "Example Event"
```json hl_lines="3"
{
"requestContext": {
"requestId": "correlation_id_value"
}
}
```
=== "Example CloudWatch Logs"
```json hl_lines="6"
{
"level": "INFO",
"message": "Collecting payment",
"timestamp": "2023-12-01T14:49:19.293Z",
"service": "payment",
"correlation_id": "correlation_id_value"
}
```
Using StructuredArguments
To append additional keys to your logs, you can use the `StructuredArguments` class:
=== "PaymentFunction.java"
```java hl_lines="1 2 11 17"
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entry;
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entries;
public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);
@Logging
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
LOGGER.info("Collecting payment", entry("orderId", order.getId()));
// ...
Map<String, Object> customKeys = new HashMap<>();
customKeys.put("paymentId", payment.getId());
customKeys.put("amount", payment.getAmount());
LOGGER.info("Payment successful", entries(customKeys));
}
}
```
=== "CloudWatch Logs for PaymentFunction"
```json hl_lines="7 16-18"
{
"level": "INFO",
"message": "Collecting payment",
"service": "payment",
"timestamp": "2023-12-01T14:49:19.293Z",
"xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
"orderId": "41376"
}
...
{
"level": "INFO",
"message": "Payment successful",
"service": "payment",
"timestamp": "2023-12-01T14:49:20.118Z",
"xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
"orderId": "41376",
"paymentId": "3245",
"amount": 345.99
}
```
`StructuredArguments` provides several options:

- `entry` to add one key and value into the log structure; the value can be any object type
- `entries` to add multiple keys and values (from a `Map`) into the log structure; values can be any object type
- `json` to add a key and raw JSON (string) as value into the log structure
- `array` to add one key and multiple values into the log structure; values can be any object type
=== "OrderFunction.java"
```java hl_lines="1 2 11 17"
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.entry;
import static software.amazon.lambda.powertools.logging.argument.StructuredArguments.array;
public class OrderFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(OrderFunction.class);
@Logging
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
LOGGER.info("Processing order", entry("order", order), array("products", productList));
// ...
}
}
```
=== "CloudWatch Logs for OrderFunction"
```json hl_lines="7 13"
{
"level": "INFO",
"message": "Processing order",
"service": "payment",
"timestamp": "2023-12-01T14:49:19.293Z",
"xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
"order": {
"orderId": 23542,
"amount": 459.99,
"date": "2023-12-01T14:49:19.018Z",
"customerId": 328496
},
"products": [
{
"productId": 764330,
"name": "product1",
"quantity": 1,
"price": 300
},
{
"productId": 798034,
"name": "product42",
"quantity": 1,
"price": 159.99
}
]
}
```
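The `json` option differs from `entry` in that it splices a pre-serialized JSON string into the output instead of serializing the value again. Here is a self-contained, plain-Java sketch of that difference (string building only; not the library's implementation, and the class and method names are hypothetical):

```java
// Conceptual sketch: entry() serializes the value, json() embeds a
// pre-serialized JSON string unchanged. Not Powertools' actual code.
public class RawJsonSketch {
    static String entryField(String key, Object value) {
        // a String value is quoted and escaped like any other value
        return "\"" + key + "\":\"" + value.toString().replace("\"", "\\\"") + "\"";
    }

    static String jsonField(String key, String rawJson) {
        // the raw JSON is embedded as-is, so it stays a nested object
        return "\"" + key + "\":" + rawJson;
    }

    public static void main(String[] args) {
        String raw = "{\"orderId\":23542}";
        System.out.println(entryField("order", raw)); // value ends up as an escaped string
        System.out.println(jsonField("order", raw));  // value stays a JSON object
    }
}
```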
???+ tip "Use arguments without log placeholders"
    As shown in the example above, you can use arguments (with `StructuredArguments`) without placeholders (`{}`) in the message.
    If you add placeholders, the arguments are logged both as an additional field and as a string in the log message, using their `toString()` method.
=== "Function1.java"
```java
LOGGER.info("Processing {}", entry("order", order));
```
=== "Order.java"
```java hl_lines="5"
public class Order {
// ...
@Override
public String toString() {
return "Order{" +
"orderId=" + id +
", amount=" + amount +
", date='" + date + '\'' +
", customerId=" + customerId +
'}';
}
}
```
=== "CloudWatch Logs Function1"
```json hl_lines="3 7"
{
"level": "INFO",
"message": "Processing order=Order{orderId=23542, amount=459.99, date='2023-12-01T14:49:19.018Z', customerId=328496}",
"service": "payment",
"timestamp": "2023-12-01T14:49:19.293Z",
"xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
"order": {
"orderId": 23542,
"amount": 459.99,
"date": "2023-12-01T14:49:19.018Z",
"customerId": 328496
}
}
```
You can also combine structured arguments with regular (non-structured) ones. For example:
=== "Function2.java"
```java
LOGGER.info("Processing order {}", order.getOrderId(), entry("order", order));
```
=== "CloudWatch Logs Function2"
```json
{
"level": "INFO",
"message": "Processing order 23542",
"service": "payment",
"timestamp": "2023-12-01T14:49:19.293Z",
"xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
"order": {
"orderId": 23542,
"amount": 459.99,
"date": "2023-12-01T14:49:19.018Z",
"customerId": 328496
}
}
```
Using MDC
Mapped Diagnostic Context (MDC) is essentially a key-value store. It is supported by the SLF4J API{target="_blank"}, logback{target="_blank"} and log4j (known as ThreadContext{target="_blank"}). You can use the standard API:

```java
MDC.put("key", "value");
```
???+ warning "Custom keys stored in the MDC are persisted across warm invocations"
    Always set additional keys as part of your handler method to ensure they have the latest value, or explicitly clear them with `clearState=true`.

You can remove additional keys added with the MDC using `MDC.remove("key")`.

The logger is commonly initialized in the global scope. Due to Lambda execution context reuse{target="_blank"}, custom keys added with the MDC can persist across invocations. If you want all custom keys to be deleted, use the `clearState=true` attribute on the `@Logging` annotation.
=== "CreditCardFunction.java"
```java hl_lines="5 8"
public class CreditCardFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(CreditCardFunction.class);
@Logging(clearState = true)
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
MDC.put("cardNumber", card.getId());
LOGGER.info("Updating card information");
// ...
}
}
```
=== "#1 Request"
```json hl_lines="7"
{
"level": "INFO",
"message": "Updating card information",
"service": "card",
"timestamp": "2023-12-01T14:49:19.293Z",
"xray_trace_id": "1-6569f266-4b0c7f97280dcd8428d3c9b5",
"cardNumber": "6818 8419 9395 5322"
}
```
=== "#2 Request"
```json hl_lines="7"
{
"level": "INFO",
"message": "Updating card information",
"service": "card",
"timestamp": "2023-12-01T14:49:20.213Z",
"xray_trace_id": "2-7a518f43-5e9d2b1f6cfd5e8b3a4e1f9c",
"cardNumber": "7201 6897 6685 3285"
}
```
`clearState` is based on `MDC.clear()`. When set to `true`, the state is cleared automatically at the end of the handler execution.
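Because the execution environment is reused, anything stored in static or thread-level state survives between warm invocations. The following plain-Java sketch uses a simple static map to stand in for the MDC and shows why a stale key can leak into the next request unless state is cleared (the class and method names are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch: a static map stands in for the MDC to show why
// custom keys can leak between warm invocations of the same sandbox.
public class WarmStartSketch {
    static final Map<String, String> MDC_LIKE = new HashMap<>();

    // Simulates one invocation: sets a key when present, snapshots what
    // would be logged, then optionally clears (what clearState=true does).
    static Map<String, String> invoke(String cardNumber, boolean clearState) {
        if (cardNumber != null) {
            MDC_LIKE.put("cardNumber", cardNumber);
        }
        Map<String, String> logged = new HashMap<>(MDC_LIKE);
        if (clearState) {
            MDC_LIKE.clear();
        }
        return logged;
    }
}
```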
When debugging in non-production environments, you can instruct the `@Logging` annotation to log the incoming event with the `logEvent` param or via the `POWERTOOLS_LOGGER_LOG_EVENT` env var.
???+ warning
    This is disabled by default to prevent sensitive information from being logged.
=== "AppLogEvent.java"
```java hl_lines="5"
public class AppLogEvent implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(AppLogEvent.class);
@Logging(logEvent = true)
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
}
}
```
???+ note
    If you use this on a `RequestStreamHandler`, Powertools must duplicate input streams in order to log them.
When debugging in non-production environments, you can instruct the `@Logging` annotation to log the response with the `logResponse` param or via the `POWERTOOLS_LOGGER_LOG_RESPONSE` env var.
???+ warning
    This is disabled by default to prevent sensitive information from being logged.
=== "AppLogResponse.java"
```java hl_lines="5"
public class AppLogResponse implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(AppLogResponse.class);
@Logging(logResponse = true)
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
}
}
```
???+ note
    If you use this on a `RequestStreamHandler`, Powertools must duplicate output streams in order to log them.
By default, AWS Lambda logs any uncaught exception that occurs in the handler. However, this log is not structured and does not contain any additional context. You can instruct the `@Logging` annotation to log such exceptions with the `logError` param or via the `POWERTOOLS_LOGGER_LOG_ERROR` env var.
???+ warning
    This is disabled by default to prevent double logging.
=== "AppLogResponse.java"
```java hl_lines="5"
public class AppLogError implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(AppLogError.class);
@Logging(logError = true)
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// ...
}
}
```
You can dynamically include a percentage of your `DEBUG` logs in the logger output, regardless of the configured log level, using the `POWERTOOLS_LOGGER_SAMPLE_RATE` environment variable or the `samplingRate` attribute on the `@Logging` annotation.
!!! info
    The environment variable takes precedence over the sampling rate configured on the annotation, provided its value is in the valid range.
=== "Sampling via annotation attribute"
```java hl_lines="5"
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(App.class);
@Logging(samplingRate = 0.5)
public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
// will eventually be logged based on the sampling rate
LOGGER.debug("Handle payment");
}
}
```
=== "Sampling via environment variable"
```yaml hl_lines="8"
Resources:
PaymentFunction:
Type: AWS::Serverless::Function
Properties:
...
Environment:
Variables:
POWERTOOLS_LOGGER_SAMPLE_RATE: 0.5
```
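Conceptually, sampling is a per-invocation decision to enable `DEBUG` logging for a fraction of requests. A plain-Java sketch of the idea (illustrative only; the class and method names are hypothetical, not the library's implementation):

```java
import java.util.Random;

// Conceptual sketch of per-invocation DEBUG sampling; not library code.
public class SamplingSketch {
    static boolean debugEnabled(double samplingRate, Random random) {
        if (samplingRate <= 0.0) {
            return false;           // 0 (or unset): never force DEBUG
        }
        if (samplingRate >= 1.0) {
            return true;            // 1: force DEBUG on every invocation
        }
        // enable DEBUG for roughly samplingRate of invocations
        return random.nextDouble() < samplingRate;
    }
}
```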
You can use any of the following built-in JMESPath expressions as part of `@Logging(correlationIdPath = ...)`:

???+ note "Any object key containing `-` must be escaped"
    For example, `request.headers."x-amzn-trace-id"`.
Name | Expression | Description |
---|---|---|
API_GATEWAY_REST | `"requestContext.requestId"` | API Gateway REST API request ID |
API_GATEWAY_HTTP | `"requestContext.requestId"` | API Gateway HTTP API request ID |
APPSYNC_RESOLVER | `request.headers."x-amzn-trace-id"` | AppSync X-Ray trace ID |
APPLICATION_LOAD_BALANCER | `headers."x-amzn-trace-id"` | ALB X-Ray trace ID |
EVENT_BRIDGE | `"id"` | EventBridge event ID |
Powertools for AWS Lambda comes with a default JSON structure (standard fields and Lambda context fields).
You can go further and customize which fields you want to keep in your logs. The configuration varies depending on the underlying logging library.
Log4j2 configuration is done in `log4j2.xml` and leverages `JsonTemplateLayout`:

```xml
<Console name="console" target="SYSTEM_OUT">
    <JsonTemplateLayout eventTemplateUri="classpath:LambdaJsonLayout.json" />
</Console>
```
The `JsonTemplateLayout` is automatically configured with the provided template:
??? example "LambdaJsonLayout.json"
    ```json
    {
        "level": { "$resolver": "level", "field": "name" },
        "message": { "$resolver": "powertools", "field": "message" },
        "error": {
            "message": { "$resolver": "exception", "field": "message" },
            "name": { "$resolver": "exception", "field": "className" },
            "stack": { "$resolver": "exception", "field": "stackTrace", "stackTrace": { "stringified": true } }
        },
        "cold_start": { "$resolver": "powertools", "field": "cold_start" },
        "function_arn": { "$resolver": "powertools", "field": "function_arn" },
        "function_memory_size": { "$resolver": "powertools", "field": "function_memory_size" },
        "function_name": { "$resolver": "powertools", "field": "function_name" },
        "function_request_id": { "$resolver": "powertools", "field": "function_request_id" },
        "function_version": { "$resolver": "powertools", "field": "function_version" },
        "sampling_rate": { "$resolver": "powertools", "field": "sampling_rate" },
        "service": { "$resolver": "powertools", "field": "service" },
        "timestamp": { "$resolver": "timestamp", "pattern": { "format": "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'" } },
        "xray_trace_id": { "$resolver": "powertools", "field": "xray_trace_id" },
        "": { "$resolver": "powertools" }
    }
    ```
You can create your own template and leverage the PowertoolsResolver{target="_blank"} and any other resolver to log the desired fields with the desired format. Some examples of customization are given below:
By default, the utility emits the `timestamp` field in the format `yyyy-MM-dd'T'HH:mm:ss.SSS'Z'` and in the system default timezone.
If you need to customize the format and timezone, you can update your template.json or configure `log4j2.component.properties`, as shown in the examples below:
=== "my-custom-template.json"
```json
{
    "timestamp": {
        "$resolver": "timestamp",
        "pattern": {
            "format": "yyyy-MM-dd HH:mm:ss",
            "timeZone": "Europe/Paris"
        }
    }
}
```
=== "log4j2.component.properties"
```properties hl_lines="1 2"
log4j.layout.jsonTemplate.timestampFormatPattern=yyyy-MM-dd'T'HH:mm:ss.SSSZz
log4j.layout.jsonTemplate.timeZone=Europe/Oslo
```
See the `TimestampResolver` documentation{target="_blank"} for more details.
???+ warning "Lambda Advanced Logging Controls date format"
    When using Lambda ALC, you must use a date format compatible with RFC 3339.
You can also customize how exceptions are logged{target="_blank"}, and much more. See the JSON Layout template documentation{target="_blank"} for more details.
Logback configuration is done in `logback.xml` with the Powertools `LambdaJsonEncoder`:

```xml
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
    </encoder>
</appender>
```
The `LambdaJsonEncoder` can be customized in different ways:

- By default, the utility emits the `timestamp` field in the format `yyyy-MM-dd'T'HH:mm:ss.SSS'Z'` and in the system default timezone. If you need to customize the format and timezone, you can use the following:

```xml
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
    <timestampFormat>yyyy-MM-dd HH:mm:ss</timestampFormat>
    <timestampFormatTimezoneId>Europe/Paris</timestampFormatTimezoneId>
</encoder>
```
- You can use a standard `ThrowableHandlingConverter` to customize the exception format (default is no converter). Example:

```xml
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
    <throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
        <maxDepthPerThrowable>30</maxDepthPerThrowable>
        <maxLength>2048</maxLength>
        <shortenedClassNameLength>20</shortenedClassNameLength>
        <exclude>sun\.reflect\..*\.invoke.*</exclude>
        <exclude>net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
        <evaluator class="myorg.MyCustomEvaluator"/>
        <rootCauseFirst>true</rootCauseFirst>
        <inlineHash>true</inlineHash>
    </throwableConverter>
</encoder>
```
- You can choose to add information about threads (default is `false`):

```xml
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
    <includeThreadInfo>true</includeThreadInfo>
</encoder>
```
- You can even choose to remove Powertools information from the logs, such as function name and ARN:

```xml
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaJsonEncoder">
    <includePowertoolsInfo>false</includePowertoolsInfo>
</encoder>
```
The utility also supports the Elastic Common Schema (ECS){target="_blank"} format. The fields emitted in logs follow the ECS specification{target="_blank"}, together with the fields captured by the utility as mentioned above.

Use `LambdaEcsLayout.json` as the `eventTemplateUri` when configuring `JsonTemplateLayout`.
=== "log4j2.xml"
```xml hl_lines="5"
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Appenders>
<Console name="JsonAppender" target="SYSTEM_OUT">
<JsonTemplateLayout eventTemplateUri="classpath:LambdaEcsLayout.json" />
</Console>
</Appenders>
<Loggers>
<Root level="info">
<AppenderRef ref="JsonAppender"/>
</Root>
</Loggers>
</Configuration>
```
Use the `LambdaEcsEncoder` rather than the `LambdaJsonEncoder` when configuring the appender:
=== "logback.xml"
```xml hl_lines="3"
<configuration>
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="software.amazon.lambda.powertools.logging.logback.LambdaEcsEncoder">
</encoder>
</appender>
<root level="INFO">
<appender-ref ref="console" />
</root>
</configuration>
```