
Commit 920d70e

docs(batch): explain record type discrepancy in failure and success handler (#2868)

Authored by duc00 and heitorlessa
Co-authored-by: Heitor Lessa <[email protected]>
1 parent d97d176 commit 920d70e

File tree

2 files changed, +17 −6 lines changed


docs/utilities/batch.md (+11 −6)
@@ -522,14 +522,19 @@ You might want to bring custom logic to the existing `BatchProcessor` to slightl
 
 For these scenarios, you can subclass `BatchProcessor` and quickly override `success_handler` and `failure_handler` methods:
 
-* **`success_handler()`** – Keeps track of successful batch records
-* **`failure_handler()`** – Keeps track of failed batch records
+* **`success_handler()`** is called for each successfully processed record
+* **`failure_handler()`** is called for each failed record
 
-???+ example
-    Let's suppose you'd like to add a metric named `BatchRecordFailures` for each batch record that failed processing
+???+ note
+    These functions have a common `record` argument. For backward compatibility reasons, their type is not the same:
 
-```python hl_lines="8 9 16-19 22 38" title="Extending failure handling mechanism in BatchProcessor"
---8<-- "examples/batch_processing/src/extending_failure.py"
+    - `success_handler`: `record` type is `dict[str, Any]`, the raw record data.
+    - `failure_handler`: `record` type can be an Event Source Data Class or your [Pydantic model](#pydantic-integration). During Pydantic validation errors, we fall back and serialize `record` to Event Source Data Class to not break the processing pipeline.
+
+Let's suppose you'd like to add metrics to track successes and failures of your batch records.
+
+```python hl_lines="8-10 18-25 28 44" title="Extending failure handling mechanism in BatchProcessor"
+--8<-- "examples/batch_processing/src/extending_processor_handlers.py"
 ```
 
 ### Create your own partial processor
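The record-type discrepancy described in the note above can be illustrated with a small, library-free sketch. `ToyBatchProcessor` and `SQSRecordStub` are hypothetical stand-ins for illustration only, not Powertools APIs: the success hook receives the raw `dict`, while the failure hook receives the wrapped record object, mirroring the asymmetry the docs call out.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class SQSRecordStub:
    """Illustrative stand-in for an Event Source Data Class wrapping a raw record."""

    raw: dict[str, Any]

    @property
    def message_id(self) -> str:
        return self.raw["messageId"]


class ToyBatchProcessor:
    """Toy processor mirroring the hook shapes discussed above (not the real BatchProcessor)."""

    def __init__(self, record_handler: Callable[[SQSRecordStub], Any]) -> None:
        self.record_handler = record_handler
        self.successes: list[str] = []
        self.failures: list[str] = []

    def success_handler(self, record: dict[str, Any], result: Any) -> None:
        # Receives the raw record data as a plain dict
        self.successes.append(record["messageId"])

    def failure_handler(self, record: SQSRecordStub, exception: Exception) -> None:
        # Receives the wrapped record (an Event Source Data Class in the real library)
        self.failures.append(record.message_id)

    def process(self, raw_records: list[dict[str, Any]]) -> None:
        for raw in raw_records:
            wrapped = SQSRecordStub(raw)
            try:
                result = self.record_handler(wrapped)
            except Exception as exc:
                self.failure_handler(wrapped, exc)
            else:
                self.success_handler(raw, result)


def handler(record: SQSRecordStub) -> str:
    if record.raw["body"] == "boom":
        raise ValueError("bad record")
    return record.raw["body"].upper()


processor = ToyBatchProcessor(handler)
processor.process(
    [
        {"messageId": "1", "body": "ok"},
        {"messageId": "2", "body": "boom"},
    ]
)
print(processor.successes)  # ['1']
print(processor.failures)   # ['2']
```

The dispatch in `process` shows why overrides must accept different types: by the time a record fails, it has already been deserialized into a richer object, whereas the success path hands back the original payload.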

examples/batch_processing/src/extending_failure.py renamed to examples/batch_processing/src/extending_processor_handlers.py (+6)
@@ -1,4 +1,5 @@
 import json
+from typing import Any
 
 from aws_lambda_powertools import Logger, Metrics, Tracer
 from aws_lambda_powertools.metrics import MetricUnit
@@ -9,11 +10,16 @@
     FailureResponse,
     process_partial_response,
 )
+from aws_lambda_powertools.utilities.batch.base import SuccessResponse
 from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
 from aws_lambda_powertools.utilities.typing import LambdaContext
 
 
 class MyProcessor(BatchProcessor):
+    def success_handler(self, record: dict[str, Any], result: Any) -> SuccessResponse:
+        metrics.add_metric(name="BatchRecordSuccesses", unit=MetricUnit.Count, value=1)
+        return super().success_handler(record, result)
+
     def failure_handler(self, record: SQSRecord, exception: ExceptionInfo) -> FailureResponse:
         metrics.add_metric(name="BatchRecordFailures", unit=MetricUnit.Count, value=1)
         return super().failure_handler(record, exception)