---
title: Batch
description: Utility
---

import Note from "../../src/components/Note"

The batch utility provides an abstraction to process a batch event. It is useful for Lambda integrations with [AWS SQS](https://aws.amazon.com/sqs/), [AWS Kinesis](https://aws.amazon.com/kinesis/) and [AWS DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html).
It also provides base classes (`BaseProcessor`, `BasePartialProcessor`) allowing you to create your **own** batch processor.

**Key Features**

* Run batch processing logic with a clean interface;
* Middleware and context manager to handle a batch event;
* Removal of successful messages from an [AWS SQS](https://aws.amazon.com/sqs/) batch in case of partial failure.

**IAM Permissions**

This utility requires additional permissions to work as expected. See the following table:

| Processor | Function/Method | IAM Permission |
|-----------|-----------------|----------------|
| `PartialSQSProcessor` | `_clean` | `sqs:DeleteMessageBatch` |

### PartialSQSProcessor

A specialized batch processor which aims to clean up your SQS queue when one or more (but not all) records of the batch fail.
When a batch fails partially, SQS sends all of its records back to the queue, and the batch is reprocessed until all records succeed.
This processor improves performance in such cases by deleting the successful messages of a partially failed batch, so only the failed records are received again.

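You can also use it as a context manager, following the same pattern shown in the custom processor example later on this page. A minimal sketch, assuming an SQS-triggered Lambda and a `record_handler` that returns the message body:

```python:title=context.py
from aws_lambda_powertools.utilities.batch import PartialSQSProcessor

def record_handler(record):
    return record["body"]

def lambda_handler(event, context):
    processor = PartialSQSProcessor()

    # Exiting the context deletes the successful messages from the queue
    # when one or more (but not all) records have failed
    with processor(records=event["Records"], handler=record_handler) as ctx:
        result = ctx.process()

    return result
```
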
### Middleware

```python:title=app.py
from aws_lambda_powertools.utilities.batch import batch_processor, PartialSQSProcessor

def record_handler(record):
    return record["body"]

# highlight-start
@batch_processor(record_handler=record_handler, processor=PartialSQSProcessor())
# highlight-end
def lambda_handler(event, context):
    return {"statusCode": 200}
```

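In the example above, `record_handler` receives each entry of the event's `Records` list, so `record["body"]` is the raw SQS message body. For reference, a trimmed sketch of the SQS event shape (several fields omitted; placeholder values):

```python:title=sample_event.py
# Trimmed SQS event as received by the Lambda handler (placeholder values)
sample_event = {
    "Records": [
        {
            "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
            "receiptHandle": "AQEB...",
            "body": "hello world",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:my-queue",
            "awsRegion": "us-east-1",
        }
    ]
}
```
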
## Create your own processor

You can create your own batch processor by inheriting the `BaseProcessor` class and implementing `_prepare()`, `_clean()` and `_process_record()`.
It's also possible to inherit from `BasePartialProcessor`, which contains additional logic to handle partial failures and keep track of record status.

Here is an example implementation of a custom processor for DynamoDB Streams:

```python:title=custom_processor.py
import json

import boto3

from aws_lambda_powertools.utilities.batch import BaseProcessor, batch_processor

class DynamoDBProcessor(BaseProcessor):

    def __init__(self, queue_url: str):
        self.queue_url = queue_url
        self.client = boto3.client("sqs")

    def _prepare(self):
        # Hook that runs before the batch is processed; no setup needed here
        pass

    def _clean(self):
        # Hook that runs after the batch is processed; no teardown needed here
        pass

    def _process_record(self, record):
        """
        Process a single stream record and send the result to SQS
        """
        result = self.handler(record)
        body = json.dumps(result)
        self.client.send_message(QueueUrl=self.queue_url, MessageBody=body)
        return result

def record_handler(record):
    return record["Keys"]

# As a context manager

event = {"Records": []}  # example DynamoDB Streams event payload
processor = DynamoDBProcessor("dummy")

with processor(records=event["Records"], handler=record_handler) as ctx:
    result = ctx.process()

# As middleware
@batch_processor(record_handler=record_handler, processor=DynamoDBProcessor("dummy"))
def lambda_handler(event, context):
    return {"statusCode": 200}
```
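
To see how the pieces fit together, here is a rough sketch of the flow the `batch_processor` middleware drives, as implied by the two usage styles above. This is a simplified illustration under those assumptions, not the library's actual source:

```python:title=flow_sketch.py
def middleware_flow_sketch(handler, event, context, record_handler, processor):
    """Simplified illustration of the middleware flow (not the real implementation)."""
    # Entering the context runs the processor's _prepare() hook
    with processor(records=event["Records"], handler=record_handler) as ctx:
        ctx.process()  # calls _process_record() for every record
    # Exiting the context runs the _clean() hook
    return handler(event, context)
```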