
Commit 946d9f8

docs: small fixes
1 parent 3bbb1f2 commit 946d9f8

File tree

1 file changed, +5 -5 lines changed


docs/utilities/batch.md  +5 -5
@@ -21,7 +21,7 @@ If your function fails to process any message from the batch, the entire batch r
 With this utility, batch records are processed individually – only messages that failed to be processed return to the queue or stream for a further retry. This works when two mechanisms are in place:
 
 1. `ReportBatchItemFailures` is set in your SQS, Kinesis, or DynamoDB event source properties
-2. [A specific response](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#sqs-batchfailurereporting-syntax){target="_blank"} is returned so Lambda knows which records should not be deleted during partial responses
+2. [A specific response](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html#services-sqs-batchfailurereporting){target="_blank"} is returned so Lambda knows which records should not be deleted during partial responses
 
 <!-- HTML tags are required in admonition content thus increasing line length beyond our limits -->
 <!-- markdownlint-disable MD013 -->
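
For context, the partial batch response that the linked AWS documentation describes looks roughly like the sketch below; the message IDs are placeholders.

```python
# Sketch of the partial batch response Lambda expects when
# ReportBatchItemFailures is enabled (SQS shown; message IDs are placeholders).
# Only the records listed under batchItemFailures go back to the queue.
partial_failure_response = {
    "batchItemFailures": [
        {"itemIdentifier": "059f36b4-87a3-44ab-83d2-661975830a7d"},
        {"itemIdentifier": "2e1424d4-f796-459a-8184-9c92662be6da"},
    ]
}
```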
@@ -32,7 +32,7 @@ With this utility, batch records are processed individually – only messages th
 
 ## Getting started
 
-Regardless whether you're using SQS, Kinesis Data Streams or DynamoDB Streams, you must configure your Lambda function event source to use ``ReportBatchItemFailures`.
+Regardless whether you're using SQS, Kinesis Data Streams or DynamoDB Streams, you must configure your Lambda function event source to use `ReportBatchItemFailures`.
 
 You do not need any additional IAM permissions to use this utility, except for what each event source requires.
 
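As a rough illustration of that configuration step, the sketch below enables `ReportBatchItemFailures` on an SQS event source mapping via boto3; the queue ARN and function name are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Enable partial batch responses on the event source mapping.
# Queue ARN and function name below are placeholders.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",
    FunctionName="my-batch-function",
    BatchSize=10,
    FunctionResponseTypes=["ReportBatchItemFailures"],
)
```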
@@ -231,14 +231,14 @@ You can use `AsyncBatchProcessor` class and `async_process_partial_response` fun
 
 The reason this is not the default behaviour is that not all use cases can handle concurrency safely (e.g., loyalty points must be updated in order).
 
-```python hl_lines="3 11 14 24" title="High-concurrency with AsyncBatchProcessor"
+```python hl_lines="3 11 14 24-26" title="High-concurrency with AsyncBatchProcessor"
 --8<-- "examples/batch_processing/src/getting_started_async.py"
 ```
 
 ???+ warning "Using tracer?"
     `AsyncBatchProcessor` uses `asyncio.gather` which can cause side effects and reach trace limits at high concurrency.
 
-    See [Tracing concurrent asynchronous functions](../core/tracer.md#concurrent-asynchronous-functions).
+    See [Tracing concurrent asynchronous functions](../core/tracer.md#concurrent-asynchronous-functions){target="_blank"}.
 
 ## Advanced
 
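The referenced `getting_started_async.py` is not part of this diff; a minimal sketch of the pattern it demonstrates, assuming an SQS event source and an illustrative record handler, could look like this:

```python
from aws_lambda_powertools.utilities.batch import (
    AsyncBatchProcessor,
    EventType,
    async_process_partial_response,
)
from aws_lambda_powertools.utilities.data_classes.sqs_event import SQSRecord
from aws_lambda_powertools.utilities.typing import LambdaContext

processor = AsyncBatchProcessor(event_type=EventType.SQS)


async def async_record_handler(record: SQSRecord):
    # Illustrative async work; raising marks this record as failed.
    if not record.body:
        raise ValueError("Empty message body")


def lambda_handler(event: dict, context: LambdaContext):
    # Records are processed concurrently (asyncio.gather under the hood);
    # the return value is the partial batch response for Lambda.
    return async_process_partial_response(
        event=event,
        record_handler=async_record_handler,
        processor=processor,
        context=context,
    )
```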
@@ -385,7 +385,7 @@ As 2.12.0, `process_partial_response` and `async_process_partial_response` are t
 
 When using Sentry.io for error monitoring, you can override `failure_handler` to capture each processing exception with Sentry SDK:
 
-> Credits to [Charles-Axel Dein](https://github.com/awslabs/aws-lambda-powertools-python/issues/293#issuecomment-781961732)
+> Credits to [Charles-Axel Dein](https://github.com/awslabs/aws-lambda-powertools-python/issues/293#issuecomment-781961732){target="_blank"}
 
 ```python hl_lines="1 7-8" title="Integrating error tracking with Sentry.io"
 --8<-- "examples/batch_processing/src/sentry_error_tracking.py"
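
The referenced `sentry_error_tracking.py` is pulled in by the snippet include above; a sketch of that kind of `failure_handler` override, assuming the Sentry SDK is initialised elsewhere in the function, is:

```python
from sentry_sdk import capture_exception

from aws_lambda_powertools.utilities.batch import BatchProcessor, FailureResponse


class MyProcessor(BatchProcessor):
    def failure_handler(self, record, exception) -> FailureResponse:
        capture_exception()  # send the current processing exception to Sentry
        return super().failure_handler(record, exception)
```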

0 commit comments
