
Commit f85fcf3

docs: move caching to getting started

1 parent 1e440ff

2 files changed: +25 −23 lines changed

Diff for: docs/utilities/idempotency.md (+22 −21)

````diff
@@ -264,6 +264,28 @@ The output serializer supports any JSON serializable data, **Python Dataclasses*
 2. This function does the following <br><br>**1**. Receives the dictionary saved into the persistent storage <br>**2**. Serializes to `OrderOutput` before `@idempotent` returns back to the caller.
 3. This serializer receives both functions so it knows which one to call when serializing to and from a dictionary.
 
+### Using in-memory cache
+
+!!! note "In-memory cache is local to each Lambda execution environment."
+
+You can enable caching with the `use_local_cache` parameter in `IdempotencyConfig`. When enabled, you can adjust cache capacity _(default: 256 items)_ with `local_cache_max_items`.
+
+By default, caching is disabled since we don't know how big your response could be in relation to your configured memory size.
+
+=== "Enabling cache"
+
+    ```python hl_lines="12"
+    --8<-- "examples/idempotency/src/working_with_local_cache.py"
+    ```
+
+    1. You can adjust cache capacity with the [`local_cache_max_items`](#customizing-the-default-behavior) parameter.
+
+=== "Sample event"
+
+    ```json
+    --8<-- "examples/idempotency/src/working_with_local_cache_payload.json"
+    ```
+
 ### Choosing a payload subset for idempotency
 
 ???+ tip "Tip: Dealing with always changing payloads"
@@ -750,27 +772,6 @@ This utility will raise an **`IdempotencyAlreadyInProgressError`** exception if
 
 This is a locking mechanism for correctness. Since we don't know the result from the first invocation yet, we can't safely allow another concurrent execution.
 
-### Using in-memory cache
-
-**By default, in-memory local caching is disabled**, since we don't know how much memory you consume per invocation compared to the maximum configured in your Lambda function.
-
-???+ note "Note: This in-memory cache is local to each Lambda execution environment"
-    This means it will be effective in cases where your function's concurrency is low in comparison to the number of "retry" invocations with the same payload, because cache might be empty.
-
-You can enable in-memory caching with the **`use_local_cache`** parameter:
-
-=== "Caching idempotent transactions in-memory to prevent multiple calls to storage"
-
-    ```python hl_lines="11"
-    --8<-- "examples/idempotency/src/working_with_local_cache.py"
-    ```
-
-=== "Sample event"
-
-    ```json
-    --8<-- "examples/idempotency/src/working_with_local_cache_payload.json"
-    ```
-
 When enabled, the default is to cache a maximum of 256 records in each Lambda execution environment - You can change it with the **`local_cache_max_items`** parameter.
 
 ### Expiring idempotency records
````

Diff for: examples/idempotency/src/working_with_local_cache.py (+3 −2)

```diff
@@ -7,8 +7,9 @@
 
 persistence_layer = DynamoDBPersistenceLayer(table_name="IdempotencyTable")
 config = IdempotencyConfig(
-    event_key_jmespath="body",
-    use_local_cache=True,
+    event_key_jmespath="powertools_json(body)",
+    # by default, it holds 256 items in a Least-Recently-Used (LRU) manner
+    use_local_cache=True,  # (1)!
 )
 
```
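The new comment in the example above notes that the local cache holds 256 items in a Least-Recently-Used (LRU) manner. As an illustration of that eviction behavior only — this is a minimal sketch using Python's `OrderedDict`, not Powertools' actual implementation, and the `LRUCache` class and `max_items` name here are hypothetical:

```python
from collections import OrderedDict


class LRUCache:
    """Minimal LRU cache sketch: evicts the least-recently-used item
    once capacity (e.g. 256 by default) is exceeded. Illustrative only."""

    def __init__(self, max_items: int = 256):
        self.max_items = max_items
        self._store: "OrderedDict[str, object]" = OrderedDict()

    def get(self, key: str):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key: str, value: object) -> None:
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.max_items:
            self._store.popitem(last=False)  # evict least recently used


cache = LRUCache(max_items=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touching "a" makes it most recently used
cache.put("c", 3)  # capacity exceeded: "b" (least recently used) is evicted
```

Accessing an entry refreshes its position, so frequently retried payloads stay cached while stale idempotency records age out first.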
