spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc (2 additions & 3 deletions)
@@ -65,8 +65,7 @@ In either case, you should NOT perform any seeks on the consumer because the con
 Starting with version 2.8, the legacy `ErrorHandler` and `BatchErrorHandler` interfaces have been superseded by a new `CommonErrorHandler`.
 These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener.
-`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided and the legacy error handlers deprecated.
-The legacy interfaces are still supported by listener containers and listener container factories; they will be deprecated in a future release.
+`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided.

 See xref:kafka/annotation-error-handling.adoc#migrating-legacy-eh[Migrating Custom Legacy Error Handler Implementations to `CommonErrorHandler`] for information to migrate custom error handlers to `CommonErrorHandler`.
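The `CommonErrorHandler` wiring this hunk describes can be sketched as follows; this is a minimal illustration, not part of the diff, and the bean name and back-off values are arbitrary examples:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class ErrorHandlingConfig {

    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> factory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // One CommonErrorHandler serves both record and batch listeners,
        // so a single factory can create containers for either type.
        // Here: no delay between attempts, at most 2 retries, then give up.
        factory.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(0L, 2L)));
        return factory;
    }
}
```

`DefaultErrorHandler` is the framework-provided `CommonErrorHandler` replacement for the legacy `SeekToCurrentErrorHandler`.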
@@ -425,7 +424,7 @@ To replace any `BatchErrorHandler` implementation, you should implement `handleB
 You should also implement `handleOtherException()` - to handle exceptions that occur outside the scope of record processing (e.g. consumer errors).

 [[after-rollback]]
-== After-rollback Processor
+== After Rollback Processor

 When using transactions, if the listener throws an exception (and an error handler, if present, throws an exception), the transaction is rolled back.
 By default, any unprocessed records (including the failed record) are re-fetched on the next poll.
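The after-rollback behavior described here is controlled by the container's `AfterRollbackProcessor`; a minimal sketch (not part of the diff, back-off values are illustrative):

```java
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.DefaultAfterRollbackProcessor;
import org.springframework.util.backoff.FixedBackOff;

class AfterRollbackSketch {

    // Sketch: after a transaction rollback, re-process the failed record
    // (1s between attempts, at most 3 retries) before moving past it.
    void configure(ConcurrentMessageListenerContainer<String, String> container) {
        container.setAfterRollbackProcessor(
                new DefaultAfterRollbackProcessor<>(new FixedBackOff(1000L, 3L)));
    }
}
```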
spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc (31 additions & 22 deletions)
@@ -30,14 +30,18 @@
 See the JavaDocs for `ContainerProperties.AssignmentCommitOption` for more information about the available options.

 |[[asyncAcks]]<<asyncAcks,`asyncAcks`>>
-|false
+|`false`
 |Enable out-of-order commits (see xref:kafka/receiving-messages/ooo-commits.adoc[Manually Committing Offsets]); the consumer is paused and commits are deferred until gaps are filled.
 |A provider for `OffsetAndMetadata`; by default, the provider creates an offset and metadata with empty metadata. The provider gives a way to customize the metadata.
 The default executor creates threads named `<name>-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container.
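The `asyncAcks` row above corresponds to a setter on `ContainerProperties`; a minimal sketch of enabling it (not part of the diff, the topic name is illustrative):

```java
import org.springframework.kafka.listener.ContainerProperties;

class AsyncAcksSketch {

    // Sketch: enable out-of-order commits. asyncAcks is only meaningful
    // with manual acknowledgment; commits for out-of-order acks are
    // deferred until the gaps before them are filled.
    ContainerProperties containerProperties() {
        ContainerProperties props = new ContainerProperties("my-topic");
        props.setAckMode(ContainerProperties.AckMode.MANUAL);
        props.setAsyncAcks(true); // default is `false`
        return props;
    }
}
```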
spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc (3 additions & 3 deletions)
@@ -24,7 +24,7 @@ NOTE: With the concurrent container, timers are created for each thread and the
 [[monitoring-kafkatemplate-performance]]
 == Monitoring KafkaTemplate Performance

-Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
+Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s+++ for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
 The timers can be disabled by setting the template's `micrometerEnabled` property to `false`.

 Two timers are maintained - one for successful calls to the listener and one for failures.
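The `micrometerEnabled` toggle mentioned in this hunk, as a minimal sketch (not part of the diff; the template instance is assumed to exist):

```java
import org.springframework.kafka.core.KafkaTemplate;

class TimerToggleSketch {

    // Sketch: opt out of the automatic Micrometer send-operation Timers
    // for one template (they are on by default when Micrometer and a
    // single MeterRegistry are present).
    void disableTimers(KafkaTemplate<String, String> template) {
        template.setMicrometerEnabled(false);
    }
}
```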
@@ -95,7 +95,7 @@ Using Micrometer for observation is now supported, since version 3.0, for the `K
 Set `observationEnabled` to `true` on the `KafkaTemplate` and `ContainerProperties` to enable observation; this will disable xref:kafka/micrometer.adoc[Micrometer Timers] because the timers will now be managed with each observation.

-Refer to https://micrometer.io/docs/tracing[Micrometer Tracing] for more information.
+Refer to https://docs.micrometer.io/tracing/reference/index.html[Micrometer Tracing] for more information.

 To add tags to timers/traces, configure a custom `KafkaTemplateObservationConvention` or `KafkaListenerObservationConvention` to the template or listener container, respectively.
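The `observationEnabled` switch described in this hunk can be sketched as follows (not part of the diff; the instances are assumed to exist):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.ContainerProperties;

class ObservationSketch {

    // Sketch: enable Micrometer observation (3.0+) on both sides.
    // This supersedes the plain Timers for these components, since the
    // timers are then managed with each observation.
    void enableObservation(KafkaTemplate<String, String> template,
            ContainerProperties containerProps) {
        template.setObservationEnabled(true);
        containerProps.setObservationEnabled(true);
    }
}
```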
@@ -109,6 +109,6 @@ Starting with version 3.0.6, you can add dynamic tags to the timers and traces,
 To do so, add a custom `KafkaListenerObservationConvention` and/or `KafkaTemplateObservationConvention` to the listener container properties or `KafkaTemplate` respectively.
 The `record` property in both observation contexts contains the `ConsumerRecord` or `ProducerRecord` respectively.

-The sender and receiver contexts' `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
+The sender and receiver contexts `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
 If, for some reason - perhaps lack of admin permissions, you cannot retrieve the cluster id, starting with version 3.1, you can set a manual `clusterId` on the `KafkaAdmin` and inject it into `KafkaTemplate` s and listener containers.
 When it is `null` (default), the admin will invoke the `describeCluster` admin operation to retrieve it from the broker.
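Setting a manual `clusterId` (3.1+), as a minimal sketch; not part of the diff, and the id value is illustrative:

```java
import org.springframework.kafka.core.KafkaAdmin;

class ClusterIdSketch {

    // Sketch: preset the cluster id so the admin does not need to invoke
    // the describeCluster operation (useful when admin permissions are
    // lacking). The value shown is an arbitrary example.
    void presetClusterId(KafkaAdmin admin) {
        admin.setClusterId("my-cluster-id");
    }
}
```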
spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/pause-resume.adoc (3 additions & 3 deletions)
@@ -13,11 +13,11 @@ Starting with version 2.1.5, you can call `isPauseRequested()` to see if `pause(
 However, the consumers might not have actually paused yet.
 `isConsumerPaused()` returns true if all `Consumer` instances have actually paused.

-In addition(also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.
+In addition (also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.

 Starting with version 2.9, a new container property `pauseImmediate`, when set to true, causes the pause to take effect after the current record is processed.
-By default, the pause takes effect when all of the records from the previous poll have been processed.
-See <<pauseImmediate>>.
+By default, the pause takes effect when all the records from the previous poll have been processed.
+See xref:kafka/container-props.adoc#pauseImmediate[pauseImmediate].

 The following simple Spring Boot application demonstrates by using the container registry to get a reference to a `@KafkaListener` method's container and pausing or resuming its consumers as well as receiving the corresponding events:
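The registry-based pause/resume calls described in this hunk can be sketched as follows (not part of the diff; the listener id is an illustrative example):

```java
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;

class PauseResumeSketch {

    // Sketch: pause and resume a @KafkaListener's container via the
    // registry; "myListener" is an assumed @KafkaListener id.
    void pauseAndResume(KafkaListenerEndpointRegistry registry) {
        MessageListenerContainer container = registry.getListenerContainer("myListener");
        container.pause();
        // isPauseRequested() is true immediately after pause(); the
        // consumers might not have actually paused yet.
        boolean requested = container.isPauseRequested();
        // isConsumerPaused() returns true only once all Consumer
        // instances have actually paused.
        if (container.isConsumerPaused()) {
            container.resume();
        }
    }
}
```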