
Commit 4ce75ee

Ref doc improvements (#3045)
* Supplement container-props.adoc ContainerProperties properties
* Remove properties for `AbstractListenerContainer`
* Polish annotation-error-handling.adoc
* Polish micrometer.adoc
* Change the Micrometer Tracing link from https://micrometer.io/docs/tracing to https://docs.micrometer.io/tracing/reference/index.html
* Fix pause-resume.adoc ref
1 parent 81c5329 commit 4ce75ee

4 files changed: +39 -31 lines


spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/annotation-error-handling.adoc

Lines changed: 2 additions & 3 deletions
@@ -65,8 +65,7 @@ In either case, you should NOT perform any seeks on the consumer because the con
 
 Starting with version 2.8, the legacy `ErrorHandler` and `BatchErrorHandler` interfaces have been superseded by a new `CommonErrorHandler`.
 These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener.
-`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided and the legacy error handlers deprecated.
-The legacy interfaces are still supported by listener containers and listener container factories; they will be deprecated in a future release.
+`CommonErrorHandler` implementations to replace most legacy framework error handler implementations are provided.
 
 See xref:kafka/annotation-error-handling.adoc#migrating-legacy-eh[Migrating Custom Legacy Error Handler Implementations to `CommonErrorHandler`] for information to migrate custom error handlers to `CommonErrorHandler`.
 
@@ -425,7 +424,7 @@ To replace any `BatchErrorHandler` implementation, you should implement `handleB
 You should also implement `handleOtherException()` - to handle exceptions that occur outside the scope of record processing (e.g. consumer errors).
 
 [[after-rollback]]
-== After-rollback Processor
+== After Rollback Processor
 
 When using transactions, if the listener throws an exception (and an error handler, if present, throws an exception), the transaction is rolled back.
 By default, any unprocessed records (including the failed record) are re-fetched on the next poll.

spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/container-props.adoc

Lines changed: 31 additions & 22 deletions
@@ -30,14 +30,18 @@
 See the JavaDocs for `ContainerProperties.AssignmentCommitOption` for more information about the available options.
 
 |[[asyncAcks]]<<asyncAcks,`asyncAcks`>>
-|false
+|`false`
 |Enable out-of-order commits (see xref:kafka/receiving-messages/ooo-commits.adoc[Manually Committing Offsets]); the consumer is paused and commits are deferred until gaps are filled.
 
 |[[authExceptionRetryInterval]]<<authExceptionRetryInterval,`authExceptionRetryInterval`>>
 |`null`
 |When not null, a `Duration` to sleep between polls when an `AuthenticationException` or `AuthorizationException` is thrown by the Kafka client.
 When null, such exceptions are considered fatal and the container will stop.
 
+|[[batchRecoverAfterRollback]]<<batchRecoverAfterRollback,`batchRecoverAfterRollback`>>
+|`false`
+|Set to `true` to enable batch recovery, See xref:kafka/annotation-error-handling.adoc#after-rollback[After Rollback Processor].
+
 |[[clientId]]<<clientId,`clientId`>>
 |(empty string)
 |A prefix for the `client.id` consumer property.
@@ -57,10 +61,6 @@ Useful when the consumer code cannot determine that an `ErrorHandlingDeserialize
 |`null`
 |When present and `syncCommits` is `false` a callback invoked after the commit completes.
 
-|[[offsetAndMetadataProvider]]<<offsetAndMetadataProvider,`offsetAndMetadataProvider`>>
-|`null`
-|A provider for `OffsetAndMetadata`; by default, the provider creates an offset and metadata with empty metadata. The provider gives a way to customize the metadata.
-
 |[[commitLogLevel]]<<commitLogLevel,`commitLogLevel`>>
 |DEBUG
 |The logging level for logs pertaining to committing offsets.
@@ -69,15 +69,15 @@ Useful when the consumer code cannot determine that an `ErrorHandlingDeserialize
 |`null`
 |A rebalance listener; see xref:kafka/receiving-messages/rebalance-listeners.adoc[Rebalancing Listeners].
 
-|[[consumerStartTimout]]<<consumerStartTimout,`consumerStartTimout`>>
+|[[commitRetries]]<<commitRetries,`commitRetries`>>
+|3
+|Set the number of retries `RetriableCommitFailedException` when using `syncCommits` set to true.
+Default 3 (4-attempt total).
+
+|[[consumerStartTimeout]]<<consumerStartTimeout,`consumerStartTimeout`>>
 |30s
 |The time to wait for the consumer to start before logging an error; this might happen if, say, you use a task executor with insufficient threads.
 
-|[[consumerTaskExecutor]]<<consumerTaskExecutor,`consumerTaskExecutor`>>
-|`SimpleAsyncTaskExecutor`
-|A task executor to run the consumer threads.
-The default executor creates threads named `<name>-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container.
-
 |[[deliveryAttemptHeader]]<<deliveryAttemptHeader,`deliveryAttemptHeader`>>
 |`false`
 |See xref:kafka/annotation-error-handling.adoc#delivery-header[Delivery Attempts Header].
@@ -123,9 +123,14 @@ Also see `idleBeforeDataMultiplier`.
 |None
 |Used to override any arbitrary consumer properties configured on the consumer factory.
 
+|[[listenerTaskExecutor]]<<listenerTaskExecutor,`listenerTaskExecutor`>>
+|`SimpleAsyncTaskExecutor`
+|A task executor to run the consumer threads.
+The default executor creates threads named `<name>-C-n`; with the `KafkaMessageListenerContainer`, the name is the bean name; with the `ConcurrentMessageListenerContainer` the name is the bean name suffixed with `-n` where n is incremented for each child container.
+
 |[[logContainerConfig]]<<logContainerConfig,`logContainerConfig`>>
 |`false`
-|Set to true to log at INFO level all container properties.
+|Set to `true` to log at INFO level all container properties.
 
 |[[messageListener]]<<messageListener,`messageListener`>>
 |`null`
@@ -145,7 +150,7 @@ Also see `idleBeforeDataMultiplier`.
 
 |[[missingTopicsFatal]]<<missingTopicsFatal,`missingTopicsFatal`>>
 |`false`
-|When true prevents the container from starting if the confifgured topic(s) are not present on the broker.
+|When true prevents the container from starting if the configured topic(s) are not present on the broker.
 
 |[[monitorInterval]]<<monitorInterval,`monitorInterval`>>
 |30s
@@ -157,9 +162,21 @@ See `noPollThreshold` and `pollTimeout`.
 |Multiplied by `pollTimeOut` to determine whether to publish a `NonResponsiveConsumerEvent`.
 See `monitorInterval`.
 
+|[[observationConvention]]<<observationConvention,`observationConvention`>>
+|`null`
+|When set, add dynamic tags to the timers and traces, based on information in the consumer records.
+
+|[[observationEnabled]]<<observationEnabled,`observationEnabled`>>
+|`false`
+|Set to `true` to enable observation via Micrometer.
+
+|[[offsetAndMetadataProvider]]<<offsetAndMetadataProvider,`offsetAndMetadataProvider`>>
+|`null`
+|A provider for `OffsetAndMetadata`; by default, the provider creates an offset and metadata with empty metadata. The provider gives a way to customize the metadata.
+
 |[[onlyLogRecordMetadata]]<<onlyLogRecordMetadata,`onlyLogRecordMetadata`>>
 |`false`
-|Set to false to log the complete consumer record (in error, debug logs etc) instead of just `topic-partition@offset`.
+|Set to `false` to log the complete consumer record (in error, debug logs etc.) instead of just `topic-partition@offset`.
 
 |[[pauseImmediate]]<<pauseImmediate,`pauseImmediate`>>
 |`false`
@@ -256,14 +273,6 @@ See xref:kafka/annotation-error-handling.adoc#error-handlers[Container Error Han
 |`ContainerProperties`
 |The container properties instance.
 
-|[[errorHandler]]<<errorHandler,`errorHandler`>>
-|See desc.
-|Deprecated - see `commonErrorHandler`.
-
-|[[genericErrorHandler]]<<genericErrorHandler,`genericErrorHandler`>>
-|See desc.
-|Deprecated - see `commonErrorHandler`.
-
 |[[groupId2]]<<groupId2,`groupId`>>
 |See desc.
 |The `containerProperties.groupId`, if present, otherwise the `group.id` property from the consumer factory.
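Several of the properties touched above (`commitRetries`, the renamed `listenerTaskExecutor`, `logContainerConfig`) are set programmatically on `ContainerProperties` via the listener container factory. A minimal sketch, assuming Spring Kafka 3.1+; the bean name, generics, and thread-name prefix are illustrative, not from the commit:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class ListenerContainerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        ContainerProperties props = factory.getContainerProperties();
        // retries for RetriableCommitFailedException when syncCommits is true
        props.setCommitRetries(3);
        // renamed from consumerTaskExecutor; threads are named <prefix>-C-n
        props.setListenerTaskExecutor(new SimpleAsyncTaskExecutor("my-listener-"));
        // log all container properties at INFO when the container starts
        props.setLogContainerConfig(true);
        return factory;
    }
}
```

This is a configuration sketch rather than a runnable application; it requires the spring-kafka dependency and a `ConsumerFactory` bean on the classpath.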

spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/micrometer.adoc

Lines changed: 3 additions & 3 deletions
@@ -24,7 +24,7 @@ NOTE: With the concurrent container, timers are created for each thread and the
 [[monitoring-kafkatemplate-performance]]
 == Monitoring KafkaTemplate Performance
 
-Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
+Starting with version 2.5, the template will automatically create and update Micrometer `Timer`+++s+++ for send operations, if `Micrometer` is detected on the classpath, and a single `MeterRegistry` is present in the application context.
 The timers can be disabled by setting the template's `micrometerEnabled` property to `false`.
 
 Two timers are maintained - one for successful calls to the listener and one for failures.
@@ -95,7 +95,7 @@ Using Micrometer for observation is now supported, since version 3.0, for the `K
 
 Set `observationEnabled` to `true` on the `KafkaTemplate` and `ContainerProperties` to enable observation; this will disable xref:kafka/micrometer.adoc[Micrometer Timers] because the timers will now be managed with each observation.
 
-Refer to https://micrometer.io/docs/tracing[Micrometer Tracing] for more information.
+Refer to https://docs.micrometer.io/tracing/reference/index.html[Micrometer Tracing] for more information.
 
 To add tags to timers/traces, configure a custom `KafkaTemplateObservationConvention` or `KafkaListenerObservationConvention` to the template or listener container, respectively.
 
@@ -109,6 +109,6 @@ Starting with version 3.0.6, you can add dynamic tags to the timers and traces,
 To do so, add a custom `KafkaListenerObservationConvention` and/or `KafkaTemplateObservationConvention` to the listener container properties or `KafkaTemplate` respectively.
 The `record` property in both observation contexts contains the `ConsumerRecord` or `ProducerRecord` respectively.
 
-The sender and receiver contexts' `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
+The sender and receiver contexts `remoteServiceName` properties are set to the Kafka `clusterId` property; this is retrieved by a `KafkaAdmin`.
 If, for some reason - perhaps lack of admin permissions, you cannot retrieve the cluster id, starting with version 3.1, you can set a manual `clusterId` on the `KafkaAdmin` and inject it into `KafkaTemplate` s and listener containers.
 When it is `null` (default), the admin will invoke the `describeCluster` admin operation to retrieve it from the broker.
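Enabling observation as described in this file is a one-line switch on each side. A minimal sketch, assuming Spring Kafka 3.0+ with micrometer-observation on the classpath; bean names and generics are illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ObservationConfig {

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> pf) {
        KafkaTemplate<String, String> template = new KafkaTemplate<>(pf);
        // per the docs above, this disables the plain Micrometer timers;
        // they are managed with each observation instead
        template.setObservationEnabled(true);
        return template;
    }
    // on the consumer side, the equivalent is
    // factory.getContainerProperties().setObservationEnabled(true);
}
```

This is a configuration fragment, not a standalone application; a `MeterRegistry` (and, for traces, a Micrometer Tracing bridge) must be present in the application context for the observations to be recorded.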

spring-kafka-docs/src/main/antora/modules/ROOT/pages/kafka/pause-resume.adoc

Lines changed: 3 additions & 3 deletions
@@ -13,11 +13,11 @@ Starting with version 2.1.5, you can call `isPauseRequested()` to see if `pause(
 However, the consumers might not have actually paused yet.
 `isConsumerPaused()` returns true if all `Consumer` instances have actually paused.
 
-In addition (also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.
+In addition(also since 2.1.5), `ConsumerPausedEvent` and `ConsumerResumedEvent` instances are published with the container as the `source` property and the `TopicPartition` instances involved in the `partitions` property.
 
 Starting with version 2.9, a new container property `pauseImmediate`, when set to true, causes the pause to take effect after the current record is processed.
-By default, the pause takes effect when all of the records from the previous poll have been processed.
-See <<pauseImmediate>>.
+By default, the pause takes effect when all the records from the previous poll have been processed.
+See xref:kafka/container-props.adoc#pauseImmediate[pauseImmediate].
 
 The following simple Spring Boot application demonstrates by using the container registry to get a reference to a `@KafkaListener` method's container and pausing or resuming its consumers as well as receiving the corresponding events:
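The pause/resume flow this file documents can be sketched as follows; this is not the sample application the docs refer to, just an illustrative component (class and method names are hypothetical):

```java
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.ConsumerPausedEvent;
import org.springframework.kafka.event.ConsumerResumedEvent;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class PauseResumeService {

    private final KafkaListenerEndpointRegistry registry;

    public PauseResumeService(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pause(String listenerId) {
        MessageListenerContainer container = registry.getListenerContainer(listenerId);
        // with pauseImmediate=true, takes effect after the current record;
        // otherwise after all records from the previous poll are processed
        container.pause();
    }

    public void resume(String listenerId) {
        registry.getListenerContainer(listenerId).resume();
    }

    @EventListener
    public void onPaused(ConsumerPausedEvent event) {
        // event source is the container; getPartitions() holds the TopicPartitions involved
    }

    @EventListener
    public void onResumed(ConsumerResumedEvent event) {
        // published once the consumers have actually resumed
    }
}
```

The `listenerId` is the `id` attribute of the corresponding `@KafkaListener`; `isPauseRequested()` and `isConsumerPaused()` on the container can be polled to observe the transition.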
