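The snippet below is a minimal sketch of manual offset tracking; the stream name, the message-handling code, and the `conditionToStore()` predicate are illustrative:

[source,java]
----
Consumer consumer = environment.consumerBuilder()
    .stream("my-stream")
    .name("application-1") // <1>
    .manualTrackingStrategy() // <2>
    .builder()
    .messageHandler((context, message) -> {
        // message handling code...
        if (conditionToStore()) {
            context.storeOffset(); // <3>
        }
    })
    .build();
----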
<1> Set the consumer name (mandatory for offset tracking)
<2> Use manual tracking with defaults
<3> Store the current offset on some condition
Manual tracking has only one setting: the *check interval*. The client checks
that the last requested stored offset has actually been stored at the check interval.

The client provides a `SubscriptionListener` interface callback to add behavior before a subscription is created.
This callback can be used to customize the offset the client library computed for the subscription.
The callback is called when the consumer is first created and when the client has to re-subscribe (e.g. after a disconnection or a topology change).
WARNING: This API is *experimental*; it is subject to change.
It is possible to use the callback to get the last processed offset from an external store, that is, without using the server-side offset tracking feature that RabbitMQ Stream provides.
The following code snippet shows how this can be done (note the interaction with the external store is not detailed):
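The snippet is a sketch; `getOffsetFromExternalStore()` is a hypothetical helper standing in for the external store lookup:

[source,java]
----
Consumer consumer = environment.consumerBuilder()
    .stream("my-stream")
    .subscriptionListener(subscriptionContext -> {
        // get the last processed offset from the external store (not detailed here)
        long offset = getOffsetFromExternalStore();
        // resume consuming just after the last processed message
        subscriptionContext.offsetSpecification(OffsetSpecification.offset(offset + 1));
    })
    .messageHandler((context, message) -> {
        // message handling code...
    })
    .build();
----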
The application knows exactly when a message is processed and updates its in-memory offset tracking variable.
Let's take the example of a named consumer with an offset tracking strategy that is lagging because of bad timing and a long flush interval.
When a glitch happens and triggers the re-subscription, the server-side stored offset can be well behind what the application actually processed.
Using this server-side stored offset can lead to duplicates, whereas using the in-memory, application-specific offset tracking variable is more accurate.
A custom `SubscriptionListener` lets the application developer use what's best for the application if the computed value is not optimal.
[[single-active-consumer]]
===== Single Active Consumer
WARNING: Single Active Consumer requires *RabbitMQ 3.11* or later.
When the single active consumer feature is enabled for several consumer instances sharing the same stream and name, only one of these instances will be active at a time and so will receive messages.
1072
+
The other instances will be idle.
The single active consumer feature provides 2 benefits:
* Messages are processed in order: there is only one consumer at a time.
* Consumption continuity is maintained: a consumer from the group will take over if the active one stops or crashes.
A typical sequence of events would be the following:
* Several instances of the same consuming application start up.
* Each application instance registers a single active consumer. The consumer instances share the same name.
* The broker makes the first registered consumer the active one.
* The active consumer receives and processes messages; the other consumer instances remain idle.
* The active consumer stops or crashes.
* The broker chooses the consumer next in line to become the new active one.
* The new active consumer starts receiving messages.
The next figures illustrate this mechanism.
There can be only one active consumer:
.The first registered consumer is active, the next ones are inactive
[ditaa]
....
                    +----------+
             +------+ consumer + Active
             |      +----------+
             |
+--------+   |      +=---------+
+ stream +---+------+ consumer + Inactive
+--------+   |      +----------+
             |
             |      +=---------+
             +------+ consumer + Inactive
                    +----------+
....
The broker rolls over to another consumer when the active one stops or crashes:
.When the active consumer stops, the next in line becomes active
[ditaa]
....
                    +=---------+
                    | consumer + Closed
                    +----------+

+--------+          +----------+
+ stream +---+------+ consumer + Active
+--------+   |      +----------+
             |
             |      +=---------+
             +------+ consumer + Inactive
                    +----------+
....
Note there can be several groups of single active consumers on the same stream.
What makes them different from each other is the name used by the consumers.
The broker deals with them independently.
Let's use an example.
Imagine 2 different applications, `app-1` and `app-2`, consuming from the same stream, with 3 identical instances each.
Each instance registers a single active consumer named after the application.
We end up with 3 `app-1` consumers and 3 `app-2` consumers, 1 active consumer in each group, so overall 6 consumers and 2 active ones, all of this on the same stream.
Let's now look at the API for single active consumer.
====== Enabling Single Active Consumer
Use the `ConsumerBuilder#singleActiveConsumer()` method to enable the feature:
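The snippet below is a minimal sketch; the stream name, consumer name, and message handler are illustrative:

[source,java]
----
Consumer consumer = environment.consumerBuilder()
    .stream("my-stream")
    .name("application-1") // <1>
    .singleActiveConsumer() // <2>
    .messageHandler((context, message) -> {
        // message handling code...
    })
    .build();
----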
<1> Set the consumer name (mandatory to enable single active consumer)
<2> Enable single active consumer
With the configuration above, the consumer will take part in the `application-1` group on the `my-stream` stream.
If the consumer instance is the first in a group, it will get messages as soon as there are some available. If it is not the first in the group, it will remain idle until it is its turn to be active (likely when all the instances registered before it are gone).
====== Offset Tracking
Single active consumer and offset tracking work together: when the active consumer goes away, another consumer takes over and resumes where the former active one left off.
Well, this is how things should work, and luckily this is what happens when using <<consumer-offset-tracking, server-side offset tracking>>.
So as long as you use <<consumer-automatic-offset-tracking, automatic offset tracking>> or <<consumer-manual-offset-tracking, manual offset tracking>>, the handoff between a former active consumer and the new one will go well.
The story is different if you are using an external store for offset tracking.
In this case you need to tell the client library where to resume from, and you can do this by implementing the `ConsumerUpdateListener` API.
[[consumer-update-listener]]
====== Reacting to Consumer State Change
The broker notifies a consumer that becomes active before dispatching messages to it.
The broker expects a response from the consumer, and this response contains the offset the dispatching should start from.
So it is the consumer's responsibility to compute the appropriate offset, not the broker's.
The default behavior is to look up the last stored offset for the consumer on the stream.
This works when server-side offset tracking is in use, but it does not when the application uses an external store for offset tracking.
In this case, it is possible to use the `ConsumerBuilder#consumerUpdateListener(ConsumerUpdateListener)` method, as demonstrated in the following snippet:
.Fetching the last stored offset from an external store in the consumer update listener callback
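[source,java]
----
// A sketch: getOffsetFromExternalStore() and storeOffsetInExternalStore()
// are hypothetical helpers for the external store interaction.
Consumer consumer = environment.consumerBuilder()
    .stream("my-stream")
    .name("application-1")
    .singleActiveConsumer()
    .noTrackingStrategy() // disable server-side offset tracking
    .consumerUpdateListener(context -> {
        // look up the last processed offset in the external store
        long offset = getOffsetFromExternalStore();
        // tell the broker to resume dispatching just after it
        return OffsetSpecification.offset(offset + 1);
    })
    .messageHandler((context, message) -> {
        // message handling code...
        storeOffsetInExternalStore(context.offset());
    })
    .build();
----

The listener returns the `OffsetSpecification` the broker should use when the consumer becomes active; the `noTrackingStrategy()` call is an assumption in this sketch, turning off server-side tracking since the external store replaces it.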
The same blog post covers why a https://blog.rabbitmq.com/posts/2021/07/connecting-to-streams/#with-a-load-balancer[load balancer can make things more complicated] for client applications like the performance tool and how https://blog.rabbitmq.com/posts/2021/07/connecting-to-streams/#client-workaround-with-a-load-balancer[they can mitigate these issues].
[[performance-tool-sac]]
===== Single Active Consumer
If the `--single-active-consumer` flag is set, the performance tool will create <<api.adoc#single-active-consumer, single active consumer>> instances.
This means that if there are more consumers than streams, there will be only one active consumer at a time on a stream, _if they share the same name_.
Note <<performance-tool-offset-tracking, offset tracking>> gets enabled automatically with `--single-active-consumer` if it is not already (using 10,000 for `--store-every`).
Let's see a couple of examples.
In the following command we have 1 producer publishing to 1 stream and 3 consumers on this stream.
As `--single-active-consumer` is used, only one of these consumers will be active at a time.
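A command along these lines would do it (the `stream-perf-test.jar` launcher is assumed):

----
java -jar stream-perf-test.jar --producers 1 --consumers 3 \
    --single-active-consumer --consumer-names my-app
----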
Note we use a fixed value for the consumer names: if they don't have the same name, the broker will not consider them as a group of consumers, so they will all get messages, like regular consumers.
In the following example we have 2 producers for 2 streams and 6 consumers overall (3 for each stream).
Note the consumers have the same name on their streams with the use of `--consumer-names my-app-%s`, as `%s` is a <<consumer-names, placeholder for the stream name>>.
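Such a run could look like the following (launcher assumed, as above):

----
java -jar stream-perf-test.jar --producers 2 --consumers 6 --stream-count 2 \
    --single-active-consumer --consumer-names my-app-%s
----

===== Super Streams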
The performance tool has a `--super-streams` flag to enable <<super-streams.adoc#super-streams, super streams>> on the publisher and consumer sides.
This support is meant to be used with the <<performance-tool-sac, `--single-active-consumer` flag>>, to <<super-streams.adoc#super-stream-sac, benefit from both features>>.
We recommend reading the appropriate sections of the documentation to understand the semantics of the flags before using them.
Let's see some examples.
The example below creates 1 producer and 3 consumers on the default `stream`, which is now a _super stream_ because of the `--super-streams` flag:
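A sketch of the command (launcher assumed):

----
java -jar stream-perf-test.jar --producers 1 --consumers 3 \
    --single-active-consumer --super-streams --consumer-names my-app
----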
The performance tool creates 3 individual streams by default; they are the partitions of the super stream.
They are named `stream-0`, `stream-1`, and `stream-2`, after the name of the super stream, `stream`.
The producer will publish to each of them, using a <<super-streams.adoc#super-stream-producer, hash-based routing strategy>>.
A consumer is _composite_ with `--super-streams`: it creates a consumer instance for each partition.
This is 9 consumer instances overall (each of the 3 composite consumers creates an instance on each of the 3 partitions), spread evenly across the partitions, but with only one active at a time on a given stream.
Note we use a fixed consumer name so that the broker considers the consumers to belong to the same group and enforces the single active consumer behavior.
The next example is more involved.
We are going to work with 2 super streams (`--stream-count 2` and `--super-streams`).
Each super stream will have 5 partitions (`--super-stream-partitions 5`), so this is 10 streams overall (`stream-1-0` to `stream-1-4` and `stream-2-0` to `stream-2-4`).
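The full command might look like this (launcher assumed):

----
java -jar stream-perf-test.jar --producers 2 --consumers 6 --stream-count 2 \
    --super-streams --super-stream-partitions 5 \
    --single-active-consumer --consumer-names my-app-%s
----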
We also see that each super stream has 1 producer (`--producers 2`) and 3 consumers (`--consumers 6`).
The composite consumers will spread their consumer instances across the partitions.
Each partition will have 3 consumers but only 1 active at a time with `--single-active-consumer` and `--consumer-names my-app-%s` (the consumers on a given stream have the same name, so the broker makes sure only one consumes at a time).
Note the performance tool does not use <<performance-tool-connection-pooling, connection pooling>> by default.
The command above opens a significant number of connections – 30 just for consumers – and may not reflect exactly how applications are deployed in the real world.
Don't hesitate to use the `--producers-by-connection` and `--consumers-by-connection` options to make the runs as close to your workloads as possible.
===== Monitoring
The tool can expose some runtime information over HTTP.