@@ -161,7 +161,6 @@ then send no more messages, then a message batch of 10 will be sent immediately
If we send 1000 messages of 100 bytes each, then - after exceeding 64'000 bytes (after ca. 640 messages) - we'll
send the message batch, and this might have taken only 3 ms.
-
NOTE: Since 3.x, message bundling is the default, and it cannot be enabled or disabled anymore (the config
is ignored). However, a message can set the `DONT_BUNDLE` flag to skip message bundling. This is only recognized
@@ -183,7 +182,8 @@ If max_bundle_size exceeded, or no more message -> send message batch
When the message send rate is high and/or many large messages are sent, latency is more or less the time to fill
`max_bundle_size`. This should be sufficient for a lot of applications. If not, flags `OOB` and `DONT_BUNDLE` can be
- used to bypass bundling.
+ used to bypass bundling. Alternatively, `max_bundle_size` can be reduced, so that smaller batches are accumulated
+ and sent (it takes less time to fill smaller batches).
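The accumulate-and-flush behavior described above can be sketched in plain Java. This is an illustrative model, not the actual JGroups bundler; the class and field names (`SimpleBundler`, `sentBatches`) are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a transport-level bundler (not the JGroups
// implementation): messages accumulate in a batch, and the batch is sent as
// soon as the accumulated size reaches max_bundle_size, or when no more
// messages are pending (flush).
class SimpleBundler {
    private final int maxBundleSize;                          // e.g. 64'000 bytes
    private final List<byte[]> batch = new ArrayList<>();
    private int accumulatedBytes;
    final List<List<byte[]>> sentBatches = new ArrayList<>(); // stands in for the network

    SimpleBundler(int maxBundleSize) {this.maxBundleSize = maxBundleSize;}

    void send(byte[] msg) {
        batch.add(msg);
        accumulatedBytes += msg.length;
        if (accumulatedBytes >= maxBundleSize)                // size threshold reached -> send batch
            flush();
    }

    void flush() {                                            // also called when no more messages are pending
        if (batch.isEmpty())
            return;
        sentBatches.add(new ArrayList<>(batch));
        batch.clear();
        accumulatedBytes = 0;
    }
}
```

With a smaller `maxBundleSize`, batches fill (and ship) sooner, which is exactly the latency/throughput trade-off mentioned above.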
@@ -200,14 +200,13 @@ be the address and port number of the _unicast_ socket.
===== Using UDP and plain IP multicasting
- A protocol stack with UDP as transport protocol is typically used with clusters whose members run on
- the same host or are distributed across a LAN. Note that before running instances
- _in different subnets_, an admin has to make sure that IP multicast is enabled
- across subnets. It is often the case that IP multicast is not enabled across subnets.
- Refer to section <<ItDoesntWork>> for running a test program that determines whether
- members can reach each other via IP multicast. If this does not work, the protocol stack cannot use
- UDP with IP multicast as transport. In this case, the stack has to either use UDP without IP
- multicasting, or use a different transport such as TCP.
+ A protocol stack with UDP as transport protocol is typically used with clusters whose members run on the same host or
+ are distributed across a LAN. Note that before running instances _in different subnets_, an admin has to make sure
+ that IP multicast is enabled across subnets. It is often the case that IP multicast is not enabled across subnets.
+
+ Refer to section <<ItDoesntWork>> for running a test program that determines whether members can reach each other
+ via IP multicast. If this does not work, the protocol stack cannot use UDP with IP multicast as transport. In this
+ case, the stack has to either use UDP without IP multicasting, or use a different transport such as TCP.
[[IpNoMulticast]]
@@ -597,8 +596,8 @@ message 5. Alternatively, the application might receive a message batch containi
through that batch, message 4 will be consumed before message 5.
Regular messages from different senders P and Q are delivered in parallel. E.g if P sends 4 and 5 and Q sends 56 and 57,
- then the `receive()` callback might get invoked in parallel for P4 and Q57. Therefore the `receive()` callbacks
- have to be thread-safe.
+ then the `receive()` callback might get invoked in parallel for P4 and Q57. Therefore the `receive()` callback
+ has to be thread-safe.
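What "thread-safe" means here can be sketched with plain Java (not the JGroups `Receiver` API; the callback signature and its state are made up for illustration): any state shared across invocations must use thread-safe constructs, because two threads may run the callback at once for different senders.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: receive() may run concurrently for messages from different senders
// (e.g. P4 and Q57 in parallel), so all shared state must be thread-safe.
class SafeReceiver {
    private final AtomicInteger received = new AtomicInteger();  // instead of a plain int counter
    private final ConcurrentHashMap<String,Integer> highestSeqno = new ConcurrentHashMap<>();

    void receive(String sender, int seqno) {         // may be invoked by multiple threads at once
        received.incrementAndGet();
        highestSeqno.merge(sender, seqno, Math::max); // atomic read-modify-write
    }

    int count()               {return received.get();}
    Integer highest(String s) {return highestSeqno.get(s);}
}
```

A plain `int` counter or `HashMap` in the same place would be subject to lost updates under concurrent delivery.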
In contrast, OOB messages are delivered in an undefined order, e.g. messages P4 and P5 might get delivered as P4 -> P5
(P4 followed by P5) in some receivers and P5 -> P4 in others. It is also possible that P4 is delivered in parallel with
@@ -607,6 +606,54 @@ P5, each message getting delivered by a different thread.
The only guarantee for both regular and OOB messages is that a message will get delivered exactly once. Dropped messages
are retransmitted and duplicate messages are dropped.
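The exactly-once contract can be illustrated with a per-sender dedup filter. This is only a sketch of the guarantee, not the actual NAKACK/UNICAST retransmission logic; real stacks track sliding windows of seqnos rather than unbounded sets:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: a receiver-side filter that delivers each (sender, seqno) pair
// exactly once; retransmitted duplicates are recognized by seqno and dropped.
class DedupFilter {
    private final Map<String,Set<Long>> delivered = new HashMap<>();

    // returns true if the message is delivered, false if it is a duplicate
    synchronized boolean deliver(String sender, long seqno) {
        return delivered.computeIfAbsent(sender, s -> new HashSet<>()).add(seqno);
    }
}
```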
+ [[MessageProcessingPolicy]]
+ ==== Message processing policy
+
+ When a message or message batch is received, it is forwarded to an implementation of `MessageProcessingPolicy`. The
+ policy defines how to deliver a message / batch. A number of predefined policies are provided, but custom policies
+ can be used: to do this, `message_processing_policy` can be set to the fully qualified name of a class implementing
+ `MessageProcessingPolicy`.
+
+ The following policies are provided:
+
+ .MessageProcessingPolicy implementations
+ [options="header",cols="2,10"]
+ |===============
+ |Name|Description
+
+ | `submit` | `SubmitToThreadPool`. Messages and batches are handed to the thread pool for delivery. E.g. if we receive
+ `A:7`, `B:22`, `A:8` and a batch `B:23-30`, then we'll use 4 threads from the thread pool to deliver the 3 messages and
+ the batch. Contrast this to `max`, which uses 2 threads (see below). +
+ This is the default for OOB messages/batches, as they can be delivered in any order. Can be used for regular messages /
+ batches, too, but not very efficient as they will get reordered in `NAKACK-X` / `UNICAST-X` again. +
+ The main advantage of `submit` is that independent messages/batches can be processed *concurrently* by the application;
+ in a message batch, messages are processed one-by-one, delaying messages at the tail of the batch by the time it
+ takes to process messages ahead of them.
+
+ | `max` | `MaxOneThreadPerSender`, subclass of `SubmitToThreadPool`. Default policy. OOB messages/batches are passed to
+ the superclass (`SubmitToThreadPool`). +
+ Regular messages/batches are queued: one queue exists for each sender. When a message/batch is received, it is added
+ to the corresponding queue. If no thread is processing that queue, a new thread from the pool is started to process
+ all messages from that queue, until empty, then the thread terminates. +
+ This ensures that at most one thread is delivering messages/batches from a given sender. In the example above, we'd
+ have 2 threads delivering messages/batches: one for `A:7-8` and the other for `B:22-30`.
+
+ | `unbatch` | `UnbatchOOBBatches`, subclass of `MaxOneThreadPerSender`. This policy passes message batches of size up
+ to `max_size` to the `SubmitToThreadPool` policy. When `max_size` is exceeded, then all messages in the batch are
+ passed to the thread pool one by one. +
+ Example with `max_size=5`: an OOB batch of 4 is passed up to `submit`. An OOB batch of 6 is not passed up, but the 6
+ messages are each passed up by a separate thread from the pool (6 threads). +
+ The idea of `unbatch` is that when the average processing time per message is large, then the messages toward the tail
+ of the batch are disadvantaged; processing all messages in separate threads is faster. See <<MessageBatch>> for
+ details. +
+ The max size can be defined in the configuration as follows:
+ `<TCP message_processing_policy="unbatch" msg_processing_policy.max_size="5">`
+
+ | `direct` | `PassRegularMessagesUpDirectly`, subclass of `SubmitToThreadPool`. OOB messages/batches are handled by
+ `SubmitToThreadPool`. Regular messages/batches are passed up on the same thread (the one that read from the network). +
+ Experimental, used to measure performance. Might get removed soon.
+ |===============
+
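The queue-per-sender idea behind `max` can be sketched with plain `java.util.concurrent` (an illustrative model, not the JGroups `MaxOneThreadPerSender` code; class and method names are made up):

```java
import java.util.Deque;
import java.util.concurrent.*;

// Sketch of a MaxOneThreadPerSender-style policy: one queue per sender, and at
// most one pool thread drains a given queue at a time. Messages from the same
// sender stay in order; different senders are processed in parallel.
class PerSenderProcessor {
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final ConcurrentMap<String,Deque<Runnable>> queues = new ConcurrentHashMap<>();
    private final ConcurrentMap<String,Boolean> draining = new ConcurrentHashMap<>();

    void process(String sender, Runnable deliver) {
        queues.computeIfAbsent(sender, s -> new ConcurrentLinkedDeque<>()).add(deliver);
        if (draining.putIfAbsent(sender, Boolean.TRUE) == null) // no thread active for this sender
            pool.execute(() -> drain(sender));
    }

    private void drain(String sender) {
        Deque<Runnable> q = queues.get(sender);
        for (;;) {
            Runnable r;
            while ((r = q.poll()) != null)
                r.run();                     // deliver in FIFO (sender) order
            draining.remove(sender);         // queue empty: this thread terminates
            // re-check: a message may have been added between poll() and remove()
            if (q.isEmpty() || draining.putIfAbsent(sender, Boolean.TRUE) != null)
                return;
        }
    }

    void shutdown() {
        pool.shutdown();
        try {pool.awaitTermination(5, TimeUnit.SECONDS);}
        catch (InterruptedException e) {Thread.currentThread().interrupt();}
    }
}
```

Because only one thread drains a given sender's queue, per-sender FIFO order is preserved without any global lock, while queues of different senders are drained concurrently.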
[[OOB]]
===== Out-of-band messages