Support BroadcastChannel(Channel.UNLIMITED) #736
Can you please clarify what your use-case is for that?
I don't really see why this needs a use case, but: I want to not bottleneck the producer on consumers being slow, and not bottleneck one consumer because its neighbours are slow. While arguably a huge speed difference might be an indication of a bug, I want my program to be resilient to this, and I also test that my program can handle these sorts of cases by simulating increased latency on multiple ends. Right now, in the tests I have that do this, I have to simply use a large capacity instead of `Channel.UNLIMITED`. Plus, it's very convenient to be able to use `Channel.UNLIMITED` consistently, without special-casing `BroadcastChannel`.
But if slow consumers don't provide back-pressure to slow down producers, then you will eventually run out of memory. So what would be the use-case for that? Why would you prefer that as opposed to proper back-pressure support?
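For context, a minimal sketch of what "proper back-pressure" means here: with a fixed-capacity channel, `send` suspends the producer once the buffer is full, so memory use stays bounded. The names and numbers below are purely illustrative.

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val channel = Channel<Int>(capacity = 2)

    val producer = launch {
        repeat(10) { i ->
            channel.send(i) // suspends when the 2-element buffer is full
            println("sent $i")
        }
        channel.close()
    }

    launch {
        for (i in channel) {
            delay(50) // slow consumer: the producer is throttled to its pace
            println("received $i")
        }
    }

    producer.join()
}
```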
Personally, I'd rather have an OOM error show up than my program just being slow and me not knowing why. Also, in my case the producers send the data to two places: once to disk and a second time down a channel to be processed, so that if execution stops suddenly it doesn't have to repeat the work it has already done. In my workflow specifically, I end up doing large web requests that can sometimes take a while to finish. Because it takes a while, it's sometimes useful for me to start the pipeline after I've added the first few stages, continue to develop it, then start it over again without repeating any work. In these cases, it's very useful to have a "just use as much memory as you need" option, because even if it ends up breaking before it finishes, in the time until it got there I managed to get some real work done. And chances are that by the time I've finished developing the entire thing, I've managed to fix the bugs that resulted in the congestion. To clarify, I completely agree with you that in most cases you'll want to properly limit the size of the queue. But I do think that it's very reasonable to provide an unlimited option for `BroadcastChannel` as well.
I agree. This is the same for channels, and we do have unlimited channels. Most of the time I'd favor a rendezvous channel or a fixed buffer, but not always. For instance, sometimes (often, in our code-base) we know that the publisher won't have to publish a crazy amount of messages, and therefore it is safe to use an unlimited buffer, because an OOM is very unlikely, or even impossible. In these cases, it is sad that we have to either suspend the publisher (using a small fixed buffer) or waste memory before even knowing how much we need (using a big fixed buffer).
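A minimal sketch of the API gap being discussed, as I understand it (the capacity `10_000` is an arbitrary placeholder for the "big fixed buffer" workaround):

```kotlin
import kotlinx.coroutines.channels.BroadcastChannel
import kotlinx.coroutines.channels.Channel

fun main() {
    // Plain channels already accept the UNLIMITED capacity constant:
    val queue = Channel<Int>(Channel.UNLIMITED) // send() never suspends

    // BroadcastChannel only accepts a positive fixed capacity (or CONFLATED),
    // so the workaround is to pick an arbitrarily large buffer:
    val bus = BroadcastChannel<Int>(10_000)

    // What this issue asks for (not supported at the time of the discussion):
    // val unlimitedBus = BroadcastChannel<Int>(Channel.UNLIMITED)

    queue.close()
    bus.close()
}
```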
@elizarov are you still looking for more clarification on this?
@elizarov This should be something that's supported. Here is an example: I use the following class to create a delegated property that, when it's set or read, delegates to an Android / iOS preference store:

```kotlin
abstract class AbstractDelegate<T>(protected val store: Store) : ReadWriteProperty<Any?, T> {
    private val broadcastChannel: BroadcastChannel<T> by lazy {
        BroadcastChannel<T>(10).apply { offer(performGet()) }
    }

    protected abstract fun performGet(): T

    protected abstract fun performSet(value: T)

    final override fun getValue(thisRef: Any?, property: KProperty<*>): T {
        return performGet()
    }

    final override fun setValue(thisRef: Any?, property: KProperty<*>, value: T) {
        performSet(value)
        broadcastChannel.offer(value) // Always succeeds for ConflatedBroadcastChannel
    }

    fun createChannel(): ReceiveChannel<T> = broadcastChannel.openSubscription()
}
```

I have to make an odd trade-off between allocating too much memory in the `BroadcastChannel` buffer and potentially dropping values once that buffer fills up. Offering an unlimited capacity would fix this.

I'm also forced to make decisions like this every time my team uses a `BroadcastChannel`:

```kotlin
GlobalScope.launch(Dispatchers.Main) {
    broadcastChannel.send(event)
}
```

This is annoying. The dev has to remember to launch a coroutine and `send`, or risk silently losing events with `offer`. It would be far better to just be able to `offer` into an unlimited buffer.
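A small sketch of the drop-on-full behaviour described above, using the channel API this thread is about (capacity and values are illustrative, and the exact output is only what I would expect):

```kotlin
import kotlinx.coroutines.channels.BroadcastChannel
import kotlinx.coroutines.runBlocking

fun main() = runBlocking<Unit> {
    val bus = BroadcastChannel<Int>(2)
    val subscription = bus.openSubscription()

    // With a fixed capacity, offer() starts returning false once the slowest
    // subscriber falls 2 elements behind; those events are simply lost
    // unless the caller switches to a suspending send().
    val accepted = (1..5).map { bus.offer(it) }
    println(accepted) // expected to look like [true, true, false, false, false]

    subscription.cancel()
    bus.close()
}
```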
Thanks for the description of various use-cases. All (well, almost all) of them make sense. @ScottPierce One question is about watching a preference. Wouldn't a `ConflatedBroadcastChannel` be a better fit for that?
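If I read the question right, the point is that a conflated channel already covers the preference-watching case, because it keeps only the latest value and `offer` on it never fails. A small illustrative sketch (my own example, not from the thread):

```kotlin
import kotlinx.coroutines.channels.ConflatedBroadcastChannel
import kotlinx.coroutines.channels.consumeEach
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    // Keeps only the most recent value; new subscribers immediately
    // receive the current value -- a natural fit for watching a preference.
    val darkMode = ConflatedBroadcastChannel(false)

    val watcher = launch {
        darkMode.openSubscription().consumeEach { enabled ->
            println("dark mode: $enabled")
        }
    }

    darkMode.offer(true) // never dropped; it just replaces the previous value
    delay(100)           // give the watcher a chance to print
    watcher.cancel()
}
```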
Won't be implemented. A corresponding `Flow`-based replacement is being designed instead.
@qwwdfsad Is there a ticket we can follow for that? Also - can you help me understand if the
@elizarov Sorry - I never saw your response. So in a lot of cases it's likely that a `ConflatedBroadcastChannel` would work. When using RxJava, though, there were several cases where I explicitly relied on an unbounded buffer.
#1082 is the closest one. Real-world examples of both use-cases are very welcome there. After a bit of investigation and prototyping, we've found that a `Flow`-based replacement covers these scenarios better than extending `BroadcastChannel`.
Sure, will do.
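In current kotlinx-coroutines releases, `SharedFlow` is the documented replacement for `BroadcastChannel`, and it can be configured with an effectively unlimited buffer. A minimal sketch of that configuration, assuming kotlinx-coroutines-core 1.6 or newer:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.MutableSharedFlow
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.yield

fun main() = runBlocking {
    // extraBufferCapacity = Int.MAX_VALUE approximates an "unlimited" buffer,
    // so tryEmit() does not have to drop events even if collectors are slow.
    val events = MutableSharedFlow<Int>(extraBufferCapacity = Int.MAX_VALUE)

    val subscriber = launch {
        events.collect { println("got $it") }
    }
    yield() // let the collector subscribe before we start emitting

    repeat(5) { check(events.tryEmit(it)) } // never suspends, never drops
    delay(100)
    subscriber.cancel()
}
```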
This was mentioned in #254 but never had an issue created for it.