Introduce first version of Dispatchers.IO for K/N #3576
Conversation
The branch was updated from ac91fe2 to 192f770.
How so? Why can't we ensure that the number of workers is at most the number of workers in […]?
Elasticity was introduced as a way to prevent once-in-a-blue-moon bugs. I'd argue that such bugs are just as possible if we consider the use case of a web server built on Native. If someone naively ported their JVM program to Native, they would likely not read the documentation thoroughly enough to notice that there is no elasticity on Native. The fact that Kotlin tutorials are written JVM-first, so readers won't encounter this caveat there either, makes things worse.
Why 64? I get that it's the same number as for the JVM, but there we have the ability to configure this number, and here we don't. In general, I'm really wary of having a single constant for all of Native. The range of platforms encompassed by Native is very wide. AFAIK on Linux, you could generally append a zero to this number and not notice a difference. AFAIK on Windows, this number should be cut in half. And iOS worries me the most. It's notoriously finicky and doesn't behave well unless you adopt the best practices. It seems that the best practices on iOS are very much not spawning a bunch of threads but instead using the Grand Central Dispatch library, which, it seems, handles I/O in its own way. Does iOS behave well with the number of threads so large? I don't know. Should […]?
So, 6 cores in an iPhone * 16 = 96 threads. 64 threads for […]. On the application developer side, it's trivial to recreate this with just an […]. I would maybe be ± okay with this if we marked the API as experimental and explicitly published it as a means of gathering feedback, like "my iOS app becomes unusably slow after 32 dispatches happen in […]".
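For context on the configurability point: on the JVM the 64-thread default can be tuned via the documented `kotlinx.coroutines.io.parallelism` system property, and elasticity lets a use-site request more parallelism without permanently spawning extra threads. A hedged sketch of those two knobs (JVM-only, names as documented in kotlinx.coroutines):

```kotlin
import kotlinx.coroutines.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking<Unit> {
    // JVM-only: the 64-thread default can be overridden at startup with
    //   -Dkotlinx.coroutines.io.parallelism=128
    // (the property is read when Dispatchers.IO is first initialized).

    // Thanks to elasticity on the JVM, a call site can also request a view with a
    // higher limit than 64 without keeping extra threads alive permanently:
    val blockingIo = Dispatchers.IO.limitedParallelism(100)

    launch(blockingIo) {
        // blocking calls would go here
    }
}
```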
Unfortunately, workers have neither a "park/unpark" API nor graceful termination, meaning that it will be […]
Indeed, I do not argue against elasticity, and I believe this is a concept worth supporting. It doesn't seem reasonable, though, to attempt supporting it with the Workers API; still, providing access to a common "Dispatchers.IO" is a nice thing to have for early adopters. As soon as threads are here, we'll change that.
Good points. Our main (and probably the only) target is iOS.
Yeah, I've tried to figure out which queue to use and failed. My initial idea was "if you want to use a non-trivial dispatch queue, we can provide a CoroutineDispatcher over it, and K/N's is more of a general-purpose dispatcher", but having thought it over after your comments, I'm not sure it is a good idea. We recently got an iOS expert on the K/N team who maintained a large iOS codebase for VK, so I'll try to reach out to him and will keep you updated on my findings.
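For concreteness, a hedged sketch of what "a CoroutineDispatcher over a dispatch queue" could look like on Darwin targets. The object name is made up, the global-queue choice is arbitrary, and the integer conversions in the interop call assume a 64-bit Darwin target; this is not part of this PR:

```kotlin
import kotlinx.coroutines.*
import platform.darwin.*
import kotlin.coroutines.CoroutineContext

// Illustrative only: hand dispatched blocks to a GCD global queue so the OS
// manages the thread pool, instead of spawning K/N workers ourselves.
object GcdBackedDispatcher : CoroutineDispatcher() {
    override fun dispatch(context: CoroutineContext, block: Runnable) {
        val queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT.toLong(), 0UL)
        dispatch_async(queue) {
            block.run()
        }
    }
}
```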
Do they need it? Everything they do is in a coroutine, so we can emulate anything we need with the usual:

```kotlin
import kotlinx.coroutines.channels.Channel

val wakeup = Channel<Unit>(Channel.UNLIMITED)

suspend fun park() {
    wakeup.receive()
}

fun unpark() {
    wakeup.trySend(Unit)
}
```
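For illustration, a coroutine playing the role of a parked worker on top of this would look roughly like the following (`workerLoop` and `processPending` are placeholders, not existing functions):

```kotlin
// Illustrative only: "parks" by suspending on the channel above, so no
// underlying worker thread is blocked while waiting for work.
suspend fun workerLoop(processPending: suspend () -> Unit) {
    while (true) {
        park()            // suspends until someone calls unpark()
        processPending()  // placeholder for draining whatever work was enqueued
    }
}
```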
If we merge #3595, the limit on the number of workers will be "how many workers are allowed to be allocated at any moment by the limits in dispatchers."
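To illustrate how that limit would surface for users, a sketch (the exact cap after #3595 depends on the dispatcher views an application actually creates):

```kotlin
import kotlinx.coroutines.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    // A view over Dispatchers.IO that never runs more than 4 tasks concurrently;
    // with lazily allocated workers (and the behavior discussed in #3595), no more
    // than 4 underlying workers should ever be needed for this view.
    val limitedIo = Dispatchers.IO.limitedParallelism(4)

    val jobs = List(100) {
        launch(limitedIo) {
            // blocking I/O would go here
        }
    }
    jobs.joinAll()
}
```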
* Emulate expect declaration refinement via an extension property, as the only way to do it in a backwards-compatible manner: in the current model it is impossible to have a common 'expect' Dispatchers declaration, then a refined 'concurrent' Dispatchers declaration with 'expect val IO', and then a JVM declaration with JVM-specific members. The current solution seems to be the least intrusive one (see the sketch after this list).
* Elasticity is not supported, as the K/N Workers API lacks the capability to gracefully shut down workers, meaning that for any unlimited underlying pool, memory consumption would only grow in an unbounded manner.

Fixes #3205
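A minimal sketch of the extension-property refinement from the first bullet, with hedges: the property and backing names are illustrative, and the fixed 64-thread pool below is only a stand-in for the worker-based pool this PR actually introduces (opt-in requirements for `newFixedThreadPoolContext` vary across versions):

```kotlin
import kotlinx.coroutines.*

// Native source set (illustrative). The common 'Dispatchers' object cannot gain a new
// 'expect val IO' member in a backwards-compatible way, so IO is exposed as an
// extension property: 'Dispatchers.IO' resolves to the member on the JVM and to this
// extension on Native, with identical spelling at the call site.
@OptIn(DelicateCoroutinesApi::class, ExperimentalCoroutinesApi::class, ObsoleteCoroutinesApi::class)
private val IoDispatcher: CoroutineDispatcher =
    newFixedThreadPoolContext(64, "Dispatchers.IO") // stand-in for the real worker pool

public val Dispatchers.IO: CoroutineDispatcher
    get() = IoDispatcher
```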
The branch was updated from 192f770 to a38ed39.
Now that workers are allocated lazily (thanks for that!), does this implementation make sense to you, or do you have any other potential improvements in mind?