Eliminate unneeded LimitedDispatcher instances on Dispatchers.Default and Dispatchers.IO #3562
Conversation
…n#3442) `LimitedDispatcher.limitedParallelism` returns `this` if the requested parallelism is greater than or equal to the `LimitedDispatcher`'s own parallelism. `UnlimitedIoScheduler` has its parallelism effectively set to `Int.MAX_VALUE`, so the `parallelism >= this.parallelism` check folds into `parallelism == Int.MAX_VALUE`. Before the change, `LimitedDispatcher(Int.MAX_VALUE)` was returned. While it works as expected, every submitted task goes through its queue and its `Int.MAX_VALUE` worker limit. The change eliminates the `LimitedDispatcher` instance and its queue in this extreme case.
…elism >= core pool size (Kotlin#3442) `LimitedDispatcher.limitedParallelism` returns `this` if the requested parallelism is greater than or equal to the `LimitedDispatcher`'s own parallelism. `DefaultScheduler` has its parallelism effectively set to `CORE_POOL_SIZE`. Before the change, `LimitedDispatcher(parallelism)` was returned. While it works as expected, every submitted task goes through its queue and a `parallelism` worker limit. The change eliminates the `LimitedDispatcher` instance and its queue when the requested parallelism is greater than or equal to `CORE_POOL_SIZE`.
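For illustration, here is a minimal Kotlin sketch of the two kinds of views these commits describe. It is an editorial addition, not part of the PR; the variable names are invented, and the second call assumes the default `CORE_POOL_SIZE` of roughly the number of CPU cores.

```kotlin
import kotlinx.coroutines.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking<Unit> {
    // Requesting effectively unlimited parallelism from Dispatchers.IO:
    // before this change an extra LimitedDispatcher(Int.MAX_VALUE) wrapper
    // (and its queue) sat in front of the unlimited scheduler; after it,
    // tasks go to the underlying scheduler without the extra wrapper.
    val unbounded = Dispatchers.IO.limitedParallelism(Int.MAX_VALUE)

    // Requesting parallelism at or above CORE_POOL_SIZE from Dispatchers.Default:
    // per the contract described above, no wrapper (and no extra queue) is needed.
    val wide = Dispatchers.Default.limitedParallelism(Runtime.getRuntime().availableProcessors())

    launch(unbounded) { println("IO view: $unbounded") }
    launch(wide) { println("Default view: $wide") }
}
```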
No conceptual objections from me. @dkhalanskyjb mind taking a look at the general approach?
Another approach was to expose `effectiveParallelism`.
Co-authored-by: Dmitry Khalanskiy <[email protected]>
`effectiveParallelism` seems to me like a leaky abstraction, because `Dispatchers.IO` has an effective parallelism of 64, but `Dispatchers.IO.limitedParallelism(n) == UnlimitedIoDispatcher.limitedParallelism(n)` has an effective parallelism of two million threads. So, I like what you propose now more.
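To make that distinction concrete, a small sketch (an editorial addition, not from the thread): the 100-thread view below is capped by the view itself, not by `Dispatchers.IO`'s default cap of 64, because it is a view over the underlying unlimited scheduler.

```kotlin
import kotlinx.coroutines.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() = runBlocking {
    // A view over the underlying unlimited IO scheduler, allowing up to
    // 100 concurrently blocked threads regardless of Dispatchers.IO's own limit.
    val view = Dispatchers.IO.limitedParallelism(100)

    val jobs = List(100) {
        launch(view) {
            Thread.sleep(100) // simulate blocking work
        }
    }
    jobs.joinAll()
}
```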
I'd argue that (There is still time since
Practicality outweighed consistency in this case. I'm sure nobody will be sad if they try to limit

A naive and problematic attempt at migrating some backend code to:

```kotlin
val dbConnections = Dispatchers.IO.limitedParallelism(100)
val loggers = Dispatchers.IO.limitedParallelism(2)
val cachers = Dispatchers.IO.limitedParallelism(5)
```

The number of max connections for PostgreSQL and MySQL is about 100, and the need to further increase this limit is common.
(Changed the base twice for GH to catch the most recent

Thanks!
Before the change, `LimitedDispatcher(parallelism)` was returned. While it works as expected, every submitted task goes through its queue and a `parallelism` worker limit. The change eliminates the `LimitedDispatcher` instance and its queue when the requested parallelism is greater than or equal to the effective parallelism of `Dispatchers.Default` and/or `Dispatchers.IO`.
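As a quick illustration of the contract this change leans on (a sketch added here, with made-up variable names): limiting an already-limited view to an equal or larger parallelism is expected to return the same instance rather than wrap it in another queue.

```kotlin
import kotlinx.coroutines.*

@OptIn(ExperimentalCoroutinesApi::class)
fun main() {
    // A limited view capped at 2 workers.
    val two = Dispatchers.Default.limitedParallelism(2)

    // Asking an already-limited view for an equal or larger limit returns
    // the same instance, so no additional LimitedDispatcher is created.
    val stillTwo = two.limitedParallelism(4)

    println(two === stillTwo) // expected: true
}
```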