Concurrent Connection/Request Handling #24


Closed
benjchristensen opened this issue Mar 31, 2015 · 3 comments

@benjchristensen (Contributor)

A server needs to receive connections (TCP, etc.) and requests (HTTP, etc.) concurrently.

For example, on a 32-core machine where each core can be receiving IO connections and requests, each core should be able to process them independently of the others.

This means we cannot represent a server handler as a Publisher&lt;Connection&gt; or Publisher&lt;Request&gt;, since a Publisher cannot invoke onNext concurrently.

If a Publisher were used for this, all CPU cores would end up serializing through the single Publisher.

Thus a server handler must just be a function call that is invoked concurrently whenever a connection or request is received.
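To make the contrast concrete, here is a minimal sketch (the `Request`, `Response`, and handler names are hypothetical, not from any actual server API) of a handler modeled as a plain function that IO threads can invoke concurrently, with no single serialization point the way a Publisher's onNext would impose:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Hypothetical sketch: the server handler is a Function invoked concurrently,
// once per request, from whichever IO thread received that request.
public class ConcurrentHandlerSketch {
    // Illustrative request/response types, not a real transport API.
    record Request(int id) {}
    record Response(String body) {}

    public static void main(String[] args) throws Exception {
        // The handler itself holds no shared mutable state, so concurrent
        // invocation from many threads is safe.
        Function<Request, Response> handler =
            req -> new Response("handled " + req.id()
                + " on " + Thread.currentThread().getName());

        // Simulate 4 IO threads, each dispatching requests independently;
        // no thread waits on another to emit into a shared Publisher.
        ExecutorService ioThreads = Executors.newFixedThreadPool(4);
        CompletionService<Response> done =
            new ExecutorCompletionService<>(ioThreads);
        for (int i = 0; i < 8; i++) {
            Request req = new Request(i);
            done.submit(() -> handler.apply(req));
        }
        for (int i = 0; i < 8; i++) {
            System.out.println(done.take().get().body());
        }
        ioThreads.shutdown();
    }
}
```

With a Publisher-based design, the same eight requests would have to funnel through one onNext stream; here each core invokes the function directly.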

@smaldini (Contributor)

Yeah, the onNext argument is interesting. One could argue that serializing the publishers in a non-blocking pipeline would let you manage the maximum number of in-flight/concurrent connections you accept. But a Function&lt;&gt; would probably suit better 👍
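The in-flight bound mentioned above does not require a serializing Publisher; a sketch of one alternative (the `bounded` wrapper and its reject-on-overflow policy are my own illustration, not anything from this project) is to wrap the handler function with a Semaphore:

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

// Hypothetical sketch: cap concurrent handler invocations with a Semaphore
// instead of serializing all requests through a single Publisher.
public class BoundedHandler {
    // Wraps a handler so at most maxInFlight invocations run at once;
    // extra callers are rejected (blocking or queueing are other options).
    static <T, R> Function<T, R> bounded(Function<T, R> handler, int maxInFlight) {
        Semaphore permits = new Semaphore(maxInFlight);
        return req -> {
            if (!permits.tryAcquire()) {
                throw new RejectedExecutionException("too many in-flight requests");
            }
            try {
                return handler.apply(req);
            } finally {
                permits.release();
            }
        };
    }

    public static void main(String[] args) {
        Function<String, String> h = bounded(s -> "handled " + s, 16);
        System.out.println(h.apply("req-1"));
    }
}
```

This keeps concurrency (up to the cap) across cores while still giving the operator an admission limit, which a serialized Publisher would provide only at the cost of single-threading every request.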

@rstoyanchev (Contributor)

Agreed.

This appears to be a duplicate of #7.

@NiteshKant (Contributor)

Closing this, as we seem to agree that a server must not be represented as a Publisher.
