More documentation on pooling #782
Sure thing - first read this: https://gist.github.com/brianc/f906bacc17409203aee0

The tl;dr: don't create a new client per request - check a client out of the pool and return it when you're done.

Hope this helps! Good luck & feel free to ask any more questions via issues. 😄
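For reference, a minimal sketch of the pooled checkout pattern the linked gist describes, using the callback-style `pg.connect(conString, cb)` API from node-postgres 4.x (the connection string is a placeholder):

```javascript
var pg = require('pg');

// Placeholder connection string - substitute your own.
var conString = 'postgres://user:password@localhost/mydb';

pg.connect(conString, function (err, client, done) {
  if (err) {
    return console.error('error checking out a client from the pool', err);
  }
  client.query('SELECT NOW() AS now', function (err, result) {
    // Always return the client to the pool, success or failure.
    done();
    if (err) {
      return console.error('error running query', err);
    }
    console.log(result.rows[0].now);
  });
});
```

The key point is `done()`: forgetting to call it leaks the client, and once `poolSize` clients have leaked, every later `pg.connect()` call waits forever.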
Currently if you call pg.connect(), the call will block indefinitely until a connection becomes available. In many cases, if a connection is not available after some period of time, it's preferable to return an error (and call control) to the client, instead of tying up resources forever. Blocking on resource checkout also makes it easier for clients to deadlock - recently at Shyp, we had a situation where a row got locked and the thread that could unlock it was blocked waiting for a connection to become available, leading to deadlock. In that situation, it would be better to abort the checkout, which would have errored, but also broken the deadlock.

Adds two new settings to defaults: `block`, which, if false, will immediately return an error if the pool is full when you attempt to check out a new connection. Also adds `acquireTimeout`, which will wait for `acquireTimeout` milliseconds before giving up and returning an error.

This builds on two pull requests against `generic-pool`:

- Support options.block: coopernurse/node-pool#125
- Support options.timeout: coopernurse/node-pool#127

For the moment I'm pointing `generic-pool` at a branch that incorporates both of these commits. I'm marking this as a proof-of-concept until those go through, which hopefully they will soon. I'd also like feedback on the API.

Adds semicolons in many places that omitted them and fixes several typos. I'm happy to pull those out into a different commit.

Sets the TZ=GMT environment variable before running the tests; without this value set, and with a Postgres server set to the America/Los_Angeles timezone, a timezone test failed.

Fixes brianc#782 and brianc#805. Will help alleviate brianc#902. May help with brianc#397.
Currently if you call pg.connect(), the call will block indefinitely until a connection becomes available. In many cases, if a connection is not available after some period of time, it's preferable to return an error (and call control) to the client, instead of tying up resources forever. Blocking on resource checkout also makes it easier for clients to deadlock - recently at Shyp, we had a situation where a row got locked and the thread that could unlock it was blocked waiting for a connection to become available, leading to deadlock. In that situation, it would be better to abort the checkout, which would have errored, but also broken the deadlock.

Adds a new setting to defaults: `acquireTimeout`, which will wait for `acquireTimeout` milliseconds before giving up and returning an error. If the value is undefined (the default), `node-postgres` will continue to wait indefinitely for a connection to become available.

This builds on a pull request against `generic-pool`, support options.timeout: coopernurse/node-pool#127. Review has been slow going, so I published a new package with that change as `generic-pool-timeout`, and updated the reference in this codebase.

Adds semicolons in many places that omitted them and fixes several typos. I'm happy to pull those out into a different commit.

Sets the TZ=GMT environment variable before running the tests; without this value set, and with a Postgres server set to the America/Los_Angeles timezone, a timezone test failed.

Fixes brianc#782 and brianc#805. Will help alleviate brianc#902. May help with brianc#397.
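The `acquireTimeout` semantics described above can be illustrated with a small, self-contained model (a hypothetical sketch with a simulated clock - not node-postgres or generic-pool internals): a checkout either succeeds immediately, waits in a queue, or errors once its deadline passes.

```javascript
// Hypothetical model of pool checkout with an acquire timeout.
// Uses a simulated clock so the behavior is deterministic.
function Pool(size, acquireTimeout) {
  this.size = size;                    // max connections handed out at once
  this.inUse = 0;                      // connections currently checked out
  this.waiters = [];                   // queued checkouts: { cb, expiresAt }
  this.acquireTimeout = acquireTimeout; // ms; undefined = wait indefinitely
  this.now = 0;                        // simulated clock
}

Pool.prototype.acquire = function (cb) {
  if (this.inUse < this.size) {
    this.inUse++;
    cb(null, {});                      // a free "connection" - succeed now
  } else {
    var expiresAt = this.acquireTimeout === undefined
      ? Infinity                       // default: block until one frees up
      : this.now + this.acquireTimeout;
    this.waiters.push({ cb: cb, expiresAt: expiresAt });
  }
};

Pool.prototype.release = function () {
  if (this.waiters.length > 0) {
    // Hand the freed connection straight to the next waiter.
    this.waiters.shift().cb(null, {});
  } else {
    this.inUse--;
  }
};

// Advance the simulated clock and fail any waiters past their deadline.
Pool.prototype.tick = function (ms) {
  this.now += ms;
  var self = this;
  this.waiters = this.waiters.filter(function (w) {
    if (w.expiresAt <= self.now) {
      w.cb(new Error('timed out waiting for a connection'));
      return false;
    }
    return true;
  });
};

// Demo: pool of 1 connection, 100 ms acquire timeout.
var pool = new Pool(1, 100);
var results = [];
pool.acquire(function (err) { results.push(err ? 'err' : 'ok'); }); // succeeds
pool.acquire(function (err) { results.push(err ? 'err' : 'ok'); }); // queued
pool.tick(150); // 150 ms of simulated time pass; the waiter times out
console.log(results); // → ['ok', 'err']
```

With `acquireTimeout` undefined, `expiresAt` is `Infinity` and the second checkout would simply wait until `release()` is called, which is the blocking behavior this issue asks about.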
Can you elaborate on how pooling works? Right now I create a new instance like the following in every request:
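The original snippet was lost in the page export; the following reconstruction is a guess based on the `new pg.Client(conString)` call mentioned later in the question, with a placeholder connection string:

```javascript
var pg = require('pg');

// Placeholder connection string - substitute your own.
var conString = 'postgres://user:password@localhost/mydb';

// A brand-new client (and TCP connection) on every request:
var client = new pg.Client(conString);
client.connect(function (err) {
  if (err) {
    return console.error('could not connect', err);
  }
  client.query('SELECT 1', function (err, result) {
    client.end(); // tear the connection down again
  });
});
```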
I read the documentation more carefully and it seems that this is bad because creating a connection takes some effort and this request is called very often (multiple times every second).
I want to switch to pooling, but I don't know how pooling behaves once I exceed the pool size. Let's say I set the size to 20, but 21 requests happen at the same time - what happens to the last request? Does it get no response? Is it blocked until a connection becomes available? Does the pool grow automatically to handle the request? Or is doing `new pg.Client(conString)` the right way to go?
Thanks in advance