More documentation on pooling #782

Closed
napalm272 opened this issue May 12, 2015 · 1 comment

Comments

@napalm272

Can you elaborate on how pooling works? Right now I create a new instance like the following in every request:

function someHttpGetRoute(req, res) {
    // a brand-new client (and TCP connection) for every request
    var pgClient = new pg.Client(conString);
    pgClient.connect(function(err) {
        if (err) { return res.status(500).send(err.message); }
        pgClient.query('some query', [/* params */], function(err, result) {
            pgClient.end(); // tear the connection down again
            if (err) { return res.status(500).send(err.message); }
            res.send(result.rows);
        });
    });
}

I read the documentation more carefully, and it seems this is bad because creating a connection takes real effort and this route is hit very often (multiple times every second).
I want to switch to pooling, but I don't know how the pool behaves once I exceed its size. Say I set the size to 20 and 21 requests arrive at the same time: what happens to the last one? Does it go unanswered? Is it blocked until a connection becomes available? Does the pool grow automatically to handle it? Or is new pg.Client(conString) the right way to go after all?

Thanks in advance

@brianc
Owner

brianc commented May 18, 2015

Sure thing - first read this:

https://gist.github.com/brianc/f906bacc17409203aee0

The tl;dr: doing new pg.Client in a web request is almost never the way to go. If you have more than a few thousand requests per second and you're not using pooling, Postgres will crash. If the database becomes the bottleneck, requests will block waiting for the pool to have a free client, but that's natural and how web apps have traditionally always worked.
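The queuing behavior described here can be illustrated with a minimal, self-contained sketch (plain JavaScript, not the actual pg or generic-pool implementation): a fixed-size pool hands out clients until it is exhausted, then queues further checkouts until a client is released.

```javascript
// Toy fixed-size pool: illustrative only, not node-postgres internals.
class ToyPool {
  constructor(size) {
    this.free = size;   // clients not currently checked out
    this.waiting = [];  // checkout callbacks queued while the pool is full
  }
  acquire(cb) {
    if (this.free > 0) {
      this.free--;
      cb();                   // a client is available right away
    } else {
      this.waiting.push(cb);  // pool exhausted: wait for a release
    }
  }
  release() {
    const next = this.waiting.shift();
    if (next) next();         // hand the freed client to the next waiter
    else this.free++;
  }
}

const pool = new ToyPool(20);
const served = [];
for (let i = 1; i <= 21; i++) {
  pool.acquire(() => served.push(i));
}
console.log(served.length);        // 20: request 21 is queued, not dropped
pool.release();                    // one request finishes and frees its client
console.log(served.includes(21));  // true: the queued request now runs
```

So the 21st request is neither refused nor given a fresh connection; it simply waits its turn, which is the blocking behavior described above.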

Hope this helps! Good luck & feel free to ask any more questions via issues. 😄

@brianc brianc closed this as completed May 18, 2015
kevinburkeshyp pushed a commit to Shyp/node-postgres that referenced this issue Feb 17, 2016
Currently, if you call pg.connect(), the call will block indefinitely until a
connection becomes available. In many cases, if a connection is not available
after some period of time, it is preferable to return an error (and control)
to the caller, instead of tying up resources forever.

Blocking on resource checkout also makes it easier for clients to deadlock:
recently at Shyp, we had a situation where a row got locked and the thread
that could unlock it was itself blocked waiting for a connection to become
available, leading to deadlock. In that situation, it would be better to abort
the checkout, which would have raised an error but also broken the deadlock.

Add two new settings to defaults: `block`, which, if false, makes the pool
return an error immediately when you attempt to check out a connection and the
pool is full; and `acquireTimeout`, which waits up to `acquireTimeout`
milliseconds for a connection before giving up and returning an error.
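The two proposed settings can be sketched in isolation. This is a hypothetical plain-JavaScript model of the semantics the commit message describes, not the actual node-postgres or generic-pool code:

```javascript
// Toy model of the proposed checkout options; names and behavior follow
// the commit message above, not the real implementation.
class ToyPool {
  constructor(size, opts) {
    this.free = size;
    this.waiting = [];
    this.opts = opts || {};
  }
  acquire(cb) {
    if (this.free > 0) {
      this.free--;
      return cb(null);
    }
    if (this.opts.block === false) {
      return cb(new Error('pool is full'));  // block: false fails fast
    }
    const waiter = { cb: cb, timer: null };
    if (this.opts.acquireTimeout != null) {
      // acquireTimeout: give up after N ms instead of waiting forever
      waiter.timer = setTimeout(() => {
        this.waiting.splice(this.waiting.indexOf(waiter), 1);
        cb(new Error('timed out waiting for a connection'));
      }, this.opts.acquireTimeout);
    }
    this.waiting.push(waiter);
  }
  release() {
    const next = this.waiting.shift();
    if (next) {
      clearTimeout(next.timer);
      next.cb(null);
    } else {
      this.free++;
    }
  }
}

const pool = new ToyPool(1, { block: false });
const results = [];
pool.acquire(function (err) { results.push(err); });          // null: success
pool.acquire(function (err) { results.push(err.message); });  // 'pool is full'
```

With block: false the second checkout errors immediately instead of waiting, which would have broken a deadlock like the one described in the Shyp incident above.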

This builds on two pull requests against `generic-pool`:

- Support options.block: coopernurse/node-pool#125
- Support options.timeout: coopernurse/node-pool#127

For the moment I'm pointing `generic-pool` at a branch that incorporates
both of these commits. I'm marking this as a proof-of-concept until those go
through, which hopefully they will soon. I'd also like feedback on the
API.

Adds semicolons in many places that omitted them and fixes several typos. I'm
happy to pull those out into a different commit.

Sets the TZ=GMT environment variable before running the tests; without this
value set, and with a Postgres server set to the America/Los_Angeles timezone,
a timezone test failed.

Fixes brianc#782 and brianc#805. Will help alleviate brianc#902. May help with brianc#397.
kevinburkeshyp pushed a commit to Shyp/node-postgres that referenced this issue May 27, 2016
Currently, if you call pg.connect(), the call will block indefinitely until a
connection becomes available. In many cases, if a connection is not available
after some period of time, it is preferable to return an error (and control)
to the caller, instead of tying up resources forever.

Blocking on resource checkout also makes it easier for clients to deadlock:
recently at Shyp, we had a situation where a row got locked and the thread
that could unlock it was itself blocked waiting for a connection to become
available, leading to deadlock. In that situation, it would be better to abort
the checkout, which would have raised an error but also broken the deadlock.

Add a new setting to defaults: `acquireTimeout`, which will wait for
`acquireTimeout` milliseconds before giving up and returning an error. If the
value is undefined (the default), `node-postgres` will continue to wait
indefinitely for a connection to become available.

This builds on a pull request against `generic-pool`, support options.timeout:
coopernurse/node-pool#127. Review has been slow going,
so I published a new package with that change as `generic-pool-timeout`, and
updated the reference in this codebase.
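Under this change, configuration would presumably look something like the following sketch. `poolSize` is a real pg default of this era; `acquireTimeout` is the setting proposed by this commit and remains hypothetical until it is merged:

```javascript
var pg = require('pg');
pg.defaults.poolSize = 20;         // existing knob: max clients in the pool
pg.defaults.acquireTimeout = 5000; // proposed knob: ms to wait for a free
                                   // client before erroring the checkout
```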

Adds semicolons in many places that omitted them and fixes several typos. I'm
happy to pull those out into a different commit.

Sets the TZ=GMT environment variable before running the tests; without this
value set, and with a Postgres server set to the America/Los_Angeles timezone,
a timezone test failed.

Fixes brianc#782 and brianc#805. Will help alleviate brianc#902. May help with brianc#397.