Adjust fetch size on queries #2097
Can you send a link to what you mean by fetch size? Also, do you have a way to replicate this in a local environment? I plan on doing research here... but for now, can you try setting the `PG_FAST_CONNECTION=true` environment variable? This turns on an experimental different parser I intend to use after I release the 8.0 release in a week or two.
…On Tue, Feb 4, 2020 at 1:49 PM Germán Lena wrote:
Hi, we are suffering from high ClientWaits, and one thing we would like to try is to make the fetch size bigger when we receive many results from the query. I cannot find anything related to this in the docs or the code (to be honest, I have yet to dig deeper). Is there any way to play with this value?
I am talking about the analog of this JDBC configuration. At this point we are running out of ideas, and experimenting with such a setting would help us tune or discard the hypothesis.
OK, well, please keep me posted. Like I mentioned earlier, I'm going to look at some pretty substantial perf work next month. Any findings you come up with and can share with me will be valuable.
I have been playing with this and it works as expected (it's probably worth documenting). PG will split the results into batches of x rows, which helps avoid flooding the consumer on large result sets (or unbounded queries). It is not helping our particular case: we were under the assumption that by default it would use a small batch (as the Java driver does), but it turns out it just gets everything in one transfer. I could not test fast-connection yet; what is its current status? Is it production ready, or are you still polishing it? What are the main differences between the two? Thank you!
Ping. Same question here. Does this library have anything analogous to JDBC setFetchSize? Ref: https://stackoverflow.com/questions/1318354/what-does-statement-setfetchsizensize-method-really-do-in-sql-server-jdbc-driv
You can use a Cursor, which AFAIK is equivalent: https://node-postgres.com/apis/cursor
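To make the Cursor suggestion concrete, here is a minimal sketch. The `drainInBatches` helper below is hypothetical (not part of pg or pg-cursor); it assumes only the documented pg-cursor behavior that `cursor.read(n)` resolves with up to `n` rows and with an empty array once the result set is exhausted. The wiring to a real client is shown in comments because it needs a live database.

```javascript
// Drain any cursor-like object whose read(n) resolves with up to n rows
// (an empty array signals the end of the result set), invoking onBatch
// once per chunk. Returns the total number of rows consumed.
async function drainInBatches(cursor, batchSize, onBatch) {
  let total = 0
  for (;;) {
    const rows = await cursor.read(batchSize)
    if (rows.length === 0) break
    total += rows.length
    await onBatch(rows)
  }
  return total
}

// Hypothetical wiring with node-postgres and pg-cursor (requires a live
// database, so it is not executed here):
//
//   const { Client } = require('pg')
//   const Cursor = require('pg-cursor')
//   const client = new Client()
//   await client.connect()
//   const cursor = client.query(new Cursor('SELECT * FROM big_table'))
//   await drainInBatches(cursor, 100, (rows) => handle(rows))
//   await cursor.close()
//   await client.end()
```

This keeps at most `batchSize` rows in memory per iteration, which is the effect a JDBC `setFetchSize` user is usually after.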
Hi, we are suffering from high ClientWaits, and one thing we would like to try is to make the fetch size bigger when we receive many results from the query. I cannot find anything related to this in the docs or the code (to be honest, I have yet to dig deeper). Is there any way to play with this value?
Am I right to assume this `rows` config (https://github.com/brianc/node-postgres/blob/master/packages/pg/lib/query.js#L27) defines the amount of rows the cursor fetches each time? I don't find docs around this.
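The batching described in this thread (a full result set delivered in chunks of at most `rows` rows) can be sketched without a server. The `batchSizes` helper below is purely illustrative; the commented wiring assumes the `rows` config on pg's `Query` object referenced above, and needs a live database, so it is not run here.

```javascript
// Given a total row count and a fetch size, return the sizes of the
// batches a row-limited fetch would deliver: e.g. 250 rows with a
// fetch size of 100 arrive as chunks of 100, 100, and 50.
function batchSizes(totalRows, fetchSize) {
  const sizes = []
  for (let remaining = totalRows; remaining > 0; remaining -= fetchSize) {
    sizes.push(Math.min(fetchSize, remaining))
  }
  return sizes
}

// Hypothetical wiring with pg's Query object (config name taken from
// packages/pg/lib/query.js as linked above; not executed here):
//
//   const { Client, Query } = require('pg')
//   const client = new Client()
//   await client.connect()
//   const query = new Query({ text: 'SELECT * FROM big_table', rows: 100 })
//   query.on('row', (row) => handle(row))
//   query.on('end', () => client.end())
//   client.query(query)
```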