ClientRead #1774
I raised the same question with AWS support. Here is the response:
@brianc Do you have experience with the above? Looking at the dashboard, ClientRead constitutes by far the longest wait in the query execution breakdown. I am wondering whether this is node-postgres specific or whether it could be (e.g.) network latency.
Hmmm, I don't have experience with that, but I'll look at the code real quick right now & see if anything stands out to me.
Just a bit on how the driver / protocol work: when you submit a query, pg checks to see if it needs to be prepared: https://github.com/brianc/node-postgres/blob/master/lib/query.js#L154. If the query needs preparation, it results in a couple of extra packets being sent, the first (usually) being parse. All these messages are written to a single buffer & pushed into the socket at once on the client side of things. Looking at it, I don't see any place where significant latency would be introduced there... but it definitely performs a lot more work than a non-parameterized query. 2 things you could try:
Lemme know if there's anything in particular you want me to look at!
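To make the two paths concrete, here is a minimal sketch (not from the thread; the `accounts` table and `DATABASE_URL` connection string are placeholders) contrasting a parameterized query, which takes the prepare/extended-protocol path described above, with a plain-text query, which is sent as a single simple query message:

```js
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Parameterized: pg goes through the extended protocol (Parse/Bind/Describe/
// Execute/Sync), all written into one buffer and flushed to the socket at once.
await pool.query('SELECT * FROM accounts WHERE id = $1', [42]);

// No parameters: the text is sent as-is in a single simple Query message.
await pool.query('SELECT * FROM accounts WHERE id = 42');

await pool.end();
```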
Thank you. I am migrating our services to another DC tonight. Will evaluate what, if any, impact it has. If the improvement is not significant enough, I will attempt to experiment with the suggested changes. Based on what you have described, given that there is only a single round-trip per query, I doubt the issue is anything to do with the node-postgres implementation. I could see this being the case only if evaluating a query with parameters involved multiple round-trips (such as: 1) prepare the statement; 2) (wait for the statement creation and) send the parameters).
I'm having this exact same issue. I have high latency here and there (4s-20s) and my ClientRead looks similar to yours, maybe a bit less over the limit. @gajus Could you clarify what a "DC" is? Database Connection? Thanks!
DC = data center
I had the same issue; it went away after switching to `pg-native`.
Disclaimer: not directly related to this issue; I'm not using AWS. I compared switching to `pg-native` against `pg` for the same query. My query is very simple, and the speed difference between the two is significant.
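For anyone trying the same comparison, switching drivers is a small change once the optional `pg-native` package is installed; a minimal sketch (connection string assumed to be in `DATABASE_URL`):

```js
import pg from 'pg';

// Same Pool API, but backed by the native libpq bindings.
// Requires the optional `pg-native` package to be installed.
const { Pool } = pg.native;

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const { rows } = await pool.query('SELECT now() AS now');
console.log(rows[0].now);
await pool.end();
```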
@madacol this is only due to an issue I haven't been able to track down yet. Do you have a self-contained way to reproduce this? Most reports showed this was happening only in a data center. If you have steps to reproduce, that'd help me track it down significantly. (In all my local benchmarks pg is actually faster than pg-native.)
My DB is in Google Cloud, and I'm using `pg`. This is roughly what I'm doing to measure time:

```js
import { Pool } from 'pg';

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  query_timeout: 10000,
  max: 6,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});

// ...

console.time('latestDB');
const { rows } = await pool.query(
  'SELECT symbol, name, "namePlural", flag, code, buy, sell, lobit_prices.created_at AS date\
   FROM lobit_prices\
   JOIN currencies ON lobit_prices.currency_id = currencies.id\
   WHERE code = $1\
   ORDER BY lobit_prices.created_at DESC\
   LIMIT 1',
  ['usd']
);
console.timeEnd('latestDB');
```

Full code is here:
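One note on what a timer around `pool.query()` measures: it includes waiting for a free connection from the pool (and possibly establishing a new one), not just the query round-trip. A small sketch of how one might separate the two (connection string assumed to be in `DATABASE_URL`):

```js
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Checking a client out first separates pool / connection acquisition
// from the query round-trip itself.
console.time('acquire');
const client = await pool.connect();
console.timeEnd('acquire');

try {
  console.time('query-only');
  await client.query('SELECT 1');
  console.timeEnd('query-only');
} finally {
  client.release();
}

await pool.end();
```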
I just made the same measurements with a local DB, comparing `pg-native` and `pg`; a little less difference, but still significant. I can give you a dump of this local DB, if you'd like.
Hey @brianc, I am working with @nwjsmith, who opened #2189. We switched to `pg-native`. I am not sure if we measured query time. We are using knex. There must be something wrong with the way knex is using the pg driver, or with the way we use knex, but I couldn't pinpoint the issue. One thing that comes to mind is that we do other things in the transaction, like writing logs, submitting metrics, and Datadog tracing. Each metric results in a synchronous network call to an agent on the host. Maybe pg and pg-native handle this case differently? We also use async/await in more places than we should. Can this be a problem in transactions if the event loop is overloaded with other work?
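To illustrate the suspicion about doing other work inside a transaction: while the client awaits metrics or logging calls between statements, the transaction's connection sits idle and the server is waiting on the client, which is exactly what shows up as ClientRead. A minimal sketch (the `orders` table and the metric/log helpers are invented stand-ins):

```js
import knexFactory from 'knex';

const knex = knexFactory({
  client: 'pg',
  connection: process.env.DATABASE_URL,
});

// Stand-ins for the metrics / audit-log calls mentioned above.
const emitMetric = (name) => new Promise((resolve) => setTimeout(resolve, 5));
const writeAuditLog = (message) => new Promise((resolve) => setTimeout(resolve, 5));

await knex.transaction(async (trx) => {
  await trx('orders').insert({ status: 'created' });       // server is busy
  await emitMetric('orders.created');                      // server waits on client: ClientRead
  await writeAuditLog('order created');                    // server waits on client: ClientRead
  await trx('orders').where({ status: 'created' }).del();  // server is busy again
});

await knex.destroy();
```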
This is really troubling to me... this has cropped up from time to time & I've never been able to reproduce it. I'll put this high on my list of priorities to do another, deeper investigation into it. I'll try to simulate slower connections or connect to remote databases & see if I can get more info. As an aside... what tool are you using to graph ClientRead?
Another thing that would be helpful, if you have the time, is to fork the repo, run your local fork, and modify the code to see if you can shake out where the perf difference might be.
That looks like the AWS dashboard.
Yeah, this is AWS RDS Performance Insights. We've moved to Aurora since then.
Converting the benchmark to knex with async/await syntax is probably the best strategy. I'll see if I can get to it. I also plan to trace the impact of the metrics calls.
I'm also seeing high ClientRead waits when observing with RDS Performance Insights. My clients are running on AWS Lambda, so I don't think it's possible to run `pg-native`.
For me, it is completely solved by upgrading to 8.5.1.
Perhaps I am misunderstanding something, but I am seeing a lot of ClientRead waits.

Above: a screenshot of AWS RDS Performance Insights.

If I am understanding this correctly, ClientRead is the PostgreSQL server waiting for the client to submit the parameters, i.e. is this node-postgres taking its time to encode the parameters before sending them to the server? What's odd is that, in the case of the particular queries highlighted in the AWS RDS Performance Insights report, the input parameter is a single integer.

Might be unrelated to the above issue, but earlier I observed that some queries occasionally hang in the ClientRead state, e.g.

Is anyone familiar with this?
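For anyone wanting to see these waits outside of Performance Insights, the same information is exposed in `pg_stat_activity`; a small sketch (connection string assumed to be in `DATABASE_URL`) listing the sessions the server is currently waiting on:

```js
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// wait_event = 'ClientRead' means the backend is waiting for the next
// message from the client, e.g. inside an open transaction.
const { rows } = await pool.query(
  `SELECT pid, state, wait_event_type, wait_event,
          now() - state_change AS in_state_for, query
   FROM pg_stat_activity
   WHERE wait_event = 'ClientRead'`
);
console.table(rows);
await pool.end();
```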