Proxito: use cacheops for 2 more models #10106

Merged
merged 1 commit into main from humitos/cacheops-models on Mar 6, 2023

Conversation

@humitos (Member) commented on Mar 6, 2023

`Organization` and `PlanFeature` weren't being cached by `cacheops`, and they are used when serving documentation.

By caching these, we will be caching all the models used when serving documentation, reducing the DB hits and also, a little, the response time.

With this change, we won't hit the DB at all when serving documentation 👍🏼. This has other benefits besides reducing the response time.
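For context, here's a minimal sketch of what enabling this with django-cacheops typically looks like in the Django settings. The app labels and the timeout below are illustrative assumptions, not the exact values from this PR:

```python
# settings.py -- hypothetical sketch of a django-cacheops configuration that
# adds the two documentation-serving models to the cache. The app labels
# ("organizations.organization", "subscriptions.planfeature") and the timeout
# are assumptions for illustration only.

CACHEOPS_REDIS = "redis://localhost:6379/1"  # assumed Redis location

CACHEOPS = {
    # Cache all read operations (get, fetch, count, exists) for these models;
    # cacheops invalidates the cached querysets automatically on save/delete.
    "organizations.organization": {"ops": "all", "timeout": 60 * 60},
    "subscriptions.planfeature": {"ops": "all", "timeout": 60 * 60},
}
```

Because cacheops invalidates entries when the models are written, reads on the documentation-serving path can be answered entirely from Redis once the cache is warm, which is what removes the remaining DB hits.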

@humitos humitos requested a review from ericholscher March 6, 2023 09:10
@humitos humitos requested a review from a team as a code owner March 6, 2023 09:10
@ericholscher (Member) left a comment


@humitos I thought we were pulling back on this experiment? Do you think this will show a significant improvement in doc serving response times?

This is interesting for me in another context, which is availability. It would be really nice to be able to continue to serve docs via redis if we're having issues with Postgres.

Another area where this is interesting is distributed nodes around the world. If we're doing redis-only doc serving, we could run some read-only proxito nodes with proxito + redis in Europe that only hit the US when something isn't already cached. This would be a really interesting experiment in my mind, if we could enable it with something this simple in our codebase.

@humitos (Member, Author) commented on Mar 6, 2023

> @humitos I thought we were pulling back on this experiment?

I'm not canceling the experiment just yet because it's not worse than the original approach and I think it could be better with some more tuning.

> Do you think this will show a significant improvement in doc serving response times?

I don't think so, but I wanted to remove the DB requirement completely if possible, for the reasons you mentioned. Serving everything from cache could open other possibilities in the future, and it seems that the hybrid method we have right now isn't a problem so far. However, looking at the graph, we could maybe gain ~1ms.

Because of this, I'll leave one instance running with these settings so we can keep exploring the results when we have some time.

@humitos (Member, Author) commented on Mar 6, 2023

> If we're doing redis-only doc serving, we could run some read-only proxito nodes with proxito + redis in Europe that only hit the US when something isn't already cached.

Note that this does not serve the docs from Redis; we would still need an S3 replica in Europe as well.

@humitos humitos merged commit bc8aa71 into main Mar 6, 2023
@humitos humitos deleted the humitos/cacheops-models branch March 6, 2023 17:14
@ericholscher (Member) commented

> > If we're doing redis-only doc serving, we could run some read-only proxito nodes with proxito + redis in Europe that only hit the US when something isn't already cached.
>
> Note that this does not serve the docs from Redis; we would still need an S3 replica in Europe as well.

Useful note. But at least removing Postgres from the picture is interesting as an architectural starting point. It's also the piece most likely to have some kind of failure, in my mind, and we could proxy S3 to Europe if we wanted to pay for it. But yeah, a CDN solves most of this in a much cleaner way :)
