Implement cursor support #1053

Open
julienschmidt opened this issue Jan 7, 2020 · 4 comments

Comments

@julienschmidt
Member

Issue description

In some cases it might be undesirable to fetch a potentially huge resultset at once.

MySQL provides cursors for that in the binary protocol (prepared statements).

First, a cursor flag has to be set in the COM_STMT_EXECUTE command; the result can then be requested in chunks with the COM_STMT_FETCH command.

Initially, we will probably only support forward-only and read-only cursors. However, the question remains through what interface we should make this feature available.

@advdv

advdv commented Oct 4, 2020

First off, thank you for the work put into this project!

I'm evaluating database drivers and I'm wondering what the lack of cursors means for a potential application. Let's say I want to stream a large result set from the database to the client: would that mean I now need to store the full result in memory (on the web server)? Does calling rows.Scan read all results into memory, or does it copy from the underlying connection in chunks of rows?

If so, would there be a workaround for that use case? I could keep a cursor on the web server or at the client and query the database repeatedly. But that might mean the read as a whole is spread across multiple transactions.

@methane
Member

methane commented Oct 5, 2020

Let's say I want to stream a large result set from the database to the client: would that mean I now need to store the full result in memory (on the web server)?

No.

Does calling rows.Scan read all results into memory, or does it copy from the underlying connection in chunks of rows?

It copies from the underlying connection in chunks of rows.

Not having cursors does not affect most users. That's why many users use this driver without problems.

With cursors, you could fetch two result sets simultaneously on one connection (transaction).

@advdv

advdv commented Oct 5, 2020

Thank you for the quick response, that's great to hear!

@oakad

oakad commented Feb 7, 2025

It's a pity that this feature has not been implemented yet. From the look of it, NextResultSet would work just fine for receiving the next batch, especially if there were a way to specify the row batch size via some option to the Query call.

In sufficiently complex queries involving outer joins, there is no way to efficiently specify a row limit within the query itself. Running the query without a LIMIT clause may result in millions of rows being returned.

In theory, most queries can be paginated on the consumer side, but that complicates dealing with client APIs a lot (the common parts of the outer-join rows need to be grouped correctly, which may prove rather difficult).
