Insertion of too many rows do not complete #222
Comments
I have the exact same problem, but it hangs on the 2590th insert every time.

Example that hangs on the 2590th insert:

```java
for (int i = 0; i < 100000; i++) {
    var person = new Person()
            .setFirstName("testing");
    System.out.println("Trying to save: " + i);
    databaseClient.insert()
            .into(Person.class)
            .using(person)
            .map(Function.identity())
            .first()
            .block();
    System.out.println("Done saving: " + i);
}
```

Example that works:

```java
for (int i = 0; i < 100000; i++) {
    var person = new Person()
            .setFirstName("testing");
    System.out.println("Trying to save: " + i);
    databaseClient.insert()
            .into(Person.class)
            .using(person)
            .map(Function.identity())
            .one() // .first() has been replaced with .one()
            .block();
    System.out.println("Done saving: " + i);
}
```
Thanks for investigating. I assume it's related to #2. SQL Server and MySQL drivers already ship a …
I also encountered it, and temporarily worked around it by replacing first() with one().
I was able to reproduce the issue with just R2DBC Postgres:

```java
for (int i = 0; i < 100000; i++) {
    connection.createStatement("INSERT INTO insert_test (value) VALUES($1)")
            .returnGeneratedValues()
            .bind("$1", "a")
            .execute()
            .flatMap(postgresqlResult -> postgresqlResult.map((row, rowMetadata) -> row.get(0)), 1, 1)
            .collectList()
            .block();
    System.out.println("Done saving: " + i);
}
```

What happens here is that after a couple of iterations (about 260 for me), one of the last statements puts the socket into explicit read mode because it has no demand anymore (the one row that was requested via the `flatMap` prefetch of 1 was already emitted). The demand is left at zero. A subsequent query, the one that hangs, has been sent to the channel, but because the receiving side has no demand, no packet is read and therefore the entire query-result receive part hangs.
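To make the demand mechanics concrete, here is a minimal Reactor sketch (an illustration, not the driver's internals) of a subscriber that requests a single element and then cancels, which is roughly what `first()` does; after the cancel, upstream demand stays at zero and any remaining rows are never requested:

```java
import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;

public class CancelLeavesDemandAtZeroDemo {
    public static void main(String[] args) {
        // Stand-in for a statement's row stream: five "rows".
        Flux<Integer> rows = Flux.range(1, 5)
                .doOnRequest(n -> System.out.println("request(" + n + ")"))
                .doOnCancel(() -> System.out.println("cancelled; no demand left for remaining rows"));

        rows.subscribe(new BaseSubscriber<Integer>() {
            @Override
            protected void hookOnSubscribe(Subscription s) {
                request(1); // roughly what first() does: ask for one row
            }

            @Override
            protected void hookOnNext(Integer row) {
                System.out.println("got row " + row);
                cancel(); // ...and cancel once it arrives, leaving demand at zero
            }
        });
    }
}
```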
FluxDiscardOnCancel replays source signals unless the subscription is cancelled. On cancellation, the subscriber requests Long.MAX_VALUE to drain the source and discard elements that are emitted afterwards. [closes #222]
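The idea behind the fix can be illustrated with a small, self-contained sketch (this shows the drain-on-cancel technique, not the actual FluxDiscardOnCancel source): instead of propagating the cancellation upstream and leaving buffered rows unread, the subscriber switches to drain mode, requests Long.MAX_VALUE, and discards everything that still arrives, so the protocol exchange finishes cleanly:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.reactivestreams.Subscription;
import reactor.core.publisher.BaseSubscriber;
import reactor.core.publisher.Flux;

public class DiscardOnCancelSketch {
    public static void main(String[] args) {
        AtomicBoolean draining = new AtomicBoolean();

        Flux.range(1, 5).subscribe(new BaseSubscriber<Integer>() {
            @Override
            protected void hookOnSubscribe(Subscription s) {
                request(1); // downstream only wants one row
            }

            @Override
            protected void hookOnNext(Integer row) {
                if (draining.get()) {
                    System.out.println("draining, discard row " + row);
                    return; // keep consuming so the source is fully drained
                }
                System.out.println("got row " + row);
                draining.set(true);
                request(Long.MAX_VALUE); // drain instead of cancelling
            }

            @Override
            protected void hookOnComplete() {
                System.out.println("source fully drained");
            }
        });
    }
}
```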
For two days I was wondering why, instead of 100k+ records, I could see exactly 2600 in the database. Either way, upgrading to the version that has the fix resolved my issue.
Is this issue resolved? I see the same issue with the latest version, 0.8.6.RELEASE.
That fix was shipped with 0.8.1.RELEASE. Please file a new issue along with a minimal reproducer so we can look into it, @jaswanthbellam.
Bug Report
Versions
Current Behavior
Insertion queries that use bind parameters and RETURNING at the same time do not complete when inserting too many rows.
When inserting multiple rows, there is a limit to the number that can be inserted. When that limit is reached, the execution just hangs forever.
The problem happens with and without the transactional operator. The only difference between them is the number of rows inserted before the issue occurs.
Without the transactional operator, 518 rows are inserted and then it hangs.
With the transactional operator, 260 rows are inserted and then it hangs.
Those numbers also change depending on the queries that were run before the failing query. This behavior can be seen by changing the last argument passed to the method called "behaviorChangingSample" in the repro code. In general, the number of inserts of the failing query that complete before the hang increases by 2 for each query run beforehand.
There is no error stack trace at all; it just hangs.
Table schema
Input Code
Steps to reproduce
Just execute the code below or the DemoApplication in the repo linked below.
There are five methods in the repro code. They showcase that the problem only happens in insert queries using bind parameters and RETURNING at the same time.
Repo link: https://github.com/gabrieldn/spring-r2dbc-postgres-insert-issue
Input Code
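The input code itself was not preserved in this capture of the issue; the full version is in the linked repo. A minimal sketch of the failing shape, assuming a single-column `insert_test` table and a local Postgres instance (connection URL, credentials, and table name are placeholders):

```java
import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class InsertHangRepro {
    public static void main(String[] args) {
        // Placeholder connection URL; adjust host, database, and credentials.
        ConnectionFactory factory = ConnectionFactories.get(
                "r2dbc:postgresql://localhost:5432/test?user=test&password=test");

        Connection connection = Mono.from(factory.create()).block();

        for (int i = 0; i < 100000; i++) {
            // bind parameters + returnGeneratedValues() is the failing combination
            Flux.from(connection.createStatement("INSERT INTO insert_test (value) VALUES($1)")
                            .returnGeneratedValues()
                            .bind("$1", "a")
                            .execute())
                    .flatMap(result -> result.map((row, rowMetadata) -> row.get(0)), 1, 1)
                    .collectList()
                    .block();
            System.out.println("Done saving: " + i);
        }
    }
}
```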
Expected behavior/code
Insert as many rows as needed.
Possible Solution
Additional context
The problem seems to happen only with the Postgres driver. I have tested with both MySQL drivers available and the H2 driver, and they work just fine even when inserting a few million rows.
When running with Spring WebFlux, the problem gets worse and worse, sometimes failing to insert even a few rows; the worst case I saw was failing to complete a POST request that should insert just 10 rows.