ReactorNettyClient requestProcessor can retain data from queries #492
Comments
Reuse connection-closed exception factory method. [#492] Signed-off-by: Mark Paluch <[email protected]>
… while emitting requests. Once the conversation is accepted, we no longer need to check on a new backend message whether the connection is closed as a channelInactive()/connection.close() signal terminates conversations anyway. [#492] Signed-off-by: Mark Paluch <[email protected]>
@typik89 we applied a change to the driver that seems to resolve the issue. Since we weren't able to fully confirm that the fix is working, can you retest against the latest snapshots (r2dbc-postgresql-0.8.12.BUILD-20220216.140312-3.jar) and let us know the outcome?
I ran 5 requests sequentially and created a heap dump afterwards. It seems that only one object is retained. It's better than it was, but it still looks strange.

```
2022-02-16 23:02:31.078 ERROR 17928 --- [actor-tcp-nio-1] io.netty.util.ResourceLeakDetector : LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
2022-02-16 23:02:46.589 ERROR 17928 --- [ctor-http-nio-3] io.netty.util.ResourceLeakDetector : LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
```
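As a diagnosis aid (not something prescribed in this thread), Netty's leak detector can be switched to its most verbose level so that such LEAK reports include access records showing where the leaked buffer travelled. A minimal sketch using Netty's `ResourceLeakDetector` API:

```kotlin
import io.netty.util.ResourceLeakDetector

// Raise Netty's leak detection to PARANOID so every buffer is tracked and
// LEAK reports carry access records. Equivalent to starting the JVM with
// -Dio.netty.leakDetection.level=paranoid. Expect a noticeable performance cost.
fun enableParanoidLeakDetection() {
    ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID)
}
```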
Regarding the referenced byte array, I think I need additional insights from the Reactor team. The other issue, related to the lingering ByteBuf, has been with us for quite some time and we haven't been able to really pinpoint it.
Windowed fluxes now properly discard ref-counted objects avoiding memory leaks upon cancellation. [#492] Signed-off-by: Mark Paluch <[email protected]>
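For context on what the commit describes, Reactor exposes a discard hook that operators use to clean up elements dropped on cancellation. A hedged sketch of that mechanism (the helper name is made up and this is not the driver's actual code):

```kotlin
import io.netty.util.ReferenceCountUtil
import io.netty.util.ReferenceCounted
import reactor.core.publisher.Flux

// Illustrative only: register a discard handler so that ref-counted elements
// dropped by downstream cancellation (e.g. from a windowed flux) are released
// instead of leaking. This mirrors the idea in the commit, not its exact code.
fun <T> Flux<T>.releaseRefCountedOnDiscard(): Flux<T> =
    doOnDiscard(ReferenceCounted::class.java) { ReferenceCountUtil.release(it) }
```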
The leaking …
I've been running the sample with 500_000 elements in the DB and 5 loops with a 20s sleep at the end, as it is enough to see the issue in heap dumps. With the snapshots, I indeed see a big improvement; there was still one lingering array being retained. If we use the following instead:

```kotlin
.reduceWith({ ByteArrayOutputStream() }) { output, el ->
    output.write(el.toString().toByteArray())
    output.write(" ".toByteArray())
    output
}
```
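Spelled out as a self-contained function (the function name and element type are my own; only the operator chain comes from the comment above), the workaround relies on `reduceWith` supplying one mutable accumulator per subscription, so rows are appended into a single buffer instead of producing a new intermediate array at every step:

```kotlin
import java.io.ByteArrayOutputStream
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

// Sketch of the workaround: accumulate into one mutable ByteArrayOutputStream
// instead of building a new byte[] per element with reduce().
fun joinAsBytes(values: Flux<Long>): Mono<ByteArray> =
    values
        .reduceWith({ ByteArrayOutputStream() }) { output, el ->
            output.write(el.toString().toByteArray())
            output.write(" ".toByteArray())
            output
        }
        .map { it.toByteArray() }
```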
Sounds as if we could close this ticket. Any objections?
Yes, reduceWith helps.
Bug Report
Versions
Current Behavior
Following a report from a user first in Spring Framework then in Reactor Core, I investigated a memory leak where reducing a large dataset into a single array via `Flux#reduce` led to `OutOfMemoryError`.
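To make the problem concrete, here is a hypothetical sketch of the kind of accumulation being described (not the original reproducer): every reduction step allocates a new, larger `byte[]`, and the final array can stay reachable after the query completes.

```kotlin
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

// Illustrative accumulation via Flux#reduce: each step copies the previous
// accumulator into a bigger array, so memory pressure grows with the dataset.
fun accumulate(rows: Flux<ByteArray>): Mono<ByteArray> =
    rows.reduce { acc, next ->
        val merged = acc.copyOf(acc.size + next.size)
        next.copyInto(merged, destinationOffset = acc.size)
        merged
    }
```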
In addition to the reproducer in the above issues from the original author (in Kotlin and using both Spring and Spring Data R2dbc), I've managed to create a self-contained test class that can run from the `r2dbc-postgresql` test suite with minimum effort (see below).

In a nutshell, the reduced `byte[]` arrays from previous loops are retained, preventing garbage collection through Netty channels/selectors. It appears that one major component in that retention is the `ReactorNettyClient` `requestProcessor`. This `EmitterProcessor` instance has a single subscriber, which is left even when the query has completed. This is congruent with what could be observed in the OP's original repro.

This could point to a pooling issue in the OP's reproducer, kind of simulated here by the fact that the `Connection` is not closed? I'm mentioning this because at first I didn't close the `Connection` in my repro above and I was seeing very similar paths to GC roots for the retained `byte[]`…
It also appears that setting `Statement#fetchSize` higher than the number of returned rows makes the issue go away.

edit: fetchSize doesn't help if the `Connection` is not closed.
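For illustration, setting the fetch size on an R2DBC statement looks roughly like the following; the connection URL, table, and column names are assumptions, and only `Statement#fetchSize` itself comes from the R2DBC SPI.

```kotlin
import io.r2dbc.spi.ConnectionFactories
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

fun main() {
    // Hypothetical local test database; adjust the URL and credentials as needed.
    val connectionFactory = ConnectionFactories.get(
        "r2dbc:postgresql://localhost:5432/test?user=test&password=test")

    val count = Flux.usingWhen(
        Mono.from(connectionFactory.create()),
        { connection ->
            Flux.from(
                connection.createStatement("SELECT value FROM test_table LIMIT 40000")
                    // Higher than the number of returned rows, per the observation above.
                    .fetchSize(50_000)
                    .execute())
                .flatMap { result -> result.map { row, _ -> row.get("value", String::class.java) } }
        },
        // Closing the Connection matters, see the edit note above.
        { connection -> connection.close() })
        .count()
        .block()

    println("fetched $count rows")
}
```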
Table schema

Input Code
Steps to reproduce
In addition to the OP's own Kotlin reproducer here, my own self-contained reproducer is below.
At `LIMIT 30000`, the OOM doesn't occur. The 20s pause at the end can be leveraged to trigger a heap dump from e.g. JVisualVM, which can be inspected for retained `ErasableByteArrayOutputStream`s (or their internal `byte[]`). At that limit, the `byte[]` arrays should be the top size objects. Typically:

Self-contained reproducer