
Error in reactive flow when adding BlockHound #1444


Closed

meistermeier opened this issue Jun 30, 2023 · 7 comments

@meistermeier
Contributor

Original report: spring-projects/spring-data-neo4j#2755

Investigating the issue, I got rid of all Spring Data Neo4j bits and can reproduce the problem with the (reactive) session of the Java driver alone.

There are several observable outcomes:

  1. It runs fine, BUT the driver never gets closed in the end:
12:29:58.033 [Neo4jDriverIO-2-12] DEBUG org.neo4j.driver.internal.logging.ChannelErrorLogger -- [0x53f6f200][localhost:7687][] Fatal error occurred in the pipeline (class io.netty.channel.StacklessClosedChannelException)
12:29:58.033 [Neo4jDriverIO-2-12] DEBUG org.neo4j.driver.internal.logging.ChannelErrorLogger -- [0x53f6f200][localhost:7687][] Closing channel because of a failure (class org.neo4j.driver.exceptions.ServiceUnavailableException)
12:29:58.033 [Neo4jDriverIO-2-12] DEBUG org.neo4j.driver.internal.async.inbound.ChannelErrorHandler -- [0x53f6f200][localhost:7687][] Channel is inactive
12:29:58.033 [Neo4jDriverIO-2-12] DEBUG org.neo4j.driver.internal.logging.ChannelErrorLogger -- [0x53f6f200][localhost:7687][] Closing channel because of a failure (class org.neo4j.driver.exceptions.ServiceUnavailableException)
  2. It fails while reading the results with:
java.lang.AssertionError: expectation "expectNextCount(100)" failed (expected: count = 100; actual: counted = 95; signal: onError(java.lang.IllegalMonitorStateException))

	at reactor.test.MessageFormatter.assertionError(MessageFormatter.java:115)
	at reactor.test.MessageFormatter.failPrefix(MessageFormatter.java:104)
	at reactor.test.MessageFormatter.fail(MessageFormatter.java:73)
	at reactor.test.MessageFormatter.failOptional(MessageFormatter.java:88)
	at reactor.test.DefaultStepVerifierBuilder$DefaultVerifySubscriber.checkCountMismatch(DefaultStepVerifierBuilder.java:1372)
	at reactor.test.DefaultStepVerifierBuilder$DefaultVerifySubscriber.onSignalCount(DefaultStepVerifierBuilder.java:1610)
	at reactor.test.DefaultStepVerifierBuilder$DefaultVerifySubscriber.onExpectation(DefaultStepVerifierBuilder.java:1467)
	at reactor.test.DefaultStepVerifierBuilder$DefaultVerifySubscriber.onError(DefaultStepVerifierBuilder.java:1129)
	at reactor.core.publisher.FluxUsingWhen$UsingWhenSubscriber.deferredError(FluxUsingWhen.java:398)
	at reactor.core.publisher.FluxUsingWhen$RollbackInner.onComplete(FluxUsingWhen.java:475)
	at reactor.core.publisher.MonoCreate$DefaultMonoSink.success(MonoCreate.java:140)
	at org.neo4j.driver.internal.reactive.RxUtils.lambda$createEmptyPublisher$0(RxUtils.java:44)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179)
	at org.neo4j.driver.internal.handlers.ChannelReleasingResetResponseHandler.lambda$resetCompleted$2(ChannelReleasingResetResponseHandler.java:63)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2179)
	at org.neo4j.driver.internal.util.Futures.lambda$asCompletionStage$0(Futures.java:73)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
	at io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
	at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:503)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:1623)
	Suppressed: java.lang.IllegalMonitorStateException
		at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryRelease(ReentrantReadWriteLock.java:372)
		at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1059)
		at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.unlock(ReentrantReadWriteLock.java:1147)
		at org.neo4j.driver.internal.async.pool.NettyChannelTracker.doInWriteLock(NettyChannelTracker.java:69)
		at org.neo4j.driver.internal.async.pool.NettyChannelTracker.channelAcquired(NettyChannelTracker.java:95)
		at io.netty.channel.pool.SimpleChannelPool.notifyHealthCheck(SimpleChannelPool.java:249)
		at io.netty.channel.pool.SimpleChannelPool.access$200(SimpleChannelPool.java:43)
		at io.netty.channel.pool.SimpleChannelPool$4.operationComplete(SimpleChannelPool.java:235)
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
		at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
		at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
		at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
		at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
		at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
		at io.netty.util.internal.PromiseNotificationUtil.trySuccess(PromiseNotificationUtil.java:48)
		at io.netty.util.concurrent.PromiseNotifier.operationComplete(PromiseNotifier.java:121)
		at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
		at io.netty.util.concurrent.DefaultPromise.notifyListenerWithStackOverFlowProtection(DefaultPromise.java:522)
		at io.netty.util.concurrent.DefaultPromise.notifyListener(DefaultPromise.java:478)
		at io.netty.util.concurrent.CompleteFuture.addListener(CompleteFuture.java:48)
		at org.neo4j.driver.internal.async.pool.NettyChannelHealthChecker.lambda$isHealthy$0(NettyChannelHealthChecker.java:107)
		at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
		at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
		at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:482)
		... 8 more

where I suspect

	Suppressed: java.lang.IllegalMonitorStateException
		at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryRelease(ReentrantReadWriteLock.java:372)
		at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1059)
		at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.unlock(ReentrantReadWriteLock.java:1147)
		at org.neo4j.driver.internal.async.pool.NettyChannelTracker.doInWriteLock(NettyChannelTracker.java:69)

to be the root problem.

  3. I could not get to this one with the reproducer yet, but I saw it while creating the test case (without changing anything): all of the above, but with the real BlockHound error at the end.
[Neo4jDriverIO-2-5] 2023-06-30 11:52:31,936  WARN  io.netty.util.concurrent.DefaultPromise: 593 - An exception was thrown by org.neo4j.driver.internal.async.pool.NettyChannelTracker$$Lambda$621/0x00000008003b7158.operationComplete()
java.lang.IllegalMonitorStateException: null
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryRelease(ReentrantReadWriteLock.java:372)
	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1007)
	at java.base/java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.unlock(ReentrantReadWriteLock.java:1147)
	at org.neo4j.driver.internal.async.pool.NettyChannelTracker.doInWriteLock(NettyChannelTracker.java:69)
	at org.neo4j.driver.internal.async.pool.NettyChannelTracker.channelClosed(NettyChannelTracker.java:133)
	at org.neo4j.driver.internal.async.pool.NettyChannelTracker.lambda$new$0(NettyChannelTracker.java:51)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
	at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
	at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
	at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
	at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1164)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:755)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:731)
	at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:620)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:105)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
[Neo4jDriverIO-2-1] 2023-06-30 11:52:31,937  WARN r.internal.async.pool.ConnectionPoolImpl:  55 - An error occurred while closing connection pool towards localhost:7687.
java.util.concurrent.CompletionException: reactor.blockhound.BlockingOperationError: Blocking call! java.lang.Object#wait
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:874)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at org.neo4j.driver.internal.util.Futures.lambda$asCompletionStage$0(Futures.java:75)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
	at io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
	at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:503)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(Unknown Source)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: reactor.blockhound.BlockingOperationError: Blocking call! java.lang.Object#wait
	at java.base/java.lang.Object.wait(Object.java)
	at java.base/java.lang.Object.wait(Object.java:338)
	at io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:276)
	at io.netty.channel.DefaultChannelPromise.awaitUninterruptibly(DefaultChannelPromise.java:137)
	at io.netty.channel.DefaultChannelPromise.awaitUninterruptibly(DefaultChannelPromise.java:30)
	at io.netty.channel.pool.SimpleChannelPool.close(SimpleChannelPool.java:408)
	at io.netty.channel.pool.FixedChannelPool.access$1301(FixedChannelPool.java:42)
	at io.netty.channel.pool.FixedChannelPool$6.call(FixedChannelPool.java:512)
	at io.netty.channel.pool.FixedChannelPool$6.call(FixedChannelPool.java:509)
	at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:96)
	at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:262)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
[Neo4jDriverIO-2-1] 2023-06-30 11:52:31,938  WARN r.internal.async.pool.ConnectionPoolImpl:  55 - An error occurred while closing connection pool towards localhost:7687.
java.util.concurrent.CompletionException: reactor.blockhound.BlockingOperationError: Blocking call! java.lang.Object#wait
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332)
	at java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:874)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at org.neo4j.driver.internal.util.Futures.lambda$asCompletionStage$0(Futures.java:75)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
	at io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
	at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:503)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(Unknown Source)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Unknown Source)
Caused by: reactor.blockhound.BlockingOperationError: Blocking call! java.lang.Object#wait
	at java.base/java.lang.Object.wait(Object.java)
	at java.base/java.lang.Object.wait(Object.java:338)
	at io.netty.util.concurrent.DefaultPromise.awaitUninterruptibly(DefaultPromise.java:276)
	at io.netty.channel.DefaultChannelPromise.awaitUninterruptibly(DefaultChannelPromise.java:137)
	at io.netty.channel.DefaultChannelPromise.awaitUninterruptibly(DefaultChannelPromise.java:30)
	at io.netty.channel.pool.SimpleChannelPool.close(SimpleChannelPool.java:408)
	at io.netty.channel.pool.FixedChannelPool.access$1301(FixedChannelPool.java:42)
	at io.netty.channel.pool.FixedChannelPool$6.call(FixedChannelPool.java:512)
	at io.netty.channel.pool.FixedChannelPool$6.call(FixedChannelPool.java:509)
	at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:96)
	at io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run(GlobalEventExecutor.java:262)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:833)
[main] 2023-06-30 11:52:32,149  WARN ns.factory.support.DisposableBeanAdapter: 243 - Invocation of close method failed on bean with name 'driver': org.neo4j.driver.exceptions.Neo4jException: Driver execution failed
  4. Everything runs fine (in that case, please increase the node count in the test setup).

Added dependency:

<dependency>
    <groupId>io.projectreactor.tools</groupId>
    <artifactId>blockhound</artifactId>
    <version>1.0.8.RELEASE</version>
    <scope>test</scope>
</dependency>

Driver version: 5.10.0
Reproducer: https://github.com/meistermeier/neo4j-driver-reactive-blockhound
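
For reference, BlockHound typically only needs to be installed once per test run; a minimal sketch of how that might look in a JUnit 5 test (the class and method names here are illustrative, not taken from the reproducer):

import org.junit.jupiter.api.BeforeAll;
import reactor.blockhound.BlockHound;

class ReactiveSessionBlockHoundTest {

    // Instrument blocking calls on non-blocking (e.g. Netty event loop) threads;
    // BlockHound throws BlockingOperationError when it detects one.
    @BeforeAll
    static void installBlockHound() {
        BlockHound.install();
    }
}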

@seabamirum

I played around with this a bit today. Replacing the ReentrantReadWriteLock with a synchronized block or semaphore to acquire or release the channel in NettyChannelTracker allows the attached test case to pass. I also used ConcurrentHashMaps to store the channel counts, to avoid the need for explicit synchronization in the channelClosed and channelCreated methods. I'm not sure why the reentrant locks don't work, though.

@Override
public void channelAcquired(Channel channel)
{
    synchronized (channel)
    {
        incrementCount(channel, addressToInUseChannelCount);
        decrementCount(channel, addressToIdleChannelCount);
        channel.closeFuture().removeListener(closeListener);
    }

    log.debug(
            "Channel [0x%s] acquired from the pool. Local address: %s, remote address: %s",
            channel.id(), channel.localAddress(), channel.remoteAddress());
}
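
A sketch of the ConcurrentHashMap-based counting mentioned above (the key type and helper names are simplified assumptions, not the actual driver code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical helper: per-address channel counting without explicit locking.
// The driver keys these counts by server address; a String stands in for it here.
class ChannelCounts {

    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    void increment(String address) {
        counts.computeIfAbsent(address, a -> new AtomicInteger()).incrementAndGet();
    }

    void decrement(String address) {
        AtomicInteger count = counts.get(address);
        if (count != null) {
            count.decrementAndGet();
        }
    }

    int inUse(String address) {
        AtomicInteger count = counts.get(address);
        return count == null ? 0 : count.get();
    }
}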

@meistermeier
Contributor Author

meistermeier commented Jul 3, 2023

Thanks for your investigation. After stepping through the exceptions I can see, and given your findings regarding the lock, I focused on: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/locks/ReentrantLock.html#unlock--
Could it be that under "some" circumstances (I am still trying to figure out the right definition of "some") the unlock happens on a different thread than the lock? I am far from being an expert when it comes to reactive flows, but as far as I know, pausing and context/thread switching can happen at any time, right?

Edit: From ReentrantReadWriteLock

protected final boolean isHeldExclusively() {
    // While we must in general read state before owner,
    // we don't need to do so to check if current thread is owner
    return getExclusiveOwnerThread() == Thread.currentThread();
}

Edit 2: Alright, this blog post https://spring.io/blog/2019/12/13/flight-of-the-flux-3-hopping-threads-and-schedulers makes it clear that it can happen at any time.
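
To illustrate the thread hopping (a minimal Reactor sketch, not driver code): everything downstream of publishOn runs on a different thread than the emitting one, so code that locks before such a boundary and unlocks after it would do so on different threads.

import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

public class ThreadHopDemo {
    public static void main(String[] args) {
        Flux.range(1, 3)
                .doOnNext(i -> System.out.println("emitted on  " + Thread.currentThread().getName()))
                // downstream operators continue on a parallel scheduler thread
                .publishOn(Schedulers.parallel())
                .doOnNext(i -> System.out.println("received on " + Thread.currentThread().getName()))
                .blockLast();
    }
}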

@injectives
Contributor

@meistermeier and @seabamirum, many thanks for investigating this, it is very much appreciated! 👍

As far as I can tell, BlockHound catches a blocking call and throws an exception. However, that exception gets swallowed by another error coming from a finally block, itself a consequence of the unexpected error. See below:

try {
    write.lock();  // throws reactor.blockhound.BlockingOperationError: Blocking call! jdk.internal.misc.Unsafe#park
    work.run();
} finally {
    write.unlock(); // throws java.lang.IllegalMonitorStateException because the lock has not actually been acquired by this thread
}
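
The second half of that failure mode can be reproduced in isolation: unlocking a write lock that the current thread never successfully acquired throws the IllegalMonitorStateException seen in the traces (a standalone sketch, unrelated to BlockHound itself):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UnlockWithoutLock {
    public static void main(String[] args) {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        // unlock() without a prior successful lock() on this thread throws
        // IllegalMonitorStateException, which is what the finally block above
        // produces once BlockHound aborts the lock() call.
        lock.writeLock().unlock();
    }
}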

This may be whitelisted:

BlockHound.install(builder -> builder.allowBlockingCallsInside("org.neo4j.driver.internal.async.pool.NettyChannelTracker", "doInWriteLock"));

However, there are several places with locks in the driver.

Perhaps it is worth making a BlockHound integration similar to what was done here? reactor/BlockHound#75

What do you think?
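
Such an integration could look roughly like this (a sketch only; the class name is hypothetical and it would have to be registered via META-INF/services/reactor.blockhound.integration.BlockHoundIntegration):

import reactor.blockhound.BlockHound;
import reactor.blockhound.integration.BlockHoundIntegration;

// Hypothetical integration class, not part of the driver.
public class Neo4jDriverBlockHoundIntegration implements BlockHoundIntegration {

    @Override
    public void applyTo(BlockHound.Builder builder) {
        // Allow the driver's lock-guarded bookkeeping to block briefly.
        builder.allowBlockingCallsInside(
                "org.neo4j.driver.internal.async.pool.NettyChannelTracker", "doInWriteLock");
    }
}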

@seabamirum

Before converting a number of my Flux.flatMaps to Flux.concatMaps, I was observing mysterious application hangs on a test server that was not running BlockHound. However, the tests all seem to pass with BlockHound disabled, so if telling it to ignore the blocking lock() method prevents the subsequent IllegalMonitorStateException, then maybe that's all that is needed?

@seabamirum

With BlockHound enabled, my application grinds to a halt with JMeter running 15 threads, so it must be related to the swallowed exception you found and locks not getting released. If a custom integration works, it seems less risky than replacing the locks with other synchronization mechanisms. However, allowBlockingCallsInside doesn't seem to work if there is a nested lambda inside the method, and it could be tricky to find everywhere a lock is used.
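
One possible explanation for the nested-lambda case (an assumption about how allowBlockingCallsInside matches stack frames, not verified here): lambdas compile to separate synthetic methods, so the allowance would have to name the synthetic method rather than the enclosing one, e.g.:

import reactor.blockhound.BlockHound;

public class LambdaAllowance {
    public static void main(String[] args) {
        // Assumption: if the blocking call sits inside a lambda declared in
        // doInWriteLock, the frame BlockHound inspects is the compiler-generated
        // synthetic method (e.g. "lambda$doInWriteLock$0"); its exact name is
        // compiler-dependent, so this is illustrative only.
        BlockHound.install(builder -> builder.allowBlockingCallsInside(
                "org.neo4j.driver.internal.async.pool.NettyChannelTracker",
                "lambda$doInWriteLock$0"));
    }
}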

@injectives
Contributor

@seabamirum, we have introduced initial experimental support for BlockHound in the driver.

If this is something that is easy enough for you to test, please build the driver and give it a try.

mvn clean install -DskipTests

Obviously, if we detect another issue, we are more than happy to take a look as well.

@injectives
Contributor

injectives commented Jul 18, 2023

@seabamirum, the next driver release will also come with this update: #1457
Previously, a cancellation of a reactive session run could result in a dangling ReactiveResult that would never be published but would still occupy a connection.
