High CPU usage when cancelled query #242


Closed
steven-sheehy opened this issue Feb 4, 2020 · 5 comments
Labels: type: blocker (An issue that is blocking us from releasing), type: bug (A general bug)

Comments


steven-sheehy commented Feb 4, 2020

Bug Report

We're using 0.8.1 in our project and running into very high CPU utilization after performing a simple SELECT on a table. The jstack output below shows what appears to be an infinite loop in io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.drainLoop(ReactorNettyClient.java:828).

Also, after the query completed and returned the appropriate results, pg_stat_activity still shows that query as active, even though it should have completed and the connection should have been returned to the pool. I've confirmed with logs for doOnCancel and doOnComplete that the Flux actually completes. Eventually every connection in the connection pool shows as active and the pool is exhausted. It's not clear whether these are related or two separate issues.

Versions

  • Driver: 0.8.1
  • Database: PostgreSQL 9.6 (Google Cloud SQL)
  • Java: OpenJDK 11
  • OS: Ubuntu 18.04

Current Behavior

Application logs

jstack output
2020-02-03 22:11:34
Full thread dump OpenJDK 64-Bit Server VM (11.0.6+10-post-Ubuntu-1ubuntu118.04.1 mixed mode, sharing):

Threads class SMR info:
_java_thread_list=0x00007fd974001880, length=20, elements={
0x00007fd9a0165000, 0x00007fd9a0167000, 0x00007fd9a016f000, 0x00007fd9a0171800,
0x00007fd9a0173800, 0x00007fd9a0175800, 0x00007fd9a01ac800, 0x00007fd9a01b1800,
0x00007fd9a0bb1800, 0x00007fd9a046a000, 0x00007fd9a0997000, 0x00007fd9a0997800,
0x00007fd9a0c36000, 0x00007fd9a15d9800, 0x00007fd9a15f0000, 0x00007fd9a1872800,
0x00007fd9a187d800, 0x00007fd9a0015000, 0x00007fd9640a5000, 0x00007fd95400b800
}

"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=5.61ms elapsed=198.14s tid=0x00007fd9a0165000 nid=0x4562 waiting on condition  [0x00007fd9842d3000]
   java.lang.Thread.State: RUNNABLE
	at java.lang.ref.Reference.waitForReferencePendingList([email protected]/Native Method)
	at java.lang.ref.Reference.processPendingReferences([email protected]/Reference.java:241)
	at java.lang.ref.Reference$ReferenceHandler.run([email protected]/Reference.java:213)

   Locked ownable synchronizers:
	- None

"Finalizer" #3 daemon prio=8 os_prio=0 cpu=0.54ms elapsed=198.14s tid=0x00007fd9a0167000 nid=0x4563 in Object.wait()  [0x00007fd9841d2000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait([email protected]/Native Method)
	- waiting on <0x000000008b2caba8> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove([email protected]/ReferenceQueue.java:155)
	- waiting to re-lock in wait() <0x000000008b2caba8> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove([email protected]/ReferenceQueue.java:176)
	at java.lang.ref.Finalizer$FinalizerThread.run([email protected]/Finalizer.java:170)

   Locked ownable synchronizers:
	- None

"Signal Dispatcher" #4 daemon prio=9 os_prio=0 cpu=0.41ms elapsed=198.14s tid=0x00007fd9a016f000 nid=0x4564 runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
	- None

"C2 CompilerThread0" #5 daemon prio=9 os_prio=0 cpu=11534.97ms elapsed=198.14s tid=0x00007fd9a0171800 nid=0x4565 waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   No compile task

   Locked ownable synchronizers:
	- None

"C1 CompilerThread0" #6 daemon prio=9 os_prio=0 cpu=3708.57ms elapsed=198.14s tid=0x00007fd9a0173800 nid=0x4566 waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   No compile task

   Locked ownable synchronizers:
	- None

"Sweeper thread" #7 daemon prio=9 os_prio=0 cpu=63.44ms elapsed=198.14s tid=0x00007fd9a0175800 nid=0x4567 runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
	- None

"Service Thread" #8 daemon prio=9 os_prio=0 cpu=27.01ms elapsed=198.11s tid=0x00007fd9a01ac800 nid=0x4568 runnable  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
	- None

"Common-Cleaner" #9 daemon prio=8 os_prio=0 cpu=12.86ms elapsed=198.10s tid=0x00007fd9a01b1800 nid=0x456a in Object.wait()  [0x00007fd96f2f8000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
	at java.lang.Object.wait([email protected]/Native Method)
	- waiting on <0x000000008b239728> (a java.lang.ref.ReferenceQueue$Lock)
	at java.lang.ref.ReferenceQueue.remove([email protected]/ReferenceQueue.java:155)
	- waiting to re-lock in wait() <0x000000008b239728> (a java.lang.ref.ReferenceQueue$Lock)
	at jdk.internal.ref.CleanerImpl.run([email protected]/CleanerImpl.java:148)
	at java.lang.Thread.run([email protected]/Thread.java:834)
	at jdk.internal.misc.InnocuousThread.run([email protected]/InnocuousThread.java:134)

   Locked ownable synchronizers:
	- None

"reactor-tcp-epoll-1" #14 daemon prio=5 os_prio=0 cpu=177360.54ms elapsed=190.39s tid=0x00007fd9a0bb1800 nid=0x4602 runnable  [0x00007fd96e09e000]
   java.lang.Thread.State: RUNNABLE
	at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.drainLoop(ReactorNettyClient.java:828)
	at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.onNext(ReactorNettyClient.java:795)
	at io.r2dbc.postgresql.client.ReactorNettyClient$BackendMessageSubscriber.onNext(ReactorNettyClient.java:713)
	at reactor.core.publisher.FluxHandle$HandleSubscriber.onNext(FluxHandle.java:112)
	at reactor.core.publisher.FluxPeekFuseable$PeekConditionalSubscriber.onNext(FluxPeekFuseable.java:845)
	at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213)
	at reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:213)
	at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:218)
	at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:351)
	at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:348)
	at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:90)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:321)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:308)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:422)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
	at io.netty.handler.logging.LoggingHandler.channelRead(LoggingHandler.java:227)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:355)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:377)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:792)
	at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:475)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"reactor-tcp-epoll-2" #15 daemon prio=5 os_prio=0 cpu=136.70ms elapsed=190.39s tid=0x00007fd9a046a000 nid=0x4603 runnable  [0x00007fd96cf9f000]
   java.lang.Thread.State: RUNNABLE
	at io.netty.channel.epoll.Native.epollWait(Native Method)
	at io.netty.channel.epoll.Native.epollWait(Native.java:129)
	at io.netty.channel.epoll.Native.epollWait(Native.java:122)
	at io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"reactor-tcp-epoll-3" #16 daemon prio=5 os_prio=0 cpu=114.51ms elapsed=190.39s tid=0x00007fd9a0997000 nid=0x4604 runnable  [0x00007fd96ca2b000]
   java.lang.Thread.State: RUNNABLE
	at io.netty.channel.epoll.Native.epollWait(Native Method)
	at io.netty.channel.epoll.Native.epollWait(Native.java:129)
	at io.netty.channel.epoll.Native.epollWait(Native.java:122)
	at io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"reactor-tcp-epoll-4" #17 daemon prio=5 os_prio=0 cpu=119.56ms elapsed=190.39s tid=0x00007fd9a0997800 nid=0x4605 runnable  [0x00007fd96c92a000]
   java.lang.Thread.State: RUNNABLE
	at io.netty.channel.epoll.Native.epollWait(Native Method)
	at io.netty.channel.epoll.Native.epollWait(Native.java:129)
	at io.netty.channel.epoll.Native.epollWait(Native.java:122)
	at io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"elastic-metrics-publisher" #18 daemon prio=5 os_prio=0 cpu=680.37ms elapsed=189.26s tid=0x00007fd9a0c36000 nid=0x4619 waiting on condition  [0x00007fd96c429000]
   java.lang.Thread.State: TIMED_WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
	- parking to wait for  <0x00000000971c9490> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
	at java.util.concurrent.locks.LockSupport.parkNanos(java.base@11.0.6/LockSupport.java:234)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(java.base@11.0.6/AbstractQueuedSynchronizer.java:2123)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:1182)
	at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(java.base@11.0.6/ScheduledThreadPoolExecutor.java:899)
	at java.util.concurrent.ThreadPoolExecutor.getTask(java.base@11.0.6/ThreadPoolExecutor.java:1054)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.6/ThreadPoolExecutor.java:1114)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.6/ThreadPoolExecutor.java:628)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"grpc-default-boss-ELG-1-1" #19 daemon prio=5 os_prio=0 cpu=38.55ms elapsed=187.12s tid=0x00007fd9a15d9800 nid=0x4640 runnable  [0x00007fd96c128000]
   java.lang.Thread.State: RUNNABLE
	at io.grpc.netty.shaded.io.netty.channel.epoll.Native.epollWait0(Native Method)
	at io.grpc.netty.shaded.io.netty.channel.epoll.Native.epollWait(Native.java:101)
	at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:304)
	at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:355)
	at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
	at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"container-0" #20 prio=5 os_prio=0 cpu=0.28ms elapsed=187.10s tid=0x00007fd9a15f0000 nid=0x4641 in Object.wait()  [0x00007fd9437ee000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait(java.base@11.0.6/Native Method)
	- waiting on <0x0000000097015cc8> (a java.lang.Object)
	at java.lang.Object.wait(java.base@11.0.6/Object.java:328)
	at io.grpc.internal.ServerImpl.awaitTermination(ServerImpl.java:330)
	- waiting to re-lock in wait() <0x0000000097015cc8> (a java.lang.Object)
	at net.devh.boot.grpc.server.serverfactory.GrpcServerLifecycle$1.run(GrpcServerLifecycle.java:99)

   Locked ownable synchronizers:
	- None

"reactor-http-epoll-1" #21 daemon prio=5 os_prio=0 cpu=7.50ms elapsed=186.91s tid=0x00007fd9a1872800 nid=0x4644 runnable  [0x00007fd9436ed000]
   java.lang.Thread.State: RUNNABLE
	at io.netty.channel.epoll.Native.epollWait(Native Method)
	at io.netty.channel.epoll.Native.epollWait(Native.java:129)
	at io.netty.channel.epoll.Native.epollWait(Native.java:122)
	at io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:290)
	at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:347)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"server" #22 prio=5 os_prio=0 cpu=2.44ms elapsed=186.90s tid=0x00007fd9a187d800 nid=0x4645 waiting on condition  [0x00007fd9435ec000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park(java.base@11.0.6/Native Method)
	- parking to wait for  <0x0000000096f2e270> (a java.util.concurrent.CountDownLatch$Sync)
	at java.util.concurrent.locks.LockSupport.park(java.base@11.0.6/LockSupport.java:194)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(java.base@11.0.6/AbstractQueuedSynchronizer.java:885)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(java.base@11.0.6/AbstractQueuedSynchronizer.java:1039)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@11.0.6/AbstractQueuedSynchronizer.java:1345)
	at java.util.concurrent.CountDownLatch.await(java.base@11.0.6/CountDownLatch.java:232)
	at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:87)
	at reactor.core.publisher.Mono.block(Mono.java:1663)
	at org.springframework.boot.web.embedded.netty.NettyWebServer$1.run(NettyWebServer.java:131)

   Locked ownable synchronizers:
	- None

"DestroyJavaVM" #24 prio=5 os_prio=0 cpu=8443.05ms elapsed=186.89s tid=0x00007fd9a0015000 nid=0x455b waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
	- None

"grpc-default-worker-ELG-3-1" #25 daemon prio=5 os_prio=0 cpu=369.48ms elapsed=181.79s tid=0x00007fd9640a5000 nid=0x46b5 runnable  [0x00007fd9434eb000]
   java.lang.Thread.State: RUNNABLE
	at io.grpc.netty.shaded.io.netty.channel.epoll.Native.epollWait0(Native Method)
	at io.grpc.netty.shaded.io.netty.channel.epoll.Native.epollWait(Native.java:101)
	at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:304)
	at io.grpc.netty.shaded.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:355)
	at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044)
	at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(java.base@11.0.6/Thread.java:834)

   Locked ownable synchronizers:
	- None

"Attach Listener" #37 daemon prio=9 os_prio=0 cpu=0.46ms elapsed=21.04s tid=0x00007fd95400b800 nid=0x541c waiting on condition  [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE

   Locked ownable synchronizers:
	- None

"VM Thread" os_prio=0 cpu=191.38ms elapsed=198.14s tid=0x00007fd9a0162000 nid=0x4561 runnable

"GC Thread#0" os_prio=0 cpu=326.77ms elapsed=198.16s tid=0x00007fd9a002d000 nid=0x455c runnable

"GC Thread#1" os_prio=0 cpu=299.57ms elapsed=197.92s tid=0x00007fd964001000 nid=0x456b runnable

"G1 Main Marker" os_prio=0 cpu=1.82ms elapsed=198.15s tid=0x00007fd9a005c000 nid=0x455d runnable

"G1 Conc#0" os_prio=0 cpu=114.67ms elapsed=198.15s tid=0x00007fd9a005e000 nid=0x455e runnable

"G1 Refine#0" os_prio=0 cpu=6.82ms elapsed=198.15s tid=0x00007fd9a00c8800 nid=0x455f runnable

"G1 Young RemSet Sampling" os_prio=0 cpu=55.07ms elapsed=198.15s tid=0x00007fd9a00ca800 nid=0x4560 runnable
"VM Periodic Task Thread" os_prio=0 cpu=156.21ms elapsed=198.11s tid=0x00007fd9a01af000 nid=0x4569 waiting on condition

JNI global refs: 35, weak refs: 0


mp911de added the "type: blocker" and "type: bug" labels on Feb 4, 2020

mp911de commented Feb 4, 2020

Caused by #231.


steven-sheehy commented Feb 6, 2020

I've made a reproducer project at https://github.com/steven-sheehy/r2dbc-highcpu

Please check out and debug R2DbcExampleApplicationTests. If you run jstack or put a breakpoint in ReactorNettyClient$BackendMessageSubscriber.drainLoop() after it runs for a while, you can see that it is an infinite busy loop.
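As a side note on reading the dump: `top -H -p <pid>` lists per-thread CPU with the native thread id in decimal, while jstack's `nid=` field is the same id in hex, so a quick conversion links the hot thread to its stack (the process-lookup command here is illustrative):

```shell
# Match a hot thread from `top -H` to a jstack entry.
# top -H -p <pid>          # shows per-thread CPU with the TID in decimal
printf 'nid=0x%x\n' 17922  # 17922 is the decimal TID reported by top
# -> nid=0x4602, i.e. "reactor-tcp-epoll-1" in the dump above
```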

mp911de added this to the 0.8.2.RELEASE milestone on Feb 7, 2020

mp911de commented Feb 7, 2020

Thanks a lot. The issue is that we react to whether the buffer holds buffered protocol frames, but not to whether the conversation has demand, so the drain loop keeps looping forever. It should exit if the conversation has no demand.
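To illustrate the failure mode mp911de describes (this is a plain-Java sketch of the pattern, not the driver's actual code — the class, method names, and iteration guard are all invented for demonstration): a drain loop that checks only "are frames buffered?" spins forever once the conversation's demand hits zero, while the fixed variant also checks demand and exits.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical sketch of the bug and fix described above.
public class DrainLoopSketch {

    /** Buggy: with demand == 0 and a non-empty buffer, this spins until the guard trips. */
    static long drainBuggy(Queue<String> buffer, long demand, long maxIterations) {
        long iterations = 0;
        while (!buffer.isEmpty()) {                   // only checks buffered frames
            if (demand > 0) {
                buffer.poll();                        // deliver a frame downstream
                demand--;
            }
            if (++iterations >= maxIterations) break; // guard so the sketch terminates
        }
        return iterations;
    }

    /** Fixed: leave the drain loop when the current conversation has no demand. */
    static long drainFixed(Queue<String> buffer, long demand, long maxIterations) {
        long iterations = 0;
        while (!buffer.isEmpty() && demand > 0) {     // also checks demand
            buffer.poll();
            demand--;
            if (++iterations >= maxIterations) break;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // A cancelled conversation: frames are still buffered, but demand is 0.
        List<String> frames = List.of("DataRow", "DataRow", "CommandComplete");
        long buggy = drainBuggy(new ArrayDeque<>(frames), 0, 1_000_000);
        long fixed = drainFixed(new ArrayDeque<>(frames), 0, 1_000_000);
        System.out.println("buggy iterations: " + buggy); // hits the guard: 1000000
        System.out.println("fixed iterations: " + fixed); // exits immediately: 0
    }
}
```

Without the artificial `maxIterations` guard, the buggy variant would never return, which is exactly the 100%-CPU `drainLoop` frame pinned at the top of the `reactor-tcp-epoll-1` stack.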

mp911de self-assigned this on Feb 7, 2020
mp911de added a commit that referenced this issue Feb 7, 2020
The drain loop is now left if the current conversation has no demand.

[#242]
mp911de added a commit that referenced this issue Feb 7, 2020
The drain loop is now left if the current conversation has no demand.

[#242]

mp911de commented Feb 7, 2020

I just pushed a fix for this one. The test case passes after upgrading to 0.8.2.BUILD-SNAPSHOT.
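For anyone wanting to verify the fix ahead of the release, pulling in the snapshot looks roughly like this (a hedged sketch: the Maven coordinates follow the driver's usual naming, and the snapshot repository URL is an assumption that may differ):

```xml
<!-- Assumed coordinates for the snapshot build containing the fix -->
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-postgresql</artifactId>
    <version>0.8.2.BUILD-SNAPSHOT</version>
</dependency>

<!-- Snapshot repository (assumed; R2DBC snapshots have been published via repo.spring.io) -->
<repository>
    <id>spring-snapshots</id>
    <url>https://repo.spring.io/snapshot</url>
</repository>
```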

steven-sheehy changed the title from "High CPU usage and query still shows active" to "High CPU usage when cancelled query" on Feb 7, 2020
steven-sheehy (author) commented:

Thanks, I can confirm it addresses the high CPU usage.
