Update netty version to the latest. #631
Conversation
Connection Counters

In the updated netty pool, `ChannelPoolHandler#channelCreated` is called at channel creation, and then `ChannelPoolHandler#channelAcquired` is called when the connection is borrowed out of the pool. When acquiring a connection from a newly created pool, a new connection will be created and then acquired. In the previous netty version, a newly created connection was lent directly out of the pool without being acquired. Thus we need to change some of the logic for counting in-use and idle connections.
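For illustration, a minimal sketch of counters driven by these callbacks (the class and field names are assumptions, not the driver's actual implementation):

```java
import io.netty.channel.Channel;
import io.netty.channel.pool.ChannelPoolHandler;

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: with the updated netty pool a new channel first hits
// channelCreated and is then handed out via channelAcquired, so creation
// alone no longer means the channel is in use.
public class CountingChannelPoolHandler implements ChannelPoolHandler
{
    private final AtomicInteger idle = new AtomicInteger();
    private final AtomicInteger inUse = new AtomicInteger();

    @Override
    public void channelCreated( Channel channel )
    {
        // A freshly created channel starts as idle; channelAcquired
        // fires right after, when the pool lends it out.
        idle.incrementAndGet();
    }

    @Override
    public void channelAcquired( Channel channel )
    {
        idle.decrementAndGet();
        inUse.incrementAndGet();
    }

    @Override
    public void channelReleased( Channel channel )
    {
        inUse.decrementAndGet();
        idle.incrementAndGet();
    }

    public int idleCount()
    {
        return idle.get();
    }

    public int inUseCount()
    {
        return inUse.get();
    }
}
```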
Very happy to see this work under way. I'd love to see this and the additional steps make it into the final 4.0 release.
Deadlock Issues Caused by Netty's Blocking `ChannelPool#close`

The new netty version provides both a blocking pool close, `FixedChannelPool#close`, and an async pool close, `FixedChannelPool#closeAsync`. The blocking close causes deadlock issues in our code.

One deadlock happens when `ConnectionPoolImpl#retainAll` tries to close a netty pool inside an IO thread that is used by one of the channels owned by the closing pool. The deadlock happens because:

1) The channel cannot be closed until the IO thread is fully free.
2) The pool cannot be closed until the channel is closed.
3) However, the IO thread cannot be free until the pool is closed.

`pool --wait-on--> channel --wait-on--> IO thread --wait-on--> pool`

The fix is to change 3) by letting the IO thread call the async pool close, so the close operation does not block.

Another deadlock happens in `Driver#close`, where we need to chain the event loop group shutdown after all pools have closed. As pool close tasks are executed on the event loop group, if we shut down the event loop group first, the pool close will be stuck because no thread is left to pick up the shutdown tasks.
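A rough sketch of that chaining, under assumed names (`closePoolsThenShutdown` and the surrounding class are illustrative, not the driver's actual code):

```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.pool.FixedChannelPool;

import java.util.List;
import java.util.concurrent.CompletableFuture;

public final class ShutdownSketch
{
    // Close all pools asynchronously and shut the event loop group down
    // only after every close future has completed. Shutting the group down
    // first would leave no thread to execute the pool close tasks.
    static CompletableFuture<Void> closePoolsThenShutdown( List<FixedChannelPool> pools, EventLoopGroup group )
    {
        CompletableFuture<?>[] closeFutures = pools.stream()
                .map( ShutdownSketch::asCompletableFuture )
                .toArray( CompletableFuture[]::new );

        return CompletableFuture.allOf( closeFutures )
                .thenRun( group::shutdownGracefully );
    }

    private static CompletableFuture<Void> asCompletableFuture( FixedChannelPool pool )
    {
        CompletableFuture<Void> result = new CompletableFuture<>();
        // closeAsync() is the non-blocking close; calling the blocking
        // close() from an IO thread is what caused the deadlock.
        pool.closeAsync().addListener( future ->
        {
            if ( future.isSuccess() )
            {
                result.complete( null );
            }
            else
            {
                result.completeExceptionally( future.cause() );
            }
        } );
        return result;
    }
}
```

The essential point is the ordering: the event loop group must stay alive until every pool close task has run.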
Looks good, made a few really minor comments
driver/src/main/java/org/neo4j/driver/internal/async/pool/ExtendedChannelPool.java:

```java
import io.netty.channel.pool.ChannelPool;

import java.util.concurrent.CompletionStage;

public interface ExtendedChannelPool extends ChannelPool
```
Wdyt about removing `extends ChannelPool` and exposing only the methods `ConnectionPoolImpl` really needs? For example, this would hide `close()` and `closeAsync()`, so `ConnectionPoolImpl` would only be able to call `repeatableCloseAsync()`.
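A sketch of the suggested shape, based only on the methods named in this thread (the `acquire`/`release` signatures are assumed to mirror `ChannelPool`'s):

```java
import io.netty.channel.Channel;
import io.netty.util.concurrent.Future;

import java.util.concurrent.CompletionStage;

// Sketch of the suggestion: without "extends ChannelPool", the blocking
// close() and the raw closeAsync() are no longer reachable from
// ConnectionPoolImpl; only the methods it actually needs are exposed.
public interface ExtendedChannelPool
{
    Future<Channel> acquire();

    Future<Void> release( Channel channel );

    // Named in the review thread; safe to call more than once.
    CompletionStage<Void> repeatableCloseAsync();
}
```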