This PR contains two changes that are intended to address #300 by cutting the number of dqlite connections that need to be opened in typical cases.
The first change is to remember the address of the cluster leader whenever we connect to it successfully, and to try this remembered address first when opening a new leader connection. When the cluster is stable, this means that we open only one connection in `Connector.Connect`; previously we would open on the order of N (= cluster size) connections. The tradeoff is that `Connector.Connect` is now slower and less efficient when the cluster is not stable. This optimization applies to both `client.FindLeader` and the driver implementation.

The second change is to enable
`client.FindLeader`
to reuse an existing connection to the leader instead of opening a new one. This is valid because the operations that can be performed on the connection using the returned `Client` do not depend on the logical state of the connection (open database, prepared statements). When the leader is stable, this saves one new connection per `client.FindLeader` call after the first change has been implemented. The long-lived connection is checked for validity and leadership before returning it to be reused.

Both of these changes rely on storing some new state in the
`NodeStore`
using some fun embedding tricks. I did it this way because the `NodeStore` is the only object that is passed around to all the right places and lives long enough to persist state between connection attempts.

On my machine, using @masnax's repro script, these two changes combined cut the spikes in CPU usage from 30% to 8-10%, with the first change being responsible for most of that improvement. The remaining spike is due to opening
N
connections (in parallel) within `makeRolesChanges`, and could perhaps be dealt with by augmenting the `NodeStore` further with a pool of connections to all nodes instead of just the last known leader, but I've left that for a possible follow-up.

Signed-off-by: Cole Miller <cole.miller@canonical.com>
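For readers who want a feel for the embedding idea, here is a rough, self-contained sketch of a store that wraps an inner node store and remembers the last known leader address so a connector can try it first. Every name in it (`cachingNodeStore`, `RememberLeader`, `Candidates`, the stub `NodeInfo`/`NodeStore` types) is invented for illustration and does not match the actual go-dqlite API:

```go
package main

import (
	"fmt"
	"sync"
)

// NodeInfo is a stand-in for the real client.NodeInfo.
type NodeInfo struct {
	Address string
}

// NodeStore is a stand-in for the real client.NodeStore interface.
type NodeStore interface {
	Get() ([]NodeInfo, error)
}

// staticStore is a trivial in-memory NodeStore.
type staticStore struct{ nodes []NodeInfo }

func (s *staticStore) Get() ([]NodeInfo, error) { return s.nodes, nil }

// cachingNodeStore embeds another NodeStore (so Get is promoted from the
// inner store) and additionally remembers the address of the last node we
// successfully connected to as leader.
type cachingNodeStore struct {
	NodeStore

	mu     sync.Mutex
	leader string // last known leader address, "" if unknown
}

// RememberLeader records a successful leader connection.
func (c *cachingNodeStore) RememberLeader(addr string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.leader = addr
}

// Candidates returns the node addresses to try, with the remembered leader
// (if any) moved to the front so a stable cluster needs only one dial.
func (c *cachingNodeStore) Candidates() ([]string, error) {
	nodes, err := c.Get()
	if err != nil {
		return nil, err
	}
	c.mu.Lock()
	leader := c.leader
	c.mu.Unlock()

	addrs := make([]string, 0, len(nodes)+1)
	if leader != "" {
		addrs = append(addrs, leader)
	}
	for _, n := range nodes {
		if n.Address != leader {
			addrs = append(addrs, n.Address)
		}
	}
	return addrs, nil
}

func main() {
	store := &cachingNodeStore{
		NodeStore: &staticStore{nodes: []NodeInfo{
			{Address: "10.0.0.1:9001"},
			{Address: "10.0.0.2:9001"},
			{Address: "10.0.0.3:9001"},
		}},
	}
	store.RememberLeader("10.0.0.2:9001")
	addrs, _ := store.Candidates()
	fmt.Println(addrs) // → [10.0.0.2:9001 10.0.0.1:9001 10.0.0.3:9001]
}
```

In the stable-cluster case the first dial hits the remembered leader and succeeds, which is the single-connection behavior described above; when the cache is stale, the remaining addresses are still tried, which is the slower path the tradeoff refers to.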