
Connection pooling / cluster failover support #528

Merged: 15 commits into master on Mar 12, 2014

Conversation

Mpdreamz

This PR adds built-in support for cluster failover and connection pooling.

SingleNodeConnectionPool

This is still the default when you do not specify a connection pool.

var node = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(node, "defaultindex");
var client = new ElasticClient(settings);

In this example we never explicitly passed an IConnectionPool, so the client defaults to the SingleNodeConnectionPool, which will always report the single node we passed in as alive and well. This means the defaults for existing code are unchanged.

StaticConnectionPool

var nodes = new [] { 
    new Uri("http://localhost:9200"),
    new Uri("http://localhost:9201"),
    new Uri("http://localhost:9202"),
    new Uri("http://localhost:9203"),
};
var pool = new StaticConnectionPool(nodes);
var settings = new ConnectionSettings(pool, "defaultindex")
    .MaxRetries(8); //bit much but hey... why not:)
var client = new ElasticClient(settings);

This adds a static pool of nodes that will be round-robined over when performing Elasticsearch calls. Whenever a node reports a known failure (timeout, 503, etc.) the client will retry on the next node in the pool and mark the failing node as dead; while a node is marked dead it is automatically skipped for the duration of that mark. When all nodes are marked dead it will simply pick one at random. You can specify the maximum number of retries, which defaults to the number of nodes you pass in minus 1.
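To make the selection behaviour above concrete, here is a minimal sketch of round-robin selection with dead-node skipping. This is NOT NEST's internal implementation; all names here (NodeState, RoundRobinSelector) are hypothetical and only illustrate the behaviour described.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of round-robin + dead-node behaviour, for illustration.
public class NodeState
{
    public Uri Uri { get; set; }
    // A node is considered dead until this moment; DateTime.MinValue = alive.
    public DateTime DeadUntil { get; set; }
}

public class RoundRobinSelector
{
    private readonly IList<NodeState> _nodes;
    private readonly Random _random = new Random();
    private int _cursor = -1;

    public RoundRobinSelector(IEnumerable<Uri> uris)
    {
        _nodes = uris.Select(u => new NodeState { Uri = u }).ToList();
    }

    // Walk the list at most once, skipping nodes still marked dead;
    // if every node is dead, pick one at random, as described above.
    public NodeState Next(DateTime now)
    {
        for (var i = 0; i < _nodes.Count; i++)
        {
            _cursor = (_cursor + 1) % _nodes.Count;
            var node = _nodes[_cursor];
            if (node.DeadUntil <= now) return node;
        }
        return _nodes[_random.Next(_nodes.Count)];
    }

    // On a known failure (timeout, 503, ...) the caller marks the node dead
    // for some timeout and retries the call on whatever Next() yields.
    public void MarkDead(NodeState node, DateTime now, TimeSpan deadTimeout)
    {
        node.DeadUntil = now + deadTimeout;
    }
}
```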

SniffingConnectionPool

var nodes = new [] { 
    new Uri("http://localhost:9200"),
    new Uri("http://localhost:9201"),
    new Uri("http://localhost:9202"),
    new Uri("http://localhost:9203"),
};
var pool = new SniffingConnectionPool(nodes);
var settings = new ConnectionSettings(pool, "defaultindex")
        .SniffOnStart()
        .SniffOnConnectionFault()
        .SniffLifeSpan(TimeSpan.FromMinutes(10));
var client = new ElasticClient(settings);

Similar to the static connection pool, this will round-robin over the specified nodes, but it will also use them to sniff the rest of the cluster initially and build the list of known hosts. You can make it re-sniff whenever a connection fault occurs, or whenever the last sniff happened too long ago.
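The re-sniff triggers above (on connection fault, and when the last sniff is older than the configured lifespan) can be sketched as the following check, run before each request. Again, this is a hypothetical illustration with made-up names, not the library's actual code:

```csharp
using System;

// Hypothetical sketch of the re-sniff decision described above.
public class SniffSchedule
{
    private readonly TimeSpan _lifeSpan;
    private DateTime _lastSniff = DateTime.MinValue;

    public SniffSchedule(TimeSpan lifeSpan) { _lifeSpan = lifeSpan; }

    // True when the node list should be rebuilt: either a connection fault
    // just occurred, or the last sniff has outlived its configured lifespan.
    public bool SniffingDue(DateTime now, bool connectionFaulted)
    {
        return connectionFaulted || now - _lastSniff > _lifeSpan;
    }

    public void MarkSniffed(DateTime now) { _lastSniff = now; }
}
```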

See the unit tests here:
https://github.com/Mpdreamz/NEST/tree/feature/connection-pooling/src/Elasticsearch.Net.Tests.Unit/Connection

In particular, read these tests to see why built-in support for cluster failover in NEST is an absolute must:

https://github.com/Mpdreamz/NEST/blob/feature/connection-pooling/src/Elasticsearch.Net.Tests.Unit/Connection/ConcurrencyTests.cs#L94

Mpdreamz added a commit that referenced this pull request Mar 12, 2014
Connection pooling / cluster failover support
Mpdreamz merged commit db7c7ba into master on Mar 12, 2014
Mpdreamz deleted the feature/connection-pooling branch on Mar 12, 2014, 11:33