
Commit 9b70717

lephixmedcl
authored and committed
chapter2_part5:/020_Distributed_Cluster/20_Add_failover.asciidoc (#308)
* Chapter 2, part 5 * Revised following the reviewer's suggestions * Added the file name as requested; fixed the `green` section
1 parent 4b16681 commit 9b70717

File tree

1 file changed: 19 additions, 25 deletions

@@ -1,39 +1,33 @@
-=== Add Failover
+[[_add_failover]]
+=== Add Failover
 
-Running a single node means that you have a single point of failure--there
-is no redundancy.((("failover, adding"))) Fortunately, all we need to do to protect ourselves from data
-loss is to start another node.
+When only a single node is running in the cluster, there is a single point of failure--no redundancy.
+Fortunately, all we need to do to protect ourselves from data loss is to start another node.
 
-.Starting a Second Node
+.Starting a Second Node
 ***************************************
 
-To test what happens when you add a second((("nodes", "starting a second node"))) node, you can start a new node
-in exactly the same way as you started the first one (see
-<<running-elasticsearch>>), and from the same directory. Multiple nodes can
-share the same directory.
+To test what happens when a second node starts, you can start a new node in exactly the same way as you started the first one (see <<running-elasticsearch>>), and from the same directory. Multiple nodes can share the same directory.
 
-When you run a second node on the same machine, it automatically discovers
-and joins the cluster as long as it has the same `cluster.name` as the first node.
-However, for nodes running on different machines
-to join the same cluster, you need to configure a list of unicast hosts the nodes can contact
-to join the cluster. For more information, see <<unicast, Prefer Unicast over Multicast>>.
+When you start a second node on the same machine, as long as it has the same `cluster.name` setting as the first node, it automatically discovers the cluster and joins it.
+But for nodes started on different machines to join the same cluster, you need to configure a list of unicast hosts they can contact.
+For details, see <<unicast>>.
 
 ***************************************
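The `cluster.name` and unicast-host settings referred to in the hunk above live in each node's `config/elasticsearch.yml`. A minimal sketch for the Elasticsearch 2.x era this chapter covers; the cluster name and host addresses are hypothetical:

```yaml
# Hypothetical example: every node must share this name to form one cluster
cluster.name: my_cluster

# For nodes on different machines, list peers to contact at startup.
# (Pre-5.0 setting name; the addresses below are placeholders.)
discovery.zen.ping.unicast.hosts: ["10.0.0.1", "10.0.0.2:9300"]
```

On a single machine this file can be left alone: a second node started from the same directory picks up the same `cluster.name` automatically.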

-If we start a second node, our cluster would look like <<cluster-two-nodes>>.
+If a second node is started, our cluster will look like <<cluster-two-nodes>>.
 
 [[cluster-two-nodes]]
-.A two-node cluster--all primary and replica shards are allocated
-image::images/elas_0203.png["A two-node cluster"]
+.A two-node cluster--all primary and replica shards have been allocated
+image::images/elas_0203.png["A two-node cluster"]
 
-The((("clusters", "two-node cluster"))) second node has joined the cluster, and three _replica shards_ have ((("replica shards", "allocated to second node")))been
-allocated to it--one for each primary shard. That means that we can lose
-either node, and all of our data will be intact.
+When the second node joins the cluster, three _replica shards_ will be allocated to it--one for each primary shard.
+This means that if either node fails, all of our data remains intact.
 
-Any newly indexed document will first be stored on a primary shard, and then copied in parallel to the associated replica shard(s). This ensures that our document can be retrieved from a primary shard or from any of its replicas.
+Any newly indexed document will first be stored on a primary shard, and then copied in parallel to the corresponding replica shards. This guarantees that we can retrieve a document from either the primary shard or any of its replicas.
 
-The `cluster-health` now ((("cluster health", "checking after adding second node")))shows a status of `green`, which means that all six
-shards (all three primary shards and all three replica shards) are active:
+`cluster-health` now shows a status of `green`, which means that all six shards (three primary shards and three replica shards) are running normally.
 
 [source,js]
 --------------------------------------------------
@@ -55,6 +49,6 @@ shards (all three primary shards and all three replica shards) are active:
     "active_shards_percent_as_number": 100
 }
 --------------------------------------------------
-<1> Cluster `status` is `green`.
+<1> The cluster `status` value is `green`.
 
-Our cluster is not only fully functional, but also _always available_.
+Our cluster is now not only fully functional, but also _always available_.
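The `green` condition described in the section above (all primary and replica shards allocated) can be sketched from the `cluster-health` fields. The numbers below are taken from the text--three primaries, one replica each, all active--not from a live cluster:

```python
# Sample cluster-health fields, using the numbers stated in the text:
# three primary shards, three replica shards, everything allocated.
health = {
    "status": "green",
    "active_primary_shards": 3,
    "active_shards": 6,
    "unassigned_shards": 0,
    "active_shards_percent_as_number": 100,
}

def all_shards_active(h):
    """True when every primary and replica shard is allocated,
    which is what a `green` status reports."""
    return h["unassigned_shards"] == 0 and h["active_shards_percent_as_number"] == 100

print(all_shards_active(health))                                   # True
print(health["active_shards"] - health["active_primary_shards"])   # 3 replica shards
```

With only one node running, the same response would instead show `"status": "yellow"` and three unassigned replica shards, so the check above would return `False`.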
