frontend/app/blog/2016/stm.component.html (+6 −3)
@@ -40,8 +40,7 @@ <h2 class="blog-title">Serializability and Distributed Software Transactional Me
  atomicity. To judge whether etcd3’s primitives are expressive and efficient, we implemented and benchmarked common distributed concurrency control "recipes".
  </p>
  <p>
- This post looks at the atomicity granted by etcd3’s new mini-transactions. We’ll cover etcd3 transactions and demonstrate atomic updates with transactions. Next, we’ll show how etcd’s revision metadata naturally maps to software transactional memory (STM)
- by outlining a simple client-side STM implementation. Finally, we’ll show this STM implementation is a performant alternative to distributed shared locks.
+ This post looks at the atomicity granted by etcd3’s new mini-transactions. We’ll cover etcd3 transactions and demonstrate atomic updates with transactions. Next, we’ll show how etcd’s revision metadata naturally maps to software transactional memory (STM)<sup>[1]</sup> by outlining a simple client-side STM implementation. Finally, we’ll show this STM implementation is a performant alternative to distributed shared locks.
  </p>

  <br>
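To make the mini-transaction idea above concrete, here is a rough, illustrative Go sketch (not from the post) that performs a compare-and-swap with the v3 client by guarding a put on a key's mod revision; the endpoint, key name, and value are placeholders. A client-side STM essentially wraps this pattern in a retry loop.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Placeholder endpoint; point this at a real etcd3 cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Read the current value and remember the revision it was last written at.
	get, err := cli.Get(ctx, "stock")
	if err != nil {
		log.Fatal(err)
	}
	var rev int64
	if len(get.Kvs) > 0 {
		rev = get.Kvs[0].ModRevision
	}

	// Commit the update only if the key is still at that revision; if another
	// writer raced ahead, Succeeded is false and the caller would retry.
	txn, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.ModRevision("stock"), "=", rev)).
		Then(clientv3.OpPut("stock", "99")).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("committed:", txn.Succeeded)
}
```

The clientv3/concurrency package linked in the next hunk ships a ready-made STM layered on this same primitive.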
@@ -236,7 +235,11 @@ <h3>What’s Next?</h3>
  in the v3 client <a href="https://github.com/coreos/etcd/tree/master/clientv3/concurrency" target="_blank" class="normal-link">concurrency package</a>.
frontend/app/doc/tip/faq.component.html (+8 −7)
@@ -42,10 +42,15 @@ <h3 md-subheader>More</h3>
  <div id="cap-theorem-in-etcd"></div>
  <h2><a href="/doc/{{version.etcdVersionURL}}/faq#cap-theorem-in-etcd" class="faq-title">CAP theorem in etcd?</a></h2>
  <p>
- CAP often represents <i>Consistency</i>, <i>Availability</i>, <i>Partition tolerance</i>: you can only pick 2 out of 3. Since network partition is not avoidable, you are left with either consitency or availability when partition happens. That
- is, systems with A and P are more tolerant of network faults, but possible to serve stale data. etcd chooses C and P to achieve linearizability with strong consistency.
+ CAP<sup>[1]</sup> represents <i>Consistency</i>, <i>Availability</i>, and <i>Partition tolerance</i>: you can only pick 2 out of 3. Since network partitions are unavoidable, you are left with either consistency or availability when a partition happens.
+ That is, systems with A and P are more tolerant of network faults, but may serve stale data. etcd chooses C and P to achieve linearizability with strong consistency.
  </p>
- <br>
+ <hr align="left" class="footer-top-line">
+ <footer>
+ [1] Seth Gilbert and Nancy Lynch: "<a href="https://pdfs.semanticscholar.org/24ce/ce61e2128780072bc58f90b8ba47f624bc27.pdf" target="_blank" class="footer-link">Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services</a>,"
+ ACM SIGACT News, volume 33, number 2, pages 51–59, 2002.
+ </footer>
+ <br><br>

  <div id="remove-member-first"></div>
  <h2><a href="/doc/{{version.etcdVersionURL}}/faq#remove-member-first" class="faq-title">Always remove first when replacing member?</a></h2>
  If a 3-member cluster has 1 downed member, it can still make forward progress because the quorum is 2 and 2 members are still live. However, adding a new member to a 3-member cluster will increase the quorum to 3 because 3 votes are required for a majority
  of 4 members. Since the quorum increased, this extra member buys nothing in terms of fault tolerance; the cluster is still one node failure away from being unrecoverable.
  </p>
-
  <p>
  Additionally, that new member is risky because it may turn out to be misconfigured or incapable of joining the cluster. In that case, there's no way to recover quorum because the cluster has two members down and two members up, but needs three votes to
  change membership to undo the botched membership addition. etcd will by default (as of last week) reject member add attempts that could take down the cluster in this manner.
  </p>
-
  <p>
  On the other hand, if the downed member is removed from cluster membership first, the number of members becomes 2 and the quorum remains at 2. Following that removal by adding a new member will also keep the quorum steady at 2. So, even if the new node
  can't be brought up, it's still possible to remove the new member through quorum on the remaining live members.
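As a quick illustration of the quorum arithmetic in this answer, the following small Go snippet (not part of the docs) computes the majority quorum and failure tolerance for a few cluster sizes:

```go
package main

import "fmt"

func main() {
	for _, members := range []int{2, 3, 4, 5} {
		quorum := members/2 + 1       // majority needed for writes and membership changes
		tolerated := members - quorum // members that can fail while staying available
		fmt.Printf("%d members: quorum %d, tolerates %d failure(s)\n", members, quorum, tolerated)
	}
	// Output:
	// 2 members: quorum 2, tolerates 0 failure(s)
	// 3 members: quorum 2, tolerates 1 failure(s)
	// 4 members: quorum 3, tolerates 1 failure(s)
	// 5 members: quorum 3, tolerates 2 failure(s)
}
```

Going from 3 to 4 members raises the quorum without raising the failure tolerance, which is why removing the dead member first keeps the cluster easier to recover.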
  <h2><a href="/doc/{{version.etcdVersionURL}}/faq#why-so-strict-about-membership-change" class="faq-title">Why so strict about membership change?</a></h2>
- etcd is not an eventually consistent database, where two different nodes give two different values. etcd provides <b>linearizability</b><sup>[2]</sup> with <b>strong consistency</b><sup>[3]</sup>. When write completes,
- all etcd clients read the same value, most recent and up-to-date in any node.
+ etcd is not an eventually consistent database, where two different nodes can serve two different values (stale data). etcd provides <b>linearizability</b><sup>[2]</sup> with <b>strong consistency</b><sup>[3]</sup>. When
+ a write completes, all etcd clients read the same, <i>most recent and up-to-date</i>, value from any node.
  </p>
  </div>
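To show what this guarantee looks like from a client, here is a hedged Go sketch (not from these docs) contrasting etcd3's default linearizable reads with the weaker serializable read option; the endpoint and key are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Placeholder endpoint; point this at a real etcd3 cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Linearizable read (the default): goes through consensus, so it always
	// observes the most recent committed write, whichever member serves it.
	lin, err := cli.Get(ctx, "feature-flag")
	if err != nil {
		log.Fatal(err)
	}

	// Serializable read: answered from the contacted member's local store;
	// cheaper, but it may return stale data, for example during a partition.
	ser, err := cli.Get(ctx, "feature-flag", clientv3.WithSerializable())
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("linearizable:", lin.Kvs, "serializable:", ser.Kvs)
}
```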

  <div class="feature">
  <h3>Partition Tolerant</h3>
  <p>
- etcd continues to function, even with network partition, where message is not delivered or gets delayed. Network glitches are very common, and <b>no system is immune</b> from such network faults<sup>[4]</sup>.
+ etcd continues to function, even with a network partition where messages are not delivered or get delayed. Network glitches are very common, and <b>no system is immune</b> from network faults<sup>[4]</sup>.