[[_preventing_combinatorial_explosions]]
=== Preventing Combinatorial Explosions

"A _bucket_ in Elasticsearch is conceptually similar to a _group_ in SQL: one bucket corresponds to one SQL group, and a multilevel nested aggregation is similar to grouping by multiple fields (group by field1, field2, ...). Note that the two are only conceptually similar; the underlying implementations are quite different. -- Translator's note"

The `terms` bucket dynamically builds buckets based on your data; it doesn't
know up front how many buckets will be generated. ((("combinatorial explosions, preventing")))((("aggregations", "preventing combinatorial explosions"))) While this is fine with a
single aggregation, think about what can happen when one aggregation contains
another aggregation, which contains another aggregation, and so forth. The combination of
unique values in each of these aggregations can lead to an explosion in the
number of buckets generated, consuming a great deal of memory and, in the worst case,
triggering out-of-memory errors on the nodes.

Imagine we have a modest dataset that represents movies. Each document lists
the actors in that movie:

[source,js]
----
{
  "actors" : [
    "Fred Jones",
    "Mary Jane",
    "Elizabeth Worthing"
  ]
}
----

If we want to determine the top 10 actors and their top costars, that's trivial
with an aggregation:

[source,js]
----
{
  "aggs" : {
    "actors" : {
      "terms" : {
        "field" : "actors",
        "size" :  10
      },
      "aggs" : {
        "costars" : {
          "terms" : {
            "field" : "actors",
            "size" :  5
          }
        }
      }
    }
  }
}
----

This will return a list of the top 10 actors, and for each actor, a list of their
top five costars. This seems like a very modest aggregation; only 50
values will be returned!
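
Abridged, the response of such a nested `terms` aggregation looks roughly like the sketch below; the keys and counts are made-up illustrative values, not output from the original dataset:

[source,js]
----
{
  ...
  "aggregations" : {
    "actors" : {
      "buckets" : [
        {
          "key" : "Fred Jones",
          "doc_count" : 127,
          "costars" : {
            "buckets" : [
              { "key" : "Mary Jane", "doc_count" : 21 },
              ...
            ]
          }
        },
        ...
      ]
    }
  }
}
----

Each top-level actor bucket carries its own nested `costars` buckets, which is why the final result holds at most 10 * 5 == 50 costar entries.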

However, this seemingly ((("aggregations", "fielddata", "datastructure overview")))innocuous query can easily consume a vast amount of
memory. You can visualize a `terms` aggregation as building a tree in memory.
The `actors` aggregation will build the first level of the tree, with a bucket
for every actor. Then, nested under each node in the first level, the
`costars` aggregation will build a second level, with a bucket for every costar, as seen in <<depth-first-1>>. That means that a single movie will generate n^2^ buckets!

[[depth-first-1]]
.Build full depth tree
image::images/300_120_depth_first_1.svg["Build full depth tree"]

To use some real numbers, imagine each movie has 10 actors on average. Each movie
will then generate 10^2^ == 100 buckets. If you have 20,000 movies, that's
roughly 2,000,000 generated buckets.

Now, remember, our aggregation is simply asking for the top 10 actors and their
co-stars, totaling 50 values. To get the final results, we have to generate
that tree of 2,000,000 buckets, sort it, and finally prune it such that only the
top 10 actors are left. This is illustrated in <<depth-first-2>> and <<depth-first-3>>.

[[depth-first-2]]
.Sort tree
image::images/300_120_depth_first_2.svg["Sort tree"]

[[depth-first-3]]
.Prune tree
image::images/300_120_depth_first_3.svg["Prune tree"]

At this point you should be quite distraught. Twenty thousand documents is paltry,
and the aggregation is pretty tame. What if you had 200 million documents, wanted
the top 100 actors and their top 20 costars, as well as the costars' costars?

You can appreciate how quickly combinatorial expansion can grow, making this
strategy untenable. There is not enough memory in the world to support uncontrolled
combinatorial explosions.
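
To make the depth of nesting concrete, here is a sketch of what such a three-level aggregation might look like; it is not part of the original text, and the aggregation names (`costars`, `costars_costars`) and the third-level `size` are illustrative assumptions:

[source,js]
----
{
  "aggs" : {
    "actors" : {
      "terms" : { "field" : "actors", "size" : 100 },
      "aggs" : {
        "costars" : {
          "terms" : { "field" : "actors", "size" : 20 },
          "aggs" : {
            "costars_costars" : {
              "terms" : { "field" : "actors", "size" : 20 }
            }
          }
        }
      }
    }
  }
}
----

With 10 actors per movie on average, each document now contributes on the order of 10^3^ buckets instead of 10^2^, before any pruning happens.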

==== Depth-First Versus Breadth-First

Elasticsearch allows you to change the _collection mode_ of an aggregation, for
exactly this situation. ((("collection mode"))) ((("aggregations", "preventing combinatorial explosions", "depth-first versus breadth-first")))The strategy we outlined previously--building the tree fully
and then pruning--is called _depth-first_ and it is the default. ((("depth-first collection strategy"))) Depth-first
works well for the majority of aggregations, but can fall apart in situations
like our actors and costars example.

For these special cases, you should use an alternative collection strategy called
_breadth-first_. ((("breadth-first collection strategy")))This strategy works a little differently. It executes the first
layer of aggregations, and _then_ performs a pruning phase before continuing, as illustrated in <<breadth-first-1>> through <<breadth-first-3>>.

In our example, the `actors` aggregation would be executed first. At this
point, we have a single layer in the tree, but we already know who the top 10
actors are! There is no need to keep the other actors since they won't be in
the top 10 anyway.

[[breadth-first-1]]
.Build first level
image::images/300_120_breadth_first_1.svg["Build first level"]

[[breadth-first-2]]
.Sort first level
image::images/300_120_breadth_first_2.svg["Sort first level"]

[[breadth-first-3]]
.Prune first level
image::images/300_120_breadth_first_3.svg["Prune first level"]

Since we already know the top ten actors, we can safely prune away the rest of the
long tail. After pruning, the next layer is populated based on _its_ execution mode,
and the process repeats until the aggregation is done, as illustrated in <<breadth-first-4>>. This prevents the
combinatorial explosion of buckets and drastically reduces memory requirements
for classes of queries that are amenable to breadth-first.

[[breadth-first-4]]
.Populate full depth for remaining nodes
image::images/300_120_breadth_first_4.svg["Step 4: populate full depth for remaining nodes"]

To use breadth-first, simply ((("collect parameter, enabling breadth-first")))enable it via the `collect_mode` parameter:

[source,js]
----
{
  "aggs" : {
    "actors" : {
      "terms" : {
        "field" :        "actors",
        "size" :         10,
        "collect_mode" : "breadth_first" <1>
      },
      "aggs" : {
        "costars" : {
          "terms" : {
            "field" : "actors",
            "size" :  5
          }
        }
      }
    }
  }
}
----
<1> Enable `breadth_first` on a per-aggregation basis.

Breadth-first should be used only when you expect more buckets to be generated
than documents landing in the buckets. Breadth-first works by caching
document data at the bucket level, and then replaying those documents to child
aggregations after the pruning phase.

The memory requirement of a breadth-first aggregation is linear to the number
of documents in each bucket prior to pruning. For many aggregations, the
number of documents in each bucket is very large. Think of a histogram with
monthly intervals: you might have thousands or hundreds of thousands of
documents per bucket. This makes breadth-first a bad choice, and is why
depth-first is the default.
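
For instance, a monthly histogram like the sketch below (not from the original text; the `sold` and `color` field names are illustrative assumptions) produces at most 12 buckets per year, yet each monthly bucket may hold a huge number of documents. Breadth-first would have to cache all of those documents before running the child `per_color` aggregation, so the default depth-first mode is the better choice here:

[source,js]
----
{
  "aggs" : {
    "sales_per_month" : {
      "date_histogram" : {
        "field"    : "sold",
        "interval" : "month"
      },
      "aggs" : {
        "per_color" : {
          "terms" : { "field" : "color" }
        }
      }
    }
  }
}
----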

But for the actor example--which generates a large number of
buckets, but each bucket has relatively few documents--breadth-first is much
more memory efficient, and allows you to build aggregations that would
otherwise fail.