chapter22_part21:/300_Aggregations/120_breadth_vs_depth.asciidoc #294
Changes from all commits: e06435e · a7602b2 · 6ab13d2 · 62336b9 · 779723e · 3d633da · 915a946
@@ -1,15 +1,13 @@
[[_preventing_combinatorial_explosions]]
=== Optimizing Aggregation Queries

=== Preventing Combinatorial Explosions
"The notion of a bucket in Elasticsearch is similar to the notion of a group in SQL: one bucket is roughly one SQL group, and a multi-level nested aggregation is roughly a multi-field grouping (group by field1, field2, ...). Note that the similarity is only conceptual; the underlying implementations are different. -- Translator's note"

The `terms` bucket dynamically builds buckets based on your data; it doesn't
know up front how many buckets will be generated. ((("combinatorial explosions, preventing")))((("aggregations", "preventing combinatorial explosions"))) While this is fine with a
single aggregation, think about what can happen when one aggregation contains
another aggregation, which contains another aggregation, and so forth. The combination of
unique values in each of these aggregations can lead to an explosion in the
number of buckets generated.
The `terms` bucket builds buckets dynamically from our data; it does not know up front how many buckets will be generated. ((("combinatorial explosions, preventing")))((("aggregations", "preventing combinatorial explosions"))) Most of the time an aggregation on a single field is still very fast, but when several fields have to be aggregated at once, a huge number of groups can be produced; the end result is that Elasticsearch consumes a great deal of memory, which can lead to an OOM.

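To make the bucket/group analogy from the translator's note concrete, here is a minimal single-field `terms` aggregation; the index name `movies` is an assumption for illustration, and the request plays roughly the same role as a single-column `GROUP BY` in SQL:

[source,js]
----
GET /movies/_search
{
  "size": 0,
  "aggs": {
    "group_by_actor": {
      "terms": { "field": "actors" }          <1>
    }
  }
}
----
<1> One bucket is created per unique value, much like one group per value in a SQL `GROUP BY`; nesting another aggregation inside corresponds to grouping by a second field.
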
Imagine we have a modest dataset that represents movies. Each document lists
the actors in that movie:
Suppose we have a dataset of movies, where each document has an array field that stores the names of all the actors who appear in that movie:

[source,js]
----
@@ -22,8 +20,7 @@ the actors in that movie:
}
----

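The example document itself is collapsed in this diff view. For orientation, a minimal sketch of such a movie document, with illustrative values that are not taken from the book:

[source,js]
----
{
  "title":  "The Matrix",                                                  <1>
  "actors": [ "Keanu Reeves", "Carrie-Anne Moss", "Laurence Fishburne" ]
}
----
<1> Field names and values here are placeholders for illustration; the key point is the array field holding all actors of one movie.
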
If we want to determine the top 10 actors and their top costars, that's trivial
with an aggregation:
If we want to find the ten actors who appear in the most movies, along with the costars they collaborate with most often, this is very simple with an aggregation:

[source,js]
----
@@ -47,28 +44,19 @@ with an aggregation:
}
----

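The aggregation body is collapsed in the diff as well. A sketch of what such a nested `terms` aggregation generally looks like; the aggregation names `actors` and `costars` and the sizes follow the surrounding prose, so treat the exact body as an assumption rather than the book's verbatim code:

[source,js]
----
{
  "aggs": {
    "actors": {
      "terms": {
        "field": "actors",
        "size":  10                       <1>
      },
      "aggs": {
        "costars": {
          "terms": {
            "field": "actors",
            "size":  5                    <2>
          }
        }
      }
    }
  }
}
----
<1> Top 10 actors: one bucket per actor.
<2> For each actor, the top 5 costars, built from the same `actors` field.
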
This will return a list of the top 10 actors, and for each actor, a list of their
top five costars. This seems like a very modest aggregation; only 50
values will be returned!
This returns the ten actors who appear most often and, for each of them, their five most frequent costars. It looks like a very modest aggregation query; only 50 values are returned in the end!

However, this seemingly ((("aggregations", "fielddata", "datastructure overview")))innocuous query can easily consume a vast amount of
memory. You can visualize a `terms` aggregation as building a tree in memory.
The `actors` aggregation will build the first level of the tree, with a bucket
for every actor. Then, nested under each node in the first level, the
`costars` aggregation will build a second level, with a bucket for every costar, as seen in <<depth-first-1>>. That means that a single movie will generate n^2^ buckets!
However, ((("aggregations", "fielddata", "datastructure overview"))) this seemingly harmless query can easily consume a vast amount of memory. You can picture the `terms` aggregation as building a tree in memory.
The `actors` aggregation builds the first level of the tree, with one bucket per actor. Then, nested under each node of that first level, the `costars` aggregation builds a second level, with one bucket per costar, as shown in <<depth-first-1>>. That means a single movie generates n^2^ buckets!

[[depth-first-1]]
.Build full depth tree
image::images/300_120_depth_first_1.svg["Build full depth tree"]

To use some real numbers, imagine each movie has 10 actors on average. Each movie
will then generate 10^2^ == 100 buckets. If you have 20,000 movies, that's
roughly 2,000,000 generated buckets.
To put some real numbers on this, suppose each movie has 10 actors on average; each movie then generates 10^2^ == 100 buckets. With 20,000 movies in total, a rough calculation gives 2,000,000 generated buckets.

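The same back-of-the-envelope arithmetic, written out as a trivial snippet (plain JavaScript, purely illustrative):

[source,js]
----
// Bucket-count estimate for the numbers quoted above
var actorsPerMovie  = 10;
var bucketsPerMovie = actorsPerMovie * actorsPerMovie;  // 10^2 == 100
var movies          = 20000;
var totalBuckets    = movies * bucketsPerMovie;         // roughly 2,000,000
----
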
Now, remember, our aggregation is simply asking for the top 10 actors and their
co-stars, totaling 50 values. To get the final results, we have to generate
that tree of 2,000,000 buckets, sort it, and finally prune it such that only the
top 10 actors are left. This is illustrated in <<depth-first-2>> and <<depth-first-3>>.
Now, remember, the aggregation simply asks for the top ten actors and their costars, 50 values in total. To get that final result we have to build a tree of 2,000,000 buckets, sort it, and then prune it so that only the top 10 actors remain.
<<depth-first-2>> and <<depth-first-3>> illustrate this process.

[[depth-first-2]]
.Sort tree
@@ -78,30 +66,19 @@ image::images/300_120_depth_first_2.svg["Sort tree"]
.Prune tree
image::images/300_120_depth_first_3.svg["Prune tree"]

At this point you should be quite distraught. Twenty thousand documents is paltry,
and the aggregation is pretty tame. What if you had 200 million documents, wanted
the top 100 actors and their top 20 costars, as well as the costars' costars?
At this point you would be quite distraught, and yet 20,000 documents is a paltry amount and this aggregation is fairly tame. What would happen with 200 million documents, asking for the top 100 actors and their top 20 costars, as well as the costars' costars?

You can appreciate how quickly combinatorial expansion can grow, making this
strategy untenable. There is not enough memory in the world to support uncontrolled
combinatorial explosions.
You can see how quickly the number of generated groups grows, making this strategy untenable. There is not enough memory in the world to support this kind of uncontrolled combinatorial explosion.

==== Depth-First Versus Breadth-First

Elasticsearch allows you to change the _collection mode_ of an aggregation, for
exactly this situation. ((("collection mode"))) ((("aggregations", "preventing combinatorial explosions", "depth-first versus breadth-first")))The strategy we outlined previously--building the tree fully
and then pruning--is called _depth-first_ and it is the default. ((("depth-first collection strategy"))) Depth-first
works well for the majority of aggregations, but can fall apart in situations
like our actors and costars example.
Elasticsearch lets us change the _collection mode_ of an aggregation for exactly this situation. ((("collection mode"))) ((("aggregations", "preventing combinatorial explosions", "depth-first versus breadth-first")))
The strategy we described above, building the full tree and then pruning the useless nodes, is called _depth-first_ and it is the default. ((("depth-first collection strategy"))) Depth-first works well for the majority of aggregations, but falls apart in situations like our actors-and-costars example.
Review comment: after "then prunes the useless nodes", the phrase "it is the default collection mode" seems to be missing.
Reply: that is already covered: "The strategy we described above is called _depth-first_; it is the default; it builds the full tree first and then prunes the useless nodes."

For these special cases, you should use an alternative collection strategy called
_breadth-first_. ((("beadth-first collection strategy")))This strategy works a little differently. It executes the first
layer of aggregations, and _then_ performs a pruning phase before continuing, as illustrated in <<breadth-first-1>> through <<breadth-first-3>>.
For these special cases we should use an alternative collection strategy called _breadth-first_. ((("breadth-first collection strategy"))) This strategy works a little differently: it executes the first level of aggregations and _then_ performs a pruning phase before continuing to the next level.
<<breadth-first-1>> through <<breadth-first-3>> illustrate this process.

In our example, the `actors` aggregation would be executed first. At this
point, we have a single layer in the tree, but we already know who the top 10
actors are! There is no need to keep the other actors since they won't be in
the top 10 anyway.
In our example, the `actors` aggregation is executed first. At that point the tree has only one level, but we already know who the top 10 actors are! There is no need to keep the other actors, since they cannot appear in the top ten anyway.

[[breadth-first-1]]
.Build first level
@@ -115,17 +92,14 @@ image::images/300_120_breadth_first_2.svg["Sort first level"]
.Prune first level
image::images/300_120_breadth_first_3.svg["Prune first level"]

Since we already know the top ten actors, we can safely prune away the rest of the
long tail. After pruning, the next layer is populated based on _its_ execution mode,
and the process repeats until the aggregation is done, as illustrated in <<breadth-first-4>>. This prevents the
combinatorial explosion of buckets and drastically reduces memory requirements
for classes of queries that are amenable to breadth-first.
Because we already know the top ten actors, we can safely prune away the other nodes. After pruning, the next level is populated according to _its_ execution mode, and the process repeats until the aggregation is done, as shown in <<breadth-first-4>>.
This prevents the combinatorial explosion of buckets and, for queries like this one, saves a great deal of memory.

[[breadth-first-4]]
.Populate full depth for remaining nodes
image::images/300_120_breadth_first_4.svg["Step 4: populate full depth for remaining nodes"]

To use breadth-first, simply ((("collect parameter, enabling breadth-first")))enable it via the `collect` parameter:
To use breadth-first, simply ((("collect parameter, enabling breadth-first"))) enable it via the `collect` parameter:

[source,js]
----
@@ -149,23 +123,11 @@ To use breadth-first, simply ((("collect parameter, enabling breadth-first")))en
}
}
----
<1> Enable `breadth_first` on a per-aggregation basis.

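The full request body is collapsed in this diff. As a sketch of how the switch is typically expressed, assuming the `collect_mode` option that Elasticsearch exposes on the `terms` aggregation (this is what the prose above calls the `collect` parameter):

[source,js]
----
{
  "aggs": {
    "actors": {
      "terms": {
        "field":        "actors",
        "size":         10,
        "collect_mode": "breadth_first"   <1>
      },
      "aggs": {
        "costars": {
          "terms": {
            "field": "actors",
            "size":  5
          }
        }
      }
    }
  }
}
----
<1> Switches this aggregation (and only this one) to breadth-first collection; the nested `costars` aggregation is only populated for the surviving top-level buckets.
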
Breadth-first should be used only when you expect more buckets to be generated
than documents landing in the buckets. Breadth-first works by caching
document data at the bucket level, and then replaying those documents to child
aggregations after the pruning phase.

The memory requirement of a breadth-first aggregation is linear to the number
of documents in each bucket prior to pruning. For many aggregations, the
number of documents in each bucket is very large. Think of a histogram with
monthly intervals: you might have thousands or hundreds of thousands of
documents per bucket. This makes breadth-first a bad choice, and is why
depth-first is the default.

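For contrast, a sketch of the kind of aggregation where breadth-first is a poor fit: a monthly histogram with a nested `terms` aggregation. The field name `release_date` is an assumption used only for illustration:

[source,js]
----
{
  "aggs": {
    "per_month": {
      "date_histogram": {
        "field":    "release_date",       <1>
        "interval": "month"
      },
      "aggs": {
        "top_actors": {
          "terms": { "field": "actors", "size": 10 }
        }
      }
    }
  }
}
----
<1> Only a handful of buckets (one per month), but each bucket may hold a very large number of documents, so the default depth-first collection is the better choice here.
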
But for the actor example--which generates a large number of
buckets, but each bucket has relatively few documents--breadth-first is much
more memory efficient, and allows you to build aggregations that would
otherwise fail.
Breadth-first is only appropriate when the number of buckets generated is far larger than the number of documents landing in each bucket, because breadth-first caches document data at the bucket level for just the buckets that survive pruning and then replays those documents to the child aggregations, so the lower levels can reuse the parent's data.

The memory cost of breadth-first is linear in the amount of per-bucket document data that has to be cached. For many aggregations the number of documents in each bucket is very large. Imagine a histogram bucketed by month: the total number of groups is fixed, since a year has only 12 months, but the amount of data under each month can be enormous. That makes breadth-first a poor choice, which is why depth-first is the default strategy.

For the actor example above, the larger the dataset, the more total groups the default depth-first mode generates; but the expected amount of data in each second-level group is small compared with the total number of groups, so using breadth-first in this situation saves a great deal of memory. Tuning the collection mode in this way greatly improves the chances that aggregations in such scenarios complete successfully rather than failing.
Review comment: a translator's note could be added at the beginning to help readers understand the concept of a bucket in Elasticsearch:
the notion of a bucket in Elasticsearch is similar to that of a group in SQL; one bucket is roughly one SQL group.
A multi-level nested aggregation is similar to a multi-field grouping in SQL (group by field1, field2, .....).
Note that the similarity is only conceptual; the underlying implementations are different.