Commit 8943653

Auto merge of rust-lang#23936 - pnkfelix:rollup, r=pnkfelix
This is an attempt to fix rust-lang#23922
2 parents: d754722 + 2b71aed

284 files changed, +3820 -5191 lines changed

(Large commit: only part of the diff is shown below.)

src/compiletest/compiletest.rs (-1)

@@ -18,7 +18,6 @@
 #![feature(std_misc)]
 #![feature(test)]
 #![feature(path_ext)]
-#![feature(convert)]
 #![feature(str_char)]

 #![deny(warnings)]

src/doc/reference.md (-4)

@@ -977,17 +977,13 @@ An example of `use` declarations:

 ```
 # #![feature(core)]
-use std::iter::range_step;
 use std::option::Option::{Some, None};
 use std::collections::hash_map::{self, HashMap};

 fn foo<T>(_: T){}
 fn bar(map1: HashMap<String, usize>, map2: hash_map::HashMap<String, usize>){}

 fn main() {
-    // Equivalent to 'std::iter::range_step(0, 10, 2);'
-    range_step(0, 10, 2);
-
     // Equivalent to 'foo(vec![std::option::Option::Some(1.0f64),
     // std::option::Option::None]);'
     foo(vec![Some(1.0f64), None]);
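
(Editorial note; not part of the diff.) The removed `range_step` was part of the unstable iterator APIs (note the `feature(core)` gate in the example); the same sequence can be written today with range syntax and `Iterator::step_by`:

```rust
fn main() {
    // Same sequence as the removed `range_step(0, 10, 2)`: 0, 2, 4, 6, 8.
    for i in (0..10).step_by(2) {
        println!("{}", i);
    }
}
```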

src/doc/trpl/SUMMARY.md (+1)

@@ -42,5 +42,6 @@
 * [Intrinsics](intrinsics.md)
 * [Lang items](lang-items.md)
 * [Link args](link-args.md)
+* [Benchmark Tests](benchmark-tests.md)
 * [Conclusion](conclusion.md)
 * [Glossary](glossary.md)

src/doc/trpl/benchmark-tests.md (new file, +152)

@@ -0,0 +1,152 @@
+% Benchmark tests
+
+Rust supports benchmark tests, which can test the performance of your
+code. Let's make our `src/lib.rs` look like this (comments elided):
+
+```{rust,ignore}
+#![feature(test)]
+
+extern crate test;
+
+pub fn add_two(a: i32) -> i32 {
+    a + 2
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use test::Bencher;
+
+    #[test]
+    fn it_works() {
+        assert_eq!(4, add_two(2));
+    }
+
+    #[bench]
+    fn bench_add_two(b: &mut Bencher) {
+        b.iter(|| add_two(2));
+    }
+}
+```
+
+Note the `test` feature gate, which enables this unstable feature.
+
+We've imported the `test` crate, which contains our benchmarking support.
+We have a new function as well, with the `bench` attribute. Unlike regular
+tests, which take no arguments, benchmark tests take a `&mut Bencher`. This
+`Bencher` provides an `iter` method, which takes a closure. This closure
+contains the code we'd like to benchmark.
+
+We can run benchmark tests with `cargo bench`:
+
+```bash
+$ cargo bench
+   Compiling adder v0.0.1 (file:///home/steve/tmp/adder)
+     Running target/release/adder-91b3e234d4ed382a
+
+running 2 tests
+test tests::it_works ... ignored
+test tests::bench_add_two ... bench: 1 ns/iter (+/- 0)
+
+test result: ok. 0 passed; 0 failed; 1 ignored; 1 measured
+```
+
+Our non-benchmark test was ignored. You may have noticed that `cargo bench`
+takes a bit longer than `cargo test`. This is because Rust runs our benchmark
+a number of times, and then takes the average. Because we're doing so little
+work in this example, we have a `1 ns/iter (+/- 0)`, but this would show
+the variance if there was one.
+
+Advice on writing benchmarks:
+
+
+* Move setup code outside the `iter` loop; only put the part you want to measure inside
+* Make the code do "the same thing" on each iteration; do not accumulate or change state
+* Make the outer function idempotent too; the benchmark runner is likely to run
+  it many times
+* Make the inner `iter` loop short and fast so benchmark runs are fast and the
+  calibrator can adjust the run-length at fine resolution
+* Make the code in the `iter` loop do something simple, to assist in pinpointing
+  performance improvements (or regressions)
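
(Editorial aside; the block below is not part of the file this commit adds.) The first bullet, keeping setup out of the measured closure, is the one most often violated in practice. A minimal sketch of what that looks like, with hypothetical `prepare_input`/`process` helpers standing in for real code:

```rust
#![feature(test)]

extern crate test;
use test::Bencher;

fn prepare_input() -> Vec<u64> {
    (0..1_000u64).collect() // hypothetical setup work we do not want to time
}

fn process(data: &[u64]) -> u64 {
    data.iter().fold(0, |acc, x| acc ^ *x) // the code we actually want to time
}

#[bench]
fn bench_process(b: &mut Bencher) {
    let data = prepare_input(); // setup runs once, outside the timed loop
    b.iter(|| process(&data));  // only `process` is measured, and its result is returned
}
```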
+
+## Gotcha: optimizations
+
+There's another tricky part to writing benchmarks: benchmarks compiled with
+optimizations activated can be dramatically changed by the optimizer so that
+the benchmark is no longer benchmarking what one expects. For example, the
+compiler might recognize that some calculation has no external effects and
+remove it entirely.
+
+```{rust,ignore}
+#![feature(test)]
+
+extern crate test;
+use test::Bencher;
+
+#[bench]
+fn bench_xor_1000_ints(b: &mut Bencher) {
+    b.iter(|| {
+        (0..1000).fold(0, |old, new| old ^ new);
+    });
+}
+```
+
+gives the following results
+
+```text
+running 1 test
+test bench_xor_1000_ints ... bench: 0 ns/iter (+/- 0)
+
+test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
+```
+
+The benchmarking runner offers two ways to avoid this. Either, the closure that
+the `iter` method receives can return an arbitrary value which forces the
+optimizer to consider the result used and ensures it cannot remove the
+computation entirely. This could be done for the example above by adjusting the
+`b.iter` call to
+
+```rust
+# struct X;
+# impl X { fn iter<T, F>(&self, _: F) where F: FnMut() -> T {} } let b = X;
+b.iter(|| {
+    // note lack of `;` (could also use an explicit `return`).
+    (0..1000).fold(0, |old, new| old ^ new)
+});
+```
+
+Or, the other option is to call the generic `test::black_box` function, which
+is an opaque "black box" to the optimizer and so forces it to consider any
+argument as used.
+
+```rust
+#![feature(test)]
+
+extern crate test;
+
+# fn main() {
+# struct X;
+# impl X { fn iter<T, F>(&self, _: F) where F: FnMut() -> T {} } let b = X;
+b.iter(|| {
+    let n = test::black_box(1000);
+
+    (0..n).fold(0, |a, b| a ^ b)
+})
+# }
+```
+
+Neither of these read or modify the value, and are very cheap for small values.
+Larger values can be passed indirectly to reduce overhead (e.g.
+`black_box(&huge_struct)`).
+
+Performing either of the above changes gives the following benchmarking results
+
+```text
+running 1 test
+test bench_xor_1000_ints ... bench: 131 ns/iter (+/- 3)
+
+test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
+```
+
+However, the optimizer can still modify a testcase in an undesirable manner
+even when using either of the above.
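
(Editorial aside; not part of the added file.) Combining the two anti-optimization techniques, a complete, self-contained version of the xor benchmark could look like the sketch below; the module layout is illustrative:

```rust
#![feature(test)]

extern crate test;

#[cfg(test)]
mod benches {
    use test::{black_box, Bencher};

    #[bench]
    fn bench_xor_1000_ints(b: &mut Bencher) {
        b.iter(|| {
            // `black_box` hides the input from the optimizer...
            let n = black_box(1000);
            // ...and returning the fold result (no trailing `;`) keeps the work observable.
            (0..n).fold(0, |old, new| old ^ new)
        });
    }
}
```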

src/doc/trpl/concurrency.md (+9 -7)

@@ -280,13 +280,15 @@ it returns an `Result<T, E>`, and because this is just an example, we `unwrap()`
 it to get a reference to the data. Real code would have more robust error handling
 here. We're then free to mutate it, since we have the lock.

-This timer bit is a bit awkward, however. We have picked a reasonable amount of
-time to wait, but it's entirely possible that we've picked too high, and that
-we could be taking less time. It's also possible that we've picked too low,
-and that we aren't actually finishing this computation.
-
-Rust's standard library provides a few more mechanisms for two threads to
-synchronize with each other. Let's talk about one: channels.
+Lastly, while the threads are running, we wait on a short timer. But
+this is not ideal: we may have picked a reasonable amount of time to
+wait but it's more likely we'll either be waiting longer than
+necessary or not long enough, depending on just how much time the
+threads actually take to finish computing when the program runs.
+
+A more precise alternative to the timer would be to use one of the
+mechanisms provided by the Rust standard library for synchronizing
+threads with each other. Let's talk about one of them: channels.

 ## Channels
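
(Editorial note.) The channel-based synchronization the rewritten paragraph points toward looks roughly like the minimal sketch below, using `std::sync::mpsc` and `std::thread`; the count of ten worker threads is illustrative:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..10 {
        let tx = tx.clone();
        thread::spawn(move || {
            // ... do the real work for task `i` here ...
            tx.send(i).unwrap(); // signal completion instead of sleeping on a timer
        });
    }

    // Block until every spawned thread reports in; no guessed sleep duration.
    for _ in 0..10 {
        rx.recv().unwrap();
    }
}
```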

src/doc/trpl/iterators.md (+7 -6)

@@ -243,11 +243,12 @@ for num in nums.iter() {
 ```

 These two basic iterators should serve you well. There are some more
-advanced iterators, including ones that are infinite. Like `count`:
+advanced iterators, including ones that are infinite. Like using range syntax
+and `step_by`:

 ```rust
-# #![feature(core)]
-std::iter::count(1, 5);
+# #![feature(step_by)]
+(1..).step_by(5);
 ```

 This iterator counts up from one, adding five each time. It will give
@@ -292,11 +293,11 @@ just use `for` instead.
 There are tons of interesting iterator adapters. `take(n)` will return an
 iterator over the next `n` elements of the original iterator, note that this
 has no side effect on the original iterator. Let's try it out with our infinite
-iterator from before, `count()`:
+iterator from before:

 ```rust
-# #![feature(core)]
-for i in std::iter::count(1, 5).take(5) {
+# #![feature(step_by)]
+for i in (1..).step_by(5).take(5) {
     println!("{}", i);
 }
 ```
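
(Editorial note.) The replacement produces the same values as the old `count(1, 5)` example; on current stable Rust, `Iterator::step_by` needs no feature gate, so a complete version is:

```rust
fn main() {
    // An infinite range, stepped by five, truncated to five elements.
    for i in (1..).step_by(5).take(5) {
        println!("{}", i); // prints 1, 6, 11, 16, 21
    }
}
```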

src/doc/trpl/macros.md (+3 -3)

@@ -37,7 +37,7 @@ number of elements.

 ```rust
 let x: Vec<u32> = vec![1, 2, 3];
-# assert_eq!(&[1,2,3], &x);
+# assert_eq!(x, [1, 2, 3]);
 ```

 This can't be an ordinary function, because it takes any number of arguments.
@@ -51,7 +51,7 @@ let x: Vec<u32> = {
     temp_vec.push(3);
     temp_vec
 };
-# assert_eq!(&[1,2,3], &x);
+# assert_eq!(x, [1, 2, 3]);
 ```

 We can implement this shorthand, using a macro: [^actual]
@@ -73,7 +73,7 @@ macro_rules! vec {
     };
 }
 # fn main() {
-# assert_eq!([1,2,3], vec![1,2,3]);
+# assert_eq!(vec![1,2,3], [1, 2, 3]);
 # }
 ```
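
(Editorial note.) These hidden assertions exercise the chapter's simplified `vec!`-style macro; a self-contained sketch of that pattern, using a hypothetical `my_vec!` name, shows why `assert_eq!(x, [1, 2, 3])` works: a `Vec` compares directly against a fixed-size array through `PartialEq`.

```rust
macro_rules! my_vec {
    ( $( $x:expr ),* ) => {
        {
            let mut temp_vec = Vec::new();
            $( temp_vec.push($x); )*
            temp_vec
        }
    };
}

fn main() {
    let x: Vec<u32> = my_vec![1, 2, 3];
    // A Vec<u32> compares directly against a [u32; 3] via PartialEq.
    assert_eq!(x, [1, 2, 3]);
}
```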

src/doc/trpl/ownership.md (+1 -1)

@@ -477,7 +477,7 @@ forbidden in item signatures to allow reasoning about the types just based in
 the item signature alone. However, for ergonomic reasons a very restricted
 secondary inference algorithm called “lifetime elision” applies in function
 signatures. It infers only based on the signature components themselves and not
-based on the body of the function, only infers lifetime paramters, and does
+based on the body of the function, only infers lifetime parameters, and does
 this with only three easily memorizable and unambiguous rules. This makes
 lifetime elision a shorthand for writing an item signature, while not hiding
 away the actual types involved as full local inference would if applied to it.
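
(Editorial note.) The sentence being corrected describes lifetime elision; concretely, the two signatures below are equivalent, the lifetime is inferred from the signature alone, and the function body is never consulted (function names are illustrative):

```rust
// Elided form: the single input lifetime is tied to the output automatically.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// The fully explicit form that elision stands for.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
}
```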
