Commit fe1d7c8

committed
---
yaml --- r: 149036 b: refs/heads/try2 c: 3a610e9 h: refs/heads/master v: v3
1 parent 8a4b022 commit fe1d7c8

38 files changed: +371 -789 lines

[refs]

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ refs/heads/snap-stage3: 78a7676898d9f80ab540c6df5d4c9ce35bb50463
 refs/heads/try: 519addf6277dbafccbb4159db4b710c37eaa2ec5
 refs/tags/release-0.1: 1f5c5126e96c79d22cb7862f75304136e204f105
 refs/heads/ndm: f3868061cd7988080c30d6d5bf352a5a5fe2460b
-refs/heads/try2: 2780d9dd5410a5c093f27eacfb1684ddbfcb4632
+refs/heads/try2: 3a610e98a292f5bc75a720aa15c3600787a5ddb2
 refs/heads/dist-snap: ba4081a5a8573875fed17545846f6f6902c8ba8d
 refs/tags/release-0.2: c870d2dffb391e14efb05aa27898f1f6333a9596
 refs/tags/release-0.3: b5f0d0f648d9a6153664837026ba1be43d3e2503

branches/try2/mk/crates.mk

Lines changed: 1 addition & 2 deletions
@@ -50,7 +50,7 @@
 ################################################################################
 
 TARGET_CRATES := std extra green rustuv native flate arena glob term semver \
-                 uuid serialize sync getopts collections fourcc
+                 uuid serialize sync getopts collections
 HOST_CRATES := syntax rustc rustdoc
 CRATES := $(TARGET_CRATES) $(HOST_CRATES)
 TOOLS := compiletest rustdoc rustc
@@ -74,7 +74,6 @@ DEPS_uuid := std serialize
 DEPS_sync := std
 DEPS_getopts := std
 DEPS_collections := std serialize
-DEPS_fourcc := syntax std
 
 TOOL_DEPS_compiletest := extra green rustuv getopts
 TOOL_DEPS_rustdoc := rustdoc green rustuv

branches/try2/src/doc/guide-testing.md

Lines changed: 56 additions & 130 deletions
@@ -16,12 +16,10 @@ fn return_two_test() {
 }
 ~~~
 
-To run these tests, compile with `rustc --test` and run the resulting
-binary:
+To run these tests, use `rustc --test`:
 
 ~~~ {.notrust}
-$ rustc --test foo.rs
-$ ./foo
+$ rustc --test foo.rs; ./foo
 running 1 test
 test return_two_test ... ok
 
@@ -49,8 +47,8 @@ value. To run the tests in a crate, it must be compiled with the
 `--test` flag: `rustc myprogram.rs --test -o myprogram-tests`. Running
 the resulting executable will run all the tests in the crate. A test
 is considered successful if its function returns; if the task running
-the test fails, through a call to `fail!`, a failed `assert`, or some
-other (`assert_eq`, ...) means, then the test fails.
+the test fails, through a call to `fail!`, a failed `check` or
+`assert`, or some other (`assert_eq`, ...) means, then the test fails.
 
 When compiling a crate with the `--test` flag `--cfg test` is also
 implied, so that tests can be conditionally compiled.
@@ -102,63 +100,7 @@ failure output difficult. In these cases you can set the
 `RUST_TEST_TASKS` environment variable to 1 to make the tests run
 sequentially.
 
-## Examples
-
-### Typical test run
-
-~~~ {.notrust}
-$ mytests
-
-running 30 tests
-running driver::tests::mytest1 ... ok
-running driver::tests::mytest2 ... ignored
-... snip ...
-running driver::tests::mytest30 ... ok
-
-result: ok. 28 passed; 0 failed; 2 ignored
-~~~
-
-### Test run with failures
-
-~~~ {.notrust}
-$ mytests
-
-running 30 tests
-running driver::tests::mytest1 ... ok
-running driver::tests::mytest2 ... ignored
-... snip ...
-running driver::tests::mytest30 ... FAILED
-
-result: FAILED. 27 passed; 1 failed; 2 ignored
-~~~
-
-### Running ignored tests
-
-~~~ {.notrust}
-$ mytests --ignored
-
-running 2 tests
-running driver::tests::mytest2 ... failed
-running driver::tests::mytest10 ... ok
-
-result: FAILED. 1 passed; 1 failed; 0 ignored
-~~~
-
-### Running a subset of tests
-
-~~~ {.notrust}
-$ mytests mytest1
-
-running 11 tests
-running driver::tests::mytest1 ... ok
-running driver::tests::mytest10 ... ignored
-... snip ...
-running driver::tests::mytest19 ... ok
-
-result: ok. 11 passed; 0 failed; 1 ignored
-~~~
-
-# Microbenchmarking
+## Benchmarking
 
 The test runner also understands a simple form of benchmark execution.
 Benchmark functions are marked with the `#[bench]` attribute, rather
@@ -169,12 +111,11 @@ component of your testsuite, pass `--bench` to the compiled test
 runner.
 
 The type signature of a benchmark function differs from a unit test:
-it takes a mutable reference to type
-`extra::test::BenchHarness`. Inside the benchmark function, any
-time-variable or "setup" code should execute first, followed by a call
-to `iter` on the benchmark harness, passing a closure that contains
-the portion of the benchmark you wish to actually measure the
-per-iteration speed of.
+it takes a mutable reference to type `test::BenchHarness`. Inside the
+benchmark function, any time-variable or "setup" code should execute
+first, followed by a call to `iter` on the benchmark harness, passing
+a closure that contains the portion of the benchmark you wish to
+actually measure the per-iteration speed of.
 
 For benchmarks relating to processing/generating data, one can set the
 `bytes` field to the number of bytes consumed/produced in each
@@ -187,16 +128,15 @@ For example:
 ~~~
 extern mod extra;
 use std::vec;
-use extra::test::BenchHarness;
 
 #[bench]
-fn bench_sum_1024_ints(b: &mut BenchHarness) {
+fn bench_sum_1024_ints(b: &mut extra::test::BenchHarness) {
     let v = vec::from_fn(1024, |n| n);
     b.iter(|| {v.iter().fold(0, |old, new| old + *new);} );
 }
 
 #[bench]
-fn initialise_a_vector(b: &mut BenchHarness) {
+fn initialise_a_vector(b: &mut extra::test::BenchHarness) {
     b.iter(|| {vec::from_elem(1024, 0u64);} );
     b.bytes = 1024 * 8;
 }
@@ -223,87 +163,73 @@ Advice on writing benchmarks:
 To run benchmarks, pass the `--bench` flag to the compiled
 test-runner. Benchmarks are compiled-in but not executed by default.
 
+## Examples
+
+### Typical test run
+
 ~~~ {.notrust}
-$ rustc mytests.rs -O --test
-$ mytests --bench
+> mytests
 
-running 2 tests
-test bench_sum_1024_ints ... bench: 709 ns/iter (+/- 82)
-test initialise_a_vector ... bench: 424 ns/iter (+/- 99) = 19320 MB/s
+running 30 tests
+running driver::tests::mytest1 ... ok
+running driver::tests::mytest2 ... ignored
+... snip ...
+running driver::tests::mytest30 ... ok
 
-test result: ok. 0 passed; 0 failed; 0 ignored; 2 measured
-~~~
+result: ok. 28 passed; 0 failed; 2 ignored
+~~~ {.notrust}
 
-## Benchmarks and the optimizer
+### Test run with failures
 
-Benchmarks compiled with optimizations activated can be dramatically
-changed by the optimizer so that the benchmark is no longer
-benchmarking what one expects. For example, the compiler might
-recognize that some calculation has no external effects and remove
-it entirely.
+~~~ {.notrust}
+> mytests
 
-~~~
-extern mod extra;
-use extra::test::BenchHarness;
+running 30 tests
+running driver::tests::mytest1 ... ok
+running driver::tests::mytest2 ... ignored
+... snip ...
+running driver::tests::mytest30 ... FAILED
 
-#[bench]
-fn bench_xor_1000_ints(bh: &mut BenchHarness) {
-    bh.iter(|| {
-        range(0, 1000).fold(0, |old, new| old ^ new);
-    });
-}
+result: FAILED. 27 passed; 1 failed; 2 ignored
 ~~~
 
-gives the following results
+### Running ignored tests
 
 ~~~ {.notrust}
-running 1 test
-test bench_xor_1000_ints ... bench: 0 ns/iter (+/- 0)
-
-test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
-~~~
+> mytests --ignored
 
-The benchmarking runner offers two ways to avoid this. Either, the
-closure that the `iter` method receives can return an arbitrary value
-which forces the optimizer to consider the result used and ensures it
-cannot remove the computation entirely. This could be done for the
-example above by adjusting the `bh.iter` call to
+running 2 tests
+running driver::tests::mytest2 ... failed
+running driver::tests::mytest10 ... ok
 
-~~~
-bh.iter(|| range(0, 1000).fold(0, |old, new| old ^ new))
+result: FAILED. 1 passed; 1 failed; 0 ignored
 ~~~
 
-Or, the other option is to call the generic `extra::test::black_box`
-function, which is an opaque "black box" to the optimizer and so
-forces it to consider any argument as used.
+### Running a subset of tests
 
-~~~
-use extra::test::black_box
+~~~ {.notrust}
+> mytests mytest1
 
-bh.iter(|| {
-    black_box(range(0, 1000).fold(0, |old, new| old ^ new));
-});
-~~~
+running 11 tests
+running driver::tests::mytest1 ... ok
+running driver::tests::mytest10 ... ignored
+... snip ...
+running driver::tests::mytest19 ... ok
 
-Neither of these read or modify the value, and are very cheap for
-small values. Larger values can be passed indirectly to reduce
-overhead (e.g. `black_box(&huge_struct)`).
+result: ok. 11 passed; 0 failed; 1 ignored
+~~~
 
-Performing either of the above changes gives the following
-benchmarking results
+### Running benchmarks
 
 ~~~ {.notrust}
-running 1 test
-test bench_xor_1000_ints ... bench: 375 ns/iter (+/- 148)
+> mytests --bench
 
-test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured
-~~~
+running 2 tests
+test bench_sum_1024_ints ... bench: 709 ns/iter (+/- 82)
+test initialise_a_vector ... bench: 424 ns/iter (+/- 99) = 19320 MB/s
 
-However, the optimizer can still modify a testcase in an undesirable
-manner even when using either of the above. Benchmarks can be checked
-by hand by looking at the output of the compiler using the `--emit=ir`
-(for LLVM IR), `--emit=asm` (for assembly) or compiling normally and
-using any method for examining object code.
+test result: ok. 0 passed; 0 failed; 0 ignored; 2 measured
+~~~
 
 ## Saving and ratcheting metrics
 