
Commit bacd8ee

Merge pull request #11 from rkkautsar/refactor
Add basic documentation with mkdocs
2 parents (d9bd62b + a1b5623), commit bacd8ee

13 files changed: +276 / -4 lines

Pipfile

Lines changed: 1 addition & 0 deletions
@@ -4,6 +4,7 @@ url = "https://pypi.org/simple"
 verify_ssl = true
 
 [dev-packages]
+mkdocs = "*"
 
 [packages]
 lxml = "*"

Pipfile.lock

Lines changed: 109 additions & 2 deletions
Generated file; diff not rendered.

benchmarks/examples/clasp/runscript.yml

Lines changed: 2 additions & 2 deletions
@@ -18,7 +18,7 @@ configs:
 systems:
   - name: clasp
     version: 1.3.2
-    measures: .resultparsers.clasp
+    measures: resultparsers.clasp
     config: seq-generic
     settings:
       - name: default
@@ -44,7 +44,7 @@ systems:
         cmdline: '--stats --restarts=no 1'
   - name: claspar
     version: 2.1.0
-    measures: .resultparsers.claspar
+    measures: resultparsers.claspar
     config: pbs-generic
     settings:
       - name: one-as
File renamed without changes.

docs/export-import/index.md

Lines changed: 11 additions & 0 deletions
# Exporting a benchmark

```sh
./bexport my.benchmark.tgz benchmarks/my-benchmark
```

# Importing a benchmark

```sh
./bimport imported.benchmark.tgz benchmarks/imported_benchmark
```

docs/how-to/evaluating-results.md

Lines changed: 8 additions & 0 deletions
# Evaluating the results

```sh
# pipenv shell
./beval benchmarks/.../runscript.yml > evaluation.xml
```

This script collects statistics from the runs, such as time and memory usage, errors, etc., according to the [result parser](../result-parser.md) defined in the system measures.

docs/how-to/generating-scripts.md

Lines changed: 20 additions & 0 deletions
# Generating Scripts

```sh
# pipenv shell
./bgen benchmarks/.../runscript.yml
```

After running the command, the scripts will shortly be generated in `$(base_dir)/$(output_dir)/$(project)/$(machine)` according to the runscript.

The structure of the generated scripts depends on the job, but it generally looks like this:
```
[base_dir]/[output_dir]/[project]/[machine]/results/
├── start.py
└── $(benchmark)
    ├── $(system)-$(system.version)-$(system.setting)-n$(system.setting.proc)
    │   ├── $(instance_file_name)
    │   │   └── run$(run_number)
    │   │       └── start.sh
```

docs/how-to/running-benchmark.md

Lines changed: 7 additions & 0 deletions
# Running the benchmark

Depending on the job, there should be a single entry point for running the benchmark in `$(base_dir)/$(output_dir)`. For example, there will be a `start.py` for a [Sequential Job](#). Running this script runs the whole benchmark.

```sh
./benchmarks/.../output/.../start.py
```

docs/how-to/summarize-evaluation.md

Lines changed: 5 additions & 0 deletions
# Summarizing evaluations

```sh
./bconv -c < evaluation.xml > result.csv
```

docs/how-to/writing-runscript.md

Lines changed: 43 additions & 0 deletions
# Writing a runscript

A runscript is a YAML file that defines the benchmark. It is validated with [pykwalify](https://pykwalify.readthedocs.io/en/master/); the schema is in `src/benchmarktool/runscript/schema.yml`, and example runscripts can be found in `benchmarks/examples/*/runscript.yml`. Here's an overview of the keys:

## base_dir

The base directory of the benchmark. Example: `benchmarks/examples/clasp`.

## output_dir

The directory where the output (scripts and results) is generated. The path is relative to [base_dir](#base_dir). Example: `output`.

## machines

Description of the machines used for the benchmarks, such as their CPU and memory capabilities. Currently not used.

## configs

Description of the configurations used to run the systems, i.e. templates for the run scripts (shell scripts, etc.).

## systems

Description of the systems (tools/solvers) against which the benchmark instances will be run.

### system.measures

Module path, relative to [base_dir](#base_dir), to the Python function of the [result parser](../result-parser.md) that will be used to measure the results of this system. For example, a value of `resultparser.sudokuresultparser` will use the function `sudokuresultparser` defined in `[base_dir]/resultparser.py`.
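For illustration, a result parser might look roughly like the sketch below. The function name `sudokuresultparser` is taken from the example above, but the argument it receives and the structure it returns are assumptions; the actual interface expected by `./beval` is defined by `src/benchmarktool` and the parsers under `benchmarks/examples/`, so check those for the real contract.

```python
# [base_dir]/resultparser.py -- hypothetical sketch, not the tool's actual interface.
import re

def sudokuresultparser(run_output):
    """Extract measures from the captured output of one run (assumed input)."""
    measures = {"status": "unknown", "time": None}
    # Pull a wall-clock time line such as "Time : 1.234" out of the output.
    match = re.search(r"^Time\s*:\s*([0-9.]+)", run_output, re.MULTILINE)
    if match:
        measures["time"] = float(match.group(1))
    # Classify the run by the solver's answer (check UNSATISFIABLE first,
    # since it contains SATISFIABLE as a substring).
    if "UNSATISFIABLE" in run_output:
        measures["status"] = "unsat"
    elif "SATISFIABLE" in run_output:
        measures["status"] = "sat"
    return measures
```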
### system.settings

Description of the system's various settings, such as the `cmdline` options, `tag`, etc.

## jobs

Description of the various [jobs](../jobs/index.md), including their type and resource limits.

## benchmarks

Description of the benchmark instances through specifications. Currently the specification type can be either `folder` or `files`.

## projects

Description of the projects, each of which can consist of many benchmark runs selected by tags (runtags) or by manual specifications (runspecs).
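To tie the keys together, here is an abbreviated sketch of a runscript. The `systems` entry mirrors the `benchmarks/examples/clasp/runscript.yml` fragment touched by this commit, and the `base_dir`/`output_dir` values come from the examples above; everything else is omitted and the indentation is illustrative, so consult `src/benchmarktool/runscript/schema.yml` and the full example runscripts for the authoritative structure.

```yaml
# Illustrative sketch only; see src/benchmarktool/runscript/schema.yml and
# benchmarks/examples/*/runscript.yml for complete, validated runscripts.
base_dir: benchmarks/examples/clasp
output_dir: output

systems:
  - name: clasp
    version: 1.3.2
    measures: resultparsers.clasp      # result-parser module path, relative to base_dir
    config: seq-generic
    settings:
      - name: default
        cmdline: '--stats --restarts=no 1'

# machines, configs, jobs, benchmarks and projects are defined alongside these
# keys; the example runscripts show complete values for each of them.
```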

docs/index.md

Lines changed: 49 additions & 0 deletions
# Welcome to benchmark-tool

## Installation

Clone the repository:

```bash
git clone git@github.com:daajoe/benchmark-tool.git
```

Installing the dependencies can be done with [Pipenv](https://github.com/pypa/pipenv) (preferred) or pip + virtualenv.

### Pipenv

```bash
pipenv install
```

### pip + virtualenv

```bash
python3 -m pip install -r requirements.txt
```

## How to use

1. [Writing a runscript](how-to/writing-runscript.md)
2. [Generating scripts with `./bgen`](how-to/generating-scripts.md)
3. [Running the benchmarks](how-to/running-benchmark.md)
4. [Evaluating the results with `./beval`](how-to/evaluating-results.md)

## Project layout

```
.
├── benchmarks/        # benchmark directory
│   └── examples/      # example benchmarks
├── docs/              # this documentation
├── external-tools/    # various third-party tools
├── src/               # main source code
│   └── benchmarktool/ # main module
├── utils/             # general utilities
├── bgen*              # script generation
├── beval*             # evaluate benchmark results
├── bconv*             # summarize benchmark evaluation
├── bexport*           # export benchmarks
├── bimport*           # import benchmarks
```

mkdocs.yml

Lines changed: 16 additions & 0 deletions
site_name: benchmark-tool
theme:
  name: readthedocs
  highlightjs: true
  hljs_languages:
    - yaml
    - python
repo_url: https://github.com/daajoe/benchmark-tool/
nav:
  - Home: 'index.md'
  - How to run your benchmark:
    - '1 - Writing a runscript': 'how-to/writing-runscript.md'
    - '2 - Generating scripts': 'how-to/generating-scripts.md'
    - '3 - Running the benchmark': 'how-to/running-benchmark.md'
    - '4 - Evaluating the results': 'how-to/evaluating-results.md'
  - Export / Import: 'export-import/index.md'
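Since `mkdocs` is added to `[dev-packages]` in the Pipfile, the documentation added by this commit can presumably be previewed and built with mkdocs' standard commands from inside the project's Pipenv environment; the commands below are the stock mkdocs workflow rather than anything defined by this repository.

```sh
pipenv install --dev        # install the dev dependencies, including mkdocs
pipenv run mkdocs serve     # live preview of the docs at http://127.0.0.1:8000
pipenv run mkdocs build     # build the static site into the site/ directory
```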

requirements.txt

Lines changed: 5 additions & 0 deletions
lxml=="4.3.1"
Jinja2=="2.10"
PyYAML=="3.13"
pykwalify=="1.7.0"
dotmap=="1.3.4"
