README.md (+1 -1)
@@ -98,7 +98,7 @@ Professors: Kenneth Kent, Vaughn Betz, Jonathan Rose, Jason Anderson, Peter Jami
 Research Assistants: Aaron Graham
-Graduate Students: Kevin Murray, Jason Luu, Oleg Petelin, Xifian Tang, Mohamed Elgammal, Mohamed Eldafrawy, Jeffrey Goeders, Chi Wai Yu, Andrew Somerville, Ian Kuon, Alexander Marquardt, Andy Ye, Wei Mark Fang, Tim Liu, Charles Chiasson, Panagiotis (Panos) Patros, Jean-Philippe Legault, Aaron Graham, Nasrin Eshraghi Ivari, Maria Patrou, Scott Young, Sarah Khalid, Seyed Alireza Damghani
+Graduate Students: Kevin Murray, Jason Luu, Oleg Petelin, Xifian Tang, Mohamed Elgammal, Mohamed Eldafrawy, Jeffrey Goeders, Chi Wai Yu, Andrew Somerville, Ian Kuon, Alexander Marquardt, Andy Ye, Wei Mark Fang, Tim Liu, Charles Chiasson, Panagiotis (Panos) Patros, Jean-Philippe Legault, Aaron Graham, Nasrin Eshraghi Ivari, Maria Patrou, Scott Young, Sarah Khalid, Seyed Alireza Damghani, Harpreet Kaur
 Summer Students: Opal Densmore, Ted Campbell, Cong Wang, Peter Milankov, Scott Whitty, Michael Wainberg, Suya Liu, Miad Nasr, Nooruddin Ahmed, Thien Yu, Long Yu Wang, Matthew J.P. Walker, Amer Hesson, Sheng Zhong, Hanqing Zeng, Vidya Sankaranarayanan, Jia Min Wang, Eugene Sha, Jean-Philippe Legault, Richard Ren, Dingyu Yang, Alexandrea Demmings, Hillary Soontiens, Julie Brown, Bill Hu, David Baines, Mahshad Farahani, Helen Dai, Daniel Zhai
doc/src/odin/dev_guide/contributing.md (+4 -4)
@@ -8,7 +8,7 @@ To fix issues or add a new feature submit a PR or WIP PR following the provided
 **Important** Before creating a Pull Request (PR), if it is a bug you have happened upon and intend to fix make sure you create an issue beforehand.

 Pull requests are intended to correct bugs and improve Odin's performance.
-To create a pull request, clone the vtr-verilog-rooting repository and branch from the master.
+To create a pull request, clone the [vtr-verilog-to-routing repository](https://github.com/verilog-to-routing/vtr-verilog-to-routing) and branch from the master.
 Make changes to the branch that improve Odin II and correct the bug.
 **Important** In addition to correcting the bug, it is required that test cases (benchmarks) are created that reproduce the issue and are included in the regression tests.
 An example of a good test case could be the benchmark found in the "Issue" being addressed.
@@ -21,9 +21,9 @@ Add a description of the changes made and reference the "issue" that it corrects
 **Important** Before creating a WIP PR, if it is a bug you have happened upon and intend to fix make sure you create an issue beforehand.

 A "work in progress" PR is a pull request that isn't complete or ready to be merged.
-It is intended to demonstrate that an Issue is being addressed and indicates to other developpers that they don't need to fix it.
-Creating a WIP PR is similar to a regular PR with a few adjustements.
-First, clone the [vtr-verilog-rooting repository](https://github.com/verilog-to-routing/vtr-verilog-to-routing) and branch from the master.
+It is intended to demonstrate that an Issue is being addressed and indicates to other developers that they don't need to fix it.
+Creating a WIP PR is similar to a regular PR with a few adjustments.
+First, clone the [vtr-verilog-to-routing repository](https://github.com/verilog-to-routing/vtr-verilog-to-routing) and branch from the master.
 Make changes to that branch.
 Then, create a pull request with that branch and **include WIP in the title.**
 This will automatically indicate that this PR is not ready to be merged.
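The clone-and-branch step described in this file can be sketched as shell commands. This is a minimal offline sketch: a local throwaway repository stands in for a real clone of https://github.com/verilog-to-routing/vtr-verilog-to-routing, and the branch name `fix-odin-ii-bug` is illustrative, not a prescribed name.

```shell
# Minimal sketch of the branch-then-PR workflow. A local throwaway repo
# stands in for the real clone so this runs offline; the branch name
# "fix-odin-ii-bug" is illustrative only.
rm -rf /tmp/vtr-demo
git init -q /tmp/vtr-demo
cd /tmp/vtr-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch -q -M master            # name the default branch "master"
git checkout -q -b fix-odin-ii-bug master
git rev-parse --abbrev-ref HEAD    # prints the new branch name
```

After pushing such a branch to a fork, the pull request itself is opened through the GitHub web interface against the upstream master.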
doc/src/odin/dev_guide/regression_test.md (+25 -25)
@@ -5,30 +5,30 @@ Each regression test targets a specific function of Odin II.
 There are two main components of a regression test; benchmarks and a configuration file.
 The benchmarks are comprised of verilog files, input vector files and output vector files.
 The configuration file calls upon each benchmark and synthesizes them with different architectures.
-The current regression tests of Odin II can be found in regression_test/benchmarks.
+The current regression tests of Odin II can be found in regression_test/benchmark.

 ## Benchmarks

 Benchmarks are used to test the functionality of Odin II and ensure that it runs properly.
-Benchmarks of Odin II can be found in regression_test/benchmarks/verilog/any_folder.
+Benchmarks of Odin II can be found in regression_test/benchmark/verilog/any_folder.
 Each benchmark is comprised of a verilog file, an input vector file, and an output vector file.
 They are called upon during regression tests and synthesized with different architectures to be compared against the expected results.
-These tests are usefull for developers to test the functionality of Odin II after implementing changes.
+These tests are useful for developers to test the functionality of Odin II after implementing changes.
 The command `make test` runs through all these tests, comparing the results to previously generated results, and should be run through when first installing.

 ### Unit Benchmarks

-Unit Benchmarks are the simplest of benchmarks. They are meant to isolate different functions of Odin II.
+Unit benchmarks are the simplest of benchmarks. They are meant to isolate different functions of Odin II.
 The goal is that if it does not function properly, the error can be traced back to the function being tested.
 This cannot always be achieved as different functions depend on others to work properly.
-It is ideal that these benchmarks test bit size capacity, errorenous cases, as well as standards set by the IEEE Standard for Verilog® Hardware Description Language - 2005.
+It is ideal that these benchmarks test bit size capacity, erroneous cases, as well as standards set by the IEEE Standard for Verilog® Hardware Description Language - 2005.

 ### Micro Benchmarks

-Micro benchmark's are precise, like unit benchmarks, however are more syntactic.
+Micro benchmarks are precise, like unit benchmarks, however are more syntactic.
 They are meant to isolate the behaviour of different functions.
 They trace the behaviour of functions to ensure they adhere to the IEEE Standard for Verilog® Hardware Description Language - 2005.
-Like micro benchmarks, they should check errorenous cases and behavioural standards et by the IEEE Standard for Verilog® Hardware Description Language - 2005.
+Like unit benchmarks, they should check erroneous cases and behavioural standards set by the IEEE Standard for Verilog® Hardware Description Language - 2005.

 ### Macro Benchmarks

@@ -38,10 +38,10 @@ These tests are designed to test things like syntax and more complicated standar

 ### External Benchmarks

-External Benchmarks are benchmarks created by outside users to the project.
+External benchmarks are benchmarks created by outside users to the project.
 It is possible to pull an outside directory and build them on the fly thus creating a benchmark for Odin II.

-## Creating Regression tests
+## Creating Regression Tests

 ### New Regression Test Checklist

@@ -50,7 +50,7 @@ It is possible to pull an outside directory and build them on the fly thus creat
 * Create a folder in the task directory for the configuration file [here](#creating-a-task)
 * Generate the results [here](#regenerating-results)
 * Add the task to a suite (large suite if generating the results takes longer than 3 minutes, otherwise put in light suite) [here](#creating-a-suite)
-* Update the documentation by providing a summary in Regression Test Summary section and updating the Directory tree[here](#regression-test-summaries)
+* Update the documentation by providing a summary in Regression Test Summary section and updating the Directory Tree[here](#regression-test-summaries)

 ### New Benchmarks added to Regression Test Checklist

@@ -61,22 +61,22 @@ It is possible to pull an outside directory and build them on the fly thus creat

 * verilog file
 * input vector file
-* expected ouptut vector file
+* expected output vector file
 * configuration file (conditional)
 * architecture file (optional)

 ### Creating Benchmarks

 If only a few benchmarks are needed for a PR, simply add the benchmarks to the appropriate set of regression tests.
-[The Regression Test Summary](#regression-test-summaries) summarizes the target of each regression test which may be helpful.
+The [Regression Test Summary](#regression-test-summaries) summarizes the target of each regression test which may be helpful.

 The standard of naming the benchmarks are as follows:

 * verilog file: meaningful_title.v
 * input vector file: meaningful_title_input
 * output vector file: meaningful_title_output

-If the tests needed do not fit in an already existing set of regression tests or need certain architecture(s), create a seperate folder in the verilog directory and label appropriately.
+If the tests needed do not fit in an already existing set of regression tests or need certain architecture(s), create a separate folder in the verilog directory and label appropriately.
 Store the benchmarks in that folder.
 Add the architecture (if it isn't one that already exists) to ../vtr_flow/arch.

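The naming convention in this hunk can be illustrated with a small layout sketch. The set name `my_feature` and the `/tmp` staging path are illustrative only, not paths taken from the repository.

```shell
# Layout sketch for a new benchmark set, following the naming convention
# above. The folder name "my_feature" and the /tmp staging path are
# illustrative only.
base=/tmp/vtr-demo-bench/regression_test/benchmark/verilog/my_feature
mkdir -p "$base"
touch "$base/meaningful_title.v" \
      "$base/meaningful_title_input" \
      "$base/meaningful_title_output"
ls "$base"
```

The point of the convention is that the verilog file and its input/output vector files share one meaningful stem, so a failing benchmark is easy to trace to its source files.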
@@ -147,12 +147,12 @@ Regression Parameters:
 *`--concat_circuit_list` concatenate the circuit list and pass it straight through to odin
 *`--generate_bench` generate input and output vectors from scratch
 *`--disable_simulation` disable the simulation for this task
-*`--disable_parallel_jobs` disable running circuit/task pairs in parralel
-*`--randomize`performs a dry run randomly to check the validity of the task and flow |
-*`--regenerate_expectation`regenerates expectation and overrides thee expected value only if there's a mismatch |
-*`--generate_expectation` generate the expectation and overrides the expectation file |
+*`--disable_parallel_jobs` disable running circuit/task pairs in parallel
+*`--randomize`perform a dry run randomly to check the validity of the task and flow |
+*`--regenerate_expectation`regenerate expectation and override the expected value only if there's a mismatch |
+*`--generate_expectation` generate the expectation and override the expectation file |

-### Creating a task
+### Creating a Task

 The following diagram illustrates the structure of regression tests.
 Each regression test needs a corresponding folder in the task directory containing the configuration file.
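The per-test folder structure described here can be sketched as follows. The task name `my_feature` and the configuration file name `task.conf` are assumptions for illustration; the real name and contents of the configuration file come from the existing tasks in the repository.

```shell
# Folder-layout sketch for a new task: one folder per regression test
# under the task directory, holding that test's configuration file.
# "my_feature" and the file name "task.conf" are assumptions.
task=/tmp/vtr-demo-task/regression_test/benchmark/task/my_feature
mkdir -p "$task"
touch "$task/task.conf"
ls "$task"
```

Keeping one folder per task is what lets a suite refer to tasks by path and regenerate their results independently.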
@@ -179,7 +179,7 @@ The task diplay name and the verilog file group should share the same title.

 There are times where multiple configuration files are needed in a regression test due to different commands wanted or architectures.
 The task cmd_line_args is an example of this.
-If that is the case, each configuration file will still need its own folder, however these folder's should be placed in a parent folder.
+If that is the case, each configuration file will still need its own folder, however these folders should be placed in a parent folder.

 ```bash
 └── ODIN_II
@@ -205,7 +205,7 @@ If that is the case, each configuration file will still need its own folder, how

 Suites are used to call multiple tasks at once. This is handy for regenerating results for multiple tasks.
 In the diagram below you can see the structure of the suite.
-The suite contains a configuration file that calls upon the different tasks **named task_list.conf**.
+The suite contains a configuration file that calls upon the different tasks named **task_list.conf**.

 ```bash
 └── ODIN_II
@@ -266,7 +266,7 @@ then: where N is the number of processors in the computer, and the path followin

 > **NOTE**
 >
-> **DO NOT** run the `make sanitize` if regenerating the large test. It is probable that the computer will not have a enough ram to do so and it will take a long time. Instead run `make build`
+> **DO NOT** run the `make sanitize` if regenerating the large test. It is probable that the computer will not have enough ram to do so and it will take a long time. Instead run `make build`

 For more on regenerating results, refer to the [Verify Script](./verify_script.md) section.

@@ -299,19 +299,19 @@ This regression test targets cases that require a lot of ram and time.

 ### micro

-The micro regression tests targets hards blocks and pieces that can be easily instantiated in architectures.
+The micro regression test targets hards blocks and pieces that can be easily instantiated in architectures.

 ### mixing_optimization

-The mixing optimization regression tests targets mixing implementations for operations implementable in hard blocks and their soft logic counterparts that can be can be easily instantiated in architectures. The tests support extensive command line coverage, as well as provide infrastructure to enable the optimization from an .xml configuration file, require for using the optimization as a part of VTR synthesis flow.
+The mixing optimization regression test targets mixing implementations for operations implementable in hard blocks and their soft logic counterparts that can be can be easily instantiated in architectures. The tests support extensive command line coverage, as well as provide infrastructure to enable the optimization from an .xml configuration file, require for using the optimization as a part of VTR synthesis flow.

 ### operators

-This regression test targets the functionality of different opertators. It checks bit size capacity and behaviour.
+This regression test targets the functionality of different operators. It checks bit size capacity and behaviour.

 ### syntax

-The syntax regression tests targets syntactic behaviour. It checks that functions work cohesively together and adhere to the verilog standard.
+The syntax regression test targets syntactic behaviour. It checks that functions work cohesively together and adhere to the verilog standard.
doc/src/odin/dev_guide/verify_script.md (+1 -1)
@@ -47,7 +47,7 @@ make sanitize
 ```

 A synthesis_result.json and a simulation_result.json will be generated in the task's folder.
-The simulation results for each benchmark are only generated if they syntehsize correctly (no exit error), thus if none of the benchmarks synthesize there will be no simulation_result.json generated.
+The simulation results for each benchmark are only generated if they synthesize correctly (no exit error), thus if none of the benchmarks synthesize there will be no simulation_result.json generated.