@@ -77,20 +77,20 @@ Commandline options
--benchmark-max-time=SECONDS
Maximum run time per test - it will be repeated until
this total time is reached. It may be exceeded if test
- function is very slow or --benchmark-min-rounds is
- large (it takes precedence). Default: '1.0'
+ function is very slow or --benchmark-min-rounds is large
+ (it takes precedence). Default: '1.0'
--benchmark-min-rounds=NUM
- Minimum rounds, even if total time would exceed
- `--max-time`. Default: 5
+ Minimum rounds, even if total time would exceed `--max-
+ time`. Default: 5
--benchmark-timer=FUNC
Timer to use when measuring time. Default:
'time.perf_counter'
--benchmark-calibration-precision=NUM
- Precision to use when calibrating number of
- iterations. Precision of 10 will make the timer look
- 10 times more accurate, at a cost of less precise
- measure of deviations. Default: 10
- --benchmark-warmup=KIND
+ Precision to use when calibrating number of iterations.
+ Precision of 10 will make the timer look 10 times more
+ accurate, at a cost of less precise measure of
+ deviations. Default: 10
+ --benchmark-warmup=[KIND]
Activates warmup. Will run the test function up to
number of times in the calibration phase. See
`--benchmark-warmup-iterations`. Note: Even the warmup
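The --benchmark-max-time / --benchmark-min-rounds interaction described above is easiest to see with a deliberately slow benchmark. A minimal sketch, assuming pytest-benchmark is installed (the test name and sleep duration are made up for illustration)::

    # Run with, e.g.:
    #   pytest test_slow.py --benchmark-max-time=0.5 --benchmark-min-rounds=5
    # A ~0.2 s body would exhaust the 0.5 s budget after about 3 rounds, but
    # the 5-round minimum takes precedence, so roughly 5 * 0.2 = 1 s is spent.
    import time

    def test_slow(benchmark):
        benchmark(time.sleep, 0.2)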
@@ -104,11 +104,11 @@ Commandline options
Disable GC during benchmarks.
--benchmark-skip Skip running any tests that contain benchmarks.
--benchmark-disable Disable benchmarks. Benchmarked functions are only ran
- once and no stats are reported. Use this if you want
- to run the test but don't do any benchmarking.
- --benchmark-enable Forcibly enable benchmarks. Use this option to
- override --benchmark-disable (in case you have it in
- pytest configuration).
+ once and no stats are reported. Use this if you want to
+ run the test but don't do any benchmarking.
+ --benchmark-enable Forcibly enable benchmarks. Use this option to override
+ --benchmark-disable (in case you have it in pytest
+ configuration).
--benchmark-only Only run benchmarks. This overrides --benchmark-skip.
--benchmark-save=NAME
Save the current run into 'STORAGE-PATH/counter-
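A common arrangement implied by --benchmark-disable and --benchmark-enable above is disabling benchmarking by default in the pytest configuration and re-enabling it on demand. A hypothetical sketch (the file name and test body are illustrative)::

    # pytest.ini (configuration, shown here as a comment for context):
    #   [pytest]
    #   addopts = --benchmark-disable
    #
    # With that config, plain `pytest` runs test_total once as an ordinary
    # test; `pytest --benchmark-enable` overrides the config and benchmarks it.
    def test_total(benchmark):
        result = benchmark(sum, range(1000))
        assert result == 499500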
@@ -123,49 +123,61 @@ Commandline options
stats.
--benchmark-json=PATH
Dump a JSON report into PATH. Note that this will
- include the complete data (all the timings, not just
- the stats).
- --benchmark-compare=NUM
+ include the complete data (all the timings, not just the
+ stats).
+ --benchmark-compare=[NUM|_ID]
Compare the current run against run NUM (or prefix of
_id in elasticsearch) or the latest saved run if
unspecified.
- --benchmark-compare-fail=EXPR
+ --benchmark-compare-fail=EXPR [EXPR ...]
Fail test if performance regresses according to given
EXPR (eg: min:5% or mean:0.001 for number of seconds).
Can be used multiple times.
--benchmark-cprofile=COLUMN
- If specified measure one run with cProfile and stores
- 10 top functions. Argument is a column to sort by.
- Available columns: 'ncalls_recursion', 'ncalls',
- 'tottime', 'tottime_per', 'cumtime', 'cumtime_per',
- 'function_name'.
+ If specified cProfile will be enabled. Top functions
+ will be stored for the given column. Available columns:
+ 'ncalls_recursion', 'ncalls', 'tottime', 'tottime_per',
+ 'cumtime', 'cumtime_per', 'function_name'.
+ --benchmark-cprofile-loops=LOOPS
+ How many times to run the function in cprofile.
+ Available options: 'auto', or an integer.
+ --benchmark-cprofile-top=COUNT
+ How many rows to display.
+ --benchmark-cprofile-dump=[FILENAME-PREFIX]
+ Save cprofile dumps as FILENAME-PREFIX-test_name.prof.
+ If FILENAME-PREFIX contains slashes ('/') then
+ directories will be created. Default:
+ 'benchmark_20241028_160327'
+ --benchmark-time-unit=COLUMN
+ Unit to scale the results to. Available units: 'ns',
+ 'us', 'ms', 's'. Default: 'auto'.
--benchmark-storage=URI
Specify a path to store the runs as uri in form
- file\:\/\/path or elasticsearch+http[s]\:\/\/host1,host2/[in
- dex/doctype?project_name=Project] (when --benchmark-
- save or --benchmark-autosave are used). For backwards
+ file://path or elasticsearch+http[s]://host1,host2/[inde
+ x/doctype?project_name=Project] (when --benchmark-save
+ or --benchmark-autosave are used). For backwards
compatibility unexpected values are converted to
- file\:\/\/<value>. Default: 'file\:\/\/./.benchmarks'.
- --benchmark-netrc=BENCHMARK_NETRC
+ file://<value>. Default: 'file://./.benchmarks'.
+ --benchmark-netrc=[BENCHMARK_NETRC]
Load elasticsearch credentials from a netrc file.
Default: ''.
--benchmark-verbose Dump diagnostic and progress information.
- --benchmark-sort=COL Column to sort on. Can be one of: 'min', 'max',
- 'mean', 'stddev', 'name', 'fullname'. Default: 'min'
- --benchmark-group-by=LABELS
- Comma-separated list of categories by which to
- group tests. Can be one or more of: 'group', 'name',
- 'fullname', 'func', 'fullfunc', 'param' or
- 'param:NAME', where NAME is the name passed to
- @pytest.parametrize. Default: 'group'
+ --benchmark-quiet Disable reporting. Verbose mode takes precedence.
+ --benchmark-sort=COL Column to sort on. Can be one of: 'min', 'max', 'mean',
+ 'stddev', 'name', 'fullname'. Default: 'min'
+ --benchmark-group-by=LABEL
+ How to group tests. Can be one of: 'group', 'name',
+ 'fullname', 'func', 'fullfunc', 'param' or 'param:NAME',
+ where NAME is the name passed to @pytest.parametrize.
+ Default: 'group'
--benchmark-columns=LABELS
Comma-separated list of columns to show in the result
table. Default: 'min, max, mean, stddev, median, iqr,
outliers, ops, rounds, iterations'
--benchmark-name=FORMAT
How to format names in results. Can be one of 'short',
'normal', 'long', or 'trial'. Default: 'normal'
- --benchmark-histogram=FILENAME-PREFIX
+ --benchmark-histogram=[FILENAME-PREFIX]
Plot graphs of min/max/avg/stddev over time in
FILENAME-PREFIX-test_name.svg. If FILENAME-PREFIX
contains slashes ('/') then directories will be
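To make the EXPR format accepted by --benchmark-compare-fail above concrete, here is an illustrative check in the same spirit (a sketch, not pytest-benchmark's actual implementation): a trailing '%' reads as a relative regression bound against the saved run, and a bare number as an absolute bound in seconds::

    def regressed(expr: str, old: float, new: float) -> bool:
        """Sketch of an EXPR like 'min:5%' or 'mean:0.001' (field:threshold)."""
        _field, _, bound = expr.partition(":")  # _field selects the stat column
        if bound.endswith("%"):
            return new - old > old * float(bound[:-1]) / 100.0
        return new - old > float(bound)

    assert regressed("min:5%", old=1.00, new=1.10)        # 10% slower: fail
    assert not regressed("min:5%", old=1.00, new=1.02)    # within the 5% budget
    assert regressed("mean:0.001", old=0.010, new=0.012)  # 2 ms over a 1 ms bound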