---
title: NumPy Benchmarks
sidebar: false
---
<img src="/images/content_images/performance_benchmarking.png" alt="Visualization" title="Performance Benchmark; Number of Iterations: 50">
## Overview
This web page aims to benchmark NumPy's performance on the widely accepted N-body problem <a href="#nbody">[2]</a>. This work also compares NumPy with Python and C++, and with compilers such as Numba and Pythran.
The objective of benchmarking NumPy is to measure the library's efficiency in quasi-real-life situations, and the N-body problem suits this purpose well. Benchmarking is performed over several iterations on different datasets to ensure the accuracy of the results.
<!-- 2. About N-body Problem: Brief description on N-body problem and why it was chosen. -->
<!-- 3. Dataset Description -->
<!-- 4. Implemented Accelerators -->
<!-- 5. Source Code -->
<!-- 6. Results -->
<!-- 7. Conclusion -->
<!-- 8. References -->
## About N-Body Problem
A brief description of computations involved in solving the N-body problem is given below, along with the pseudo-code in the next section:
Consider $n$ bodies of masses $m_1, m_2, m_3, ..., m_n$, moving under the mutual [gravitational force](https://en.wikipedia.org/wiki/Gravity) of attraction between them in an [inertial frame of reference](https://en.wikipedia.org/wiki/Inertial_frame_of_reference) in three dimensions, such that consecutive positions and velocities of the $i$th body are denoted by ($s_{k-1}$, $s_k$) and ($v_{k-1}$, $v_k$) respectively. According to [Newton's law of gravity](https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation), the gravitational force felt on the $i$th body of mass $m_i$ due to a single body of mass $m_j$ is denoted as $F_{ij}$, and the acceleration of the $i$th body is represented as $a_i$. Let $r_i$ and $r_j$ be the position vectors of the two bodies, such that:
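In this notation, Newton's law of universal gravitation gives the force on the $i$th body due to the $j$th body, and Newton's second law gives the resulting acceleration; a standard form (with $G$ denoting the gravitational constant) is:

$$F_{ij} = \frac{G \, m_i m_j}{\lVert r_j - r_i \rVert^{3}} \left( r_j - r_i \right), \qquad a_i = \frac{1}{m_i} \sum_{j \neq i} F_{ij} = \sum_{j \neq i} \frac{G \, m_j}{\lVert r_j - r_i \rVert^{3}} \left( r_j - r_i \right)$$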
The final aim is to find the time taken to evaluate the total energy of each particle in the celestial space at a given time step. The equations involved in solving the problem are listed below:
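A standard expression for this total energy, in the notation above and again with $G$ as the gravitational constant, is the sum of the kinetic term and the pairwise potential term:

$$E = \sum_{i=1}^{n} \frac{1}{2} m_i \lVert v_i \rVert^{2} \; - \; \sum_{i=1}^{n} \sum_{j > i} \frac{G \, m_i m_j}{\lVert r_i - r_j \rVert}$$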
### Numba

Since Numba is a compiler focused on accelerating Python and NumPy code, its user API supports various decorators. It uses the industry-standard LLVM compiler library to translate Python functions into optimized machine code at runtime. It supports a variety of decorators such as `@jit`, `@vectorize`, `@guvectorize`, `@stencil`, `@jitclass`, `@cfunc`, and `@overload`; we use just-in-time compilation in this work. It also supports `nopython` mode to generate fully compiled results without the need for intermediate Python interpreter calls. Numba's support for NumPy arrays and functions also makes it a good candidate for comparison.
<!-- NumPy and Numba both use a similar type of compilation for ufuncs in manual looping resulting in the same speed. Another thing that Numba lacks behind is that it does not support all functions of NumPy. There are functions in NumPy which does not hold up some of the optional arguments in nopython mode. It can implement linear algebra calls in the compiled functions but does not return any faster implementation. -->
Implementation details for benchmarking (a minimal sketch follows this list):

* The `jit` decorator from Numba was used to compile the Python functions just-in-time.
* `cache=True` was set to avoid repeated compilation overhead.
* The implementation uses NumPy arrays and explicit loops.
* `jit`-decorated functions call other `jit`-decorated functions to increase performance.
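The sketch below is a hypothetical illustration of a Numba setup along the lines of the list above: a `jit`-decorated kernel with `cache=True` in `nopython` mode, NumPy arrays with explicit loops, and one `jit` function calling another. The function names, the simple position/velocity update, and the use of units with $G = 1$ are assumptions for illustration, not the benchmark's actual source code.

```python
import numpy as np
from numba import jit


@jit(nopython=True, cache=True)
def accelerations(masses, positions):
    """Pairwise gravitational accelerations (units with G = 1), using explicit loops."""
    n = positions.shape[0]
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = positions[j] - positions[i]
                dist = np.sqrt(np.sum(diff * diff))
                acc[i, :] += (masses[j] / dist**3) * diff
    return acc


@jit(nopython=True, cache=True)
def step(masses, positions, velocities, dt):
    """One integration step; a jit-compiled function calling another jit-compiled function."""
    acc = accelerations(masses, positions)
    velocities += acc * dt
    positions += velocities * dt
    return positions, velocities
```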
### Pythran
> Pythran is an ahead of time compiler for a subset of the Python language, with a focus on scientific computing.
<!-- NumPy arrays in Cython should be stored in contiguous memory like C-style or Fortran to use Pythran in the backend. Here, the Pythran lacks behind. Another limitation is that the sequence of bytes of words must be the same as the targeted architecture to make Pythran work.-->
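As an illustration of plain Pythran usage (as opposed to the Transonic route described in the next section), a kernel is annotated with a `# pythran export` comment and compiled ahead of time with the `pythran` command. The module name and the function below are hypothetical, not part of the benchmark:

```python
# nbody_pythran.py: hypothetical module; compile ahead of time with `pythran nbody_pythran.py`
import numpy as np

# pythran export kinetic_energy(float64[], float64[:,:])
def kinetic_energy(masses, velocities):
    """Total kinetic energy, 0.5 * sum_i m_i * |v_i|^2."""
    return 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
```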
## Source Code
* The code is inspired by <a href="https://github.com/paugier/nbabel">Pierre Augier's work on N-Body Problem</a>.
* The JIT-compiled, non-vectorized implementations use NumPy arrays with Pythran at the backend via Transonic (a sketch of this follows the list).
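The snippet below is a hedged sketch of what driving a kernel through Transonic's JIT might look like, consistent with the bullets above; the energy function, its loop structure, and units with $G = 1$ are assumptions. With Transonic, the backend (e.g. `pythran` or `numba`) is typically chosen through the `TRANSONIC_BACKEND` environment variable.

```python
import numpy as np
from transonic import jit


@jit
def total_energy(masses, positions, velocities):
    """Kinetic plus pairwise potential energy (units with G = 1)."""
    kinetic = 0.5 * np.sum(masses * np.sum(velocities**2, axis=1))
    potential = 0.0
    n = positions.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            diff = positions[i] - positions[j]
            potential -= masses[i] * masses[j] / np.sqrt(np.sum(diff**2))
    return kinetic + potential
```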
## Results
Table values represent the normalized time taken in seconds by each algorithm to run on the given datasets for $50$ iterations. The raw timing data can be downloaded from <a href="benchmarks/data/table.csv">here</a>.
<table>
  <tr>
    <td><b>Algorithm / Dataset size</b></td>
    <td><b>16</b></td>
    <td><b>32</b></td>
    <td><b>64</b></td>
    <td><b>128</b></td>
    <td><b>256</b></td>
  </tr>
  <tr>
    <td><b>NumPy</b></td>
    <td>12.61</td>
    <td>13.88</td>
    <td>15.59</td>
    <td>17.90</td>
    <td>18.27</td>
  </tr>
  <tr>
    <td><b>Python</b></td>
    <td>12.85</td>
    <td>26.82</td>
    <td>50.13</td>
    <td>105.01</td>
    <td></td>
  </tr>
  <tr>
    <td><b>C++</b></td>
    <td>1.646</td>
    <td>3.206</td>
    <td>5.725</td>
    <td>11.44</td>
    <td>19.43</td>
  </tr>
  <tr>
    <td><b>Numba</b></td>
    <td>1.567</td>
    <td>3.223</td>
    <td>6.521</td>
    <td>13.64</td>
    <td>26.64</td>
  </tr>
  <tr>
    <td><b>Pythran</b></td>
    <td>0.3177</td>
    <td>0.6591</td>
    <td>1.2811</td>
    <td>2.5082</td>
    <td>5.2042</td>
  </tr>
</table>
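For context, per-algorithm timings like those in the table above could be collected with a small harness along these lines. This is a hedged sketch: the time step, the random dataset, and the absence of any normalization step are assumptions, not the page's actual methodology.

```python
import time

import numpy as np


def time_algorithm(step_fn, masses, positions, velocities, n_iter=50, dt=1e-3):
    """Run n_iter simulation steps with step_fn and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    for _ in range(n_iter):
        positions, velocities = step_fn(masses, positions, velocities, dt)
    return time.perf_counter() - start


# Hypothetical usage with a random dataset of 128 particles:
rng = np.random.default_rng(0)
masses = rng.random(128)
positions = rng.random((128, 3))
velocities = rng.random((128, 3))
# elapsed = time_algorithm(step, masses, positions, velocities)  # e.g. the Numba `step` sketched earlier
```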
## Environment Configuration

* **CPU Model:** Intel(R) Core(TM) i7-10870H CPU @ 2.20GHz
* **RAM:** 16 GB
* **RAM Type:** DDR4
* **RAM Speed:** 3200 MT/s
* **Operating System:** Manjaro Linux 21.1.1 (Pahvo)
* **Library Versions:**
  * Python: 3.9.6
  * NumPy: 1.20.3
  * Numba: 0.54.0
  * Pythran: 0.9.12.post1
  * Transonic: 0.4.10
  * GCC: 11.1.0
## Conclusion
* NumPy is very efficient, especially for larger datasets, as the vectorized sketch after this list illustrates. NumPy performs $3.2$ times faster than Python for input size $64$ and $5.8$ times faster for a dataset of size $128$; no Python timing is reported for input size $256$. NumPy's advantage grows drastically as the number of particles in the dataset increases, thanks to its vectorized approach. Vectorization makes the code clean and concise to read, and it delivers better performance without any explicit looping or indexing. NumPy's concept of vectorization is handy for beginners to learn, and it also helps experienced developers debug with fewer lines of code.
* NumPy uses pre-compiled C code, which adds to its performance. We can observe from the table that NumPy approaches the speed of C++: for a dataset of size $64$, NumPy is $2.72$ times slower than C++; for a dataset of size $128$, its running time is $1.56$ times that of C++; and for input size $256$, NumPy outperforms C++ by a factor of $1.06$.
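To make the vectorization point concrete, here is a hedged sketch of a broadcast-based pairwise-acceleration kernel of the kind the first bullet alludes to; the names, shapes, and units with $G = 1$ are assumptions, not the benchmark's actual code.

```python
import numpy as np


def accelerations_vectorized(masses, positions):
    """Pairwise gravitational accelerations without explicit Python loops.

    masses: shape (n,); positions: shape (n, 3); returns shape (n, 3).
    """
    diff = positions[np.newaxis, :, :] - positions[:, np.newaxis, :]  # (n, n, 3): r_j - r_i
    dist2 = np.sum(diff**2, axis=-1)                                  # (n, n) squared distances
    np.fill_diagonal(dist2, 1.0)      # placeholder to avoid division by zero on the diagonal
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)     # a body exerts no force on itself
    # a_i = sum_j m_j * (r_j - r_i) / |r_j - r_i|^3
    return np.einsum("ij,ijk->ik", masses[np.newaxis, :] * inv_d3, diff)
```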
**How can we accelerate NumPy?**
NumPy aims to keep improving and to give better performance to end users, and it performs well in most cases. To fill the gaps where NumPy is slower, compiled methods such as Numba and Pythran play a huge role. In this implementation, we used Transonic's JIT compilation at the backend for NumPy arrays to implement the Numba and Pythran versions. To be specific, we want to compare NumPy's vectorized approach with the JIT-compiled, non-vectorized approach.
* We observed that Numba performs $2.39$ times faster than NumPy for input size $64$ and $1.31$ times faster for input size $128$. However, NumPy outperforms Numba by a factor of $1.45$ for input size $256$.
* Pythran performs $12.17$ times faster than NumPy for input size $64$, $7.13$ times faster for input size $128$, and $3.51$ times faster for input size $256$.
We have compared the performance of NumPy with two of the most popular languages, Python and C++, and with popular compiled methods such as Numba and Pythran. NumPy achieves strong performance for scientific computations as well as for quasi-real-life problems such as the N-body problem, and it holds up well across all of these circumstances.