diff --git a/content/en/about.md b/content/en/about.md index 30ba46b2ac..b75feeeb75 100644 --- a/content/en/about.md +++ b/content/en/about.md @@ -5,7 +5,7 @@ sidebar: false _Some information about the NumPy project and community_ -NumPy is an open source project aiming to enable numerical computing with Python. It was created in 2005, building on early work of the Numeric and Numarray libraries. NumPy will always be 100% open-source software, free for all to use and released under the liberal terms of the [modified BSD license](https://github.com/numpy/numpy/blob/master/LICENSE.txt). +NumPy is an open source project aiming to enable numerical computing with Python. It was created in 2005, building on the early work of the Numeric and Numarray libraries. NumPy will always be 100% open source software, free for all to use and released under the liberal terms of the [modified BSD license](https://github.com/numpy/numpy/blob/master/LICENSE.txt). NumPy is developed in the open on GitHub, through the consensus of the NumPy and wider scientific Python community. For more information on our governance approach, please see our [Governance Document](https://www.numpy.org/devdocs/dev/governance/index.html). @@ -60,11 +60,11 @@ Institutional Partners are organizations that support the project by employing p ## Donate -If you have found NumPy to be useful in your work, research or company, please consider making a donation to the project commensurate with your resources. Any amount helps! All donations will be used strictly to fund the development of NumPy’s open source software, documentation and community. +If you have found NumPy to be useful in your work, research, or company, please consider making a donation to the project commensurate with your resources. Any amount helps! All donations will be used strictly to fund the development of NumPy’s open source software, documentation, and community. NumPy is a Sponsored Project of NumFOCUS, a 501(c)(3) nonprofit charity in the United States. NumFOCUS provides NumPy with fiscal, legal, and administrative support to help ensure the health and sustainability of the project. Visit [numfocus.org](https://numfocus.org) for more information. -Donations to NumPy are managed by [NumFOCUS](https://numfocus.org). For donors in the United States, your gift is tax-deductible to the extent provided by law. As with any donation, you should consult with your tax adviser about your particular tax situation. +Donations to NumPy are managed by [NumFOCUS](https://numfocus.org). For donors in the United States, your gift is tax-deductible to the extent provided by law. As with any donation, you should consult with your tax advisor about your particular tax situation. NumPy's Steering Council will make the decisions on how to best use any funds received. Technical and infrastructure priorities are documented on the [NumPy Roadmap](https://www.numpy.org/neps/index.html#roadmap).
{{< numfocus >}} diff --git a/content/en/arraycomputing.md b/content/en/arraycomputing.md index 211acb0665..e92db609d8 100644 --- a/content/en/arraycomputing.md +++ b/content/en/arraycomputing.md @@ -6,11 +6,11 @@ sidebar: false *Array computing is the foundation of statistical, mathematical, scientific computing in various contemporary data science and analytics applications such as data visualization, digital signal processing, image processing, bioinformatics, -machine learning, AI and several others.* +machine learning, AI, and several others.* Large scale data manipulation and transformation depends on efficient, high-performance array computing. The language of choice for data analytics, -machine learning and productive numerical computing is **Python.** +machine learning, and productive numerical computing is **Python.** **Num**erical **Py**thon or NumPy is its de-facto standard Python programming language library that supports large, multi-dimensional arrays and matrices, @@ -30,10 +30,10 @@ pack newer algorithms and features geared towards machine learning and artificia **Array computing** is based on **arrays** data structures. *Arrays* are used to organize vast amounts of data such that a related set of values can be easily -sorted, searched, mathematically manipulated and transformed easily and quickly. +sorted, searched, mathematically manipulated, and transformed easily and quickly. Array computing is *unique* as it involves operating on the data array *at once*. What this means is that any array operation applies to an entire set of -values in one shot. This vectorized approach provides speed and simplicity by +values in one shot. This vectorized approach provides speed and simplicity by enabling programmers to code and operate on aggregates of data, without having to use loops of individual scalar operations. diff --git a/content/en/case-studies/blackhole-image.md b/content/en/case-studies/blackhole-image.md index d74611c726..6933ff68f4 100644 --- a/content/en/case-studies/blackhole-image.md +++ b/content/en/case-studies/blackhole-image.md @@ -29,7 +29,7 @@ from a sidewalk café in Paris! * **A New View of the Universe:** The EHT is an exciting new tool for studying the most extreme objects in the universe. The EHT's groundbreaking image was published 100 years - after [Sir Arthur Eddington's expidition][eddington] yielded the first + after [Sir Arthur Eddington's experiment][eddington] yielded the first observational evidence in support of Einstein's theory of general relativity. * **Investigating Black Holes:** @@ -44,7 +44,7 @@ from a sidewalk café in Paris! * **Comparing Observations to Theory:** Based on Einstein’s general theory of relativity, scientists expected - see a dark region similar to a shadow, caused by the gravitational bending + to see a dark region similar to a shadow, caused by the gravitational bending and capture of light by the event horizon. By studying this shadow scientists could measure the enormous mass of M87’s central supermassive black hole. @@ -94,7 +94,7 @@ techniques to verify that the resulting images were consistent. Results from these independent teams of researchers were combined to yield the first-of-a-kind image of the black hole. 
This approach is a powerful example of the importance of reproducibility and -collaboration to modern scientific discovery, and illustrates the role that +collaboration to modern scientific discovery and illustrates the role that the scientific Python ecosystem plays in supporting scientific advancement through collaborative data analysis. @@ -125,7 +125,7 @@ of the final image of the black hole. NumPy enabled researchers to manipulate large numerical datasets through its efficient and generic n-dimensional array, providing a foundation for the -software used to generated the first ever image of +software used to generate the first ever image of a black hole. The direct imaging of a black hole is a major scientific accomplishment providing stunning, visual evidence of Einstein’s general theory of relativity. This achievement encompasses not only diff --git a/content/en/case-studies/cricket-analytics.md b/content/en/case-studies/cricket-analytics.md index 311d40be14..81a0e8d80c 100644 --- a/content/en/case-studies/cricket-analytics.md +++ b/content/en/case-studies/cricket-analytics.md @@ -34,7 +34,7 @@ Cricket is a game of numbers - the runs scored by a batsman, the wickets taken by a bowler, the matches won by a cricket team, the number of times a batsman responds in a certain way to a kind of bowling attack, etc. The capability to dig into cricketing numbers for both improving performance and studying -the business opportunities, overall market and economics of cricket via powerful +the business opportunities, overall market, and economics of cricket via powerful analytics tools, powered by numerical computing software such as NumPy, is a big deal. Cricket analytics provides interesting insights into the game and predictive intelligence regarding game outcomes. @@ -72,7 +72,7 @@ metrics for improving match winning chances: for changing tactics by the team and by associated businesses for economic benefits and growth. * Besides historical analysis, predictive models are - harnessed to determine the possible match outcomes that require significant + harnessed to determine the possible match outcomes that require significant number crunching and data science know-how, visualization tools and capability to include newer observations in the analysis. @@ -90,28 +90,28 @@ metrics for improving match winning chances: IPL has expanded cricket beyond the classic test match format to a much larger scale. The number of matches played every season across various formats has increased and so has the data, the algorithms, newer sports data - analysis technologies and simulation models. Cricket data analysis requires - field mapping, player tracking, ball tracking, player shot analysis and + analysis technologies and simulation models. Cricket data analysis requires + field mapping, player tracking, ball tracking, player shot analysis, and several other aspects involved in how the ball is delivered, its angle, spin, - velocity and trajectory. All these factors together have increased the + velocity, and trajectory. All these factors together have increased the complexity of data cleaning and preprocessing. * **Dynamic Modeling** In cricket, just like any other sport, there can be a large number of variables related to tracking various numbers - of players on the field, their attributes, the ball and several possibilities - of potential actions. The complexity of data analytics and modeling is + of players on the field, their attributes, the ball, and several possibilities + of potential actions. 
The complexity of data analytics and modeling is directly proportional to the kind of predictive questions that are put forth during analysis and are highly dependent on data representation and the - model. Things get even more challenging in terms of computation, data + model. Things get even more challenging in terms of computation, data comparisons when dynamic cricket play predictions are sought such as what would have happened if the batsman had hit the ball at a different angle or velocity. * **Predictive Analytics Complexity** - Much of the decision making in Cricket is based on questions such as "how + Much of the decision making in cricket is based on questions such as "how often does a batsman play a certain kind of shot if the ball delivery is of a particular type", or "how does a bowler change his line and length if the batsman responds to his delivery in a certain way". @@ -135,20 +135,19 @@ for various kinds of cricket related sporting analytics such as: and [big data approaches](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4996805/) are used for tactical analysis. -* **Data Visualization:** Data graphing and [visualization](https://towardsdatascience.com/advanced-sports-visualization-with-pandas-matplotlib-and-seaborn-9c16df80a81b) - provides useful insights into relationship between various datasets. +* **Data Visualization:** Data graphing and [visualization](https://towardsdatascience.com/advanced-sports-visualization-with-pandas-matplotlib-and-seaborn-9c16df80a81b) provides useful insights into relationship between various datasets. ## Summary Sports Analytics is a game changer when it comes to how professional games are played, especially how strategic decision making happens, which until recently -was primarily done based on “gut feeling" or adherence to past traditions. NumPy +was primarily done based on “gut feeling" or adherence to past traditions. NumPy forms a solid foundation for a large set of Python packages which provide higher -level functions related to data analytics, machine learning and AI algorithms. +level functions related to data analytics, machine learning, and AI algorithms. These packages are widely deployed to gain real-time insights that help in decision making for game-changing outcomes, both on field as well as to draw -inferences and drive business around the game of cricket. Finding out the -hidden parameters, patterns and attributes that lead to the outcome of a +inferences and drive business around the game of cricket. Finding out the +hidden parameters, patterns, and attributes that lead to the outcome of a cricket match helps the stakeholders to take notice of game insights that are otherwise hidden in numbers and statistics. diff --git a/content/en/case-studies/gw-discov.md b/content/en/case-studies/gw-discov.md index e19ae347d7..4d30385ac6 100644 --- a/content/en/case-studies/gw-discov.md +++ b/content/en/case-studies/gw-discov.md @@ -22,8 +22,8 @@ The [Laser Interferometer Gravitational-Wave Observatory (LIGO)](https://www.lig was designed to open the field of gravitational-wave astrophysics through the direct detection of gravitational waves predicted by Einstein’s General Theory of Relativity. It comprises two widely-separated interferometers within the -United States—one in Hanford, Washington and the other in Livingston, -Louisiana—operated in unison to detect gravitational waves. Each of them has +United States — one in Hanford, Washington and the other in Livingston, +Louisiana — operated in unison to detect gravitational waves. 
Each of them has multi-kilometer-scale gravitational wave detectors that use laser interferometry. The LIGO Scientific Collaboration (LSC), is a group of more than 1000 scientists from universities around the United States and in 14 @@ -56,7 +56,7 @@ made from warped spacetime. * **Computation** Gravitational Waves are hard to detect as they produce a very small effect - and have tiny interaction with matter. Processing and analyzing all of + and have tiny interaction with matter. Processing and analyzing all of LIGO's data requires a vast computing infrastructure.After taking care of noise, which is billions of times of the signal, there is still very complex relativity equations and huge amounts of data which present a @@ -89,7 +89,7 @@ made from warped spacetime. {{< figure src="/images/content_images/cs/gw_strain_amplitude.png" class="fig-center" alt="gravitational waves strain amplitude" caption="**Estimated gravitational-wave strain amplitude from GW150914**" attr="(**Graph Credits:** Observation of Gravitational Waves from a Binary Black Hole Merger, ResearchGate Publication)" attrlink="https://www.researchgate.net/publication/293886905_Observation_of_Gravitational_Waves_from_a_Binary_Black_Hole_Merger" >}} -## NumPy’s Role in the detection of Gravitational Waves +## NumPy’s Role in the Detection of Gravitational Waves Gravitational waves emitted from the merger cannot be computed using any technique except brute force numerical relativity using supercomputers. @@ -103,7 +103,7 @@ speed. Here are some examples: * [Signal Processing](https://www.uv.es/virgogroup/Denoising_ROF.html): Glitch detection, [Noise identification and Data Characterization](https://ep2016.europython.eu/media/conference/slides/pyhton-in-gravitational-waves-research-communities.pdf) - (NumPy, scikit-learn, scipy, matplotlib, pandas, pyCharm ) + (NumPy, scikit-learn, scipy, matplotlib, pandas, pyCharm) * Data retrieval: Deciding which data can be analyzed, figuring out whether it contains a signal - needle in a haystack * Statistical analysis: estimate the statistical significance of observational @@ -116,7 +116,7 @@ speed. Here are some examples: * Key [Software](https://github.com/lscsoft) developed in GW data analysis such as [GwPy](https://gwpy.github.io/docs/stable/overview.html) and [PyCBC](https://pycbc.org) uses NumPy and AstroPy under the hood for - providing object based interfaces to utilities, tools and methods for + providing object based interfaces to utilities, tools, and methods for studying data from gravitational-wave detectors. {{< figure src="/images/content_images/cs/gwpy-numpy-dep-graph.png" class="fig-center" alt="gwpy-numpy depgraph" caption="**Dependency graph showing how GwPy package depends on NumPy**" >}} @@ -134,7 +134,7 @@ that helps scientists gain insights into data gathered from the scientific observations and understand the results. The computations are complex and cannot be comprehended by humans unless it is visualized using computer simulations that are fed with the real observed data and analysis. NumPy -along with other Python packages such as matplotlib, pandas and scikit-learn +along with other Python packages such as matplotlib, pandas, and scikit-learn is [enabling researchers](https://www.gw-openscience.org/events/GW150914/) to answer complex questions and discover new horizons in our understanding of the universe. 
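To make the array-based signal processing described above more concrete, here is a minimal illustrative sketch that uses only NumPy on synthetic data. It is not code from GwPy, PyCBC, or any LIGO analysis pipeline; the sample rate, toy chirp, and noise level are assumptions chosen purely for demonstration.

```python
import numpy as np

# Minimal illustrative sketch (synthetic data, assumed parameters): a weak
# "chirp" signal buried in Gaussian noise, examined with NumPy's FFT and
# correlation routines. Real gravitational-wave searches use calibrated,
# far more sophisticated pipelines (e.g. GwPy, PyCBC).

rng = np.random.default_rng(0)
fs = 4096                         # assumed sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)   # one second of samples

# Toy template: a sinusoid whose frequency sweeps upward over time.
template = np.sin(2 * np.pi * (50.0 * t + 100.0 * t ** 2))

# Synthetic "strain": the weak template plus much louder noise.
strain = 0.2 * template + rng.normal(0.0, 1.0, t.size)

# Frequency content of the windowed data via the real-valued FFT.
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
spectrum = np.abs(np.fft.rfft(strain * np.hanning(t.size)))
print("peak frequency bin:", freqs[spectrum.argmax()], "Hz")

# Crude matched-filter-like statistic: correlate the data with the template.
corr = np.correlate(strain - strain.mean(), template, mode="same")
print("strongest correlation at sample", int(corr.argmax()))
```

Even in this toy form, every step operates on whole arrays at once, which is the property that lets the real analysis pipelines scale to the vast volumes of detector data described above.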
diff --git a/content/en/citing-numpy.md b/content/en/citing-numpy.md index 16af218fea..9d103b1746 100644 --- a/content/en/citing-numpy.md +++ b/content/en/citing-numpy.md @@ -6,7 +6,7 @@ sidebar: false If NumPy has been significant in your research, and you would like to acknowledge the project in your academic publication, we suggest citing the following papers: * Travis E, Oliphant. _A guide to NumPy_, USA: Trelgol Publishing, (2006). -* Stéfan van der Walt, S. Chris Colbert and Gaël Varoquaux. _The NumPy Array: A Structure for Efficient Numerical Computation_, Computing in Science & Engineering, 13, 22-30 (2011),[ DOI:10.1109/MCSE.2011.37](http://dx.doi.org/10.1109/MCSE.2011.37) ([publisher link](http://scitation.aip.org/content/aip/journal/cise/13/2/10.1109/MCSE.2011.37)) +* Stéfan van der Walt, S. Chris Colbert, and Gaël Varoquaux. _The NumPy Array: A Structure for Efficient Numerical Computation_, Computing in Science & Engineering, 13, 22-30 (2011),[ DOI:10.1109/MCSE.2011.37](http://dx.doi.org/10.1109/MCSE.2011.37) ([publisher link](http://scitation.aip.org/content/aip/journal/cise/13/2/10.1109/MCSE.2011.37)) _In BibTeX format:_ diff --git a/content/en/code-of-conduct.md b/content/en/code-of-conduct.md index a29e60c93c..26c01784bd 100644 --- a/content/en/code-of-conduct.md +++ b/content/en/code-of-conduct.md @@ -19,7 +19,7 @@ We strive to: 2. Be empathetic, welcoming, friendly, and patient. We work together to resolve conflict, and assume good intentions. We may all experience some frustration from time to time, but we do not allow frustration to turn into a personal attack. A community where people feel uncomfortable or threatened is not a productive one. 3. Be collaborative. Our work will be used by other people, and in turn we will depend on the work of others. When we make something for the benefit of the project, we are willing to explain to others how it works, so that they can build on the work to make it even better. Any decision we make will affect users and colleagues, and we take those consequences seriously when making decisions. 4. Be inquisitive. Nobody knows everything! Asking questions early avoids many problems later, so we encourage questions, although we may direct them to the appropriate forum. We will try hard to be responsive and helpful. -5. Be careful in the words that we choose. We are careful and respectful in our communication and we take responsibility for our own speech. Be kind to others. Do not insult or put down other participants. We will not accept harassment or other exclusionary behaviour, such as: +5. Be careful in the words that we choose. We are careful and respectful in our communication, and we take responsibility for our own speech. Be kind to others. Do not insult or put down other participants. We will not accept harassment or other exclusionary behaviour, such as: * Violent threats or language directed against another person. * Sexist, racist, or otherwise discriminatory jokes and language. * Posting sexually explicit or violent material. 
diff --git a/content/en/installing-python-and-numpy-guide.md b/content/en/installing-python-and-numpy-guide.md index 50482cf2de..6f4e7e0081 100644 --- a/content/en/installing-python-and-numpy-guide.md +++ b/content/en/installing-python-and-numpy-guide.md @@ -3,15 +3,15 @@ title: Guide to installing Python and NumPy sidebar: false --- -Installing and managing packages in Python land is complicated, there are a +Installing and managing packages in Python is complicated, there are a number of alternative solutions for most tasks. This guide tries to give the reader a sense of the best (or most popular) solutions, and give clear -recommendations. It focuses on users of Python, NumPy and the PyData (or +recommendations. It focuses on users of Python, NumPy, and the PyData (or numerical computing) stack on common operating systems and hardware. ## Recommendations -We'll start with recommendations based on the experience level of the user and +We'll start with recommendations based on the user's experience level and operating system of interest. If you're in between "beginning" and "advanced", please go with "beginning" if you want to keep things simple, and with "advanced" if you want to work according to best practices that go a longer way @@ -19,9 +19,9 @@ in the future. ### Beginning users -On all of Windows, macOS and Linux: +On all of Windows, macOS, and Linux: -- Install [Anaconda](https://www.anaconda.com/distribution/) (it install all +- Install [Anaconda](https://www.anaconda.com/distribution/) (it installs all packages you need and all other tools mentioned below). - For writing and executing code, use notebooks in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/index.html) for @@ -29,14 +29,14 @@ On all of Windows, macOS and Linux: [Spyder](https://www.spyder-ide.org/) or [Visual Studio Code](https://code.visualstudio.com/) for writing scripts and packages. - Use [Anaconda Navigator](https://docs.anaconda.com/anaconda/navigator/) to - manage your packages and start JupyterLab, Spyder or Visual Studio Code. + manage your packages and start JupyterLab, Spyder, or Visual Studio Code. ### Advanced users #### Windows or macOS -- Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html) +- Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html). - Keep the `base` conda environment minimal, and use one or more [conda environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#) to install the package you need for the task or project you're working on. @@ -48,20 +48,20 @@ On all of Windows, macOS and Linux: If you're fine with slightly outdated packages and prefer stability over being able to use the latest versions of libraries: -- use your OS package manager for as much as possible (Python itself, NumPy and - other libraries) -- install packages not provided by your package manager with `pip install somepackage --user` +- Use your OS package manager for as much as possible (Python itself, NumPy, and + other libraries). +- Install packages not provided by your package manager with `pip install somepackage --user`. If you use a GPU: -- Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html) +- Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html). - Keep the `base` conda environment minimal, and use one or more [conda environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#) to install the package you need for the task or project you're working on. 
- Use the `defaults` conda channel (`conda-forge` doesn't have good support for - GPU packages yet) + GPU packages yet). Otherwise: -- Install [Miniforge](https://github.com/conda-forge/miniforge) +- Install [Miniforge](https://github.com/conda-forge/miniforge). - Keep the `base` conda environment minimal, and use one or more [conda environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#) to install the package you need for the task or project you're working on. @@ -70,56 +70,56 @@ Otherwise: #### Alternative if you prefer pip/PyPI For users who know, from personal preference or reading about the main -differences between Conda and Pip below, they prefer a pip/PyPI-based solution, +differences between conda and pip below, they prefer a pip/PyPI-based solution, we recommend: - Install Python from, for example, [python.org](https://www.python.org/downloads/), - [Homebrew](https://brew.sh/) or your Linux package manager + [Homebrew](https://brew.sh/), or your Linux package manager. - Use [Poetry](https://python-poetry.org/) as the most well-maintained tool that provides a dependency resolver and environment management capabilities - in a similar fashion as Conda does. + in a similar fashion as conda does. ## Python package management -Managing packages is a challenging problem, and as a result there are lots of +Managing packages is a challenging problem, and, as a result, there are lots of tools. For web and general purpose Python development there's a whole [host of tools](https://packaging.python.org/guides/tool-recommendations/) -complementary with Pip. For high-performance computing (HPC), +complementary with pip. For high-performance computing (HPC), [Spack](https://github.com/spack/spack) is worth considering. For most NumPy -users though, [Conda](https://conda.io/en/latest/) and -[Pip](https://pip.pypa.io/en/stable/) are the two most popular tools. +users though, [conda](https://conda.io/en/latest/) and +[pip](https://pip.pypa.io/en/stable/) are the two most popular tools. -### Pip & Conda +### Pip & conda The two main tools that install Python packages are `pip` and `conda`. Their -functionality partially overlaps (e.g. both can install `numpy`), however they -can also work together. We'll discuss the major differences between Pip and -Conda here - this is important to understand if you want to manage packages +functionality partially overlaps (e.g. both can install `numpy`), however, they +can also work together. We'll discuss the major differences between pip and +conda here - this is important to understand if you want to manage packages effectively. -The first difference is that Conda is cross-language and it can install Python, -while Pip is installed for a particular Python on your system and install other -packages to that same Python install only. This also means Conda can install -non-Python libraries and tools you may need (e.g. compilers, CUDA, HDF5) while -Pip can't. +The first difference is that conda is cross-language and it can install Python, +while pip is installed for a particular Python on your system and installs other +packages to that same Python install only. This also means conda can install +non-Python libraries and tools you may need (e.g. compilers, CUDA, HDF5), while +pip can't. -The second difference is that Pip install from the Python Packaging Index -(PyPI) while Conda installs from its own channels (typically "defaults" or -"conda-forge"). 
PyPI is the largest collection of packages by far, however all -popular packages are available for Conda as well. +The second difference is that pip installs from the Python Packaging Index +(PyPI), while conda installs from its own channels (typically "defaults" or +"conda-forge"). PyPI is the largest collection of packages by far, however, all +popular packages are available for conda as well. -The third difference is that Pip does not have a _dependency resolver_ (this is -expecting to change next year though) while Conda does. For simple cases (e.g. -you just want NumPy, SciPy, Matplotlib, Pandas, Scikit-learn and a few other -packages) that doesn't matter, however for complicated cases Conda can be -expected to do a better job keeping everything working well together. The flip -side of that coin is that installing with Pip is typically a _lot_ faster than -installing with Conda. +The third difference is that pip does not have a _dependency resolver_ (this is +expected to change in the near future), while conda does. For simple cases (e.g. +you just want NumPy, SciPy, Matplotlib, Pandas, Scikit-learn, and a few other +packages) that doesn't matter, however, for complicated cases conda can be +expected to do a better job keeping everything working well together. The flip +side of that coin is that installing with pip is typically a _lot_ faster than +installing with conda. -The fourth difference is that Conda is an integrated solution for managing -packages, dependencies and environments, while with Pip you may need another -tool (there's many!) for dealing with environments or complex dependencies. +The fourth difference is that conda is an integrated solution for managing +packages, dependencies and environments, while with pip you may need another +tool (there are many!) for dealing with environments or complex dependencies. ### Reproducible installs @@ -158,25 +158,25 @@ importing it in notebooks). ## NumPy packages & accelerated linear algebra libraries -NumPy doesn't depend on any other Python packages, however it does depend on an +NumPy doesn't depend on any other Python packages, however, it does depend on an accelerated linear algebra library - typically [Intel MKL](https://software.intel.com/en-us/mkl) or -[OpenBLAS](https://www.openblas.net/). The user doesn't have to worry about +[OpenBLAS](https://www.openblas.net/). Users don't have to worry about installing those, but it may still be important to understand how the packaging -is done and how that affects performance and behavior the user sees. +is done and how it affects performance and behavior users see. -The NumPy wheels on PyPI, which is what Pip installs, are built with OpenBLAS. +The NumPy wheels on PyPI, which is what pip installs, are built with OpenBLAS. The OpenBLAS libraries are shipped within the wheels itself. This makes those -wheels larger, and if the user installs (for example) SciPy as well, she will +wheels larger, and if a user installs (for example) SciPy as well, they will now have two copies of OpenBLAS on disk. -In the Conda defaults channel, NumPy is built against Intel MKL. MKL is a -separate package that will be installed in the user's environment when she -installs NumPy. That MKL package is a lot larger than OpenBLAS, several hundred +In the conda defaults channel, NumPy is built against Intel MKL. MKL is a +separate package that will be installed in the users' environment when they +install NumPy. That MKL package is a lot larger than OpenBLAS, several hundred MB. 
MKL is typically a little faster and more robust than OpenBLAS. In the conda-forge channel, NumPy is built against a dummy "BLAS" package. When -the user install NumPy from conda-forge, that BLAS package then gets installed +a user installs NumPy from conda-forge, that BLAS package then gets installed together with the actual library - this defaults to OpenBLAS, but it can also be MKL (from the defaults channel), or even [BLIS](https://github.com/flame/blis) or reference BLAS. @@ -184,7 +184,7 @@ be MKL (from the defaults channel), or even Besides install sizes, performance and robustness, there are two more things to consider: - Intel MKL is not open source. For normal use this is not a problem, but if - the user needs to redistribute an application built with NumPy, this could be + a user needs to redistribute an application built with NumPy, this could be an issue. - Both MKL and OpenBLAS will use multi-threading for function calls like `np.dot`, with the number of threads being determined by both a build-time diff --git a/content/en/learn.md b/content/en/learn.md index a8f9ef2021..517253e690 100644 --- a/content/en/learn.md +++ b/content/en/learn.md @@ -28,7 +28,7 @@ There's a ton of information about NumPy out there. If you are new, we'd strongl * [Guide to NumPy *by Travis E. Oliphant*](http://web.mit.edu/dvp/Public/numpybook.pdf) This is a free version 1 from 2006. For the latest copy (2015) see [here](https://www.barnesandnoble.com/w/guide-to-numpy-travis-e-oliphant-phd/1122853007). * [From Python to NumPy *by Nicolas P. Rougier*](https://www.labri.fr/perso/nrougier/from-python-to-numpy/) -* [Elegant SciPy](https://www.amazon.com/Elegant-SciPy-Art-Scientific-Python/dp/1491922877) *by Juan Nunez-Iglesias, Stefan van der Walt and Harriet Dashnow* +* [Elegant SciPy](https://www.amazon.com/Elegant-SciPy-Art-Scientific-Python/dp/1491922877) *by Juan Nunez-Iglesias, Stefan van der Walt, and Harriet Dashnow* You may also want to check out the [Goodreads list](https://www.goodreads.com/shelf/show/python-scipy) on the subject of "Python+SciPy". Most books there are about the "SciPy ecosystem", which has NumPy at its core. 
@@ -55,11 +55,11 @@ Try these advanced resources for a better understanding of NumPy concepts like a * [Python Data Science Handbook](https://www.amazon.com/Python-Data-Science-Handbook-Essential/dp/1491912057) *by Jake Vanderplas* * [Python for Data Analysis](https://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1491957662) *by Wes McKinney* -* [Numerical Python: Scientific Computing and Data Science Applications with Numpy, SciPy and Matplotlib](https://www.amazon.com/Numerical-Python-Scientific-Applications-Matplotlib/dp/1484242459) *by Robert Johansson* +* [Numerical Python: Scientific Computing and Data Science Applications with Numpy, SciPy, and Matplotlib](https://www.amazon.com/Numerical-Python-Scientific-Applications-Matplotlib/dp/1484242459) *by Robert Johansson* **Videos** -* [Advanced NumPy - broadcasting rules, strides and advanced indexing](https://www.youtube.com/watch?v=cYugp9IN1-Q) *by Juan Nunez-Iglesias* +* [Advanced NumPy - broadcasting rules, strides, and advanced indexing](https://www.youtube.com/watch?v=cYugp9IN1-Q) *by Juan Nunez-Iglesias* * [Advanced Indexing Operations in NumPy Arrays](https://www.youtube.com/watch?v=2WTDrSkQBng) *by Amuls Academy* *** diff --git a/content/en/news.md index 72f4d354cc..aae7134e43 100644 --- a/content/en/news.md +++ b/content/en/news.md @@ -6,9 +6,9 @@ sidebar: false ### Season of Docs acceptance -_May 11, 2020_ -- NumPy got accepted as one of the mentor organizations for -Google's Season of Docs program. We are excited to again get the opportunity to -work with a technical writer to improve NumPy's documentation! For more +_May 11, 2020_ -- NumPy has been accepted as one of the mentor organizations for +the Google Season of Docs program. We are excited about the opportunity to +work with a technical writer to improve NumPy's documentation once again! For more details, please see [the official Season of Docs site](https://developers.google.com/season-of-docs/) and our [ideas page](https://github.com/numpy/numpy/wiki/Google-Season-of-Docs-2020-Project-Ideas). diff --git a/content/en/press-kit.md index 406122fdc4..504afc32d2 100644 --- a/content/en/press-kit.md +++ b/content/en/press-kit.md @@ -3,7 +3,7 @@ title: Press kit sidebar: false --- -We would like to make it easy for you to include the NumPy brand identity in your next academic paper, course materials or presentation. +We would like to make it easy for you to include the NumPy project identity in your next academic paper, course materials, or presentation. You will find a high resolution logo of NumPy [here](https://github.com/numpy/numpy/tree/master/branding/icons). Note that by using the numpy.org resources, you accept the [NumPy Code of Conduct](/code-of-conduct). diff --git a/static/gallery/team.html index 3d56a1fe83..3990c8142c 100644 --- a/static/gallery/team.html +++ b/static/gallery/team.html @@ -573,7 +573,7 @@
-For the list of people on the Steering Council, please see here
+For the list of people on the Steering Council, please see here.