DOC: capitalize Python as proper noun #37808

Merged (5 commits, Nov 14, 2020)
2 changes: 1 addition & 1 deletion doc/source/development/contributing.rst
Original file line number Diff line number Diff line change
@@ -442,7 +442,7 @@ Some other important things to know about the docs:

contributing_docstring.rst

- * The tutorials make heavy use of the `ipython directive
+ * The tutorials make heavy use of the `IPython directive
<https://matplotlib.org/sampledoc/ipython_directive.html>`_ sphinx extension.
This directive lets you put code in the documentation which will be run
during the doc build. For example::
10 changes: 5 additions & 5 deletions doc/source/development/contributing_docstring.rst
@@ -63,14 +63,14 @@ The first conventions every Python docstring should follow are defined in
`PEP-257 <https://www.python.org/dev/peps/pep-0257/>`_.

As PEP-257 is quite broad, other more specific standards also exist. In the
- case of pandas, the numpy docstring convention is followed. These conventions are
+ case of pandas, the NumPy docstring convention is followed. These conventions are
explained in this document:

* `numpydoc docstring guide <https://numpydoc.readthedocs.io/en/latest/format.html>`_
(which is based in the original `Guide to NumPy/SciPy documentation
<https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt>`_)

- numpydoc is a Sphinx extension to support the numpy docstring convention.
+ numpydoc is a Sphinx extension to support the NumPy docstring convention.

The standard uses reStructuredText (reST). reStructuredText is a markup
language that allows encoding styles in plain text files. Documentation
@@ -401,7 +401,7 @@ DataFrame:
* pandas.Categorical
* pandas.arrays.SparseArray

- If the exact type is not relevant, but must be compatible with a numpy
+ If the exact type is not relevant, but must be compatible with a NumPy
array, array-like can be specified. If Any type that can be iterated is
accepted, iterable can be used:
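
As a small illustration of the ``array-like`` convention described above, here is an invented function (``as_mean`` is ours, not from the pandas docs) whose docstring follows the numpydoc layout:

```python
import numpy as np


def as_mean(values):
    """
    Compute the mean of a one-dimensional collection.

    Parameters
    ----------
    values : array-like
        Anything NumPy can convert to an array: a list, a tuple,
        an ``np.ndarray``, a pandas ``Series``, etc.

    Returns
    -------
    float
        The arithmetic mean of ``values``.
    """
    return float(np.asarray(values).mean())


print(as_mean([1, 2, 3]))             # a plain list works
print(as_mean(np.array([2.0, 4.0])))  # so does an ndarray
```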

@@ -819,7 +819,7 @@ positional arguments ``head(3)``.
"""
A sample DataFrame method.

- Do not import numpy and pandas.
+ Do not import NumPy and pandas.

Try to use meaningful data, when it makes the example easier
to understand.
@@ -854,7 +854,7 @@ Tips for getting your examples pass the doctests
Getting the examples pass the doctests in the validation script can sometimes
be tricky. Here are some attention points:

- * Import all needed libraries (except for pandas and numpy, those are already
+ * Import all needed libraries (except for pandas and NumPy, those are already
imported as ``import pandas as pd`` and ``import numpy as np``) and define
all variables you use in the example.
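
The import rule above can be checked mechanically. The sketch below (our own toy docstring, not a real pandas method) runs a doctest the way a validation script could, with ``pd`` and ``np`` pre-injected and everything else imported inside the example:

```python
import doctest

import numpy as np
import pandas as pd


def ends_today():
    """
    A hypothetical method used only to illustrate the doctest rules.

    Examples
    --------
    >>> import datetime  # not pandas/numpy, so it must be imported
    >>> s = pd.Series([datetime.date(2020, 11, 14)])
    >>> s.iloc[0].year
    2020
    """


# Mimic the validation script: pd and np are already in the namespace.
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
failed = 0
for test in finder.find(ends_today, module=False, globs={"pd": pd, "np": np}):
    failed += runner.run(test).failed
print("doctest failures:", failed)
```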

2 changes: 1 addition & 1 deletion doc/source/development/extending.rst
@@ -219,7 +219,7 @@ and re-boxes it if necessary.

If applicable, we highly recommend that you implement ``__array_ufunc__`` in your
extension array to avoid coercion to an ndarray. See
- `the numpy documentation <https://numpy.org/doc/stable/reference/generated/numpy.lib.mixins.NDArrayOperatorsMixin.html>`__
+ `the NumPy documentation <https://numpy.org/doc/stable/reference/generated/numpy.lib.mixins.NDArrayOperatorsMixin.html>`__
for an example.
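
A minimal sketch of that pattern, using only public NumPy APIs; ``Celsius`` is an invented toy wrapper, not a real pandas extension array:

```python
import numpy as np
from numpy.lib.mixins import NDArrayOperatorsMixin


class Celsius(NDArrayOperatorsMixin):
    """Toy array wrapper that survives ufuncs instead of decaying to ndarray."""

    def __init__(self, values):
        self._values = np.asarray(values, dtype=float)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap any Celsius operands, apply the ufunc to the raw data,
        # then re-wrap so the result keeps our type.
        raw = [x._values if isinstance(x, Celsius) else x for x in inputs]
        return type(self)(getattr(ufunc, method)(*raw, **kwargs))


warmed = Celsius([20.0, 25.0]) + 1.0   # dispatches through np.add
print(type(warmed).__name__)           # Celsius, not ndarray
print(warmed._values)
```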

As part of your implementation, we require that you defer to pandas when a pandas
4 changes: 2 additions & 2 deletions doc/source/ecosystem.rst
@@ -174,7 +174,7 @@ invoked with the following command

dtale.show(df)

- D-Tale integrates seamlessly with jupyter notebooks, python terminals, kaggle
+ D-Tale integrates seamlessly with Jupyter notebooks, Python terminals, Kaggle
& Google Colab. Here are some demos of the `grid <http://alphatechadmin.pythonanywhere.com/>`__
and `chart-builder <http://alphatechadmin.pythonanywhere.com/charts/4?chart_type=surface&query=&x=date&z=Col0&agg=raw&cpg=false&y=%5B%22security_id%22%5D>`__.

@@ -421,7 +421,7 @@ If also displays progress bars.
`Vaex <https://docs.vaex.io/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- Increasingly, packages are being built on top of pandas to address specific needs in data preparation, analysis and visualization. Vaex is a python library for Out-of-Core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, standard deviation etc, on an N-dimensional grid up to a billion (10\ :sup:`9`) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, zero memory copy policy and lazy computations for best performance (no memory wasted).
+ Increasingly, packages are being built on top of pandas to address specific needs in data preparation, analysis and visualization. Vaex is a Python library for Out-of-Core DataFrames (similar to pandas), to visualize and explore big tabular datasets. It can calculate statistics such as mean, sum, count, standard deviation etc, on an N-dimensional grid up to a billion (10\ :sup:`9`) objects/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, zero memory copy policy and lazy computations for best performance (no memory wasted).

* vaex.from_pandas
* vaex.to_pandas_df
4 changes: 2 additions & 2 deletions doc/source/getting_started/intro_tutorials/04_plotting.rst
@@ -131,8 +131,8 @@ standard Python to get an overview of the available plot methods:
]

.. note::
-    In many development environments as well as ipython and
-    jupyter notebook, use the TAB button to get an overview of the available
+    In many development environments as well as IPython and
+    Jupyter Notebook, use the TAB button to get an overview of the available
methods, for example ``air_quality.plot.`` + TAB.
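
Outside an interactive session, a comparable overview can be produced programmatically (a small sketch, not part of the tutorial itself):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
# dir() on the plot accessor lists the same methods TAB completion shows:
methods = [name for name in dir(df.plot) if not name.startswith("_")]
print(methods)
```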

One of the options is :meth:`DataFrame.plot.box`, which refers to a
4 changes: 2 additions & 2 deletions doc/source/user_guide/10min.rst
@@ -239,13 +239,13 @@ Select via the position of the passed integers:

df.iloc[3]

- By integer slices, acting similar to numpy/python:
+ By integer slices, acting similar to numpy/Python:

.. ipython:: python

df.iloc[3:5, 0:2]

- By lists of integer position locations, similar to the numpy/python style:
+ By lists of integer position locations, similar to the NumPy/Python style:

.. ipython:: python

4 changes: 2 additions & 2 deletions doc/source/user_guide/basics.rst
@@ -845,7 +845,7 @@ For example, we can fit a regression using statsmodels. Their API expects a form

The pipe method is inspired by unix pipes and more recently dplyr_ and magrittr_, which
have introduced the popular ``(%>%)`` (read pipe) operator for R_.
- The implementation of ``pipe`` here is quite clean and feels right at home in python.
+ The implementation of ``pipe`` here is quite clean and feels right at home in Python.
We encourage you to view the source code of :meth:`~DataFrame.pipe`.
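
A short illustration of the chaining style that ``pipe`` enables; the helper functions here are invented for this sketch:

```python
import pandas as pd


def add_total(df, column="total"):
    # Hypothetical pipeline step: append a row-wise total column.
    return df.assign(**{column: df.sum(axis=1)})


def as_percent(df, column):
    # Hypothetical pipeline step: rescale one column to percentages.
    return df.assign(**{column: df[column] / df[column].sum() * 100})


df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
result = df.pipe(add_total).pipe(as_percent, column="total")
print(result)
```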

.. _dplyr: https://github.com/hadley/dplyr
@@ -2203,7 +2203,7 @@ You can use the :meth:`~DataFrame.astype` method to explicitly convert dtypes fr
even if the dtype was unchanged (pass ``copy=False`` to change this behavior). In addition, they will raise an
exception if the astype operation is invalid.

- Upcasting is always according to the **numpy** rules. If two different dtypes are involved in an operation,
+ Upcasting is always according to the **NumPy** rules. If two different dtypes are involved in an operation,
then the more *general* one will be used as the result of the operation.

.. ipython:: python
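
A minimal stand-alone illustration of the NumPy upcasting rule (our own example data):

```python
import numpy as np
import pandas as pd

ints = pd.Series([1, 2, 3], dtype="int64")
floats = pd.Series([0.5, 1.5, 2.5], dtype="float64")

# int64 combined with float64 upcasts to the more general dtype:
combined = ints + floats
print(combined.dtype)

# NumPy's promotion rules give the same answer without pandas:
print(np.result_type("int64", "float64"))
```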
7 changes: 2 additions & 5 deletions doc/source/user_guide/cookbook.rst
@@ -18,9 +18,6 @@ above what the in-line examples offer.
pandas (pd) and Numpy (np) are the only two abbreviated imported modules. The rest are kept
explicitly imported for newer users.

- These examples are written for Python 3. Minor tweaks might be necessary for earlier python
- versions.

Idioms
------

@@ -71,7 +68,7 @@ Or use pandas where after you've set up a mask
)
df.where(df_mask, -1000)

- `if-then-else using numpy's where()
+ `if-then-else using NumPy's where()
<https://stackoverflow.com/questions/19913659/pandas-conditional-creation-of-a-series-dataframe-column>`__

.. ipython:: python
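
A runnable sketch of the ``np.where`` if-then-else pattern that the linked answer describes (the data here is invented):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"AAA": [4, 5, 6, 7]})
# One vectorized if-then-else over the whole column:
df["logic"] = np.where(df["AAA"] > 5, "high", "low")
print(df)
```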
@@ -1013,7 +1010,7 @@ The :ref:`Plotting <visualization>` docs.
`Setting x-axis major and minor labels
<https://stackoverflow.com/questions/12945971/pandas-timeseries-plot-setting-x-axis-major-and-minor-ticks-and-labels>`__

- `Plotting multiple charts in an ipython notebook
+ `Plotting multiple charts in an IPython Jupyter notebook
<https://stackoverflow.com/questions/16392921/make-more-than-one-chart-in-same-ipython-notebook-cell>`__

`Creating a multi-line plot
6 changes: 3 additions & 3 deletions doc/source/user_guide/enhancingperf.rst
@@ -96,7 +96,7 @@ hence we'll concentrate our efforts cythonizing these two functions.
Plain Cython
~~~~~~~~~~~~

- First we're going to need to import the Cython magic function to ipython:
+ First we're going to need to import the Cython magic function to IPython:

.. ipython:: python
:okwarning:
@@ -123,7 +123,7 @@ is here to distinguish between function versions):
.. note::

If you're having trouble pasting the above into your ipython, you may need
-    to be using bleeding edge ipython for paste to play well with cell magics.
+    to be using bleeding edge IPython for paste to play well with cell magics.


.. code-block:: ipython
@@ -160,7 +160,7 @@ We get another huge improvement simply by providing type information:
In [4]: %timeit df.apply(lambda x: integrate_f_typed(x["a"], x["b"], x["N"]), axis=1)
10 loops, best of 3: 20.3 ms per loop

- Now, we're talking! It's now over ten times faster than the original python
+ Now, we're talking! It's now over ten times faster than the original Python
implementation, and we haven't *really* modified the code. Let's have another
look at what's eating up time:

4 changes: 2 additions & 2 deletions doc/source/user_guide/groupby.rst
@@ -672,7 +672,7 @@ accepts the special syntax in :meth:`GroupBy.agg`, known as "named aggregation", where
)


- If your desired output column names are not valid python keywords, construct a dictionary
+ If your desired output column names are not valid Python keywords, construct a dictionary
and unpack the keyword arguments

.. ipython:: python
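
A runnable sketch of this dictionary-unpacking pattern (the data is invented for this example):

```python
import pandas as pd

animals = pd.DataFrame(
    {"kind": ["cat", "dog", "cat"], "height": [9.1, 6.0, 9.5]}
)

# "max height" is not a valid Python identifier, so it cannot be written
# as a plain keyword argument; build a dict and unpack it instead.
result = animals.groupby("kind").agg(**{"max height": ("height", "max")})
print(result)
```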
@@ -1090,7 +1090,7 @@ will be passed into ``values``, and the group index will be passed into ``index`
.. warning::

When using ``engine='numba'``, there will be no "fall back" behavior internally. The group
-    data and group index will be passed as numpy arrays to the JITed user defined function, and no
+    data and group index will be passed as NumPy arrays to the JITed user defined function, and no
alternative execution attempts will be tried.

.. note::
8 changes: 4 additions & 4 deletions doc/source/user_guide/indexing.rst
@@ -55,7 +55,7 @@ of multi-axis indexing.
*label* of the index. This use is **not** an integer position along the
index.).
* A list or array of labels ``['a', 'b', 'c']``.
- * A slice object with labels ``'a':'f'`` (Note that contrary to usual python
+ * A slice object with labels ``'a':'f'`` (Note that contrary to usual Python
slices, **both** the start and the stop are included, when present in the
index! See :ref:`Slicing with labels <indexing.slicing_with_labels>`
and :ref:`Endpoints are inclusive <advanced.endpoints_are_inclusive>`.)
@@ -327,7 +327,7 @@ The ``.loc`` attribute is the primary access method. The following are valid inp

* A single label, e.g. ``5`` or ``'a'`` (Note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index.).
* A list or array of labels ``['a', 'b', 'c']``.
- * A slice object with labels ``'a':'f'`` (Note that contrary to usual python
+ * A slice object with labels ``'a':'f'`` (Note that contrary to usual Python
slices, **both** the start and the stop are included, when present in the
index! See :ref:`Slicing with labels <indexing.slicing_with_labels>`.
* A boolean array.
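
The inclusive-endpoint behaviour of label slices is easy to see side by side (a small illustration with invented data):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 6], index=list("abcdef"))

# Label-based slicing with .loc includes BOTH endpoints:
print(s.loc["a":"c"].tolist())

# Positional slicing keeps the usual Python half-open convention:
print(s.iloc[0:2].tolist())
```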
@@ -509,11 +509,11 @@ For getting a cross section using an integer position (equiv to ``df.xs(1)``):

df1.iloc[1]

- Out of range slice indexes are handled gracefully just as in Python/Numpy.
+ Out of range slice indexes are handled gracefully just as in Python/NumPy.

.. ipython:: python

-    # these are allowed in python/numpy.
+    # these are allowed in Python/NumPy.
x = list('abcdef')
x
x[4:10]
8 changes: 4 additions & 4 deletions doc/source/user_guide/options.rst
@@ -124,13 +124,13 @@ are restored automatically when you exit the ``with`` block:
Setting startup options in Python/IPython environment
-----------------------------------------------------

- Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where the startup folder is in a default ipython profile can be found at:
+ Using startup scripts for the Python/IPython environment to import pandas and set options makes working with pandas more efficient. To do this, create a .py or .ipy script in the startup directory of the desired profile. An example where the startup folder is in a default IPython profile can be found at:

.. code-block:: none

$IPYTHONDIR/profile_default/startup

- More information can be found in the `ipython documentation
+ More information can be found in the `IPython documentation
<https://ipython.org/ipython-doc/stable/interactive/tutorial.html#startup-files>`__. An example startup script for pandas is displayed below:

.. code-block:: python
@@ -332,7 +332,7 @@ display.large_repr truncate For DataFrames exceeding ma
(the behaviour in earlier versions of pandas).
allowable settings, ['truncate', 'info']
display.latex.repr False Whether to produce a latex DataFrame
-                                            representation for jupyter frontends
+                                            representation for Jupyter frontends
that support it.
display.latex.escape True Escapes special characters in DataFrames, when
using the to_latex method.
@@ -413,7 +413,7 @@ display.show_dimensions truncate Whether to print out dimens
frame is truncated (e.g. not display
all rows and/or columns)
display.width 80 Width of the display in characters.
-                                            In case python/IPython is running in
+                                            In case Python/IPython is running in
a terminal this can be set to None
and pandas will correctly auto-detect
the width. Note that the IPython notebook,
2 changes: 1 addition & 1 deletion doc/source/user_guide/sparse.rst
@@ -179,7 +179,7 @@ sparse values instead.
rather than a SparseSeries or SparseDataFrame.

This section provides some guidance on migrating your code to the new style. As a reminder,
- you can use the python warnings module to control warnings. But we recommend modifying
+ you can use the Python warnings module to control warnings. But we recommend modifying
your code, rather than ignoring the warning.
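
One way to control such warnings with the standard library while you migrate; ``legacy_helper`` below is an invented stand-in for a call that emits ``FutureWarning``:

```python
import warnings


def legacy_helper():
    # Hypothetical stand-in for a deprecated sparse API call.
    warnings.warn("use the new sparse accessor instead", FutureWarning)
    return 42


# Silence the warning temporarily (discouraged; prefer updating the code):
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    value = legacy_helper()

# Or escalate it to an error so no deprecated call slips through:
with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)
    try:
        legacy_helper()
    except FutureWarning as exc:
        print("caught:", exc)

print(value)
```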

**Construction**