
Commit 6552718

tommyod authored and jreback committed
Spellcheck (#19017)
1 parent c883128 · commit 6552718

23 files changed: +203 −160 lines

doc/source/10min.rst (+3 −3)

@@ -48,7 +48,7 @@ a default integer index:
     s = pd.Series([1,3,5,np.nan,6,8])
     s

-Creating a :class:`DataFrame` by passing a numpy array, with a datetime index
+Creating a :class:`DataFrame` by passing a NumPy array, with a datetime index
 and labeled columns:

 .. ipython:: python
@@ -114,7 +114,7 @@ Here is how to view the top and bottom rows of the frame:
     df.head()
     df.tail(3)

-Display the index, columns, and the underlying numpy data:
+Display the index, columns, and the underlying NumPy data:

 .. ipython:: python
@@ -311,7 +311,7 @@ Setting values by position:

     df.iat[0,1] = 0

-Setting by assigning with a numpy array:
+Setting by assigning with a NumPy array:

 .. ipython:: python
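The lines touched above come from the 10-minutes guide's construction and assignment examples; a minimal sketch of what they describe (the column values here are illustrative, not from the commit):

```python
import numpy as np
import pandas as pd

# a DataFrame built by passing a NumPy array, with a datetime index
# and labeled columns
dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

# setting a single value by position, and a whole column from a NumPy array
df.iat[0, 1] = 0
df['D'] = np.array([5] * len(df))
```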

doc/source/advanced.rst (+5 −4)

@@ -316,7 +316,9 @@ Basic multi-index slicing using slices, lists, and labels.

     dfmi.loc[(slice('A1','A3'), slice(None), ['C1', 'C3']), :]

-You can use :class:`pandas.IndexSlice` to facilitate a more natural syntax using ``:``, rather than using ``slice(None)``.
+
+You can use :class:`pandas.IndexSlice` to facilitate a more natural syntax
+using ``:``, rather than using ``slice(None)``.

 .. ipython:: python
@@ -557,7 +559,7 @@ Take Methods

 .. _advanced.take:

-Similar to numpy ndarrays, pandas Index, Series, and DataFrame also provides
+Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provides
 the ``take`` method that retrieves elements along a given axis at the given
 indices. The given indices must be either a list or an ndarray of integer
 index positions. ``take`` will also accept negative integers as relative positions to the end of the object.
@@ -729,7 +731,7 @@ This is an Immutable array implementing an ordered, sliceable set.
 Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.

 ``RangeIndex`` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
-``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
+``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to Python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.

 .. _indexing.float64index:
@@ -763,7 +765,6 @@ The only positional indexing is via ``iloc``.

     sf.iloc[3]

 A scalar index that is not found will raise a ``KeyError``.
-
 Slicing is primarily on the values of the index when using ``[],ix,loc``, and
 **always** positional when using ``iloc``. The exception is when the slice is
 boolean, in which case it will always be positional.
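The ``take`` behavior described in the hunk above (integer positions, with negative integers counting from the end) can be sketched as follows; the example data is illustrative:

```python
import numpy as np
import pandas as pd

ser = pd.Series(np.arange(10, 15))

# retrieve elements at the given integer positions
picked = ser.take([0, 2, 4])

# negative integers are relative positions to the end of the object
tail = ser.take([-1, -2])
```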

doc/source/api.rst (+1 −1)

@@ -730,7 +730,7 @@ The dtype information is available on the ``Categorical``
     Categorical.codes

 ``np.asarray(categorical)`` works by implementing the array interface. Be aware, that this converts
-the Categorical back to a numpy array, so categories and order information is not preserved!
+the Categorical back to a NumPy array, so categories and order information is not preserved!

 .. autosummary::
    :toctree: generated/
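The warning in the hunk above, sketched with illustrative data: converting a ``Categorical`` via ``np.asarray`` keeps the values but discards the categories and their ordering.

```python
import numpy as np
import pandas as pd

cat = pd.Categorical(['b', 'a', 'b'], categories=['b', 'a'], ordered=True)

# the result is a plain ndarray: values survive, but the category
# set and its ordering are gone
arr = np.asarray(cat)
```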

doc/source/basics.rst (+6 −6)

@@ -395,7 +395,7 @@ raise a ValueError:
     In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])
     ValueError: Series lengths must match to compare

-Note that this is different from the numpy behavior where a comparison can
+Note that this is different from the NumPy behavior where a comparison can
 be broadcast:

 .. ipython:: python
@@ -1000,7 +1000,7 @@ We create a frame similar to the one used in the above sections.
     tsdf.iloc[3:7] = np.nan
     tsdf

-Transform the entire frame. ``.transform()`` allows input functions as: a numpy function, a string
+Transform the entire frame. ``.transform()`` allows input functions as: a NumPy function, a string
 function name or a user defined function.

 .. ipython:: python
@@ -1510,7 +1510,7 @@ To iterate over the rows of a DataFrame, you can use the following methods:
 one of the following approaches:

 * Look for a *vectorized* solution: many operations can be performed using
-  built-in methods or numpy functions, (boolean) indexing, ...
+  built-in methods or NumPy functions, (boolean) indexing, ...

 * When you have a function that cannot work on the full DataFrame/Series
   at once, it is better to use :meth:`~DataFrame.apply` instead of iterating
@@ -1971,7 +1971,7 @@ from the current type (e.g. ``int`` to ``float``).
     df3.dtypes

 The ``values`` attribute on a DataFrame return the *lower-common-denominator* of the dtypes, meaning
-the dtype that can accommodate **ALL** of the types in the resulting homogeneous dtyped numpy array. This can
+the dtype that can accommodate **ALL** of the types in the resulting homogeneous dtyped NumPy array. This can
 force some *upcasting*.

 .. ipython:: python
@@ -2253,7 +2253,7 @@ can define a function that returns a tree of child dtypes:
         return dtype
     return [dtype, [subdtypes(dt) for dt in subs]]

-All numpy dtypes are subclasses of ``numpy.generic``:
+All NumPy dtypes are subclasses of ``numpy.generic``:

 .. ipython:: python
@@ -2262,4 +2262,4 @@ All numpy dtypes are subclasses of ``numpy.generic``:
 .. note::

     Pandas also defines the types ``category``, and ``datetime64[ns, tz]``, which are not integrated into the normal
-    numpy hierarchy and wont show up with the above function.
+    NumPy hierarchy and wont show up with the above function.
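The ``.transform()`` line touched above lists three accepted input styles; a minimal sketch with illustrative data showing that all three produce the same result:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, 4.0, 9.0]})

# the three input styles .transform() accepts
r1 = df.transform(np.sqrt)             # a NumPy function
r2 = df.transform('sqrt')              # a string function name
r3 = df.transform(lambda x: x ** 0.5)  # a user-defined function
```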

doc/source/categorical.rst (+2 −2)

@@ -40,7 +40,7 @@ The categorical data type is useful in the following cases:
 * The lexical order of a variable is not the same as the logical order ("one", "two", "three").
   By converting to a categorical and specifying an order on the categories, sorting and
   min/max will use the logical order instead of the lexical order, see :ref:`here <categorical.sort>`.
-* As a signal to other python libraries that this column should be treated as a categorical
+* As a signal to other Python libraries that this column should be treated as a categorical
   variable (e.g. to use suitable statistical methods or plot types).

 See also the :ref:`API docs on categoricals<api.categorical>`.
@@ -366,7 +366,7 @@ or simply set the categories to a predefined scale, use :func:`Categorical.set_c
 .. note::
     Be aware that :func:`Categorical.set_categories` cannot know whether some category is omitted
     intentionally or because it is misspelled or (under Python3) due to a type difference (e.g.,
-    numpys S1 dtype and python strings). This can result in surprising behaviour!
+    numpys S1 dtype and Python strings). This can result in surprising behaviour!

 Sorting and Order
 -----------------
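The ``set_categories`` caveat in the note above, sketched with illustrative data: a value whose category is missing from the new set silently becomes NaN, whether the omission was intentional or a typo.

```python
import pandas as pd

cat = pd.Categorical(['a', 'b', 'a'])

# 'b' is absent from the new category set, so it becomes NaN --
# set_categories cannot tell a misspelling from an intentional omission
renamed = cat.set_categories(['a', 'c'])
```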

doc/source/comparison_with_sas.rst (+2 −2)

@@ -10,7 +10,7 @@ performed in pandas.
 If you're new to pandas, you might want to first read through :ref:`10 Minutes to pandas<10min>`
 to familiarize yourself with the library.

-As is customary, we import pandas and numpy as follows:
+As is customary, we import pandas and NumPy as follows:

 .. ipython:: python
@@ -100,7 +100,7 @@ specifying the column names.

 A pandas ``DataFrame`` can be constructed in many different ways,
 but for a small number of values, it is often convenient to specify it as
-a python dictionary, where the keys are the column names
+a Python dictionary, where the keys are the column names
 and the values are the data.

 .. ipython:: python

doc/source/comparison_with_sql.rst (+1 −1)

@@ -10,7 +10,7 @@ various SQL operations would be performed using pandas.
 If you're new to pandas, you might want to first read through :ref:`10 Minutes to pandas<10min>`
 to familiarize yourself with the library.

-As is customary, we import pandas and numpy as follows:
+As is customary, we import pandas and NumPy as follows:

 .. ipython:: python

doc/source/computation.rst (+2 −3)

@@ -57,9 +57,8 @@ Covariance
     s2 = pd.Series(np.random.randn(1000))
     s1.cov(s2)

-Analogously, :meth:`DataFrame.cov` to compute
-pairwise covariances among the series in the DataFrame, also excluding
-NA/null values.
+Analogously, :meth:`DataFrame.cov` to compute pairwise covariances among the
+series in the DataFrame, also excluding NA/null values.

 .. _computation.covariance.caveats:
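The rewrapped sentence above describes :meth:`DataFrame.cov`; a minimal sketch with illustrative data, showing the pairwise NA exclusion it mentions:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0],
                   'y': [1.0, 2.0, np.nan, 4.0]})

# pairwise covariances among the columns, excluding NA/null values
# on a pairwise basis
c = df.cov()
```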

doc/source/contributing.rst (+2 −2)

@@ -118,7 +118,7 @@ Creating a development environment
 ----------------------------------

 To test out code changes, you'll need to build pandas from source, which
-requires a C compiler and python environment. If you're making documentation
+requires a C compiler and Python environment. If you're making documentation
 changes, you can skip to :ref:`contributing.documentation` but you won't be able
 to build the documentation locally before pushing your changes.

@@ -187,7 +187,7 @@ At this point you should be able to import pandas from your locally built versio
     0.22.0.dev0+29.g4ad6d4d74

 This will create the new environment, and not touch any of your existing environments,
-nor any existing python installation.
+nor any existing Python installation.

 To view your environments::

doc/source/cookbook.rst (+4 −4)

@@ -41,7 +41,7 @@ above what the in-line examples offer.
 Pandas (pd) and Numpy (np) are the only two abbreviated imported modules. The rest are kept
 explicitly imported for newer users.

-These examples are written for python 3.4. Minor tweaks might be necessary for earlier python
+These examples are written for Python 3. Minor tweaks might be necessary for earlier python
 versions.

 Idioms
@@ -750,7 +750,7 @@ Timeseries
 <http://nipunbatra.github.io/2015/06/timeseries/>`__

 Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series.
-`How to rearrange a python pandas DataFrame?
+`How to rearrange a Python pandas DataFrame?
 <http://stackoverflow.com/questions/15432659/how-to-rearrange-a-python-pandas-dataframe>`__

 `Dealing with duplicates when reindexing a timeseries to a specified frequency
@@ -1152,7 +1152,7 @@ Storing Attributes to a group node
     store = pd.HDFStore('test.h5')
     store.put('df',df)

-    # you can store an arbitrary python object via pickle
+    # you can store an arbitrary Python object via pickle
     store.get_storer('df').attrs.my_attribute = dict(A = 10)
     store.get_storer('df').attrs.my_attribute

@@ -1167,7 +1167,7 @@ Storing Attributes to a group node
 Binary Files
 ************

-pandas readily accepts numpy record arrays, if you need to read in a binary
+pandas readily accepts NumPy record arrays, if you need to read in a binary
 file consisting of an array of C structs. For example, given this C program
 in a file called ``main.c`` compiled with ``gcc main.c -std=gnu99`` on a
 64-bit machine,
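The record-array round trip that the cookbook passage above describes can be sketched in memory, without the C program or a file on disk (the struct layout and field names here are illustrative, not from the commit):

```python
import struct

import numpy as np
import pandas as pd

# bytes laid out like an array of packed C structs: (int32, double) pairs
raw = struct.pack('<id', 1, 2.5) + struct.pack('<id', 2, 3.5)

# interpret them as a NumPy record array, then hand that to pandas
rec = np.frombuffer(raw, dtype=[('count', '<i4'), ('avg', '<f8')])
df = pd.DataFrame(rec)
```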

doc/source/dsintro.rst (+2 −2)

@@ -23,7 +23,7 @@ Intro to Data Structures
 We'll start with a quick, non-comprehensive overview of the fundamental data
 structures in pandas to get you started. The fundamental behavior about data
 types, indexing, and axis labeling / alignment apply across all of the
-objects. To get started, import numpy and load pandas into your namespace:
+objects. To get started, import NumPy and load pandas into your namespace:

 .. ipython:: python
@@ -877,7 +877,7 @@ of DataFrames:
     wp['Item3'] = wp['Item1'] / wp['Item2']

 The API for insertion and deletion is the same as for DataFrame. And as with
-DataFrame, if the item is a valid python identifier, you can access it as an
+DataFrame, if the item is a valid Python identifier, you can access it as an
 attribute and tab-complete it in IPython.

 Transposing
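The attribute-access behavior mentioned in the hunk above, sketched for a DataFrame with an illustrative column name:

```python
import pandas as pd

df = pd.DataFrame({'price': [1.5, 2.0], 'qty': [3, 4]})

# a column whose name is a valid Python identifier can be read
# as an attribute (and tab-completed in IPython)
col = df.price
```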

doc/source/ecosystem.rst (+3 −3)

@@ -27,7 +27,7 @@ Statistics and Machine Learning
 `Statsmodels <http://www.statsmodels.org/>`__
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Statsmodels is the prominent python "statistics and econometrics library" and it has
+Statsmodels is the prominent Python "statistics and econometrics library" and it has
 a long-standing special relationship with pandas. Statsmodels provides powerful statistics,
 econometrics, analysis and modeling functionality that is out of pandas' scope.
 Statsmodels leverages pandas objects as the underlying data container for computation.
@@ -72,7 +72,7 @@ Hadley Wickham's `ggplot2 <http://ggplot2.org/>`__ is a foundational exploratory
 Based on `"The Grammar of Graphics" <http://www.cs.uic.edu/~wilkinson/TheGrammarOfGraphics/GOG.html>`__ it
 provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
 It's really quite incredible. Various implementations to other languages are available,
-but a faithful implementation for python users has long been missing. Although still young
+but a faithful implementation for Python users has long been missing. Although still young
 (as of Jan-2014), the `yhat/ggplot <https://github.com/yhat/ggplot>`__ project has been
 progressing quickly in that direction.

@@ -192,7 +192,7 @@ or multi-indexed DataFrames.
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 fredapi is a Python interface to the `Federal Reserve Economic Data (FRED) <http://research.stlouisfed.org/fred2/>`__
 provided by the Federal Reserve Bank of St. Louis. It works with both the FRED database and ALFRED database that
-contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in python to the FRED
+contains point-in-time data (i.e. historic data revisions). fredapi provides a wrapper in Python to the FRED
 HTTP API, and also provides several convenient methods for parsing and analyzing point-in-time data from ALFRED.
 fredapi makes use of pandas and returns data in a Series or DataFrame. This module requires a FRED API key that
 you can obtain for free on the FRED website.

doc/source/enhancingperf.rst (+9 −9)

@@ -24,13 +24,13 @@ Enhancing Performance
 Cython (Writing C extensions for pandas)
 ----------------------------------------

-For many use cases writing pandas in pure python and numpy is sufficient. In some
+For many use cases writing pandas in pure Python and NumPy is sufficient. In some
 computationally heavy applications however, it can be possible to achieve sizeable
 speed-ups by offloading work to `cython <http://cython.org/>`__.

 This tutorial assumes you have refactored as much as possible in python, for example
-trying to remove for loops and making use of numpy vectorization, it's always worth
-optimising in python first.
+trying to remove for loops and making use of NumPy vectorization, it's always worth
+optimising in Python first.

 This tutorial walks through a "typical" process of cythonizing a slow computation.
 We use an `example from the cython documentation <http://docs.cython.org/src/quickstart/cythonize.html>`__
@@ -86,8 +86,8 @@ hence we'll concentrate our efforts cythonizing these two functions.

 .. note::

-    In python 2 replacing the ``range`` with its generator counterpart (``xrange``)
-    would mean the ``range`` line would vanish. In python 3 ``range`` is already a generator.
+    In Python 2 replacing the ``range`` with its generator counterpart (``xrange``)
+    would mean the ``range`` line would vanish. In Python 3 ``range`` is already a generator.

 .. _enhancingperf.plain:

@@ -232,7 +232,7 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros arra
 .. note::

     Loops like this would be *extremely* slow in python, but in Cython looping
-    over numpy arrays is *fast*.
+    over NumPy arrays is *fast*.

 .. code-block:: ipython
@@ -315,7 +315,7 @@ Numba works by generating optimized machine code using the LLVM compiler infrast
 Jit
 ~~~

-Using ``numba`` to just-in-time compile your code. We simply take the plain python code from above and annotate with the ``@jit`` decorator.
+Using ``numba`` to just-in-time compile your code. We simply take the plain Python code from above and annotate with the ``@jit`` decorator.

 .. code-block:: python
@@ -391,7 +391,7 @@ Caveats

 ``numba`` will execute on any function, but can only accelerate certain classes of functions.

-``numba`` is best at accelerating functions that apply numerical functions to numpy arrays. When passed a function that only uses operations it knows how to accelerate, it will execute in ``nopython`` mode.
+``numba`` is best at accelerating functions that apply numerical functions to NumPy arrays. When passed a function that only uses operations it knows how to accelerate, it will execute in ``nopython`` mode.

 If ``numba`` is passed a function that includes something it doesn't know how to work with -- a category that currently includes sets, lists, dictionaries, or string functions -- it will revert to ``object mode``. In ``object mode``, numba will execute but your code will not speed up significantly. If you would prefer that ``numba`` throw an error if it cannot compile a function in a way that speeds up your code, pass numba the argument ``nopython=True`` (e.g. ``@numba.jit(nopython=True)``). For more on troubleshooting ``numba`` modes, see the `numba troubleshooting page <http://numba.pydata.org/numba-doc/0.20.0/user/troubleshoot.html#the-compiled-code-is-too-slow>`__.

@@ -779,7 +779,7 @@ Technical Minutia Regarding Expression Evaluation

 Expressions that would result in an object dtype or involve datetime operations
 (because of ``NaT``) must be evaluated in Python space. The main reason for
-this behavior is to maintain backwards compatibility with versions of numpy <
+this behavior is to maintain backwards compatibility with versions of NumPy <
 1.7. In those versions of ``numpy`` a call to ``ndarray.astype(str)`` will
 truncate any strings that are more than 60 characters in length. Second, we
 can't pass ``object`` arrays to ``numexpr`` thus string comparisons must be
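The tutorial text touched above advises removing for loops in favor of NumPy vectorization before reaching for Cython or numba; a minimal sketch of that refactoring, with hypothetical function names:

```python
import numpy as np

def sum_squares_loop(xs):
    # the plain Python for-loop version
    total = 0.0
    for x in xs:
        total += x * x
    return total

def sum_squares_vec(xs):
    # the same computation, vectorized with NumPy
    return float(np.sum(xs * xs))

a = np.arange(1000, dtype=np.float64)
```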

doc/source/gotchas.rst (+1 −1)

@@ -91,7 +91,7 @@ See also :ref:`Categorical Memory Usage <categorical.memory>`.
 Using If/Truth Statements with pandas
 -------------------------------------

-pandas follows the numpy convention of raising an error when you try to convert something to a ``bool``.
+pandas follows the NumPy convention of raising an error when you try to convert something to a ``bool``.
 This happens in a ``if`` or when using the boolean operations, ``and``, ``or``, or ``not``. It is not clear
 what the result of
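The gotcha described in the hunk above can be sketched directly; with more than one element there is no unambiguous boolean value, so pandas raises instead of guessing:

```python
import pandas as pd

s = pd.Series([True, False])

try:
    bool(s)  # ambiguous truth value of a multi-element Series
    raised = False
except ValueError:
    raised = True

# .any() / .all() state the intended reduction explicitly
```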
