
Commit a355d5c

tommyod authored and jreback committed
DOC: Spellcheck of merging.rst, reshaping.rst and timeseries.rst (#19081)
1 parent cadbf2d commit a355d5c

10 files changed: +315 -272 lines changed

doc/source/basics.rst

+1 -1
@@ -2220,7 +2220,7 @@ For example, to select ``bool`` columns:
 
    df.select_dtypes(include=[bool])
 
-You can also pass the name of a dtype in the `numpy dtype hierarchy
+You can also pass the name of a dtype in the `NumPy dtype hierarchy
 <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`__:
 
 .. ipython:: python
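For context, the hunk above comes from the ``select_dtypes`` discussion in basics.rst. A minimal sketch of the two ways of selecting that the surrounding text describes, using an invented frame and the generic dtype name ``'number'`` (an illustrative choice, not the docs' own example):

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [1.5, 2.5], 'c': [True, False], 'd': ['x', 'y']})

# select columns by a concrete dtype ...
df.select_dtypes(include=[bool])

# ... or by a name from the NumPy dtype hierarchy, e.g. every numeric column
df.select_dtypes(include=['number'])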

doc/source/enhancingperf.rst

+5 -5
@@ -28,14 +28,14 @@ For many use cases writing pandas in pure Python and NumPy is sufficient. In som
 computationally heavy applications however, it can be possible to achieve sizeable
 speed-ups by offloading work to `cython <http://cython.org/>`__.
 
-This tutorial assumes you have refactored as much as possible in python, for example
+This tutorial assumes you have refactored as much as possible in Python, for example
 trying to remove for loops and making use of NumPy vectorization, it's always worth
 optimising in Python first.
 
 This tutorial walks through a "typical" process of cythonizing a slow computation.
 We use an `example from the cython documentation <http://docs.cython.org/src/quickstart/cythonize.html>`__
 but in the context of pandas. Our final cythonized solution is around 100 times
-faster than the pure python.
+faster than the pure Python.
 
 .. _enhancingperf.pure:
 

@@ -52,7 +52,7 @@ We have a DataFrame to which we want to apply a function row-wise.
                       'x': 'x'})
    df
 
-Here's the function in pure python:
+Here's the function in pure Python:
 
 .. ipython:: python
 
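The "function in pure Python" that this hunk introduces is the integration example enhancingperf.rst borrows from the cython documentation. A sketch of that setup (reconstructed for context, not copied verbatim from the docs):

import numpy as np
import pandas as pd

def f(x):
    return x * (x - 1)

def integrate_f(a, b, N):
    # crude numerical integration of f over [a, b] in N steps
    s = 0.0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx

# applied row-wise; this is the slow baseline the tutorial then cythonizes
df = pd.DataFrame({'a': np.random.randn(1000),
                   'b': np.random.randn(1000),
                   'N': np.random.randint(100, 1000, 1000),
                   'x': 'x'})
df.apply(lambda row: integrate_f(row['a'], row['b'], row['N']), axis=1)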
@@ -173,7 +173,7 @@ Using ndarray
 
 It's calling series... a lot! It's creating a Series from each row, and get-ting from both
 the index and the series (three times for each row). Function calls are expensive
-in python, so maybe we could minimize these by cythonizing the apply part.
+in Python, so maybe we could minimize these by cythonizing the apply part.
 
 .. note::
 
@@ -231,7 +231,7 @@ the rows, applying our ``integrate_f_typed``, and putting this in the zeros arra
 
 .. note::
 
-  Loops like this would be *extremely* slow in python, but in Cython looping
+  Loops like this would be *extremely* slow in Python, but in Cython looping
   over NumPy arrays is *fast*.
 
 .. code-block:: ipython
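The loop this last hunk refers to (iterate over the rows, apply ``integrate_f_typed``, write the results into a zeros array) is written in Cython in the document itself. A plain-Python sketch of the same structure, with the integrator reconstructed as above:

import numpy as np

def f(x):
    return x * (x - 1)

def integrate_f(a, b, N):
    # same pure-Python integrator as in the earlier sketch
    s, dx = 0.0, (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx

def apply_integrate_f(col_a, col_b, col_N):
    # loop over the rows, apply the integrator, put each result in a zeros array;
    # the docs do this with a typed Cython version, where such a loop is fast
    n = len(col_N)
    res = np.zeros(n)
    for i in range(n):
        res[i] = integrate_f(col_a[i], col_b[i], col_N[i])
    return res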

doc/source/indexing.rst

+2 -2
@@ -84,7 +84,7 @@ of multi-axis indexing.
   ``length-1`` of the axis), but may also be used with a boolean
   array. ``.iloc`` will raise ``IndexError`` if a requested
   indexer is out-of-bounds, except *slice* indexers which allow
-  out-of-bounds indexing. (this conforms with python/numpy *slice*
+  out-of-bounds indexing. (this conforms with Python/NumPy *slice*
   semantics). Allowed inputs are:
 
   - An integer e.g. ``5``.
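A small illustration of the slice-semantics point made in the hunk above (standard ``.iloc`` behaviour, shown on a made-up Series):

import pandas as pd

s = pd.Series([1, 2, 3])

s.iloc[1:10]    # out-of-bounds *slice*: allowed, returns whatever exists
# s.iloc[10]    # out-of-bounds scalar indexer: raises IndexError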
@@ -1517,7 +1517,7 @@ The :meth:`~pandas.DataFrame.lookup` Method
 
 Sometimes you want to extract a set of values given a sequence of row labels
 and column labels, and the ``lookup`` method allows for this and returns a
-numpy array. For instance:
+NumPy array. For instance:
 
 .. ipython:: python
 
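The actual ``lookup`` example follows this point in indexing.rst; a minimal sketch of the call, with illustrative labels rather than the docs' own data:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(4, 3), columns=['A', 'B', 'C'])

# one value per (row label, column label) pair, returned as a NumPy array
df.lookup([0, 2, 3], ['A', 'C', 'B'])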

doc/source/io.rst

+1 -1
@@ -775,7 +775,7 @@ The simplest case is to just pass in ``parse_dates=True``:
    df = pd.read_csv('foo.csv', index_col=0, parse_dates=True)
    df
 
-   # These are python datetime objects
+   # These are Python datetime objects
    df.index
 
 It is often the case that we may want to store date and time data separately,
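As a rough illustration of the ``parse_dates=True`` behaviour shown in this hunk ('foo.csv' is the docs' file; the inline CSV below is invented for the sketch):

from io import StringIO
import pandas as pd

data = "date,value\n2009-01-01,0.5\n2009-01-02,1.5\n"
df = pd.read_csv(StringIO(data), index_col=0, parse_dates=True)

# the index is now a DatetimeIndex (timestamps convertible to Python datetime objects)
df.index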
