Docs fixes #7745

Merged: 2 commits, Jul 13, 2014
14 changes: 7 additions & 7 deletions doc/README.rst
@@ -33,8 +33,8 @@ Some other important things to know about the docs:
itself and the docs in this folder ``pandas/doc/``.

The docstrings provide a clear explanation of the usage of the individual
-functions, while the documentation in this filder consists of tutorial-like
-overviews per topic together with some other information (whatsnew,
+functions, while the documentation in this folder consists of tutorial-like
+overviews per topic together with some other information (what's new,
installation, etc).

- The docstrings follow the **Numpy Docstring Standard** which is used widely
@@ -56,7 +56,7 @@ Some other important things to know about the docs:
x = 2
x**3

-will be renderd as
+will be rendered as

::

@@ -66,7 +66,7 @@ Some other important things to know about the docs:
Out[2]: 8

This means that almost all code examples in the docs are always run (and the
-ouptut saved) during the doc build. This way, they will always be up to date,
+output saved) during the doc build. This way, they will always be up to date,
but it makes the doc building a bit more complex.


@@ -135,12 +135,12 @@ If you want to do a full clean build, do::

Starting with 0.13.1 you can tell ``make.py`` to compile only a single section
of the docs, greatly reducing the turn-around time for checking your changes.
-You will be prompted to delete unrequired `.rst` files, since the last commited
-version can always be restored from git.
+You will be prompted to delete `.rst` files that aren't required, since the
+last committed version can always be restored from git.

::

-#omit autosummary and api section
+#omit autosummary and API section
python make.py clean
python make.py --no-api

2 changes: 1 addition & 1 deletion doc/source/10min.rst
@@ -260,7 +260,7 @@ For slicing columns explicitly
df.iloc[:,1:3]
-For getting a value explicity
+For getting a value explicitly

.. ipython:: python
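The elided example under this directive might look like the following sketch, using a hypothetical frame (the actual ``df`` from 10min.rst is not shown in this diff):

```python
import pandas as pd

# Hypothetical frame for illustration only
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6], 'D': [7, 8]})

# Slice columns by integer position (columns 1 and 2, i.e. B and C)
sub = df.iloc[:, 1:3]

# Get a single value explicitly, by position
val = df.iloc[0, 1]
fast_val = df.iat[0, 1]  # scalar-optimized equivalent
```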
4 changes: 2 additions & 2 deletions doc/source/basics.rst
@@ -346,7 +346,7 @@ General DataFrame Combine
The ``combine_first`` method above calls the more general DataFrame method
``combine``. This method takes another DataFrame and a combiner function,
aligns the input DataFrame and then passes the combiner function pairs of
-Series (ie, columns whose names are the same).
+Series (i.e., columns whose names are the same).

So, for instance, to reproduce ``combine_first`` as above:
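A minimal sketch of that reproduction, assuming hypothetical input frames (the originals are elided from this diff):

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: df1 has a missing value that df2 can fill
df1 = pd.DataFrame({'A': [1.0, np.nan]})
df2 = pd.DataFrame({'A': [3.0, 4.0]})

# The combiner receives pairs of aligned columns (Series);
# taking df1's value where present, else df2's, mimics combine_first
combiner = lambda x, y: np.where(pd.isnull(x), y, x)
result = df1.combine(df2, combiner)
```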

@@ -1461,7 +1461,7 @@ from the current type (say ``int`` to ``float``)
df3.dtypes

The ``values`` attribute on a DataFrame return the *lower-common-denominator* of the dtypes, meaning
-the dtype that can accommodate **ALL** of the types in the resulting homogenous dtyped numpy array. This can
+the dtype that can accommodate **ALL** of the types in the resulting homogeneous dtyped numpy array. This can
force some *upcasting*.

.. ipython:: python
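A small sketch of this upcasting, with made-up data (not the ``df3`` used in basics.rst):

```python
import numpy as np
import pandas as pd

# One int column and one float column
df = pd.DataFrame({'a': [1, 2], 'b': [1.5, 2.5]})

# .values must hold ALL entries, so int is upcast to float64
arr = df.values
```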
2 changes: 1 addition & 1 deletion doc/source/cookbook.rst
@@ -499,7 +499,7 @@ The :ref:`HDFStores <io.hdf5>` docs
`Merging on-disk tables with millions of rows
<http://stackoverflow.com/questions/14614512/merging-two-tables-with-millions-of-rows-in-python/14617925#14617925>`__

-Deduplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from
+De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from
csv file and creating a store by chunks, with date parsing as well.
`See here
<http://stackoverflow.com/questions/16110252/need-to-compare-very-large-files-around-1-5gb-in-python/16110391#16110391>`__
6 changes: 3 additions & 3 deletions doc/source/dsintro.rst
@@ -118,7 +118,7 @@ provided. The value will be repeated to match the length of **index**
Series is ndarray-like
~~~~~~~~~~~~~~~~~~~~~~

-``Series`` acts very similary to a ``ndarray``, and is a valid argument to most NumPy functions.
+``Series`` acts very similarly to a ``ndarray``, and is a valid argument to most NumPy functions.
However, things like slicing also slice the index.

.. ipython :: python
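A sketch of both points, with a hypothetical Series (the one in dsintro.rst is elided here):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0], index=['a', 'b', 'c'])

# Valid argument to most NumPy functions; the index is preserved
exps = np.exp(s)

# Slicing also slices the index, not just the values
tail = s[1:]
```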
@@ -474,7 +474,7 @@ DataFrame:

For a more exhaustive treatment of more sophisticated label-based indexing and
slicing, see the :ref:`section on indexing <indexing>`. We will address the
-fundamentals of reindexing / conforming to new sets of lables in the
+fundamentals of reindexing / conforming to new sets of labels in the
:ref:`section on reindexing <basics.reindexing>`.

Data alignment and arithmetic
@@ -892,7 +892,7 @@ Slicing
~~~~~~~

Slicing works in a similar manner to a Panel. ``[]`` slices the first dimension.
-``.ix`` allows you to slice abitrarily and get back lower dimensional objects
+``.ix`` allows you to slice arbitrarily and get back lower dimensional objects

.. ipython:: python

2 changes: 1 addition & 1 deletion doc/source/enhancingperf.rst
@@ -553,7 +553,7 @@ standard Python.
:func:`pandas.eval` Parsers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-There are two different parsers and and two different engines you can use as
+There are two different parsers and two different engines you can use as
the backend.
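A minimal sketch of selecting both; the ``'python'`` engine is used here only so the example runs without the optional ``numexpr`` dependency:

```python
import pandas as pd

a, b = 1, 2

# parser controls the accepted syntax; engine controls the
# evaluation backend ('numexpr' is the default when installed)
res = pd.eval('a + b', parser='pandas', engine='python')
```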

The default ``'pandas'`` parser allows a more intuitive syntax for expressing
2 changes: 1 addition & 1 deletion doc/source/faq.rst
@@ -144,7 +144,7 @@ Frequency conversion

Frequency conversion is implemented using the ``resample`` method on TimeSeries
and DataFrame objects (multiple time series). ``resample`` also works on panels
-(3D). Here is some code that resamples daily data to montly:
+(3D). Here is some code that resamples daily data to monthly:

.. ipython:: python
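The elided example might look like the following sketch on current pandas, where the aggregation is a method call rather than a ``how=`` argument; the data is hypothetical:

```python
import pandas as pd

# 60 days of daily data (constant values, for illustration)
rng = pd.date_range('2014-01-01', periods=60, freq='D')
ts = pd.Series(1.0, index=rng)

# Downsample to month-start frequency and aggregate each bucket
monthly = ts.resample('MS').sum()
```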

4 changes: 2 additions & 2 deletions doc/source/gotchas.rst
@@ -183,7 +183,7 @@ Why not make NumPy like R?
~~~~~~~~~~~~~~~~~~~~~~~~~~

Many people have suggested that NumPy should simply emulate the ``NA`` support
-present in the more domain-specific statistical programming langauge `R
+present in the more domain-specific statistical programming language `R
<http://r-project.org>`__. Part of the reason is the NumPy type hierarchy:

.. csv-table::
@@ -500,7 +500,7 @@ parse HTML tables in the top-level pandas io function ``read_html``.
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
-text from the url over the web, i.e., IO (input-output). For very large
+text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.

**Issues with using** |Anaconda|_
2 changes: 1 addition & 1 deletion doc/source/groupby.rst
@@ -969,7 +969,7 @@ Regroup columns of a DataFrame according to their sum, and sum the aggregated on
df.groupby(df.sum(), axis=1).sum()


-Returning a Series to propogate names
+Returning a Series to propagate names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Group DataFrame columns, compute a set of metrics and return a named Series.
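The elided example might be sketched as follows, with hypothetical data and metric names; returning a ``Series`` from the applied function propagates its index entries as column names:

```python
import pandas as pd

df = pd.DataFrame({'key': ['a', 'a', 'b'], 'x': [1, 2, 3]})

def metrics(group):
    # The Series' index entries become the result's column names
    return pd.Series({'total': group['x'].sum(), 'n': len(group)})

# Selecting ['x'] keeps the grouping column out of the applied frame
out = df.groupby('key')[['x']].apply(metrics)
```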
24 changes: 12 additions & 12 deletions doc/source/indexing.rst
@@ -88,10 +88,10 @@ of multi-axis indexing.
See more at :ref:`Selection by Position <indexing.integer>`

- ``.ix`` supports mixed integer and label based access. It is primarily label
-based, but will fallback to integer positional access. ``.ix`` is the most
+based, but will fall back to integer positional access. ``.ix`` is the most
general and will support any of the inputs to ``.loc`` and ``.iloc``, as well
as support for floating point label schemes. ``.ix`` is especially useful
-when dealing with mixed positional and label based hierarchial indexes.
+when dealing with mixed positional and label based hierarchical indexes.
As using integer slices with ``.ix`` have different behavior depending on
whether the slice is interpreted as position based or label based, it's
usually better to be explicit and use ``.iloc`` or ``.loc``.
@@ -230,7 +230,7 @@ new column.
- The ``Series/Panel`` accesses are available starting in 0.13.0.

If you are using the IPython environment, you may also use tab-completion to
-see these accessable attributes.
+see these accessible attributes.

Slicing ranges
--------------
@@ -328,7 +328,7 @@ For getting values with a boolean array
df1.loc['a']>0
df1.loc[:,df1.loc['a']>0]
-For getting a value explicity (equiv to deprecated ``df.get_value('a','A')``)
+For getting a value explicitly (equiv to deprecated ``df.get_value('a','A')``)

.. ipython:: python
@@ -415,7 +415,7 @@ For getting a cross section using an integer position (equiv to ``df.xs(1)``)
df1.iloc[1]
-There is one signficant departure from standard python/numpy slicing semantics.
+There is one significant departure from standard python/numpy slicing semantics.
python/numpy allow slicing past the end of an array without an associated error.

.. ipython:: python
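The elided example might look like this sketch: slicing past the end is allowed numpy-style, while a single out-of-bounds position is not:

```python
import pandas as pd

s = pd.Series([1, 2, 3])

# Slicing past the end works, as in numpy
ok = s.iloc[1:10]

# A single out-of-bounds positional lookup raises IndexError
try:
    s.iloc[10]
    raised = False
except IndexError:
    raised = True
```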
@@ -494,7 +494,7 @@ out what you're asking for. If you only want to access a scalar value, the
fastest way is to use the ``at`` and ``iat`` methods, which are implemented on
all of the data structures.

-Similary to ``loc``, ``at`` provides **label** based scalar lookups, while, ``iat`` provides **integer** based lookups analagously to ``iloc``
+Similarly to ``loc``, ``at`` provides **label** based scalar lookups, while, ``iat`` provides **integer** based lookups analogously to ``iloc``

.. ipython:: python
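A minimal sketch of both accessors, with a hypothetical frame:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]}, index=['x', 'y'])

label_val = df.at['x', 'B']   # label-based scalar lookup
pos_val = df.iat[0, 1]        # integer-position scalar lookup
```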
@@ -643,7 +643,7 @@ To return a Series of the same shape as the original
s.where(s > 0)
-Selecting values from a DataFrame with a boolean critierion now also preserves
+Selecting values from a DataFrame with a boolean criterion now also preserves
input data shape. ``where`` is used under the hood as the implementation.
Equivalent is ``df.where(df < 0)``
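A small sketch of the shape-preserving behavior, with hypothetical data:

```python
import pandas as pd

s = pd.Series([-1, 2, -3])

# Same shape as the input; entries failing the condition become NaN
masked = s.where(s > 0)
```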

@@ -690,7 +690,7 @@ without creating a copy:
**alignment**

Furthermore, ``where`` aligns the input boolean condition (ndarray or DataFrame),
-such that partial selection with setting is possible. This is analagous to
+such that partial selection with setting is possible. This is analogous to
partial setting via ``.ix`` (but on the contents rather than the axis labels)

.. ipython:: python
@@ -756,7 +756,7 @@ between the values of columns ``a`` and ``c``. For example:
# query
df.query('(a < b) & (b < c)')
-Do the same thing but fallback on a named index if there is no column
+Do the same thing but fall back on a named index if there is no column
with the name ``a``.

.. ipython:: python
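The elided example might be sketched like this, with a hypothetical frame whose index (not any column) is named ``a``:

```python
import pandas as pd

# No column named 'a'; query falls back to the named index
df = pd.DataFrame({'b': [2, 1], 'c': [3, 4]})
df.index.name = 'a'

res = df.query('(a < b) & (b < c)')
```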
@@ -899,7 +899,7 @@ The ``in`` and ``not in`` operators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:meth:`~pandas.DataFrame.query` also supports special use of Python's ``in`` and
-``not in`` comparison operators, providing a succint syntax for calling the
+``not in`` comparison operators, providing a succinct syntax for calling the
``isin`` method of a ``Series`` or ``DataFrame``.

.. ipython:: python
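A minimal sketch of the sugar, with hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({'b': ['x', 'y', 'z']})

# Equivalent to df[df['b'].isin(['x', 'z'])]
res = df.query('b in ["x", "z"]')
```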
@@ -1416,7 +1416,7 @@ faster, and allows one to index *both* axes if so desired.
Why does the assignment when using chained indexing fail!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-So, why does this show the ``SettingWithCopy`` warning / and possibly not work when you do chained indexing and assignement:
+So, why does this show the ``SettingWithCopy`` warning / and possibly not work when you do chained indexing and assignment:
.. code-block:: python
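The elided snippet might be sketched as follows, with a hypothetical frame; the chained form can assign into a temporary copy, while a single ``.loc`` call operates on the original:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# Chained form (the problem case):
#   df[df['a'] > 1]['b'] = 0
# The boolean selection may return a temporary copy, so the
# assignment can silently fail -- hence the warning.

# Recommended form: one .loc call on the original object
df.loc[df['a'] > 1, 'b'] = 0
```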
@@ -2149,7 +2149,7 @@ metadata, like the index ``name`` (or, for ``MultiIndex``, ``levels`` and

You can use the ``rename``, ``set_names``, ``set_levels``, and ``set_labels``
to set these attributes directly. They default to returning a copy; however,
-you can specify ``inplace=True`` to have the data change inplace.
+you can specify ``inplace=True`` to have the data change in place.

.. ipython:: python
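A sketch of the copy-returning default, using a hypothetical index (note that ``inplace=True`` for some of these setters has since been removed in newer pandas versions, so only the default behavior is shown):

```python
import pandas as pd

idx = pd.Index([1, 2], name='old')

# Default: a renamed copy is returned; the original is untouched
renamed = idx.rename('new')
```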