DOC: Add sphinx spelling extension #21109

Merged (9 commits) on Jun 7, 2018
1 change: 1 addition & 0 deletions Makefile
@@ -23,3 +23,4 @@ doc:
cd doc; \
python make.py clean; \
python make.py html
python make.py spellcheck
Member:

Hmm, well I was thinking this would be a separate rule, but the way you've done it in ``doc`` is fine and may even be preferable. Just calling it out here in case another reviewer has a differing opinion.

17 changes: 15 additions & 2 deletions doc/make.py
@@ -224,8 +224,9 @@ def _sphinx_build(self, kind):
--------
>>> DocBuilder(num_jobs=4)._sphinx_build('html')
"""
if kind not in ('html', 'latex'):
raise ValueError('kind must be html or latex, not {}'.format(kind))
if kind not in ('html', 'latex', 'spelling'):
raise ValueError('kind must be html, latex or '
'spelling, not {}'.format(kind))

self._run_os('sphinx-build',
'-j{}'.format(self.num_jobs),
@@ -304,6 +305,18 @@ def zip_html(self):
'-q',
*fnames)

def spellcheck(self):
"""Spell check the documentation."""
self._sphinx_build('spelling')
output_location = os.path.join('build', 'spelling', 'output.txt')
with open(output_location) as output:
lines = output.readlines()
if lines:
raise SyntaxError(
'Found misspelled words.'
' Check pandas/doc/build/spelling/output.txt'
' for more details.')


def main():
cmds = [method for method in dir(DocBuilder) if not method.startswith('_')]
4 changes: 2 additions & 2 deletions doc/source/advanced.rst
@@ -342,7 +342,7 @@ As usual, **both sides** of the slicers are included as this is label indexing.
columns=micolumns).sort_index().sort_index(axis=1)
dfmi

Basic multi-index slicing using slices, lists, and labels.
Basic MultiIndex slicing using slices, lists, and labels.

.. ipython:: python

@@ -990,7 +990,7 @@ On the other hand, if the index is not monotonic, then both slice bounds must be
KeyError: 'Cannot get right slice bound for non-unique label: 3'

:meth:`Index.is_monotonic_increasing` and :meth:`Index.is_monotonic_decreasing` only check that
an index is weakly monotonic. To check for strict montonicity, you can combine one of those with
an index is weakly monotonic. To check for strict monotonicity, you can combine one of those with
:meth:`Index.is_unique`

.. ipython:: python
2 changes: 1 addition & 1 deletion doc/source/basics.rst
@@ -593,7 +593,7 @@ categorical columns:
frame = pd.DataFrame({'a': ['Yes', 'Yes', 'No', 'No'], 'b': range(4)})
frame.describe()

This behaviour can be controlled by providing a list of types as ``include``/``exclude``
This behavior can be controlled by providing a list of types as ``include``/``exclude``
arguments. The special value ``all`` can also be used:

.. ipython:: python
4 changes: 4 additions & 0 deletions doc/source/conf.py
@@ -73,10 +73,14 @@
'sphinx.ext.ifconfig',
'sphinx.ext.linkcode',
'nbsphinx',
'sphinxcontrib.spelling'
]

exclude_patterns = ['**.ipynb_checkpoints']

spelling_word_list_filename = ['spelling_wordlist.txt', 'names_wordlist.txt']
spelling_ignore_pypi_package_names = True

with open("index.rst") as f:
index_rst_lines = f.readlines()

19 changes: 19 additions & 0 deletions doc/source/contributing.rst
Original file line number Diff line number Diff line change
Expand Up @@ -436,6 +436,25 @@ the documentation are also built by Travis-CI. These docs are then hosted `here
<http://pandas-docs.github.io/pandas-docs-travis>`__, see also
the :ref:`Continuous Integration <contributing.ci>` section.

Spell checking documentation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When contributing documentation to **pandas**, it's good to check whether your work
contains any spelling errors. Sphinx provides an easy way to spell check documentation
and docstrings.

Running the spell check is easy. Just navigate to your local ``pandas/doc/`` directory and run::

python make.py spellcheck
Member:

Could we alternatively add a rule to the Makefile so that ``make spellcheck`` works?

Contributor Author:

I didn't think about this, my apologies. I'll add the spellcheck to the Makefile 👍


The spellcheck will take a few minutes to run (between 1 and 6 minutes). Sphinx will alert you
with warnings and misspelled words; these words are added to a file called
``output.txt``, which you can find in your local ``pandas/doc/build/spelling/`` directory.
Member:

Not saying this is a bad thing but any reason you chose to output to a text file instead of to STDOUT? All of the other checks I can think of off the top of my head would write to the latter

Contributor Author (@FabioRosado, May 27, 2018):

Unfortunately, I didn't choose this approach. The spelling library is coded this way (I checked the source code); it will always output a text file with all the misspelled words. To be honest, I would much rather work with STDOUT and avoid calling open on a file to check whether it is empty.


The Sphinx spelling extension uses an EN-US dictionary to correct words, which means that in
some cases you might need to add a word to this dictionary. You can do so by adding the word to
the bag-of-words file named ``spelling_wordlist.txt``, located in the ``pandas/doc/`` folder.

.. _contributing.code:

Contributing to the code base
6 changes: 3 additions & 3 deletions doc/source/contributing_docstring.rst
@@ -103,7 +103,7 @@ left before or after the docstring. The text starts in the next line after the
opening quotes. The closing quotes have their own line
(meaning that they are not at the end of the last sentence).

In rare occasions reST styles like bold text or itallics will be used in
In rare occasions reST styles like bold text or italics will be used in
docstrings, but it is common to have inline code, which is presented between
backticks. It is considered inline code:

@@ -706,7 +706,7 @@ than 5, to show the example with the default values. If doing the ``mean``, we
could use something like ``[1, 2, 3]``, so it is easy to see that the value
returned is the mean.

For more complex examples (groupping for example), avoid using data without
For more complex examples (grouping for example), avoid using data without
interpretation, like a matrix of random numbers with columns A, B, C, D...
And instead use a meaningful example, which makes it easier to understand the
concept. Unless required by the example, use names of animals, to keep examples
@@ -877,7 +877,7 @@ be tricky. Here are some attention points:
the actual error only the error name is sufficient.

* If there is a small part of the result that can vary (e.g. a hash in an object
represenation), you can use ``...`` to represent this part.
representation), you can use ``...`` to represent this part.

If you want to show that ``s.plot()`` returns a matplotlib AxesSubplot object,
this will fail the doctest ::
22 changes: 11 additions & 11 deletions doc/source/cookbook.rst
@@ -286,7 +286,7 @@ New Columns
df = pd.DataFrame(
{'AAA' : [1,1,1,2,2,2,3,3], 'BBB' : [2,1,3,4,5,1,2,3]}); df

Method 1 : idxmin() to get the index of the mins
Method 1 : idxmin() to get the index of the minimums

.. ipython:: python

@@ -307,7 +307,7 @@ MultiIndexing

The :ref:`multiindexing <advanced.hierarchical>` docs.

`Creating a multi-index from a labeled frame
`Creating a MultiIndex from a labeled frame
<http://stackoverflow.com/questions/14916358/reshaping-dataframes-in-pandas-based-on-column-labels>`__

.. ipython:: python
@@ -330,7 +330,7 @@ The :ref:`multindexing <advanced.hierarchical>` docs.
Arithmetic
**********

`Performing arithmetic with a multi-index that needs broadcasting
`Performing arithmetic with a MultiIndex that needs broadcasting
<http://stackoverflow.com/questions/19501510/divide-entire-pandas-multiindex-dataframe-by-dataframe-variable/19502176#19502176>`__

.. ipython:: python
@@ -342,7 +342,7 @@ Arithmetic
Slicing
*******

`Slicing a multi-index with xs
`Slicing a MultiIndex with xs
<http://stackoverflow.com/questions/12590131/how-to-slice-multindex-columns-in-pandas-dataframes>`__

.. ipython:: python
@@ -363,7 +363,7 @@ To take the cross section of the 1st level and 1st axis the index:

df.xs('six',level=1,axis=0)

`Slicing a multi-index with xs, method #2
`Slicing a MultiIndex with xs, method #2
<http://stackoverflow.com/questions/14964493/multiindex-based-indexing-in-pandas>`__

.. ipython:: python
@@ -386,13 +386,13 @@ To take the cross section of the 1st level and 1st axis the index:
df.loc[(All,'Math'),('Exams')]
df.loc[(All,'Math'),(All,'II')]

`Setting portions of a multi-index with xs
`Setting portions of a MultiIndex with xs
<http://stackoverflow.com/questions/19319432/pandas-selecting-a-lower-level-in-a-dataframe-to-do-a-ffill>`__

Sorting
*******

`Sort by specific column or an ordered list of columns, with a multi-index
`Sort by specific column or an ordered list of columns, with a MultiIndex
<http://stackoverflow.com/questions/14733871/mutli-index-sorting-in-pandas>`__

.. ipython:: python
@@ -664,7 +664,7 @@ The :ref:`Pivot <reshaping.pivot>` docs.
`Plot pandas DataFrame with year over year data
<http://stackoverflow.com/questions/30379789/plot-pandas-data-frame-with-year-over-year-data>`__

To create year and month crosstabulation:
To create year and month cross tabulation:

.. ipython:: python

@@ -677,7 +677,7 @@ To create year and month crosstabulation:
Apply
*****

`Rolling Apply to Organize - Turning embedded lists into a multi-index frame
`Rolling Apply to Organize - Turning embedded lists into a MultiIndex frame
<http://stackoverflow.com/questions/17349981/converting-pandas-dataframe-with-categorical-values-into-binary-values>`__

.. ipython:: python
@@ -1029,8 +1029,8 @@ Skip row between header and data
01.01.1990 05:00;21;11;12;13
"""

Option 1: pass rows explicitly to skiprows
""""""""""""""""""""""""""""""""""""""""""
Option 1: pass rows explicitly to ``skiprows``
""""""""""""""""""""""""""""""""""""""""""""""

.. ipython:: python

4 changes: 2 additions & 2 deletions doc/source/dsintro.rst
@@ -1014,7 +1014,7 @@ Deprecate Panel
Over the last few years, pandas has increased in both breadth and depth, with new features,
datatype support, and manipulation routines. As a result, supporting efficient indexing and functional
routines for ``Series``, ``DataFrame`` and ``Panel`` has contributed to an increasingly fragmented and
difficult-to-understand codebase.
difficult-to-understand code base.

The 3-D structure of a ``Panel`` is much less common for many types of data analysis,
than the 1-D of the ``Series`` or the 2-D of the ``DataFrame``. Going forward it makes sense for
@@ -1023,7 +1023,7 @@ pandas to focus on these areas exclusively.
Oftentimes, one can simply use a MultiIndex ``DataFrame`` for easily working with higher dimensional data.

In addition, the ``xarray`` package was built from the ground up, specifically in order to
support the multi-dimensional analysis that is one of ``Panel`` s main usecases.
support the multi-dimensional analysis that is one of ``Panel`` s main use cases.
`Here is a link to the xarray panel-transition documentation <http://xarray.pydata.org/en/stable/pandas.html#panel-transition>`__.

.. ipython:: python
6 changes: 3 additions & 3 deletions doc/source/ecosystem.rst
@@ -184,8 +184,8 @@ and metadata disseminated in
`SDMX <http://www.sdmx.org>`_ 2.1, an ISO-standard
widely used by institutions such as statistics offices, central banks,
and international organisations. pandaSDMX can expose datasets and related
structural metadata including dataflows, code-lists,
and datastructure definitions as pandas Series
structural metadata including data flows, code-lists,
and data structure definitions as pandas Series
or multi-indexed DataFrames.

`fredapi <https://github.com/mortada/fredapi>`__
@@ -260,7 +260,7 @@ Data validation
`Engarde <http://engarde.readthedocs.io/en/latest/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Engarde is a lightweight library used to explicitly state your assumptions abour your datasets
Engarde is a lightweight library used to explicitly state your assumptions about your datasets
and check that they're *actually* true.

.. _ecosystem.extensions:
4 changes: 2 additions & 2 deletions doc/source/enhancingperf.rst
@@ -32,7 +32,7 @@ Cython (Writing C extensions for pandas)
----------------------------------------

For many use cases writing pandas in pure Python and NumPy is sufficient. In some
computationally heavy applications however, it can be possible to achieve sizeable
computationally heavy applications however, it can be possible to achieve sizable
speed-ups by offloading work to `cython <http://cython.org/>`__.

This tutorial assumes you have refactored as much as possible in Python, for example
@@ -806,7 +806,7 @@ truncate any strings that are more than 60 characters in length. Second, we
can't pass ``object`` arrays to ``numexpr`` thus string comparisons must be
evaluated in Python space.

The upshot is that this *only* applies to object-dtype'd expressions. So, if
The upshot is that this *only* applies to object-dtype expressions. So, if
you have an expression--for example

.. ipython:: python
2 changes: 1 addition & 1 deletion doc/source/extending.rst
@@ -167,7 +167,7 @@ you can retain subclasses through ``pandas`` data manipulations.

There are 3 constructor properties to be defined:

- ``_constructor``: Used when a manipulation result has the same dimesions as the original.
- ``_constructor``: Used when a manipulation result has the same dimensions as the original.
- ``_constructor_sliced``: Used when a manipulation result has one dimension lower than the original, such as ``DataFrame`` single column slicing.
- ``_constructor_expanddim``: Used when a manipulation result has one higher dimension as the original, such as ``Series.to_frame()`` and ``DataFrame.to_panel()``.

4 changes: 2 additions & 2 deletions doc/source/groupby.rst
@@ -994,7 +994,7 @@ is only interesting over one column (here ``colname``), it may be filtered
Handling of (un)observed Categorical values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When using a ``Categorical`` grouper (as a single grouper, or as part of multipler groupers), the ``observed`` keyword
When using a ``Categorical`` grouper (as a single grouper, or as part of multiple groupers), the ``observed`` keyword
controls whether to return a cartesian product of all possible groupers values (``observed=False``) or only those
that are observed groupers (``observed=True``).

@@ -1010,7 +1010,7 @@ Show only the observed values:

pd.Series([1, 1, 1]).groupby(pd.Categorical(['a', 'a', 'a'], categories=['a', 'b']), observed=True).count()

The returned dtype of the grouped will *always* include *all* of the catergories that were grouped.
The returned dtype of the grouped will *always* include *all* of the categories that were grouped.

.. ipython:: python

2 changes: 1 addition & 1 deletion doc/source/indexing.rst
@@ -700,7 +700,7 @@ Current Behavior
Reindexing
~~~~~~~~~~

The idiomatic way to achieve selecting potentially not-found elmenents is via ``.reindex()``. See also the section on :ref:`reindexing <basics.reindexing>`.
The idiomatic way to achieve selecting potentially not-found elements is via ``.reindex()``. See also the section on :ref:`reindexing <basics.reindexing>`.

.. ipython:: python

4 changes: 2 additions & 2 deletions doc/source/install.rst
@@ -31,7 +31,7 @@ PyPI and through conda.
Starting **January 1, 2019**, all releases will be Python 3 only.

If there are people interested in continued support for Python 2.7 past December
31, 2018 (either backporting bugfixes or funding) please reach out to the
31, 2018 (either backporting bug fixes or funding) please reach out to the
maintainers on the issue tracker.

For more information, see the `Python 3 statement`_ and the `Porting to Python 3 guide`_.
@@ -199,7 +199,7 @@ Running the test suite
----------------------

pandas is equipped with an exhaustive set of unit tests, covering about 97% of
the codebase as of this writing. To run it on your machine to verify that
the code base as of this writing. To run it on your machine to verify that
everything is working (and that you have all of the dependencies, soft and hard,
installed), make sure you have `pytest
<http://doc.pytest.org/en/latest/>`__ and run:
2 changes: 1 addition & 1 deletion doc/source/internals.rst
@@ -41,7 +41,7 @@ There are functions that make the creation of a regular index easy:
- ``date_range``: fixed frequency date range generated from a time rule or
DateOffset. An ndarray of Python datetime objects
- ``period_range``: fixed frequency date range generated from a time rule or
DateOffset. An ndarray of ``Period`` objects, representing Timespans
DateOffset. An ndarray of ``Period`` objects, representing timespans

The motivation for having an ``Index`` class in the first place was to enable
different implementations of indexing. This means that it's possible for you,