Commit 6925b8e

mroeschke and lithomas1 authored and committed
Backport PR #58413: DEPS: Unpin docutils
1 parent 74312f3 commit 6925b8e

File tree

8 files changed
+137 -158 lines changed

doc/source/user_guide/basics.rst (+3 -4)

@@ -160,11 +160,10 @@ Here is a sample (using 100 column x 100,000 row ``DataFrames``):
 .. csv-table::
    :header: "Operation", "0.11.0 (ms)", "Prior Version (ms)", "Ratio to Prior"
    :widths: 25, 25, 25, 25
-   :delim: ;

-   ``df1 > df2``; 13.32; 125.35; 0.1063
-   ``df1 * df2``; 21.71; 36.63; 0.5928
-   ``df1 + df2``; 22.04; 36.50; 0.6039
+   ``df1 > df2``, 13.32, 125.35, 0.1063
+   ``df1 * df2``, 21.71, 36.63, 0.5928
+   ``df1 + df2``, 22.04, 36.50, 0.6039

 You are highly encouraged to install both libraries. See the section
 :ref:`Recommended Dependencies <install.recommended_dependencies>` for more installation info.

doc/source/user_guide/gotchas.rst (+2 -13)

@@ -315,19 +315,8 @@ Why not make NumPy like R?

 Many people have suggested that NumPy should simply emulate the ``NA`` support
 present in the more domain-specific statistical programming language `R
-<https://www.r-project.org/>`__. Part of the reason is the NumPy type hierarchy:
-
-.. csv-table::
-   :header: "Typeclass","Dtypes"
-   :widths: 30,70
-   :delim: |
-
-   ``numpy.floating`` | ``float16, float32, float64, float128``
-   ``numpy.integer`` | ``int8, int16, int32, int64``
-   ``numpy.unsignedinteger`` | ``uint8, uint16, uint32, uint64``
-   ``numpy.object_`` | ``object_``
-   ``numpy.bool_`` | ``bool_``
-   ``numpy.character`` | ``bytes_, str_``
+<https://www.r-project.org/>`__. Part of the reason is the
+`NumPy type hierarchy <https://numpy.org/doc/stable/user/basics.types.html>`__.

 The R language, by contrast, only has a handful of built-in data types:
 ``integer``, ``numeric`` (floating-point), ``character``, and
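The hunk above replaces the inlined dtype table with a link to the NumPy type hierarchy, which is central to the ``NA`` discussion: a fixed-width integer dtype has no spare representation for a missing-value marker, while a float dtype can use ``NaN``. The stdlib ``array`` module exhibits the same constraint; this is an analogy only (NumPy itself is not used here, and the data is made up for illustration):

```python
from array import array

# A typed integer array (like a NumPy int64 array) has no slot for a
# missing-value marker: every element must be an integer.
ints = array("q", [1, 2, 3])
try:
    ints.append(None)  # no NA support in a fixed integer type
    rejected = False
except TypeError:
    rejected = True

# A float array can at least hold NaN, which is why pandas historically
# upcast integer columns to float when NA values appeared.
floats = array("d", [1.0, 2.0])
floats.append(float("nan"))
```

The ``TypeError`` here plays the role of NumPy's inability to store an NA sentinel in an integer dtype, which is the gap the R comparison is about.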

doc/source/user_guide/groupby.rst (+37 -40)

@@ -509,29 +509,28 @@ listed below, those with a ``*`` do *not* have an efficient, GroupBy-specific, i
 .. csv-table::
    :header: "Method", "Description"
    :widths: 20, 80
-   :delim: ;

-   :meth:`~.DataFrameGroupBy.any`;Compute whether any of the values in the groups are truthy
-   :meth:`~.DataFrameGroupBy.all`;Compute whether all of the values in the groups are truthy
-   :meth:`~.DataFrameGroupBy.count`;Compute the number of non-NA values in the groups
-   :meth:`~.DataFrameGroupBy.cov` * ;Compute the covariance of the groups
-   :meth:`~.DataFrameGroupBy.first`;Compute the first occurring value in each group
-   :meth:`~.DataFrameGroupBy.idxmax`;Compute the index of the maximum value in each group
-   :meth:`~.DataFrameGroupBy.idxmin`;Compute the index of the minimum value in each group
-   :meth:`~.DataFrameGroupBy.last`;Compute the last occurring value in each group
-   :meth:`~.DataFrameGroupBy.max`;Compute the maximum value in each group
-   :meth:`~.DataFrameGroupBy.mean`;Compute the mean of each group
-   :meth:`~.DataFrameGroupBy.median`;Compute the median of each group
-   :meth:`~.DataFrameGroupBy.min`;Compute the minimum value in each group
-   :meth:`~.DataFrameGroupBy.nunique`;Compute the number of unique values in each group
-   :meth:`~.DataFrameGroupBy.prod`;Compute the product of the values in each group
-   :meth:`~.DataFrameGroupBy.quantile`;Compute a given quantile of the values in each group
-   :meth:`~.DataFrameGroupBy.sem`;Compute the standard error of the mean of the values in each group
-   :meth:`~.DataFrameGroupBy.size`;Compute the number of values in each group
-   :meth:`~.DataFrameGroupBy.skew` *;Compute the skew of the values in each group
-   :meth:`~.DataFrameGroupBy.std`;Compute the standard deviation of the values in each group
-   :meth:`~.DataFrameGroupBy.sum`;Compute the sum of the values in each group
-   :meth:`~.DataFrameGroupBy.var`;Compute the variance of the values in each group
+   :meth:`~.DataFrameGroupBy.any`,Compute whether any of the values in the groups are truthy
+   :meth:`~.DataFrameGroupBy.all`,Compute whether all of the values in the groups are truthy
+   :meth:`~.DataFrameGroupBy.count`,Compute the number of non-NA values in the groups
+   :meth:`~.DataFrameGroupBy.cov` * ,Compute the covariance of the groups
+   :meth:`~.DataFrameGroupBy.first`,Compute the first occurring value in each group
+   :meth:`~.DataFrameGroupBy.idxmax`,Compute the index of the maximum value in each group
+   :meth:`~.DataFrameGroupBy.idxmin`,Compute the index of the minimum value in each group
+   :meth:`~.DataFrameGroupBy.last`,Compute the last occurring value in each group
+   :meth:`~.DataFrameGroupBy.max`,Compute the maximum value in each group
+   :meth:`~.DataFrameGroupBy.mean`,Compute the mean of each group
+   :meth:`~.DataFrameGroupBy.median`,Compute the median of each group
+   :meth:`~.DataFrameGroupBy.min`,Compute the minimum value in each group
+   :meth:`~.DataFrameGroupBy.nunique`,Compute the number of unique values in each group
+   :meth:`~.DataFrameGroupBy.prod`,Compute the product of the values in each group
+   :meth:`~.DataFrameGroupBy.quantile`,Compute a given quantile of the values in each group
+   :meth:`~.DataFrameGroupBy.sem`,Compute the standard error of the mean of the values in each group
+   :meth:`~.DataFrameGroupBy.size`,Compute the number of values in each group
+   :meth:`~.DataFrameGroupBy.skew` * ,Compute the skew of the values in each group
+   :meth:`~.DataFrameGroupBy.std`,Compute the standard deviation of the values in each group
+   :meth:`~.DataFrameGroupBy.sum`,Compute the sum of the values in each group
+   :meth:`~.DataFrameGroupBy.var`,Compute the variance of the values in each group

 Some examples:

@@ -835,19 +834,18 @@ The following methods on GroupBy act as transformations.
 .. csv-table::
    :header: "Method", "Description"
    :widths: 20, 80
-   :delim: ;

-   :meth:`~.DataFrameGroupBy.bfill`;Back fill NA values within each group
-   :meth:`~.DataFrameGroupBy.cumcount`;Compute the cumulative count within each group
-   :meth:`~.DataFrameGroupBy.cummax`;Compute the cumulative max within each group
-   :meth:`~.DataFrameGroupBy.cummin`;Compute the cumulative min within each group
-   :meth:`~.DataFrameGroupBy.cumprod`;Compute the cumulative product within each group
-   :meth:`~.DataFrameGroupBy.cumsum`;Compute the cumulative sum within each group
-   :meth:`~.DataFrameGroupBy.diff`;Compute the difference between adjacent values within each group
-   :meth:`~.DataFrameGroupBy.ffill`;Forward fill NA values within each group
-   :meth:`~.DataFrameGroupBy.pct_change`;Compute the percent change between adjacent values within each group
-   :meth:`~.DataFrameGroupBy.rank`;Compute the rank of each value within each group
-   :meth:`~.DataFrameGroupBy.shift`;Shift values up or down within each group
+   :meth:`~.DataFrameGroupBy.bfill`,Back fill NA values within each group
+   :meth:`~.DataFrameGroupBy.cumcount`,Compute the cumulative count within each group
+   :meth:`~.DataFrameGroupBy.cummax`,Compute the cumulative max within each group
+   :meth:`~.DataFrameGroupBy.cummin`,Compute the cumulative min within each group
+   :meth:`~.DataFrameGroupBy.cumprod`,Compute the cumulative product within each group
+   :meth:`~.DataFrameGroupBy.cumsum`,Compute the cumulative sum within each group
+   :meth:`~.DataFrameGroupBy.diff`,Compute the difference between adjacent values within each group
+   :meth:`~.DataFrameGroupBy.ffill`,Forward fill NA values within each group
+   :meth:`~.DataFrameGroupBy.pct_change`,Compute the percent change between adjacent values within each group
+   :meth:`~.DataFrameGroupBy.rank`,Compute the rank of each value within each group
+   :meth:`~.DataFrameGroupBy.shift`,Shift values up or down within each group

 In addition, passing any built-in aggregation method as a string to
 :meth:`~.DataFrameGroupBy.transform` (see the next section) will broadcast the result

@@ -1095,11 +1093,10 @@ efficient, GroupBy-specific, implementation.
 .. csv-table::
    :header: "Method", "Description"
    :widths: 20, 80
-   :delim: ;

-   :meth:`~.DataFrameGroupBy.head`;Select the top row(s) of each group
-   :meth:`~.DataFrameGroupBy.nth`;Select the nth row(s) of each group
-   :meth:`~.DataFrameGroupBy.tail`;Select the bottom row(s) of each group
+   :meth:`~.DataFrameGroupBy.head`,Select the top row(s) of each group
+   :meth:`~.DataFrameGroupBy.nth`,Select the nth row(s) of each group
+   :meth:`~.DataFrameGroupBy.tail`,Select the bottom row(s) of each group

 Users can also use transformations along with Boolean indexing to construct complex
 filtrations within groups. For example, suppose we are given groups of products and
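The aggregation methods in the first table reduce each group to a single value per group. A minimal stdlib sketch of that split-apply-combine pattern, with plain dicts standing in for pandas objects and toy data assumed for illustration (not pandas' implementation):

```python
from statistics import mean

# Toy (key, value) records standing in for a DataFrame column grouped
# by another column -- assumed data, not from the pandas docs.
rows = [("a", 1), ("b", 4), ("a", 3), ("b", 6), ("a", 2)]

# Split: collect the values belonging to each group key.
groups = {}
for key, value in rows:
    groups.setdefault(key, []).append(value)

# Apply/combine: one reduced value per group, mirroring .sum(), .mean(), .size().
sums = {k: sum(v) for k, v in groups.items()}
means = {k: mean(v) for k, v in groups.items()}
sizes = {k: len(v) for k, v in groups.items()}
```

The transformation methods in the second table differ in shape: they return one value per input row (e.g. a cumulative sum within each group) rather than one value per group.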

doc/source/user_guide/indexing.rst (+9 -9)

@@ -101,13 +101,14 @@ well). Any of the axes accessors may be the null slice ``:``. Axes left out of
 the specification are assumed to be ``:``, e.g. ``p.loc['a']`` is equivalent to
 ``p.loc['a', :]``.

-.. csv-table::
-   :header: "Object Type", "Indexers"
-   :widths: 30, 50
-   :delim: ;

-   Series; ``s.loc[indexer]``
-   DataFrame; ``df.loc[row_indexer,column_indexer]``
+.. ipython:: python
+
+   ser = pd.Series(range(5), index=list("abcde"))
+   ser.loc[["a", "c", "e"]]
+
+   df = pd.DataFrame(np.arange(25).reshape(5, 5), index=list("abcde"), columns=list("abcde"))
+   df.loc[["a", "c", "e"], ["b", "d"]]

 .. _indexing.basics:

@@ -123,10 +124,9 @@ indexing pandas objects with ``[]``:
 .. csv-table::
    :header: "Object Type", "Selection", "Return Value Type"
    :widths: 30, 30, 60
-   :delim: ;

-   Series; ``series[label]``; scalar value
-   DataFrame; ``frame[colname]``; ``Series`` corresponding to colname
+   Series, ``series[label]``, scalar value
+   DataFrame, ``frame[colname]``, ``Series`` corresponding to colname

 Here we construct a simple time series data set to use for illustrating the
 indexing functionality:
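The new ``ipython`` block in this diff selects rows by label with ``.loc``. As a plain-Python analogy (a dict keyed by labels; illustrative only, not how pandas implements ``.loc``):

```python
# Labels "a".."e" mapped to 0..4, mimicking pd.Series(range(5), index=list("abcde")).
ser = dict(zip("abcde", range(5)))

# Dict analogue of ser.loc[["a", "c", "e"]]: pick values by a list of labels.
picked = {label: ser[label] for label in ["a", "c", "e"]}

# As with .loc, asking for a label that does not exist raises KeyError.
```

The analogy is deliberately loose: ``.loc`` additionally supports slices, boolean masks, and a second, column axis, none of which a dict lookup models.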

doc/source/user_guide/io.rst (+33 -36)

@@ -16,27 +16,26 @@ The pandas I/O API is a set of top level ``reader`` functions accessed like
 .. csv-table::
    :header: "Format Type", "Data Description", "Reader", "Writer"
    :widths: 30, 100, 60, 60
-   :delim: ;

-   text;`CSV <https://en.wikipedia.org/wiki/Comma-separated_values>`__;:ref:`read_csv<io.read_csv_table>`;:ref:`to_csv<io.store_in_csv>`
-   text;Fixed-Width Text File;:ref:`read_fwf<io.fwf_reader>`
-   text;`JSON <https://www.json.org/>`__;:ref:`read_json<io.json_reader>`;:ref:`to_json<io.json_writer>`
-   text;`HTML <https://en.wikipedia.org/wiki/HTML>`__;:ref:`read_html<io.read_html>`;:ref:`to_html<io.html>`
-   text;`LaTeX <https://en.wikipedia.org/wiki/LaTeX>`__;;:ref:`Styler.to_latex<io.latex>`
-   text;`XML <https://www.w3.org/standards/xml/core>`__;:ref:`read_xml<io.read_xml>`;:ref:`to_xml<io.xml>`
-   text; Local clipboard;:ref:`read_clipboard<io.clipboard>`;:ref:`to_clipboard<io.clipboard>`
-   binary;`MS Excel <https://en.wikipedia.org/wiki/Microsoft_Excel>`__;:ref:`read_excel<io.excel_reader>`;:ref:`to_excel<io.excel_writer>`
-   binary;`OpenDocument <http://opendocumentformat.org>`__;:ref:`read_excel<io.ods>`;
-   binary;`HDF5 Format <https://support.hdfgroup.org/HDF5/whatishdf5.html>`__;:ref:`read_hdf<io.hdf5>`;:ref:`to_hdf<io.hdf5>`
-   binary;`Feather Format <https://github.com/wesm/feather>`__;:ref:`read_feather<io.feather>`;:ref:`to_feather<io.feather>`
-   binary;`Parquet Format <https://parquet.apache.org/>`__;:ref:`read_parquet<io.parquet>`;:ref:`to_parquet<io.parquet>`
-   binary;`ORC Format <https://orc.apache.org/>`__;:ref:`read_orc<io.orc>`;:ref:`to_orc<io.orc>`
-   binary;`Stata <https://en.wikipedia.org/wiki/Stata>`__;:ref:`read_stata<io.stata_reader>`;:ref:`to_stata<io.stata_writer>`
-   binary;`SAS <https://en.wikipedia.org/wiki/SAS_(software)>`__;:ref:`read_sas<io.sas_reader>`;
-   binary;`SPSS <https://en.wikipedia.org/wiki/SPSS>`__;:ref:`read_spss<io.spss_reader>`;
-   binary;`Python Pickle Format <https://docs.python.org/3/library/pickle.html>`__;:ref:`read_pickle<io.pickle>`;:ref:`to_pickle<io.pickle>`
-   SQL;`SQL <https://en.wikipedia.org/wiki/SQL>`__;:ref:`read_sql<io.sql>`;:ref:`to_sql<io.sql>`
-   SQL;`Google BigQuery <https://en.wikipedia.org/wiki/BigQuery>`__;:ref:`read_gbq<io.bigquery>`;:ref:`to_gbq<io.bigquery>`
+   text,`CSV <https://en.wikipedia.org/wiki/Comma-separated_values>`__, :ref:`read_csv<io.read_csv_table>`, :ref:`to_csv<io.store_in_csv>`
+   text,Fixed-Width Text File, :ref:`read_fwf<io.fwf_reader>` , NA
+   text,`JSON <https://www.json.org/>`__, :ref:`read_json<io.json_reader>`, :ref:`to_json<io.json_writer>`
+   text,`HTML <https://en.wikipedia.org/wiki/HTML>`__, :ref:`read_html<io.read_html>`, :ref:`to_html<io.html>`
+   text,`LaTeX <https://en.wikipedia.org/wiki/LaTeX>`__, :ref:`Styler.to_latex<io.latex>` , NA
+   text,`XML <https://www.w3.org/standards/xml/core>`__, :ref:`read_xml<io.read_xml>`, :ref:`to_xml<io.xml>`
+   text, Local clipboard, :ref:`read_clipboard<io.clipboard>`, :ref:`to_clipboard<io.clipboard>`
+   binary,`MS Excel <https://en.wikipedia.org/wiki/Microsoft_Excel>`__ , :ref:`read_excel<io.excel_reader>`, :ref:`to_excel<io.excel_writer>`
+   binary,`OpenDocument <http://opendocumentformat.org>`__, :ref:`read_excel<io.ods>`, NA
+   binary,`HDF5 Format <https://support.hdfgroup.org/HDF5/whatishdf5.html>`__, :ref:`read_hdf<io.hdf5>`, :ref:`to_hdf<io.hdf5>`
+   binary,`Feather Format <https://github.com/wesm/feather>`__, :ref:`read_feather<io.feather>`, :ref:`to_feather<io.feather>`
+   binary,`Parquet Format <https://parquet.apache.org/>`__, :ref:`read_parquet<io.parquet>`, :ref:`to_parquet<io.parquet>`
+   binary,`ORC Format <https://orc.apache.org/>`__, :ref:`read_orc<io.orc>`, :ref:`to_orc<io.orc>`
+   binary,`Stata <https://en.wikipedia.org/wiki/Stata>`__, :ref:`read_stata<io.stata_reader>`, :ref:`to_stata<io.stata_writer>`
+   binary,`SAS <https://en.wikipedia.org/wiki/SAS_(software)>`__, :ref:`read_sas<io.sas_reader>` , NA
+   binary,`SPSS <https://en.wikipedia.org/wiki/SPSS>`__, :ref:`read_spss<io.spss_reader>` , NA
+   binary,`Python Pickle Format <https://docs.python.org/3/library/pickle.html>`__, :ref:`read_pickle<io.pickle>`, :ref:`to_pickle<io.pickle>`
+   SQL,`SQL <https://en.wikipedia.org/wiki/SQL>`__, :ref:`read_sql<io.sql>`, :ref:`to_sql<io.sql>`
+   SQL,`Google BigQuery <https://en.wikipedia.org/wiki/BigQuery>`__, :ref:`read_gbq<io.bigquery>`, :ref:`to_gbq<io.bigquery>`

 :ref:`Here <io.perf>` is an informal performance comparison for some of these IO methods.
@@ -1838,14 +1837,13 @@ with optional parameters:

 .. csv-table::
    :widths: 20, 150
-   :delim: ;

-   ``split``; dict like {index -> [index], columns -> [columns], data -> [values]}
-   ``records``; list like [{column -> value}, ... , {column -> value}]
-   ``index``; dict like {index -> {column -> value}}
-   ``columns``; dict like {column -> {index -> value}}
-   ``values``; just the values array
-   ``table``; adhering to the JSON `Table Schema`_
+   ``split``, dict like {index -> [index]; columns -> [columns]; data -> [values]}
+   ``records``, list like [{column -> value}; ... ]
+   ``index``, dict like {index -> {column -> value}}
+   ``columns``, dict like {column -> {index -> value}}
+   ``values``, just the values array
+   ``table``, adhering to the JSON `Table Schema`_

 * ``date_format`` : string, type of date conversion, 'epoch' for timestamp, 'iso' for ISO8601.
 * ``double_precision`` : The number of decimal places to use when encoding floating point values, default 10.
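The ``orient`` descriptions in the table above correspond to concrete JSON layouts. A stdlib sketch of the ``split``, ``records``, and ``index`` shapes on a toy 2x2 table (illustrative only; not pandas' ``to_json`` implementation):

```python
import json

index = ["r1", "r2"]
columns = ["a", "b"]
data = [[1, 2], [3, 4]]

# 'split': dict like {index -> [index]; columns -> [columns]; data -> [values]}
split = {"index": index, "columns": columns, "data": data}

# 'records': list like [{column -> value}; ...] -- one dict per row,
# note the row labels are discarded.
records = [dict(zip(columns, row)) for row in data]

# 'index': dict like {index -> {column -> value}}
by_index = {idx: dict(zip(columns, row)) for idx, row in zip(index, data)}

encoded = json.dumps(records)  # each shape is plain JSON once serialized
```

Seeing the shapes side by side makes the trade-off clear: ``split`` is the most compact round-trippable form, ``records`` is the friendliest for other tools but loses the index.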
@@ -2033,14 +2031,13 @@ is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``

 .. csv-table::
    :widths: 20, 150
-   :delim: ;

-   ``split``; dict like {index -> [index], columns -> [columns], data -> [values]}
-   ``records``; list like [{column -> value}, ... , {column -> value}]
-   ``index``; dict like {index -> {column -> value}}
-   ``columns``; dict like {column -> {index -> value}}
-   ``values``; just the values array
-   ``table``; adhering to the JSON `Table Schema`_
+   ``split``, dict like {index -> [index]; columns -> [columns]; data -> [values]}
+   ``records``, list like [{column -> value} ...]
+   ``index``, dict like {index -> {column -> value}}
+   ``columns``, dict like {column -> {index -> value}}
+   ``values``, just the values array
+   ``table``, adhering to the JSON `Table Schema`_

 * ``dtype`` : if True, infer dtypes, if a dict of column to dtype, then use those, if ``False``, then don't infer dtypes at all, default is True, apply only to the data.
