
Commit 4117a9c
Revert spelling changes
1 parent fc9a960

21 files changed: +46 -41 lines

doc/source/10min.rst (+1 -1)

@@ -663,7 +663,7 @@ Convert the raw grades to a categorical data type.
 df["grade"]
 
 Rename the categories to more meaningful names (assigning to
-``Series.cat.categories`` is in place!).
+``Series.cat.categories`` is inplace!).
 
 .. ipython:: python
 
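Aside: a minimal sketch (not from the diff above) of the in-place rename the touched sentence describes, with made-up grade labels. Note that later pandas releases deprecated this direct assignment in favor of ``Series.cat.rename_categories``:

    import pandas as pd

    df = pd.DataFrame({"raw_grade": ["a", "b", "b", "a", "e"]})
    df["grade"] = df["raw_grade"].astype("category")

    # Assigning new labels mutates df["grade"] itself; no reassignment needed.
    df["grade"].cat.categories = ["very good", "good", "very bad"]
    print(df["grade"])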

doc/source/advanced.rst (+4 -4)

@@ -182,7 +182,7 @@ For example:
 df[['foo','qux']].columns # sliced
 
 This is done to avoid a recomputation of the levels in order to make slicing
-highly efficient. If you want to see only the used levels, you can use the
+highly performant. If you want to see only the used levels, you can use the
 :func:`MultiIndex.get_level_values` method.
 
 .. ipython:: python
@@ -342,7 +342,7 @@ As usual, **both sides** of the slicers are included as this is label indexing.
 columns=micolumns).sort_index().sort_index(axis=1)
 dfmi
 
-Basic multi-index slicing using slices, lists, and labels.
+Basic MultiIndex slicing using slices, lists, and labels.
 
 .. ipython:: python
 
@@ -559,7 +559,7 @@ return a copy of the data rather than a view:
 
 .. _advanced.unsorted:
 
-Furthermore if you try to index something that is not fully lex-sorted, this can raise:
+Furthermore if you try to index something that is not fully lexsorted, this can raise:
 
 .. code-block:: ipython
 
@@ -593,7 +593,7 @@ Take Methods
 
 Similar to NumPy ndarrays, pandas Index, Series, and DataFrame also provides
 the ``take`` method that retrieves elements along a given axis at the given
-indexes. The given indices must be either a list or an ndarray of integer
+indices. The given indices must be either a list or an ndarray of integer
 index positions. ``take`` will also accept negative integers as relative positions to the end of the object.
 
 .. ipython:: python
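Aside: a small sketch (not from the diff above) of the ``take`` behavior described in the last hunk, on made-up data; positions may be a list or ndarray of integers, and negative values count back from the end:

    import numpy as np
    import pandas as pd

    ser = pd.Series(np.arange(5) * 10, index=list("abcde"))

    print(ser.take([0, 2]))     # elements at positions 0 and 2
    print(ser.take([-1, -3]))   # last and third-from-last elements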

doc/source/categorical.rst (+1 -1)

@@ -847,7 +847,7 @@ the categories being combined.
 
 By default, the resulting categories will be ordered as
 they appear in the data. If you want the categories to
-be lex-sorted, use ``sort_categories=True`` argument.
+be lexsorted, use ``sort_categories=True`` argument.
 
 .. ipython:: python
 
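Aside: the ``sort_categories=True`` argument mentioned in this hunk belongs to ``union_categoricals``; a minimal sketch (not from the diff) on made-up values:

    import pandas as pd
    from pandas.api.types import union_categoricals

    a = pd.Categorical(["b", "c"])
    b = pd.Categorical(["a", "b"])

    # Default: categories kept in order of appearance -> ['b', 'c', 'a']
    print(union_categoricals([a, b]).categories)

    # sort_categories=True: the resulting categories come back lexsorted -> ['a', 'b', 'c']
    print(union_categoricals([a, b], sort_categories=True).categories)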

doc/source/cookbook.rst (+8 -8)

@@ -307,7 +307,7 @@ MultiIndexing
 
 The :ref:`multindexing <advanced.hierarchical>` docs.
 
-`Creating a multi-index from a labeled frame
+`Creating a MultiIndex from a labeled frame
 <http://stackoverflow.com/questions/14916358/reshaping-dataframes-in-pandas-based-on-column-labels>`__
 
 .. ipython:: python
@@ -330,7 +330,7 @@ The :ref:`multindexing <advanced.hierarchical>` docs.
 Arithmetic
 **********
 
-`Performing arithmetic with a multi-index that needs broadcasting
+`Performing arithmetic with a MultiIndex that needs broadcasting
 <http://stackoverflow.com/questions/19501510/divide-entire-pandas-multiindex-dataframe-by-dataframe-variable/19502176#19502176>`__
 
 .. ipython:: python
@@ -342,7 +342,7 @@ Arithmetic
 Slicing
 *******
 
-`Slicing a multi-index with xs
+`Slicing a MultiIndex with xs
 <http://stackoverflow.com/questions/12590131/how-to-slice-multindex-columns-in-pandas-dataframes>`__
 
 .. ipython:: python
@@ -363,7 +363,7 @@ To take the cross section of the 1st level and 1st axis the index:
 
 df.xs('six',level=1,axis=0)
 
-`Slicing a multi-index with xs, method #2
+`Slicing a MultiIndex with xs, method #2
 <http://stackoverflow.com/questions/14964493/multiindex-based-indexing-in-pandas>`__
 
 .. ipython:: python
@@ -386,13 +386,13 @@ To take the cross section of the 1st level and 1st axis the index:
 df.loc[(All,'Math'),('Exams')]
 df.loc[(All,'Math'),(All,'II')]
 
-`Setting portions of a multi-index with xs
+`Setting portions of a MultiIndex with xs
 <http://stackoverflow.com/questions/19319432/pandas-selecting-a-lower-level-in-a-dataframe-to-do-a-ffill>`__
 
 Sorting
 *******
 
-`Sort by specific column or an ordered list of columns, with a multi-index
+`Sort by specific column or an ordered list of columns, with a MultiIndex
 <http://stackoverflow.com/questions/14733871/mutli-index-sorting-in-pandas>`__
 
 .. ipython:: python
@@ -677,7 +677,7 @@ To create year and month cross tabulation:
 Apply
 *****
 
-`Rolling Apply to Organize - Turning embedded lists into a multi-index frame
+`Rolling Apply to Organize - Turning embedded lists into a MultiIndex frame
 <http://stackoverflow.com/questions/17349981/converting-pandas-dataframe-with-categorical-values-into-binary-values>`__
 
 .. ipython:: python
@@ -1030,7 +1030,7 @@ Skip row between header and data
 """
 
 Option 1: pass rows explicitly to skip rows
-""""""""""""""""""""""""""""""""""""""""""
+"""""""""""""""""""""""""""""""""""""""""""
 
 .. ipython:: python
 
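Aside: several of the links renamed in this file cover cross-sections of a MultiIndex with ``xs``; a minimal sketch (not from the diff) with invented labels:

    import pandas as pd

    idx = pd.MultiIndex.from_product([["one", "two"], ["x", "y"]],
                                     names=["first", "second"])
    df = pd.DataFrame({"val": range(4)}, index=idx)

    # Cross-section on the second level: keep rows whose 'second' label is 'y'.
    print(df.xs("y", level="second"))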

doc/source/groupby.rst (+1 -1)

@@ -1332,7 +1332,7 @@ In order to resample to work on indexes that are non-datetimelike, the following
 
 In the following examples, **df.index // 5** returns a binary array which is used to determine what gets selected for the groupby operation.
 
-.. note:: The below example shows how we can down-sample by consolidation of samples into fewer samples. Here by using **df.index // 5**, we are aggregating the samples in bins. By applying **std()** function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
+.. note:: The below example shows how we can downsample by consolidation of samples into fewer samples. Here by using **df.index // 5**, we are aggregating the samples in bins. By applying **std()** function, we aggregate the information contained in many samples into a small subset of values which is their standard deviation thereby reducing the number of samples.
 
 .. ipython:: python
 
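Aside: a sketch (not from the diff) of the ``df.index // 5`` downsampling pattern the note describes, on random illustrative data:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"value": np.random.randn(20)})

    # df.index // 5 maps rows 0-4 to bin 0, rows 5-9 to bin 1, and so on;
    # grouping on it and taking std() collapses each bin of five samples
    # into a single value.
    print(df.groupby(df.index // 5).std())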

doc/source/io.rst (+11 -11)

@@ -116,7 +116,7 @@ header : int or list of ints, default ``'infer'``
 existing names.
 
 The header can be a list of ints that specify row locations
-for a multi-index on the columns e.g. ``[0,1,3]``. Intervening rows
+for a MultiIndex on the columns e.g. ``[0,1,3]``. Intervening rows
 that are not specified will be skipped (e.g. 2 in this example is
 skipped). Note that this parameter ignores commented lines and empty
 lines if ``skip_blank_lines=True``, so header=0 denotes the first
@@ -953,7 +953,7 @@ datetime strings are all formatted the same way, you may get a large speed
 up by setting ``infer_datetime_format=True``. If set, pandas will attempt
 to guess the format of your datetime strings, and then use a faster means
 of parsing the strings. 5-10x parsing speeds have been observed. pandas
-will fall-back to the usual parsing if either the format cannot be guessed
+will fallback to the usual parsing if either the format cannot be guessed
 or the format that was guessed cannot properly parse the entire column
 of strings. So in general, ``infer_datetime_format`` should not have any
 negative consequences if enabled.
@@ -1644,7 +1644,7 @@ over the string representation of the object. All arguments are optional:
 argument and returns a formatted string; to be applied to floats in the
 ``DataFrame``.
 - ``sparsify`` default True, set to False for a ``DataFrame`` with a hierarchical
-index to print every multi-index key at each row.
+index to print every MultiIndex key at each row.
 - ``index_names`` default True, will print the names of the indices
 - ``index`` default True, will print the index (ie, row labels)
 - ``header`` default True, will print the column labels
@@ -1812,7 +1812,7 @@ Writing to a file, with a date index and a date column:
 dfj2.to_json('test.json')
 open('test.json').read()
 
-Fall-back Behavior
+Fallback Behavior
 +++++++++++++++++
 
 If the JSON serializer cannot handle the container contents directly it will
@@ -2237,7 +2237,7 @@ A few notes on the generated table schema:
 name is ``values``
 + For ``DataFrames``, the stringified version of the column name is used
 + For ``Index`` (not ``MultiIndex``), ``index.name`` is used, with a
-fall-back to ``index`` if that is None.
+fallback to ``index`` if that is None.
 + For ``MultiIndex``, ``mi.names`` is used. If any level has no name,
 then ``level_<i>`` is used.
 
@@ -2246,7 +2246,7 @@ A few notes on the generated table schema:
 
 ``read_json`` also accepts ``orient='table'`` as an argument. This allows for
 the preservation of metadata such as dtypes and index names in a
-round-trip manner.
+round-trippable manner.
 
 .. ipython:: python
 
@@ -3467,7 +3467,7 @@ Fixed Format
 ''''''''''''
 
 The examples above show storing using ``put``, which write the HDF5 to ``PyTables`` in a fixed array format, called
-the ``fixed`` format. These types of stores are **not** able to be appended once written (though you can simply
+the ``fixed`` format. These types of stores are **not** appendable once written (though you can simply
 remove them and rewrite). Nor are they **queryable**; they must be
 retrieved in their entirety. They also do not support dataframes with non-unique column names.
 The ``fixed`` format stores offer very fast writing and slightly faster reading than ``table`` stores.
@@ -3820,7 +3820,7 @@ indexed dimension as the ``where``.
 
 .. note::
 
-Indexes are auto-magically created on the indexables
+Indexes are automagically created on the indexables
 and any data columns you specify. This behavior can be turned off by passing
 ``index=False`` to ``append``.
 
@@ -4629,7 +4629,7 @@ included in Python's standard library by default.
 You can find an overview of supported drivers for each SQL dialect in the
 `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__.
 
-If SQLAlchemy is not installed, a fall-back is only provided for sqlite (and
+If SQLAlchemy is not installed, a fallback is only provided for sqlite (and
 for mysql for backwards compatibility, but this is deprecated and will be
 removed in a future version).
 This mode requires a Python database adapter which respect the `Python
@@ -4735,7 +4735,7 @@ SQL data type based on the dtype of the data. When you have columns of dtype
 You can always override the default type by specifying the desired SQL type of
 any of the columns by using the ``dtype`` argument. This argument needs a
 dictionary mapping column names to SQLAlchemy types (or strings for the sqlite3
-fall-back mode).
+fallback mode).
 For example, specifying to use the sqlalchemy ``String`` type instead of the
 default ``Text`` type for string columns:
 
@@ -4922,7 +4922,7 @@ You can combine SQLAlchemy expressions with parameters passed to :func:`read_sql`
 pd.read_sql(expr, engine, params={'date': dt.datetime(2010, 10, 18)})
 
 
-Sqlite fall-back
+Sqlite fallback
 '''''''''''''''
 
 The use of sqlite is supported without using SQLAlchemy.
doc/source/merging.rst (+1 -1)

@@ -895,7 +895,7 @@ The merged result:
 
 .. note::
 
-Merging on ``category`` dtypes that are the same can be quite efficient compared to ``object`` dtype merging.
+Merging on ``category`` dtypes that are the same can be quite performant compared to ``object`` dtype merging.
 
 .. _merging.join.index:
 
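Aside: a sketch (not from the diff) of the note touched above, merging on matching ``category`` dtypes with made-up keys; because both key columns share the same categories, the join can work on the underlying integer codes rather than object strings:

    import pandas as pd

    left = pd.DataFrame({"key": pd.Categorical(["a", "b", "a"]),
                         "lval": [1, 2, 3]})
    right = pd.DataFrame({"key": pd.Categorical(["a", "b"]),
                          "rval": [10, 20]})

    print(pd.merge(left, right, on="key"))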

doc/source/missing_data.rst (+1 -1)

@@ -25,7 +25,7 @@ pandas.
 for simplicity and performance reasons. It differs from the MaskedArray
 approach of, for example, :mod:`scikits.timeseries`. We are hopeful that
 NumPy will soon be able to provide a native NA type solution (similar to R)
-efficient enough to be used in pandas.
+performant enough to be used in pandas.
 
 See the :ref:`cookbook<cookbook.missing_data>` for some advanced strategies.
 

doc/source/spelling_wordlist.txt (+5)

@@ -2377,12 +2377,17 @@ serialisable
 lzo
 usepackage
 booktabs
+coereced
 rcl
 multicolumns
+gfc
+automagically
 fastparquet
 brotli
 sql
 nullable
+performant
+lexsorted
 tw
 latin
 StrL

doc/source/whatsnew/v0.17.0.txt (+1 -1)

@@ -73,7 +73,7 @@ We are adding an implementation that natively supports datetime with timezones.
 *could* be assigned a datetime with timezones, and would work as an ``object`` dtype. This had performance issues with a large
 number rows. See the :ref:`docs <timeseries.timezone_series>` for more details. (:issue:`8260`, :issue:`10763`, :issue:`11034`).
 
-The new implementation allows for having a single-timezone across all rows, with operations in a efficient manner.
+The new implementation allows for having a single-timezone across all rows, with operations in a performant manner.
 
 .. ipython:: python
 

doc/source/whatsnew/v0.19.0.txt (+1 -1)

@@ -1512,7 +1512,7 @@ Bug Fixes
 - Bug in ``.set_index`` raises ``AmbiguousTimeError`` if new index contains DST boundary and multi levels (:issue:`12920`)
 - Bug in ``.shift`` raises ``AmbiguousTimeError`` if data contains datetime near DST boundary (:issue:`13926`)
 - Bug in ``pd.read_hdf()`` returns incorrect result when a ``DataFrame`` with a ``categorical`` column and a query which doesn't match any values (:issue:`13792`)
-- Bug in ``.iloc`` when indexing with a non lex-sorted MultiIndex (:issue:`13797`)
+- Bug in ``.iloc`` when indexing with a non lexsorted MultiIndex (:issue:`13797`)
 - Bug in ``.loc`` when indexing with date strings in a reverse sorted ``DatetimeIndex`` (:issue:`14316`)
 - Bug in ``Series`` comparison operators when dealing with zero dim NumPy arrays (:issue:`13006`)
 - Bug in ``.combine_first`` may return incorrect ``dtype`` (:issue:`7630`, :issue:`10567`)

pandas/_libs/tslibs/ccalendar.pyx (+1 -1)

@@ -18,7 +18,7 @@ from strptime import LocaleTime
 # ----------------------------------------------------------------------
 # Constants
 
-# Slightly more efficient cython lookups than a 2D table
+# Slightly more performant cython lookups than a 2D table
 # The first 12 entries correspond to month lengths for non-leap years.
 # The remaining 12 entries give month lengths for leap years
 cdef int32_t* days_per_month_array = [

pandas/_libs/tslibs/np_datetime.pyx (+1 -1)

@@ -80,7 +80,7 @@ reverse_ops[Py_GE] = Py_LE
 
 cdef inline bint cmp_scalar(int64_t lhs, int64_t rhs, int op) except -1:
 """
-cmp_scalar is a more efficient version of PyObject_RichCompare
+cmp_scalar is a more performant version of PyObject_RichCompare
 typed for int64_t arguments.
 """
 if op == Py_EQ:

pandas/_libs/tslibs/resolution.pyx (+1 -1)

@@ -346,7 +346,7 @@ class Resolution(object):
 # Frequency Inference
 
 
-# TODO: this is non efficient logic here (and duplicative) and this
+# TODO: this is non performant logic here (and duplicative) and this
 # simply should call unique_1d directly
 # plus no reason to depend on khash directly
 cdef unique_deltas(ndarray[int64_t] arr):

pandas/core/dtypes/cast.py (+1 -1)

@@ -59,7 +59,7 @@ def is_nested_object(obj):
 return a boolean if we have a nested object, e.g. a Series with 1 or
 more Series elements
 
-This may not be necessarily be efficient.
+This may not be necessarily be performant.
 
 """
 

pandas/core/frame.py (+1 -1)

@@ -4540,7 +4540,7 @@ def nlargest(self, n, columns, keep='first'):
 
 This method is equivalent to
 ``df.sort_values(columns, ascending=False).head(n)``, but more
-efficient.
+performant.
 
 Parameters
 ----------
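Aside: the equivalence stated in this docstring hunk, checked on a tiny made-up frame (not from the diff):

    import pandas as pd

    df = pd.DataFrame({"population": [59000000, 65000000, 434000],
                       "GDP": [1937894, 2583560, 12011]},
                      index=["Italy", "France", "Malta"])

    top2 = df.nlargest(2, "population")
    same = df.sort_values("population", ascending=False).head(2)
    print(top2.equals(same))   # True; nlargest just takes a faster path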

pandas/core/indexes/frozen.py (+1 -1)

@@ -129,7 +129,7 @@ def searchsorted(self, v, side='left', sorter=None):
 numpy.searchsorted : equivalent function
 """
 
-# we are much more efficient if the searched
+# we are much more performant if the searched
 # indexer is the same type as the array
 # this doesn't matter for int64, but DOES
 # matter for smaller int dtypes

pandas/core/indexes/multi.py (+1 -1)

@@ -2141,7 +2141,7 @@ def slice_locs(self, start=None, end=None, step=None, kind=None):
 
 Notes
 -----
-This method only works if the MultiIndex is properly lex-sorted. So,
+This method only works if the MultiIndex is properly lexsorted. So,
 if only the first 2 levels of a 3-level MultiIndex are lexsorted,
 you can only pass two levels to ``.slice_locs``.
 
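Aside: a sketch (not from the diff) of the lexsort requirement described in this docstring; sorting the index first makes label slicing, and ``slice_locs``, well defined. Labels are made up:

    import pandas as pd

    mi = pd.MultiIndex.from_arrays([list("bbaa"), list("2121")])
    dfmi = pd.DataFrame({"x": range(4)}, index=mi)

    # Lexsort the index, then ask for the positional bounds of the label
    # range 'a' through 'b' on the first level.
    dfmi = dfmi.sort_index()
    print(dfmi.index.slice_locs(start="a", end="b"))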

pandas/core/indexing.py (+2 -2)

@@ -1801,7 +1801,7 @@ def _is_scalar_access(self, key):
 # this is a shortcut accessor to both .loc and .iloc
 # that provide the equivalent access of .at and .iat
 # a) avoid getting things via sections and (to minimize dtype changes)
-# b) provide an efficient path
+# b) provide a performant path
 if not hasattr(key, '__len__'):
 return False
 
@@ -1977,7 +1977,7 @@ def _is_scalar_access(self, key):
 # this is a shortcut accessor to both .loc and .iloc
 # that provide the equivalent access of .at and .iat
 # a) avoid getting things via sections and (to minimize dtype changes)
-# b) provide an efficient path
+# b) provide a performant path
 if not hasattr(key, '__len__'):
 return False
 

pandas/tests/groupby/test_categorical.py (+1 -1)

@@ -379,7 +379,7 @@ def test_observed_codes_remap(observed):
 
 def test_observed_perf():
 # we create a cartesian product, so this is
-# non-efficient if we don't use observed values
+# non-performant if we don't use observed values
 # gh-14942
 df = DataFrame({
 'cat': np.random.randint(0, 255, size=30000),

pandas/tseries/offsets.py (+1 -1)

@@ -1691,7 +1691,7 @@ class YearOffset(DateOffset):
 
 def _get_offset_day(self, other):
 # override BaseOffset method to use self.month instead of other.month
-# TODO: there may be a more efficient way to do this
+# TODO: there may be a more performant way to do this
 return liboffsets.get_day_of_month(other.replace(month=self.month),
 self._day_opt)
 
