.. _basics:
{{ header }}
==============================
Essential basic functionality
==============================
Here we discuss a lot of the essential functionality common to the pandas data
structures. To begin, let's create some example objects like we did in
the :ref:`10 minutes to pandas <10min>` section:
.. ipython:: python
index = pd.date_range("1/1/2000", periods=8)
s = pd.Series(np.random.randn(5), index=["a", "b", "c", "d", "e"])
df = pd.DataFrame(np.random.randn(8, 3), index=index, columns=["A", "B", "C"])
.. _basics.head_tail:
Head and tail
-------------
To view a small sample of a Series or DataFrame object, use the
:meth:`~DataFrame.head` and :meth:`~DataFrame.tail` methods. The default number
of elements to display is five, but you may pass a custom number.
.. ipython:: python
long_series = pd.Series(np.random.randn(1000))
long_series.head()
long_series.tail(3)
.. _basics.attrs:
Attributes and underlying data
------------------------------
pandas objects have a number of attributes enabling you to access the metadata:
* **shape**: gives the axis dimensions of the object, consistent with ndarray
* Axis labels
* **Series**: *index* (only axis)
* **DataFrame**: *index* (rows) and *columns*
Note, **these attributes can be safely assigned to**!
.. ipython:: python
df[:2]
df.columns = [x.lower() for x in df.columns]
df
pandas objects (:class:`Index`, :class:`Series`, :class:`DataFrame`) can be
thought of as containers for arrays, which hold the actual data and do the
actual computation. For many types, the underlying array is a
:class:`numpy.ndarray`. However, pandas and 3rd party libraries may *extend*
NumPy's type system to add support for custom arrays
(see :ref:`basics.dtypes`).
To get the actual data inside a :class:`Index` or :class:`Series`, use
the ``.array`` property
.. ipython:: python
s.array
s.index.array
:attr:`~Series.array` will always be an :class:`~pandas.api.extensions.ExtensionArray`.
The exact details of what an :class:`~pandas.api.extensions.ExtensionArray` is and why pandas uses them are a bit
beyond the scope of this introduction. See :ref:`basics.dtypes` for more.
If you know you need a NumPy array, use :meth:`~Series.to_numpy`
or :meth:`numpy.asarray`.
.. ipython:: python
s.to_numpy()
np.asarray(s)
When the Series or Index is backed by
an :class:`~pandas.api.extensions.ExtensionArray`, :meth:`~Series.to_numpy`
may involve copying data and coercing values. See :ref:`basics.dtypes` for more.
:meth:`~Series.to_numpy` gives some control over the ``dtype`` of the
resulting :class:`numpy.ndarray`. For example, consider datetimes with timezones.
NumPy doesn't have a dtype to represent timezone-aware datetimes, so there
are two possibly useful representations:
1. An object-dtype :class:`numpy.ndarray` with :class:`Timestamp` objects, each
with the correct ``tz``
2. A ``datetime64[ns]`` -dtype :class:`numpy.ndarray`, where the values have
been converted to UTC and the timezone discarded
Timezones may be preserved with ``dtype=object``
.. ipython:: python
ser = pd.Series(pd.date_range("2000", periods=2, tz="CET"))
ser.to_numpy(dtype=object)
Or thrown away with ``dtype='datetime64[ns]'``
.. ipython:: python
ser.to_numpy(dtype="datetime64[ns]")
Getting the "raw data" inside a :class:`DataFrame` is possibly a bit more
complex. When your ``DataFrame`` only has a single data type for all the
columns, :meth:`DataFrame.to_numpy` will return the underlying data:
.. ipython:: python
df.to_numpy()
If a DataFrame contains homogeneously-typed data, the ndarray can
actually be modified in-place, and the changes will be reflected in the data
structure. For heterogeneous data (e.g. when a DataFrame's columns do not
all have the same dtype), this will not be the case. The values attribute itself,
unlike the axis labels, cannot be assigned to.
.. note::
When working with heterogeneous data, the dtype of the resulting ndarray
will be chosen to accommodate all of the data involved. For example, if
strings are involved, the result will be of object dtype. If there are only
floats and integers, the resulting array will be of float dtype.
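For instance, here is a small sketch of the dtype :meth:`DataFrame.to_numpy` picks
for mixed columns (the frames are illustrative one-offs):

.. code-block:: python

   # floats and integers are upcast to float64
   pd.DataFrame({"i": [1, 2], "f": [1.5, 2.5]}).to_numpy().dtype

   # once strings are involved, the result falls back to object dtype
   pd.DataFrame({"i": [1, 2], "s": ["a", "b"]}).to_numpy().dtype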
In the past, pandas recommended :attr:`Series.values` or :attr:`DataFrame.values`
for extracting the data from a Series or DataFrame. You'll still find references
to these in old code bases and online. Going forward, we recommend avoiding
``.values`` and using ``.array`` or ``.to_numpy()``. ``.values`` has the following
drawbacks:
1. When your Series contains an :ref:`extension type <extending.extension-types>`, it's
unclear whether :attr:`Series.values` returns a NumPy array or the extension array.
:attr:`Series.array` will always return an :class:`~pandas.api.extensions.ExtensionArray`, and will never
copy data. :meth:`Series.to_numpy` will always return a NumPy array,
potentially at the cost of copying / coercing values.
2. When your DataFrame contains a mixture of data types, :attr:`DataFrame.values` may
involve copying data and coercing values to a common dtype, a relatively expensive
operation. :meth:`DataFrame.to_numpy`, being a method, makes it clearer that the
returned NumPy array may not be a view on the same data in the DataFrame.
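As a small illustration of the first point, consider a Series backed by a
nullable-integer extension array (a sketch; ``Int64`` is one of the extension
dtypes discussed in :ref:`basics.dtypes`):

.. code-block:: python

   s_ext = pd.Series([1, 2, None], dtype="Int64")

   s_ext.array       # always the ExtensionArray (an IntegerArray here), never a copy
   s_ext.values      # for extension dtypes this is also the extension array
   s_ext.to_numpy()  # always a NumPy array; here object dtype, values coerced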
.. _basics.accelerate:
Accelerated operations
----------------------
pandas has support for accelerating certain types of binary numerical and boolean operations using
the ``numexpr`` and ``bottleneck`` libraries.
These libraries are especially useful when dealing with large data sets, and provide large
speedups. ``numexpr`` uses smart chunking, caching, and multiple cores. ``bottleneck`` is
a set of specialized Cython routines that are especially fast when dealing with arrays that have
``nans``.
Here is a sample (using 100 column x 100,000 row ``DataFrames``):
.. csv-table::
:header: "Operation", "0.11.0 (ms)", "Prior Version (ms)", "Ratio to Prior"
:widths: 25, 25, 25, 25
``df1 > df2``, 13.32, 125.35, 0.1063
``df1 * df2``, 21.71, 36.63, 0.5928
``df1 + df2``, 22.04, 36.50, 0.6039
You are highly encouraged to install both libraries. See the section
:ref:`Recommended Dependencies <install.recommended_dependencies>` for more installation info.
Both are enabled by default; you can control this by setting the options:
.. code-block:: python
pd.set_option("compute.use_bottleneck", False)
pd.set_option("compute.use_numexpr", False)
.. _basics.binop:
Flexible binary operations
--------------------------
With binary operations between pandas data structures, there are two key points
of interest:
* Broadcasting behavior between higher- (e.g. DataFrame) and
lower-dimensional (e.g. Series) objects.
* Missing data in computations.
We will demonstrate how to manage these issues independently, though they can
be handled simultaneously.
Matching / broadcasting behavior
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DataFrame has the methods :meth:`~DataFrame.add`, :meth:`~DataFrame.sub`,
:meth:`~DataFrame.mul`, :meth:`~DataFrame.div` and related functions
:meth:`~DataFrame.radd`, :meth:`~DataFrame.rsub`, ...
for carrying out binary operations. For broadcasting behavior,
Series input is of primary interest. Using these functions, you can match on
either the *index* or *columns* via the **axis** keyword:
.. ipython:: python
df = pd.DataFrame(
{
"one": pd.Series(np.random.randn(3), index=["a", "b", "c"]),
"two": pd.Series(np.random.randn(4), index=["a", "b", "c", "d"]),
"three": pd.Series(np.random.randn(3), index=["b", "c", "d"]),
}
)
df
row = df.iloc[1]
column = df["two"]
df.sub(row, axis="columns")
df.sub(row, axis=1)
df.sub(column, axis="index")
df.sub(column, axis=0)
Furthermore, you can align a level of a MultiIndexed DataFrame with a Series.
.. ipython:: python
dfmi = df.copy()
dfmi.index = pd.MultiIndex.from_tuples(
[(1, "a"), (1, "b"), (1, "c"), (2, "a")], names=["first", "second"]
)
dfmi.sub(column, axis=0, level="second")
Series and Index also support the :func:`divmod` builtin. This function performs
floor division and the modulo operation at the same time, returning a two-tuple
of the same type as the left-hand side. For example:
.. ipython:: python
s = pd.Series(np.arange(10))
s
div, rem = divmod(s, 3)
div
rem
idx = pd.Index(np.arange(10))
idx
div, rem = divmod(idx, 3)
div
rem
We can also do elementwise :func:`divmod`:
.. ipython:: python
div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])
div
rem
Missing data / operations with fill values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In Series and DataFrame, the arithmetic functions have the option of accepting
a *fill_value*, namely a value to substitute when at most one of the values at
a location is missing. For example, when adding two DataFrame objects, you may
wish to treat NaN as 0 unless both DataFrames are missing that value, in which
case the result will be NaN (you can later replace NaN with some other value
using ``fillna`` if you wish).
.. ipython:: python
df2 = df.copy()
df2.loc["a", "three"] = 1.0
df
df2
df + df2
df.add(df2, fill_value=0)
.. _basics.compare:
Flexible comparisons
~~~~~~~~~~~~~~~~~~~~
Series and DataFrame have the binary comparison methods ``eq``, ``ne``, ``lt``, ``gt``,
``le``, and ``ge`` whose behavior is analogous to the binary
arithmetic operations described above:
.. ipython:: python
df.gt(df2)
df2.ne(df)
These operations produce a pandas object of the same type as the left-hand-side
input, with dtype ``bool``. These boolean objects can be used in
indexing operations; see the section on :ref:`Boolean indexing <indexing.boolean>`.
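For instance, a comparison result can be used directly as a boolean mask (a
minimal sketch reusing the ``df`` defined above):

.. code-block:: python

   mask = df["one"] > 0  # boolean Series
   df[mask]              # rows of df where "one" is positive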
.. _basics.reductions:
Boolean reductions
~~~~~~~~~~~~~~~~~~
You can apply the reductions: :attr:`~DataFrame.empty`, :meth:`~DataFrame.any`,
:meth:`~DataFrame.all`.
.. ipython:: python
(df > 0).all()
(df > 0).any()
You can reduce to a final boolean value.
.. ipython:: python
(df > 0).any().any()
You can test if a pandas object is empty, via the :attr:`~DataFrame.empty` property.
.. ipython:: python
df.empty
pd.DataFrame(columns=list("ABC")).empty
.. warning::
Asserting the truthiness of a pandas object will raise an error, as it is
ambiguous whether the test refers to emptiness or to the values.
.. ipython:: python
:okexcept:
if df:
print(True)
.. ipython:: python
:okexcept:
df and df2
See :ref:`gotchas<gotchas.truth>` for a more detailed discussion.
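If you do need a single boolean, be explicit about which reduction you want; a
minimal sketch:

.. code-block:: python

   if not df.empty:            # explicit emptiness check
       print("df has rows")
   if (df > 0).any().any():    # explicit reduction to a single boolean
       print("df has a positive value")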
.. _basics.equals:
Comparing if objects are equivalent
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Often you may find that there is more than one way to compute the same
result. As a simple example, consider ``df + df`` and ``df * 2``. To test
that these two computations produce the same result, given the tools
shown above, you might imagine using ``(df + df == df * 2).all()``. But in
fact, this expression is False:
.. ipython:: python
df + df == df * 2
(df + df == df * 2).all()
Notice that the boolean DataFrame ``df + df == df * 2`` contains some False values!
This is because NaNs do not compare as equals:
.. ipython:: python
np.nan == np.nan
So, NDFrames (such as Series and DataFrames)
have an :meth:`~DataFrame.equals` method for testing equality, with NaNs in
corresponding locations treated as equal.
.. ipython:: python
(df + df).equals(df * 2)
Note that the Series or DataFrame index needs to be in the same order for
equality to be True:
.. ipython:: python
df1 = pd.DataFrame({"col": ["foo", 0, np.nan]})
df2 = pd.DataFrame({"col": [np.nan, 0, "foo"]}, index=[2, 1, 0])
df1.equals(df2)
df1.equals(df2.sort_index())
Comparing array-like objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can conveniently perform element-wise comparisons when comparing a pandas
data structure with a scalar value:
.. ipython:: python
pd.Series(["foo", "bar", "baz"]) == "foo"
pd.Index(["foo", "bar", "baz"]) == "foo"
pandas also handles element-wise comparisons between different array-like
objects of the same length:
.. ipython:: python
pd.Series(["foo", "bar", "baz"]) == pd.Index(["foo", "bar", "qux"])
pd.Series(["foo", "bar", "baz"]) == np.array(["foo", "bar", "qux"])
Trying to compare ``Index`` or ``Series`` objects of different lengths will
raise a ValueError:
.. ipython:: python
:okexcept:
pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar'])
pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo'])
Combining overlapping data sets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A problem occasionally arising is the combination of two similar data sets
where values in one are preferred over the other. An example would be two data
series representing a particular economic indicator where one is considered to
be of "higher quality". However, the lower quality series might extend further
back in history or have more complete data coverage. As such, we would like to
combine two DataFrame objects where missing values in one DataFrame are
conditionally filled with like-labeled values from the other DataFrame. The
function implementing this operation is :meth:`~DataFrame.combine_first`,
which we illustrate:
.. ipython:: python
df1 = pd.DataFrame(
{"A": [1.0, np.nan, 3.0, 5.0, np.nan], "B": [np.nan, 2.0, 3.0, np.nan, 6.0]}
)
df2 = pd.DataFrame(
{
"A": [5.0, 2.0, 4.0, np.nan, 3.0, 7.0],
"B": [np.nan, np.nan, 3.0, 4.0, 6.0, 8.0],
}
)
df1
df2
df1.combine_first(df2)
General DataFrame combine
~~~~~~~~~~~~~~~~~~~~~~~~~
The :meth:`~DataFrame.combine_first` method above calls the more general
:meth:`DataFrame.combine`. This method takes another DataFrame
and a combiner function, aligns the input DataFrames, and then passes the combiner
function pairs of Series (i.e., columns whose names are the same).
So, for instance, to reproduce :meth:`~DataFrame.combine_first` as above:
.. ipython:: python
def combiner(x, y):
return np.where(pd.isna(x), y, x)
df1.combine(df2, combiner)
.. _basics.stats:
Descriptive statistics
----------------------
There exist a large number of methods for computing descriptive statistics and
other related operations on :ref:`Series <api.series.stats>` and
:ref:`DataFrame <api.dataframe.stats>`. Most of these
are aggregations (hence producing a lower-dimensional result) like
:meth:`~DataFrame.sum`, :meth:`~DataFrame.mean`, and :meth:`~DataFrame.quantile`,
but some of them, like :meth:`~DataFrame.cumsum` and :meth:`~DataFrame.cumprod`,
produce an object of the same size. Generally speaking, these methods take an
**axis** argument, just like *ndarray.{sum, std, ...}*, but the axis can be
specified by name or integer:
* **Series**: no axis argument needed
* **DataFrame**: "index" (axis=0, default), "columns" (axis=1)
For example:
.. ipython:: python
df
df.mean(axis=0)
df.mean(axis=1)
All such methods have a ``skipna`` option signaling whether to exclude missing
data (``True`` by default):
.. ipython:: python
df.sum(axis=0, skipna=False)
df.sum(axis=1, skipna=True)
Combined with the broadcasting / arithmetic behavior, one can describe various
statistical procedures, like standardization (rendering data zero mean and
standard deviation of 1), very concisely:
.. ipython:: python
ts_stand = (df - df.mean()) / df.std()
ts_stand.std()
xs_stand = df.sub(df.mean(axis=1), axis=0).div(df.std(axis=1), axis=0)
xs_stand.std(axis=1)
Note that methods like :meth:`~DataFrame.cumsum` and :meth:`~DataFrame.cumprod`
preserve the location of ``NaN`` values. This is somewhat different from
:meth:`~DataFrame.expanding` and :meth:`~DataFrame.rolling` since ``NaN`` behavior
is furthermore dictated by a ``min_periods`` parameter.
.. ipython:: python
df.cumsum()
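A minimal sketch of the difference, where ``min_periods=1`` lets the rolling
window produce a result from whatever non-NaN values it has:

.. code-block:: python

   s_na = pd.Series([1.0, np.nan, 2.0, 3.0])
   s_na.cumsum()                         # NaN stays in place: 1.0, NaN, 3.0, 6.0
   s_na.rolling(2, min_periods=1).sum()  # 1.0, 1.0, 2.0, 5.0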
Here is a quick reference summary table of common functions. Each also takes an
optional ``level`` parameter which applies only if the object has a
:ref:`hierarchical index<advanced.hierarchical>`.
.. csv-table::
:header: "Function", "Description"
:widths: 20, 80
``count``, Number of non-NA observations
``sum``, Sum of values
``mean``, Mean of values
``median``, Arithmetic median of values
``min``, Minimum
``max``, Maximum
``mode``, Mode
``abs``, Absolute Value
``prod``, Product of values
``std``, Bessel-corrected sample standard deviation
``var``, Unbiased variance
``sem``, Standard error of the mean
``skew``, Sample skewness (3rd moment)
``kurt``, Sample kurtosis (4th moment)
``quantile``, Sample quantile (value at %)
``cumsum``, Cumulative sum
``cumprod``, Cumulative product
``cummax``, Cumulative maximum
``cummin``, Cumulative minimum
Note that by chance some NumPy methods, like ``mean``, ``std``, and ``sum``,
will exclude NAs on Series input by default:
.. ipython:: python
np.mean(df["one"])
np.mean(df["one"].to_numpy())
:meth:`Series.nunique` will return the number of unique non-NA values in a
Series:
.. ipython:: python
series = pd.Series(np.random.randn(500))
series[20:500] = np.nan
series[10:20] = 5
series.nunique()
.. _basics.describe:
Summarizing data: describe
~~~~~~~~~~~~~~~~~~~~~~~~~~
There is a convenient :meth:`~DataFrame.describe` function which computes a variety of summary
statistics about a Series or the columns of a DataFrame (excluding NAs of
course):
.. ipython:: python
series = pd.Series(np.random.randn(1000))
series[::2] = np.nan
series.describe()
frame = pd.DataFrame(np.random.randn(1000, 5), columns=["a", "b", "c", "d", "e"])
frame.iloc[::2] = np.nan
frame.describe()
You can select specific percentiles to include in the output:
.. ipython:: python
series.describe(percentiles=[0.05, 0.25, 0.75, 0.95])
By default, the median is always included.
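For instance (a quick sketch), the 50% row still appears even if you do not
request it:

.. code-block:: python

   series.describe(percentiles=[0.1, 0.9])  # output includes 10%, 50%, and 90%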
For a non-numerical Series object, :meth:`~Series.describe` will give a simple
summary of the number of unique values and most frequently occurring values:
.. ipython:: python
s = pd.Series(["a", "a", "b", "b", "a", "a", np.nan, "c", "d", "a"])
s.describe()
Note that on a mixed-type DataFrame object, :meth:`~DataFrame.describe` will
restrict the summary to include only numerical columns or, if none are, only
categorical columns:
.. ipython:: python
frame = pd.DataFrame({"a": ["Yes", "Yes", "No", "No"], "b": range(4)})
frame.describe()
This behavior can be controlled by providing a list of types as ``include``/``exclude``
arguments. The special value ``all`` can also be used:
.. ipython:: python
frame.describe(include=["object"])
frame.describe(include=["number"])
frame.describe(include="all")
That feature relies on :ref:`select_dtypes <basics.selectdtypes>`. Refer
there for details about accepted inputs.
.. _basics.idxmin:
Index of min/max values
~~~~~~~~~~~~~~~~~~~~~~~
The :meth:`~DataFrame.idxmin` and :meth:`~DataFrame.idxmax` functions on Series
and DataFrame compute the index labels with the minimum and maximum
corresponding values:
.. ipython:: python
s1 = pd.Series(np.random.randn(5))
s1
s1.idxmin(), s1.idxmax()
df1 = pd.DataFrame(np.random.randn(5, 3), columns=["A", "B", "C"])
df1
df1.idxmin(axis=0)
df1.idxmax(axis=1)
When there are multiple rows (or columns) matching the minimum or maximum
value, :meth:`~DataFrame.idxmin` and :meth:`~DataFrame.idxmax` return the first
matching index:
.. ipython:: python
df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=["A"], index=list("edcba"))
df3
df3["A"].idxmin()
.. note::
``idxmin`` and ``idxmax`` are called ``argmin`` and ``argmax`` in NumPy.
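pandas also exposes the NumPy-style names on Series; a quick contrast (note that
:meth:`Series.argmin` returns an integer position rather than a label):

.. code-block:: python

   s1.idxmin()  # index label of the minimum
   s1.argmin()  # integer position of the minimum, NumPy-style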
.. _basics.discretization:
Value counts (histogramming) / mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :meth:`~Series.value_counts` Series method computes a histogram
of a 1D array of values. It can also be used as a function on regular arrays:
.. ipython:: python
data = np.random.randint(0, 7, size=50)
data
s = pd.Series(data)
s.value_counts()
The :meth:`~DataFrame.value_counts` method can be used to count combinations across multiple columns.
By default all columns are used but a subset can be selected using the ``subset`` argument.
.. ipython:: python
data = {"a": [1, 2, 3, 4], "b": ["x", "x", "y", "y"]}
frame = pd.DataFrame(data)
frame.value_counts()
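To count combinations over only some of the columns, pass them via ``subset``; a
small sketch using the frame above:

.. code-block:: python

   frame.value_counts(subset=["b"])  # counts of "x" and "y" alone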
Similarly, you can get the most frequently occurring value(s), i.e. the mode, of the values in a Series or DataFrame:
.. ipython:: python
s5 = pd.Series([1, 1, 3, 3, 3, 5, 5, 7, 7, 7])
s5.mode()
df5 = pd.DataFrame(
{
"A": np.random.randint(0, 7, size=50),
"B": np.random.randint(-10, 15, size=50),
}
)
df5.mode()
Discretization and quantiling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Continuous values can be discretized using the :func:`cut` (bins based on values)
and :func:`qcut` (bins based on sample quantiles) functions:
.. ipython:: python
arr = np.random.randn(20)
factor = pd.cut(arr, 4)
factor
factor = pd.cut(arr, [-5, -1, 0, 1, 5])
factor
:func:`qcut` computes sample quantiles. For example, we could slice up some
normally distributed data into equal-size quartiles like so:
.. ipython:: python
arr = np.random.randn(30)
factor = pd.qcut(arr, [0, 0.25, 0.5, 0.75, 1])
factor
We can also pass infinite values to define the bins:
.. ipython:: python
arr = np.random.randn(20)
factor = pd.cut(arr, [-np.inf, 0, np.inf])
factor
.. _basics.apply:
Function application
--------------------
To apply your own or another library's functions to pandas objects,
you should be aware of the methods below. The appropriate
method to use depends on whether your function expects to operate
on an entire ``DataFrame`` or ``Series``, row- or column-wise, or elementwise.
1. `Tablewise Function Application`_: :meth:`~DataFrame.pipe`
2. `Row or Column-wise Function Application`_: :meth:`~DataFrame.apply`
3. `Aggregation API`_: :meth:`~DataFrame.agg` and :meth:`~DataFrame.transform`
4. `Applying Elementwise Functions`_: :meth:`~DataFrame.map`
.. _basics.pipe:
Tablewise function application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``DataFrames`` and ``Series`` can be passed into functions.
However, if the function needs to be called in a chain, consider using the :meth:`~DataFrame.pipe` method.
First some setup:
.. ipython:: python
def extract_city_name(df):
"""
Chicago, IL -> Chicago for city_name column
"""
df["city_name"] = df["city_and_code"].str.split(",").str.get(0)
return df
def add_country_name(df, country_name=None):
"""
Chicago -> Chicago-US for city_name column
"""
col = "city_name"
df["city_and_country"] = df[col] + country_name
return df
df_p = pd.DataFrame({"city_and_code": ["Chicago, IL"]})
``extract_city_name`` and ``add_country_name`` are functions taking and returning ``DataFrames``.
Now compare the following:
.. ipython:: python
add_country_name(extract_city_name(df_p), country_name="US")
Is equivalent to:
.. ipython:: python
df_p.pipe(extract_city_name).pipe(add_country_name, country_name="US")
pandas encourages the second style, which is known as method chaining.
``pipe`` makes it easy to use your own or another library's functions
in method chains, alongside pandas' methods.
In the example above, the functions ``extract_city_name`` and ``add_country_name`` each expected a ``DataFrame`` as the first positional argument.
What if the function you wish to apply takes its data as, say, the second argument?
In this case, provide ``pipe`` with a tuple of ``(callable, data_keyword)``.
``.pipe`` will route the ``DataFrame`` to the argument specified in the tuple.
For example, we can fit a regression using statsmodels. Their API expects a formula first and a ``DataFrame`` as the second argument, ``data``. We pass in the function, keyword pair ``(sm.ols, 'data')`` to ``pipe``:
.. code-block:: ipython
In [147]: import statsmodels.formula.api as sm
In [148]: bb = pd.read_csv("data/baseball.csv", index_col="id")
In [149]: (
.....: bb.query("h > 0")
.....: .assign(ln_h=lambda df: np.log(df.h))
.....: .pipe((sm.ols, "data"), "hr ~ ln_h + year + g + C(lg)")
.....: .fit()
.....: .summary()
.....: )
.....:
Out[149]:
<class 'statsmodels.iolib.summary.Summary'>
"""
OLS Regression Results
==============================================================================
Dep. Variable: hr R-squared: 0.685
Model: OLS Adj. R-squared: 0.665
Method: Least Squares F-statistic: 34.28
Date: Tue, 22 Nov 2022 Prob (F-statistic): 3.48e-15
Time: 05:34:17 Log-Likelihood: -205.92
No. Observations: 68 AIC: 421.8
Df Residuals: 63 BIC: 432.9
Df Model: 4
Covariance Type: nonrobust
===============================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------
Intercept -8484.7720 4664.146 -1.819 0.074 -1.78e+04 835.780
C(lg)[T.NL] -2.2736 1.325 -1.716 0.091 -4.922 0.375
ln_h -1.3542 0.875 -1.547 0.127 -3.103 0.395
year 4.2277 2.324 1.819 0.074 -0.417 8.872
g 0.1841 0.029 6.258 0.000 0.125 0.243
==============================================================================
Omnibus: 10.875 Durbin-Watson: 1.999
Prob(Omnibus): 0.004 Jarque-Bera (JB): 17.298
Skew: 0.537 Prob(JB): 0.000175
Kurtosis: 5.225 Cond. No. 1.49e+07
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.49e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
"""
The pipe method is inspired by Unix pipes and, more recently, dplyr_ and magrittr_, which
have introduced the popular ``%>%`` (read: pipe) operator for R_.
The implementation of ``pipe`` here is quite clean and feels right at home in Python.
We encourage you to view the source code of :meth:`~DataFrame.pipe`.
.. _dplyr: https://github.com/tidyverse/dplyr
.. _magrittr: https://github.com/tidyverse/magrittr
.. _R: https://www.r-project.org
Row or column-wise function application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Arbitrary functions can be applied along the axes of a DataFrame
using the :meth:`~DataFrame.apply` method, which, like the descriptive
statistics methods, takes an optional ``axis`` argument:
.. ipython:: python
df.apply(lambda x: np.mean(x))
df.apply(lambda x: np.mean(x), axis=1)
df.apply(lambda x: x.max() - x.min())
df.apply(np.cumsum)
df.apply(np.exp)
The :meth:`~DataFrame.apply` method will also dispatch on a string method name.
.. ipython:: python
df.apply("mean")
df.apply("mean", axis=1)
The return type of the function passed to :meth:`~DataFrame.apply` affects the
type of the final output from ``DataFrame.apply`` for the default behaviour:
* If the applied function returns a ``Series``, the final output is a ``DataFrame``.
The columns match the index of the ``Series`` returned by the applied function.
* If the applied function returns any other type, the final output is a ``Series``.
This default behaviour can be overridden using the ``result_type`` argument, which
accepts three options: ``reduce``, ``broadcast``, and ``expand``.
These determine how list-like return values expand (or not) to a ``DataFrame``.
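Here is a minimal sketch of the ``expand`` option (``df_rt`` is an illustrative
frame, not used elsewhere on this page):

.. code-block:: python

   df_rt = pd.DataFrame({"x": [1.0, 2.0], "y": [3.0, 4.0]})

   # default: a list-returning function yields a Series of lists
   df_rt.apply(lambda row: [row["x"], row["y"]], axis=1)

   # result_type="expand": the lists expand into DataFrame columns
   df_rt.apply(lambda row: [row["x"], row["y"]], axis=1, result_type="expand")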
:meth:`~DataFrame.apply` combined with some cleverness can be used to answer many questions
about a data set. For example, suppose we wanted to extract the date where the
maximum value for each column occurred:
.. ipython:: python
tsdf = pd.DataFrame(
np.random.randn(1000, 3),
columns=["A", "B", "C"],
index=pd.date_range("1/1/2000", periods=1000),
)
tsdf.apply(lambda x: x.idxmax())
You may also pass additional arguments and keyword arguments to the :meth:`~DataFrame.apply`
method.
.. ipython:: python
def subtract_and_divide(x, sub, divide=1):
return (x - sub) / divide
df_udf = pd.DataFrame(np.ones((2, 2)))
df_udf.apply(subtract_and_divide, args=(5,), divide=3)
Another useful feature is the ability to pass Series methods to carry out some
Series operation on each column or row:
.. ipython:: python
tsdf = pd.DataFrame(
np.random.randn(10, 3),
columns=["A", "B", "C"],
index=pd.date_range("1/1/2000", periods=10),
)
tsdf.iloc[3:7] = np.nan
tsdf
tsdf.apply(pd.Series.interpolate)
Finally, :meth:`~DataFrame.apply` takes an argument ``raw``, which is ``False`` by
default and converts each row or column into a Series before applying the function.
When set to ``True``, the passed function will instead receive an ndarray object,
which has positive performance implications if you do not need the indexing
functionality.
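For example, a reduction that only uses NumPy semantics can run on the raw
ndarray (a sketch reusing ``df_udf`` from above):

.. code-block:: python

   # each x below is a numpy.ndarray, not a Series
   df_udf.apply(lambda x: x.max() - x.min(), raw=True)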
.. _basics.aggregate:
Aggregation API
~~~~~~~~~~~~~~~
The aggregation API allows one to express possibly multiple aggregation operations in a single concise way.
This API is similar across pandas objects, see :ref:`groupby API <groupby.aggregate>`, the
:ref:`window API <window.overview>`, and the :ref:`resample API <timeseries.aggregate>`.
The entry point for aggregation is :meth:`DataFrame.aggregate`, or the alias
:meth:`DataFrame.agg`.
We will use a starting frame similar to the one above:
.. ipython:: python
tsdf = pd.DataFrame(
np.random.randn(10, 3),
columns=["A", "B", "C"],
index=pd.date_range("1/1/2000", periods=10),
)
tsdf.iloc[3:7] = np.nan
tsdf
Using a single function is equivalent to :meth:`~DataFrame.apply`. You can also
pass named methods as strings. These will return a ``Series`` of the aggregated
output:
.. ipython:: python
tsdf.agg(lambda x: np.sum(x))
tsdf.agg("sum")
# these are equivalent to a ``.sum()`` because we are aggregating
# on a single function
tsdf.sum()
For a single aggregation on a ``Series``, this will return a scalar value:
.. ipython:: python
tsdf["A"].agg("sum")
Aggregating with multiple functions
+++++++++++++++++++++++++++++++++++
You can pass multiple aggregation arguments as a list.
The result of each passed function will be a row in the resulting ``DataFrame``,
naturally named after the aggregation function.
.. ipython:: python
tsdf.agg(["sum"])
Multiple functions yield multiple rows:
.. ipython:: python
tsdf.agg(["sum", "mean"])
On a ``Series``, multiple functions return a ``Series``, indexed by the function names:
.. ipython:: python
tsdf["A"].agg(["sum", "mean"])
Passing a ``lambda`` function will yield a ``<lambda>`` named row:
.. ipython:: python
tsdf["A"].agg(["sum", lambda x: x.mean()])
Passing a named function will yield that name for the row:
.. ipython:: python