diff --git a/source/img/data_frame_slides_cdn/data_frame_slides_cdn.001.jpeg b/source/img/data_frame_slides_cdn/data_frame_slides_cdn.001.jpeg index fa1e065d..276300cc 100644 Binary files a/source/img/data_frame_slides_cdn/data_frame_slides_cdn.001.jpeg and b/source/img/data_frame_slides_cdn/data_frame_slides_cdn.001.jpeg differ diff --git a/source/img/data_frame_slides_cdn/data_frame_slides_cdn.002.jpeg b/source/img/data_frame_slides_cdn/data_frame_slides_cdn.002.jpeg index 4fcf0966..b29831ee 100644 Binary files a/source/img/data_frame_slides_cdn/data_frame_slides_cdn.002.jpeg and b/source/img/data_frame_slides_cdn/data_frame_slides_cdn.002.jpeg differ diff --git a/source/img/data_frame_slides_cdn/data_frame_slides_cdn.004.jpeg b/source/img/data_frame_slides_cdn/data_frame_slides_cdn.004.jpeg index ae68f0d2..8675de1e 100644 Binary files a/source/img/data_frame_slides_cdn/data_frame_slides_cdn.004.jpeg and b/source/img/data_frame_slides_cdn/data_frame_slides_cdn.004.jpeg differ diff --git a/source/img/pivot_functions/pivot_functions.001.jpeg b/source/img/pivot_functions/pivot_functions.001.jpeg index f72151ba..fc5123f3 100644 Binary files a/source/img/pivot_functions/pivot_functions.001.jpeg and b/source/img/pivot_functions/pivot_functions.001.jpeg differ diff --git a/source/img/pivot_functions/pivot_functions.002.jpeg b/source/img/pivot_functions/pivot_functions.002.jpeg index 5e83772e..961c0813 100644 Binary files a/source/img/pivot_functions/pivot_functions.002.jpeg and b/source/img/pivot_functions/pivot_functions.002.jpeg differ diff --git a/source/img/summarize/summarize.001.jpeg b/source/img/summarize/summarize.001.jpeg index 1ffbaa57..7960e61e 100644 Binary files a/source/img/summarize/summarize.001.jpeg and b/source/img/summarize/summarize.001.jpeg differ diff --git a/source/img/summarize/summarize.002.jpeg b/source/img/summarize/summarize.002.jpeg index 5a6dbbd0..97995520 100644 Binary files a/source/img/summarize/summarize.002.jpeg and 
b/source/img/summarize/summarize.002.jpeg differ diff --git a/source/img/summarize/summarize.003.jpeg b/source/img/summarize/summarize.003.jpeg index a9d50b07..0a97f6be 100644 Binary files a/source/img/summarize/summarize.003.jpeg and b/source/img/summarize/summarize.003.jpeg differ diff --git a/source/img/summarize/summarize.004.jpeg b/source/img/summarize/summarize.004.jpeg index f3553dba..476ad698 100644 Binary files a/source/img/summarize/summarize.004.jpeg and b/source/img/summarize/summarize.004.jpeg differ diff --git a/source/img/summarize/summarize.005.jpeg b/source/img/summarize/summarize.005.jpeg index b2b1b2ca..d1a4f710 100644 Binary files a/source/img/summarize/summarize.005.jpeg and b/source/img/summarize/summarize.005.jpeg differ diff --git a/source/img/wrangling/pandas_dataframe_series-3.png b/source/img/wrangling/pandas_dataframe_series-3.png index a93bf397..6a2eea54 100644 Binary files a/source/img/wrangling/pandas_dataframe_series-3.png and b/source/img/wrangling/pandas_dataframe_series-3.png differ diff --git a/source/img/wrangling/pandas_dataframe_series.png b/source/img/wrangling/pandas_dataframe_series.png index 285a6559..75ffc893 100644 Binary files a/source/img/wrangling/pandas_dataframe_series.png and b/source/img/wrangling/pandas_dataframe_series.png differ diff --git a/source/index.md b/source/index.md index 248c0d02..be402176 100644 --- a/source/index.md +++ b/source/index.md @@ -22,7 +22,7 @@ you may need to open the table of contents first by clicking the menu button on the top left of the page. For the R version of the textbook, please visit https://datasciencebook.ca. 
-You can purchase a PDF or print copy of the book +You can purchase a PDF or print copy of the R version of the book on the [CRC Press website](https://www.routledge.com/Data-Science-A-First-Introduction/Timbers-Campbell-Lee/p/book/9780367524685) or on [Amazon](https://www.amazon.com/Data-Science-First-Introduction-Chapman/dp/0367532174/ref=sr_[…]qid=1644637450&sprefix=data+science+timber%2Caps%2C166&sr=8-1). diff --git a/source/intro.md b/source/intro.md index 9683b4ef..0997fa52 100644 --- a/source/intro.md +++ b/source/intro.md @@ -24,9 +24,9 @@ from myst_nb import glue This chapter provides an introduction to data science and the Python programming language. The goal here is to get your hands dirty right from the start! We will walk through an entire data analysis, -and along the way introduce different types of data analysis question, some fundamental programming +and along the way introduce different types of data analysis question, some fundamental programming concepts in Python, and the basics of loading, cleaning, and visualizing data. In the following chapters, we will -dig into each of these steps in much more detail; but for now, let's jump in to see how much we can do +dig into each of these steps in much more detail; but for now, let's jump in to see how much we can do with data science! ## Chapter learning objectives @@ -38,7 +38,8 @@ By the end of the chapter, readers will be able to do the following: - Read tabular data with `read_csv`. - Use `help()` to access help and documentation tools in Python. - Create new variables and objects in Python. -- Create and organize subsets of tabular data using `[]`, `loc[]`, and `sort_values` +- Create and organize subsets of tabular data using `[]`, `loc[]`, and `sort_values`. +- Chain multiple operations in sequence. - Visualize data with an `altair` bar plot. 
## Canadian languages data set @@ -47,7 +48,7 @@ By the end of the chapter, readers will be able to do the following: ``` In this chapter, we will walk through a full analysis of a data set relating to -languages spoken at home by Canadian residents. Many Indigenous peoples exist in Canada +languages spoken at home by Canadian residents. Many Indigenous peoples exist in Canada with their own cultures and languages; these languages are often unique to Canada and not spoken anywhere else in the world {cite:p}`statcan2018mothertongue`. Sadly, colonization has led to the loss of many of these languages. For instance, generations of @@ -55,18 +56,18 @@ children were not allowed to speak their mother tongue (the first language an individual learns in childhood) in Canadian residential schools. Colonizers also renamed places they had "discovered" {cite:p}`wilson2018`. Acts such as these have significantly harmed the continuity of Indigenous languages in Canada, and -some languages are considered "endangered" as few people report speaking them. -To learn more, please see *Canadian Geographic*'s article, "Mapping Indigenous Languages in -Canada" {cite:p}`walker2017`, -*They Came for the Children: Canada, Aboriginal -peoples, and Residential Schools* {cite:p}`children2012` -and the *Truth and Reconciliation Commission of Canada's* +some languages are considered "endangered" as few people report speaking them. +To learn more, please see *Canadian Geographic*'s article, "Mapping Indigenous Languages in +Canada" {cite:p}`walker2017`, +*They Came for the Children: Canada, Aboriginal +peoples, and Residential Schools* {cite:p}`children2012` +and the *Truth and Reconciliation Commission of Canada's* *Calls to Action* {cite:p}`calls2015`. 
-The data set we will study in this chapter is taken from -[the `canlang` R data package](https://ttimbers.github.io/canlang/) +The data set we will study in this chapter is taken from +[the `canlang` R data package](https://ttimbers.github.io/canlang/) {cite:p}`timbers2020canlang`, which has -population language data collected during the 2016 Canadian census {cite:p}`cancensus2016`. +population language data collected during the 2016 Canadian census {cite:p}`cancensus2016`. In this data, there are 214 languages recorded, each having six different properties: 1. `category`: Higher-level language category, describing whether the language is an Official Canadian language, an Aboriginal (i.e., Indigenous) language, or a Non-Official and Non-Aboriginal language. @@ -78,15 +79,15 @@ In this data, there are 214 languages recorded, each having six different proper According to the census, more than 60 Aboriginal languages were reported as being spoken in Canada. Suppose we want to know which are the most common; -then we might ask the following question, which we wish to answer using our data: +then we might ask the following question, which we wish to answer using our data: *Which ten Aboriginal languages were most often reported in 2016 as mother -tongues in Canada, and how many people speak each of them?* +tongues in Canada, and how many people speak each of them?* ```{index} data science; good practices ``` -> **Note:** Data science cannot be done without +> **Note:** Data science cannot be done without > a deep understanding of the data and > problem domain. In this book, we have simplified the data sets used in our > examples to concentrate on methods and fundamental concepts. But in real @@ -96,15 +97,15 @@ tongues in Canada, and how many people speak each of them?* > about *how* the data were collected, which affects the conclusions you can > draw. If your data are biased, then your results will be biased! 
-## Asking a question +## Asking a question Every good data analysis begins with a *question*—like the above—that you aim to answer using data. As it turns out, there are actually a number of different *types* of question regarding data: descriptive, exploratory, inferential, predictive, causal, and mechanistic, all of which are defined in {numref}`questions-table`. {cite:p}`leek2015question,peng2015art` -Carefully formulating a question as early as possible in your analysis—and -correctly identifying which type of question it is—will guide your overall approach to +Carefully formulating a question as early as possible in your analysis—and +correctly identifying which type of question it is—will guide your overall approach to the analysis as well as the selection of appropriate tools. ```{index} question; data analysis, descriptive question; definition, exploratory question; definition @@ -138,12 +139,12 @@ the analysis as well as the selection of appropriate tools. * - Mechanistic - A question that asks about the underlying mechanism of the observed patterns, trends, or relationships (i.e., how does it happen?) - How does wealth lead to voting for a certain political party in Canadian elections? - + ``` -In this book, you will learn techniques to answer the -first four types of question: descriptive, exploratory, predictive, and inferential; +In this book, you will learn techniques to answer the +first four types of question: descriptive, exploratory, predictive, and inferential; causal and mechanistic questions are beyond the scope of this book. In particular, you will learn how to apply the following analysis tools: @@ -153,25 +154,25 @@ In particular, you will learn how to apply the following analysis tools: ```{index} clustering; overview, estimation; overview ``` -1. **Summarization:** computing and reporting aggregated values pertaining to a data set. +1. **Summarization:** computing and reporting aggregated values pertaining to a data set. 
Summarization is most often used to answer descriptive questions, and can occasionally help with answering exploratory questions. -For example, you might use summarization to answer the following question: +For example, you might use summarization to answer the following question: *What is the average race time for runners in this data set?* Tools for summarization are covered in detail in the {ref}`reading` and {ref}`wrangling` chapters, but appear regularly throughout the text. -1. **Visualization:** plotting data graphically. +1. **Visualization:** plotting data graphically. Visualization is typically used to answer descriptive and exploratory questions, but plays a critical supporting role in answering all of the types of question in {numref}`questions-table`. For example, you might use visualization to answer the following question: -*Is there any relationship between race time and age for runners in this data set?* +*Is there any relationship between race time and age for runners in this data set?* This is covered in detail in the {ref}`viz` chapter, but again appears regularly throughout the book. 3. **Classification:** predicting a class or category for a new observation. Classification is used to answer predictive questions. For example, you might use classification to answer the following question: *Given measurements of a tumor's average cell area and perimeter, is the tumor benign or malignant?* Classification is covered in the {ref}`classification` and {ref}`classification2` chapters. -4. **Regression:** predicting a quantitative value for a new observation. +4. **Regression:** predicting a quantitative value for a new observation. Regression is also used to answer predictive questions. For example, you might use regression to answer the following question: *What will be the race time for a 20-year-old runner who weighs 50kg?* @@ -181,22 +182,22 @@ data set. Clustering is often used to answer exploratory questions. 
For example, you might use clustering to answer the following question: *What products are commonly bought together on Amazon?* Clustering is covered in the {ref}`clustering` chapter. -6. **Estimation:** taking measurements for a small number of items from a large group - and making a good guess for the average or proportion for the large group. Estimation +6. **Estimation:** taking measurements for a small number of items from a large group + and making a good guess for the average or proportion for the large group. Estimation is used to answer inferential questions. For example, you might use estimation to answer the following question: *Given a survey of cellphone ownership of 100 Canadians, what proportion -of the entire Canadian population own Android phones?* +of the entire Canadian population own Android phones?* Estimation is covered in the {ref}`inference` chapter. -Referring to {numref}`questions-table`, our question about +Referring to {numref}`questions-table`, our question about Aboriginal languages is an example of a *descriptive question*: we are summarizing the characteristics of a data set without further interpretation. And referring to the list above, it looks like we should use visualization and perhaps some summarization to answer the question. So in the remainder -of this chapter, we will work towards making a visualization that shows +of this chapter, we will work towards making a visualization that shows us the ten most common Aboriginal languages in Canada and their associated counts, -according to the 2016 census. +according to the 2016 census. ## Loading a tabular data set @@ -204,7 +205,7 @@ according to the 2016 census. ``` A data set is, at its core, a structured collection of numbers and characters. -Aside from that, there are really no strict rules; data sets can come in +Aside from that, there are really no strict rules; data sets can come in many different forms!
Perhaps the most common form of data set that you will find in the wild, however, is *tabular data*. Think spreadsheets in Microsoft Excel: tabular data are rectangular-shaped and spreadsheet-like, as shown in {numref}`img-spreadsheet-vs-data frame`. In this book, we will focus primarily on tabular data. @@ -216,9 +217,9 @@ Since we are using Python for data analysis in this book, the first step for us load the data into Python. When we load tabular data into Python, it is represented as a *data frame* object. {numref}`img-spreadsheet-vs-data frame` shows that a Python data frame is very similar to a spreadsheet. We refer to the rows as **observations**; these are the things that we -collect the data on, e.g., voters, cities, etc. We refer to the columns as +collect the data on, e.g., voters, cities, etc. We refer to the columns as **variables**; these are the characteristics of those observations, e.g., voters' political -affiliations, cities' populations, etc. +affiliations, cities' populations, etc. ```{figure} img/spreadsheet_vs_df.png @@ -239,7 +240,7 @@ The first kind of data file that we will learn how to load into Python as a data frame is the *comma-separated values* format (`.csv` for short). These files have names ending in `.csv`, and can be opened and saved using common spreadsheet programs like Microsoft Excel and Google Sheets. For example, the -`.csv` file named `can_lang.csv` +`.csv` file named `can_lang.csv` is included with [the code for this book](https://github.com/UBC-DSCI/introduction-to-datascience-python/tree/main/source/data). 
If we were to open this data in a plain text editor (a program like Notepad that just shows text with no formatting), we would see each row on its own line, and each entry in the table separated by a comma: @@ -264,7 +265,7 @@ To load this data into Python so that we can do things with it (e.g., perform analyses or create data visualizations), we will need to use a *function.* A function is a special word in Python that takes instructions (we call these *arguments*) and does something. The function we will use to load a `.csv` file -into Python is called `read_csv`. In its most basic +into Python is called `read_csv`. In its most basic use-case, `read_csv` expects that the data file: - has column names (or *headers*), @@ -280,14 +281,14 @@ Below you'll see the code used to load the data into Python using the `read_csv` function. Note that the `read_csv` function is not included in the base installation of Python, meaning that it is not one of the primary functions ready to use when you install Python. Therefore, you need to load it from somewhere else -before you can use it. The place from which we will load it is called a Python *package*. +before you can use it. The place from which we will load it is called a Python *package*. A Python package is a collection of functions that can be used in addition to the built-in Python package functions once loaded. The `read_csv` function, in -particular, can be made accessible by loading +particular, can be made accessible by loading [the `pandas` Python package](https://pypi.org/project/pandas/) {cite:p}`reback2020pandas,mckinney-proc-scipy-2010` using the `import` command. The `pandas` package contains many -functions that we will use throughout this book to load, clean, wrangle, -and visualize data. +functions that we will use throughout this book to load, clean, wrangle, +and visualize data. +++ @@ -296,19 +297,19 @@ import pandas as pd ``` This command has two parts. 
The first is `import pandas`, which loads the `pandas` package. -The second is `as pd`, which gives the `pandas` package the much shorter *alias* (another name) `pd`. +The second is `as pd`, which gives the `pandas` package the much shorter *alias* (another name) `pd`. We can now use the `read_csv` function by writing `pd.read_csv`, i.e., the package name, then a dot, then the function name. You can see why we gave `pandas` a shorter alias; if we had to type `pandas.` before every function we wanted to use, our code would become much longer and harder to read! -Now that the `pandas` package is loaded, we can use the `read_csv` function by passing +Now that the `pandas` package is loaded, we can use the `read_csv` function by passing it a single argument: the name of the file, `"can_lang.csv"`. We have to put quotes around file names and other letters and words that we use in our code to distinguish it from the special words (like functions!) that make up the Python programming language. The file's name is the only argument we need to provide because our file satisfies everything else that the `read_csv` function expects in the default use-case. {numref}`img-read-csv` describes how we use the `read_csv` -to read data into Python. +to read data into Python. **(FIGURE 1.2 FROM R BOOK IS NOT MISSING, BUT STILL R VERSION. NEEDS PD.READ_CSV)** @@ -332,11 +333,11 @@ pd.read_csv("data/can_lang.csv") ``` ## Naming things in Python When we loaded the 2016 Canadian census language data -using `read_csv`, we did not give this data frame a name. -Therefore the data was just printed on the screen, -and we cannot do anything else with it. That isn't very useful.
+What would be more useful would be to give a name +to the data frame that `read_csv` outputs, so that we can refer to it later for analysis and visualization. ```{index} see: =; assignment symbol @@ -345,7 +346,7 @@ so that we can refer to it later for analysis and visualization. ```{index} assignment symbol, string ``` -The way to assign a name to a value in Python is via the *assignment symbol* `=`. +The way to assign a name to a value in Python is via the *assignment symbol* `=`. On the left side of the assignment symbol you put the name that you want to use, and on the right side of the assignment symbol you put the value that you want the name to refer to. @@ -360,17 +361,17 @@ my_number = 1 + 2 name = "Alice" ``` -Note that when -we name something in Python using the assignment symbol, `=`, -we do not need to surround the name we are creating with quotes. This is +Note that when +we name something in Python using the assignment symbol, `=`, +we do not need to surround the name we are creating with quotes. This is because we are formally telling Python that this special word denotes the value of whatever is on the right-hand side. Only characters and words that act as *values* on the right-hand side of the assignment -symbol—e.g., the file name `"data/can_lang.csv"` that we specified before, or `"Alice"` above—need +symbol—e.g., the file name `"data/can_lang.csv"` that we specified before, or `"Alice"` above—need to be surrounded by quotes. After making the assignment, we can use the special name words we have created in -place of their values. For example, if we want to do something with the value `3` later on, +place of their values. For example, if we want to do something with the value `3` later on, we can just use `my_number` instead. 
Let's try adding 2 to `my_number`; you will see that Python just interprets this as adding 2 and 3: @@ -397,7 +398,7 @@ SyntaxError: cannot assign to operator ```{index} object; naming convention ``` -There are certain conventions for naming objects in Python. +There are certain conventions for naming objects in Python. When naming an object we suggest using only lowercase letters, numbers and underscores `_` to separate the words in a name. Python is case sensitive, which means that `Letter` and @@ -408,20 +409,20 @@ remember what each name in your code represents. We recommend following the naming conventions outlined in the *[PEP 8 style guide](https://peps.python.org/pep-0008/)* {cite:p}`pep8-style-guide`. Let's now use the assignment symbol to give the name `can_lang` to the 2016 Canadian census language data frame that we get from -`read_csv`. +`read_csv`. ```{code-cell} ipython3 can_lang = pd.read_csv("data/can_lang.csv") ``` Wait a minute, nothing happened this time! Where's our data? -Actually, something did happen: the data was loaded in -and now has the name `can_lang` associated with it. -And we can use that name to access the data frame and do things with it. -For example, we can type the name of the data frame to print both the first few rows +Actually, something did happen: the data was loaded in +and now has the name `can_lang` associated with it. +And we can use that name to access the data frame and do things with it. +For example, we can type the name of the data frame to print both the first few rows and the last few rows. The three dots (`...`) indicate that there are additional rows that are not printed. -You will also see that the number of observations (i.e., rows) and -variables (i.e., columns) are printed just underneath the data frame (214 rows and 6 columns in this case).
+You will also see that the number of observations (i.e., rows) and +variables (i.e., columns) are printed just underneath the data frame (214 rows and 6 columns in this case). Printing a few rows from a data frame like this is a handy way to get a quick sense for what is contained in it. ```{code-cell} ipython3 can_lang ``` @@ -435,8 +436,8 @@ can_lang Now that we've loaded our data into Python, we can start wrangling the data to find the ten Aboriginal languages that were most often reported -in 2016 as mother tongues in Canada. In particular, we want to construct -a table with the ten Aboriginal languages that have the largest +in 2016 as mother tongues in Canada. In particular, we want to construct +a table with the ten Aboriginal languages that have the largest counts in the `mother_tongue` column. The first step is to extract from our `can_lang` data only those rows that correspond to Aboriginal languages, and then the second step is to keep only the `language` and `mother_tongue` columns. @@ -457,8 +458,8 @@ and then use `loc[]` to do both in our analysis of the Aboriginal languages data Looking at the `can_lang` data above, we see the column `category` contains different high-level categories of languages, which include "Aboriginal languages", "Non-Official & Non-Aboriginal languages" and "Official languages". To answer -our question we want to filter our data set so we restrict our attention -to only those languages in the "Aboriginal languages" category. +our question we want to filter our data set so we restrict our attention +to only those languages in the "Aboriginal languages" category. ```{index} pandas.DataFrame; [], filter, logical statement, logical statement; equivalency operator, string ``` @@ -476,12 +477,12 @@ column---denoted by `can_lang["category"]`---with the value `"Aboriginal languages"`. You will learn about many other kinds of logical statement in the {ref}`wrangling` chapter.
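To make the behavior of such a logical statement concrete, here is a minimal sketch using a tiny, made-up data frame (the `df` variable and its values below are invented for illustration; they are not the real `can_lang` data):

```python
import pandas as pd

# A tiny, made-up stand-in for the census data (illustration only)
df = pd.DataFrame({
    "category": ["Aboriginal languages", "Official languages", "Aboriginal languages"],
    "language": ["Cree", "English", "Inuktitut"],
})

# The comparison produces one True/False value per row of the data frame
mask = df["category"] == "Aboriginal languages"
print(mask.tolist())  # [True, False, True]

# Placing the boolean result inside [] keeps only the rows marked True
print(df[mask]["language"].tolist())  # ['Cree', 'Inuktitut']
```

Each `True` in `mask` marks a row that satisfies the logical statement, which is exactly what the `[]` operation uses to decide which rows to keep.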
Similar to when we loaded the data file and put quotes around the file name, here we need to put quotes around both `"Aboriginal languages"` and `"category"`. Using -quotes tells Python that this is a *string value* (e.g., a column name, or word data) -and not one of the special words that makes up the Python programming language, +quotes tells Python that this is a *string value* (e.g., a column name, or word data) +and not one of the special words that makes up the Python programming language, or one of the names we have given to objects in the code we have already written. > **Note:** In Python, single quotes (`'`) and double quotes (`"`) are generally -> treated the same. So we could have written `'Aboriginal languages'` instead +> treated the same. So we could have written `'Aboriginal languages'` instead > of `"Aboriginal languages"` above, or `'category'` instead of `"category"`. > Try both out for yourself! @@ -513,7 +514,7 @@ We can also use the `[]` operation to select columns from a data frame. We again first type the name of the data frame---here, `can_lang`---followed by square brackets. Inside the square brackets, we provide a *list* of column names. In Python, we denote a *list* using square brackets, where -each item is separated by a comma (`,`). So if we are interested in +each item is separated by a comma (`,`). So if we are interested in selecting only the `language` and `mother_tongue` columns from our original `can_lang` data frame, we put the list `["language", "mother_tongue"]` containing those two column names inside the square brackets of the `[]` operation. @@ -549,7 +550,7 @@ The syntax is very similar to the `[]` operation we have already covered: we wil essentially combine both our row filtering and column selection steps from before. In particular, we first write the name of the data frame---`can_lang` again---then follow that with the `.loc[]` method. 
Inside the square brackets, -we write our row filtering logical statement, +we write our row filtering logical statement, then a comma, then our list of columns to select. **(This figure is wrong-- should be for .loc[] operation below)** @@ -565,14 +566,14 @@ Syntax for using the `loc[]` operation to filter rows and select columns. ```{code-cell} ipython3 aboriginal_lang = can_lang.loc[can_lang["category"] == "Aboriginal languages", ["language", "mother_tongue"]] ``` -There is one very important thing to notice in this code example. +There is one very important thing to notice in this code example: we used the `loc[]` operation on the `can_lang` data frame by writing `can_lang.loc[]`---first the data frame name, then a dot, then `loc[]`. There's that dot again! If you recall, earlier in this chapter we used the `read_csv` function from `pandas` (aliased as `pd`), and wrote `pd.read_csv`. The dot means that the thing on the left (`pd`, i.e., the `pandas` package) *provides* the thing on the right (the `read_csv` function). In the case of `can_lang.loc[]`, the thing on the left (the `can_lang` data frame) -*provides* the thing on the right (the `loc[]` operation). In Python, -both packages (like `pandas`) *and* objects (like our `can_lang` data frame) can provide functions +*provides* the thing on the right (the `loc[]` operation). In Python, +both packages (like `pandas`) *and* objects (like our `can_lang` data frame) can provide functions and other objects that we access using the dot syntax. At this point, if we have done everything correctly, `aboriginal_lang` should be a data frame @@ -585,7 +586,7 @@ aboriginal_lang ``` We can see the original `can_lang` data set contained 214 rows with multiple kinds of `category`. The data frame -`aboriginal_lang` contains only 67 rows, and looks like it only contains Aboriginal languages.
So it looks like the `loc[]` operation gave us the result we wanted! ### Using `sort_values` to order and `head` to select rows by value @@ -598,7 +599,7 @@ with only the Aboriginal languages in the data set and their associated counts. However, we want to know the **ten** languages that are spoken most often. As a next step, we will order the `mother_tongue` column from largest to smallest value and then extract only the top ten rows. This is where the `sort_values` -and `head` functions come to the rescue! +and `head` functions come to the rescue! The `sort_values` function allows us to order the rows of a data frame by the values of a particular column. We need to specify the column name @@ -619,7 +620,7 @@ arranged_lang Next, we will obtain the ten most common Aboriginal languages by selecting only the first ten rows of the `arranged_lang` data frame. We do this using the `head` function, and specifying the argument -`10`. +`10`. ```{code-cell} ipython3 @@ -627,16 +628,134 @@ ten_lang = arranged_lang.head(10) ten_lang ``` -We have now answered our initial question by generating this table! +## Combining analysis steps with chaining and multiline expressions + +```{index} chaining methods +``` + +It took us 3 steps to find the ten Aboriginal languages most often reported in +2016 as mother tongues in Canada. Starting from the `can_lang` data frame, we: + +1) used `loc` to filter the rows so that only the + `Aboriginal languages` category remained, and selected the + `language` and `mother_tongue` columns, +2) used `sort_values` to sort the rows by `mother_tongue` in descending order, and +3) obtained only the top 10 values using `head`. + +One way of performing these steps is to just write +multiple lines of code, storing temporary, intermediate objects as you go. 
+```{code-cell} ipython3 +aboriginal_lang = can_lang.loc[can_lang["category"] == "Aboriginal languages", ["language", "mother_tongue"]] +arranged_lang_sorted = aboriginal_lang.sort_values(by='mother_tongue', ascending=False) +ten_lang = arranged_lang_sorted.head(10) +``` + +```{index} multi-line expression +``` + +You might find that code hard to read. You're not wrong; it is! +There are two main issues with readability here. First, each line of code is quite long. +It is hard to keep track of what methods are being called, and what arguments were used. +Second, each line introduces a new temporary object. In this case, both `aboriginal_lang` and `arranged_lang_sorted` +are just temporary results on the way to producing the `ten_lang` data frame. +This makes the code hard to read, as one has to trace where each temporary object +goes, and hard to understand, since introducing many named objects also suggests that they +are of some importance, when really they are just intermediates. +The need to call multiple methods in a sequence to process a data frame is +quite common, so this is an important issue to address! + +To solve the first problem, we can actually split the long expressions above across +multiple lines. Although in most cases, a single expression in Python must be contained +in a single line of code, there are a small number of situations where Python lets us do this. +Let's rewrite this code in a more readable format using multiline expressions. + +```{code-cell} ipython3 +aboriginal_lang = can_lang.loc[ + can_lang["category"] == "Aboriginal languages", + ["language", "mother_tongue"]] +arranged_lang_sorted = aboriginal_lang.sort_values( + by='mother_tongue', + ascending=False) +ten_lang = arranged_lang_sorted.head(10) +``` + +This code is the same as the code we showed earlier; you can see the same +sequence of methods and arguments is used.
But long expressions are split
+across multiple lines when they would otherwise become unwieldy,
+improving the readability of the code.
+How does Python know when to keep
+reading on the next line for a single expression?
+For the line starting with `aboriginal_lang = ...`, Python sees that the line ends with a left
+bracket symbol `[`, and knows that our
+expression cannot end until we close it with an appropriate corresponding right bracket symbol `]`.
+We put the same two arguments as we did before, and then
+the corresponding right bracket appears after `["language", "mother_tongue"]`.
+For the line starting with `arranged_lang_sorted = ...`, Python sees that the line ends with a left parenthesis symbol `(`,
+and knows the expression cannot end until we close it with the corresponding right parenthesis symbol `)`.
+Again we use the same two arguments as before, and then the
+corresponding right parenthesis appears right after `ascending=False`.
+In both cases, Python keeps reading the next line to figure out
+what the rest of the expression is. We could, of course,
+put all of the code on one line, but splitting it across
+multiple lines helps a lot with code readability.
+
+We still have to handle the issue that each line of code---i.e., each step in the analysis---introduces
+a new temporary object. To address this issue, we can *chain* multiple operations together without
+assigning intermediate objects. The key idea of chaining is that the *output* of
+each step in the analysis is a data frame, which means that you can just directly keep calling methods
+that operate on the output of each step in a sequence! This simplifies the code and makes it
+easier to read. The code below demonstrates the use of both multiline expressions and chaining.
+The code is now much cleaner, and the `ten_lang` data frame that we get is equivalent to the one
+from the messy code above!
+
+```{code-cell} ipython3
+# obtain the 10 most common Aboriginal languages
+ten_lang = (
+    can_lang.loc[
+        can_lang["category"] == "Aboriginal languages",
+        ["language", "mother_tongue"]
+    ]
+    .sort_values(by="mother_tongue", ascending=False)
+    .head(10)
+)
+ten_lang
+```
+
+Let's parse this new block of code piece by piece.
+The code above starts with a left parenthesis, `(`, and so Python
+knows to keep reading to subsequent lines until it finds the corresponding
+right parenthesis symbol `)`. The `loc` method performs the filtering and selecting steps as before. The line after this
+starts with a period (`.`) that "chains" the output of the `loc` step with the next operation,
+`sort_values`. Since the output of `loc` is a data frame, we can use the `sort_values` method on it
+without first giving it a name! That is what `.sort_values` does on the next line.
+Next, we once again "chain" together the output of `sort_values` with `head` to ask for the 10
+most common languages. Finally, the right parenthesis `)` corresponding to the very first left parenthesis
+appears on the second last line, completing the multiline expression.
+Instead of creating intermediate objects, with chaining, we take the output of
+one operation and use that to perform the next operation. In doing so, we remove the need to create and
+store intermediates. This can help with readability by simplifying the code.
+
+Now that we've shown you chaining as an alternative to storing
+temporary objects and composing code, does this mean you should *never* store
+temporary objects or compose code? Not necessarily!
+There are times when temporary objects are handy to keep around.
+For example, you might store a temporary object before feeding it into a plot function
+so you can iteratively change the plot without having to
+redo all of your data transformations.
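As a small, self-contained sketch of this middle-ground style (the `survey` data frame and its columns here are made up for illustration, not part of the chapter's data), we can store just one intermediate so we can inspect it, then chain the remaining steps off of it:

```python
import pandas as pd

# A tiny, made-up data frame standing in for a real data set.
survey = pd.DataFrame({
    "group": ["a", "a", "b", "b", "b"],
    "score": [3, 9, 5, 8, 1],
})

# Store the filtered rows so we can sanity-check them...
group_b = survey[survey["group"] == "b"]
print(group_b.shape)  # (3, 2): three rows survived the filter

# ...then chain the remaining steps off the stored intermediate.
top_b = group_b.sort_values(by="score", ascending=False).head(2)
print(top_b["score"].tolist())  # [8, 5]
```

Storing the single `group_b` intermediate gives us a convenient place to pause and check the result, without naming every step along the way.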
+Chaining many functions can be overwhelming and difficult to debug; +you may want to store a temporary object midway through to inspect your result +before moving on with further steps. + +We have now answered our initial question by generating the `ten_lang` table! Are we done? Well, not quite; tables are almost never the best way to present the result of your analysis to your audience. Even the simple table above with only two columns presents some difficulty: for example, you have to scrutinize -the table quite closely to get a sense for the relative numbers of speakers of -each language. When you move on to more complicated analyses, this issue only -gets worse. In contrast, a *visualization* would convey this information in a much -more easily understood format. +the table quite closely to get a sense for the relative numbers of speakers of +each language. When you move on to more complicated analyses, this issue only +gets worse. In contrast, a *visualization* would convey this information in a much +more easily understood format. Visualizations are a great tool for summarizing information to help you -effectively communicate with your audience. +effectively communicate with your audience. ## Exploring data with visualizations @@ -644,7 +763,7 @@ effectively communicate with your audience. ``` Creating effective data visualizations is an essential component of any data -analysis. In this section we will develop a visualization of the +analysis. In this section we will develop a visualization of the ten Aboriginal languages that were most often reported in 2016 as mother tongues in Canada, as well as the number of people that speak each of them. @@ -670,9 +789,9 @@ formally introduce tidy data in the {ref}`wrangling` chapter. We will make a bar plot to visualize our data. A bar plot is a chart where the lengths of the bars represent certain values, like counts or proportions. 
We will make a bar plot using the `mother_tongue` and `language` columns from our -`ten_lang` data frame. To create a bar plot of these two variables using the +`ten_lang` data frame. To create a bar plot of these two variables using the `altair` package, we must specify the data frame, which variables -to put on the x and y axes, and what kind of plot to create. +to put on the x and y axes, and what kind of plot to create. First, we need to import the `altair` package. ```{code-cell} ipython3 @@ -683,11 +802,11 @@ import altair as alt +++ The fundamental object in `altair` is the `Chart`, which takes a data frame as a single argument: `alt.Chart(ten_lang)`. -With a chart object in hand, we can now specify how we would like the data to be visualized. -We first indicate what kind of geometric mark we want to use to represent the data. Here we set the mark attribute +With a chart object in hand, we can now specify how we would like the data to be visualized. +We first indicate what kind of geometric mark we want to use to represent the data. Here we set the mark attribute of the chart object using the `Chart.mark_bar` function, because we want to create a bar chart. -Next, we need to encode the variables of the data frame using -the `x` (represents the x-axis position of the points) and +Next, we need to encode the variables of the data frame using +the `x` (represents the x-axis position of the points) and `y` (represents the y-axis position of the points) *channels*. We use the `encode()` function to handle this: we specify that the `language` column should correspond to the x-axis, and that the `mother_tongue` column should correspond to the y-axis. 
@@ -705,7 +824,7 @@ barplot_mother_tongue = ( x="language", y="mother_tongue" )) - + ``` @@ -728,20 +847,6 @@ Bar plot of the ten Aboriginal languages most often reported by Canadian residen ```{index} see: .; chaining methods ``` -```{index} multi-line expression -``` - -> **Note:** The vast majority of the -> time, a single expression in Python must be contained in a single line of code. -> However, there *are* a small number of situations in which you can have a -> single Python expression span multiple lines. Above is one such case: here, Python sees that we put a left -> parenthesis symbol `(` on the first line right after the assignment symbol `=`, and knows that our -> expression cannot end until we close it with an appropriate corresponding right parenthesis symbol `)`. -> So Python keeps reading the next line to figure out -> what the rest of the expression is. We could, of course, -> put all of the code on one line of code, but splitting it across -> multiple lines helps a lot with code readability. - ### Formatting `altair` objects It is exciting that we can already visualize our data to help answer our @@ -760,8 +865,8 @@ Canadian Residents)" would be much more informative. ``` Adding additional labels to our visualizations that we create in `altair` is -one common and easy way to improve and refine our data visualizations. We can add titles for the axes -in the `altair` objects using `alt.X` and `alt.Y` with the `title` argument to make +one common and easy way to improve and refine our data visualizations. We can add titles for the axes +in the `altair` objects using `alt.X` and `alt.Y` with the `title` argument to make the axes titles more informative. Again, since we are specifying words (e.g. `"Mother Tongue (Number of Canadian Residents)"`) as arguments to @@ -795,7 +900,7 @@ Bar plot of the ten Aboriginal languages most often reported by Canadian residen ::: -The result is shown in {numref}`barplot-mother-tongue-labs`. 
+The result is shown in {numref}`barplot-mother-tongue-labs`.
This is already quite an improvement! Let's tackle the next major issue with the visualization
in {numref}`barplot-mother-tongue-labs`: the vertical x axis labels, which are currently making
it difficult to read the different language names.
@@ -830,14 +935,14 @@ Horizontal bar plot of the ten Aboriginal languages most often reported by Canad

```{index} altair; sort
```

-Another big step forward, as shown in {numref}`barplot-mother-tongue-labs-axis`! There
+Another big step forward, as shown in {numref}`barplot-mother-tongue-labs-axis`! There
are no more serious issues with the visualization. Now comes time to refine
the visualization to make it even more well-suited to answering the question
we asked earlier in this chapter. For example, the visualization could be made more transparent
by organizing the bars according to the number of Canadian residents reporting
each language, rather than in alphabetical order. We can reorder the bars using
the `sort` argument, which orders a variable (here `language`) based on the
-values of the variable(`mother_tongue`) on the `x-axis`.
+values of the variable (`mother_tongue`) on the x-axis.

```{code-cell} ipython3
ordered_barplot_mother_tongue = (
@@ -864,7 +969,7 @@ glue('barplot-mother-tongue-reorder', ordered_barplot_mother_tongue, display=Tru
:name: barplot-mother-tongue-reorder

Bar plot of the ten Aboriginal languages most often reported by Canadian residents as their mother tongue with bars reordered.
-:::
+:::


{numref}`barplot-mother-tongue-reorder` provides a very clear and well-organized
@@ -878,7 +983,7 @@ n.o.s. with over 60,000 Canadian residents reporting it as their mother tongue.

> Cree languages include the following categories: Cree n.o.s., Swampy Cree,
> Plains Cree, Woods Cree, and a 'Cree not included elsewhere' category (which
> includes Moose Cree, Northern East Cree and Southern East Cree)
-> {cite:p}`language2016`.
+> {cite:p}`language2016`. ### Putting it all together @@ -890,12 +995,12 @@ n.o.s. with over 60,000 Canadian residents reporting it as their mother tongue. In the block of code below, we put everything from this chapter together, with a few modifications. In particular, we have combined all of our steps into one expression -split across multiple lines using the left and right parenthesis symbols `(` and `)`. -We have also provided *comments* next to +split across multiple lines using the left and right parenthesis symbols `(` and `)`. +We have also provided *comments* next to many of the lines of code below using the -hash symbol `#`. When Python sees a `#` sign, it +hash symbol `#`. When Python sees a `#` sign, it will ignore all of the text that -comes after the symbol on that line. So you can use comments to explain lines +comes after the symbol on that line. So you can use comments to explain lines of code for others, and perhaps more importantly, your future self! It's good practice to get in the habit of commenting your code to improve its readability. @@ -905,7 +1010,7 @@ performed an entire data science workflow with a highly effective data visualization! We asked a question, loaded the data into Python, wrangled the data (using `[]`, `loc[]`, `sort_values`, and `head`) and created a data visualization to help answer our question. In this chapter, you got a quick taste of the data -science workflow; continue on with the next few chapters to learn each of +science workflow; continue on with the next few chapters to learn each of these steps in much more detail! 
```{code-cell} ipython3 @@ -956,16 +1061,16 @@ Bar plot of the ten Aboriginal languages most often reported by Canadian residen ```{index} see: __doc__; documentation ``` -There are many Python functions in the `pandas` package (and beyond!), and +There are many Python functions in the `pandas` package (and beyond!), and nobody can be expected to remember what every one of them does -or all of the arguments we have to give them. Fortunately, Python provides -the `help` function, which -provides an easy way to pull up the documentation for -most functions quickly. To use the `help` function to access the documentation, you +or all of the arguments we have to give them. Fortunately, Python provides +the `help` function, which +provides an easy way to pull up the documentation for +most functions quickly. To use the `help` function to access the documentation, you just put the name of the function you are curious about as an argument inside the `help` function. For example, if you had forgotten what the `pd.read_csv` function did or exactly what arguments to pass in, you could run the following -code: +code: ```{code-cell} ipython3 :tags: ["remove-output"] @@ -973,11 +1078,11 @@ help(pd.read_csv) ``` {numref}`help_read_csv` shows the documentation that will pop up, -including a high-level description of the function, its arguments, +including a high-level description of the function, its arguments, a description of each, and more. Note that you may find some of the text in the documentation a bit too technical right now. Fear not: as you work through this book, many of these terms will be introduced -to you, and slowly but surely you will become more adept at understanding and navigating +to you, and slowly but surely you will become more adept at understanding and navigating documentation like that shown in {numref}`help_read_csv`. 
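As an aside, the text that `help` displays is the function's *docstring*, which Python also stores in the function's `__doc__` attribute. Here is a minimal sketch that prints only the first line of the documentation instead of the whole page:

```python
import pandas as pd

# help() prints a function's docstring; the same text is stored in the
# function's __doc__ attribute, so we can also inspect it directly.
doc = pd.read_csv.__doc__
print(doc.strip().splitlines()[0])  # just the one-line summary of read_csv
```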
But do keep in mind that the documentation is not written to *teach* you about a function; it is just there as a reference to *remind* you about the different arguments and usage of functions that you have already learned about elsewhere. @@ -1000,8 +1105,8 @@ ways to access documentation for functions. **JOEL ADD TEXT AND IMAGES HERE**. ## Exercises -Practice exercises for the material covered in this chapter -can be found in the accompanying +Practice exercises for the material covered in this chapter +can be found in the accompanying [worksheets repository](https://github.com/UBC-DSCI/data-science-a-first-intro-python-worksheets#readme) in the "Python and Pandas" row. You can launch an interactive version of the worksheet in your browser by clicking the "launch binder" button. diff --git a/source/wrangling.md b/source/wrangling.md index 4f0a4573..94f15938 100644 --- a/source/wrangling.md +++ b/source/wrangling.md @@ -41,71 +41,33 @@ By the end of the chapter, readers will be able to do the following: - Define the term "tidy data". - Discuss the advantages of storing data in a tidy data format. - - Define what lists, series and data frames are in Python, and describe how they relate to + - Define what series and data frames are in Python, and describe how they relate to each other. - Describe the common types of data in Python and their uses. 
- Recall and use the following functions for their intended data wrangling tasks: - - `.agg` - - `.apply` - - `.assign` - - `.groupby` - - `.melt` - - `.pivot` - - `.str.split` + - `agg` + - `apply` + - `assign` + - `groupby` + - `melt` + - `pivot` + - `str.split` - Recall and use the following operators for their intended data wrangling tasks: - - `==` + - `==` - `in` - `and` - `or` - - `df[]` - - `.iloc[]` - - `.loc[]` + - `[]` + - `loc[]` + - `iloc[]` -```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# By the end of the chapter, readers will be able to do the following: - -# - Define the term "tidy data". -# - Discuss the advantages of storing data in a tidy data format. -# - Define what vectors, lists, and data frames are in R, and describe how they relate to -# each other. -# - Describe the common types of data in R and their uses. -# - Recall and use the following functions for their -# intended data wrangling tasks: -# - `across` -# - `c` -# - `filter` -# - `group_by` -# - `select` -# - `map` -# - `mutate` -# - `pull` -# - `pivot_longer` -# - `pivot_wider` -# - `rowwise` -# - `separate` -# - `summarize` -# - Recall and use the following operators for their -# intended data wrangling tasks: -# - `==` -# - `%in%` -# - `!` -# - `&` -# - `|` -# - `|>` and `%>%` -``` - -## Data frames, series, and lists - -In Chapters {ref}`intro` and {ref}`reading`, *data frames* were the focus: +## Data frames and series + +In the chapters on {ref}`intro` and {ref}`reading`, *data frames* were the focus: we learned how to import data into Python as a data frame, and perform basic operations on data frames in Python. -In the remainder of this book, this pattern continues. The vast majority of tools we use will require +In the remainder of this book, this pattern continues. The vast majority of tools we use will require that data are represented as a `pandas` **data frame** in Python. 
Therefore, in this section,
we will dig more deeply into what data frames are and how they are represented in Python.
This knowledge will be helpful in effectively utilizing these objects in our data analyses.
@@ -147,94 +109,46 @@ data set. There are 13 entities in the data set in total, corresponding to the

```{figure} img/data_frame_slides_cdn/data_frame_slides_cdn.004.jpeg
:name: fig:02-obs
-:figclass: caption-hack
+:figclass: figure

A data frame storing data regarding the population of various regions in Canada. In this example data frame, the row that corresponds to the observation for the city of Vancouver is colored yellow, and the column that corresponds to the population variable is colored blue.
```

-```{code-cell} ipython3
-:tags: [remove-cell]
-
-# The following cell was removed because there is no "vector" in Python.
-```
-
-+++ {"tags": ["remove-cell"]}
-
-Python stores the columns of a data frame as either
-*lists* or *vectors*. For example, the data frame in Figure
-{numref}`fig:02-vectors` has three vectors whose names are `region`, `year` and
-`population`. The next two sections will explain what lists and vectors are.
-
-```{figure} img/data_frame_slides_cdn/data_frame_slides_cdn.005.jpeg
-:name: fig:02-vectors
-:figclass: caption-hack
-
-Data frame with three vectors.
-```
-
-+++
-
### What is a series?

```{index} pandas.Series
```

-In Python, `pandas` **series** are arrays with labels. They are strictly 1-dimensional and can contain any data type (integers, strings, floats, etc), including a mix of them (objects);
-Python has several different basic data types, as shown in {numref}`tab:datatype-table`.
-You can create a `pandas` series using the `pd.Series()` function. For
-example, to create the vector `region` as shown in
-{numref}`fig:02-series`, you can write:
+In Python, `pandas` **series** are objects that can contain one or more elements (like a list).
+They are a single column, are ordered, can be indexed, and can contain any data type. +The `pandas` package uses `Series` objects to represent the columns in a data frame. +`Series` can contain a mix of data types, but it is good practice to only include a single type in a series +because all observations of one variable should be the same type. +Python +has several different basic data types, as shown in +{numref}`tab:datatype-table`. +You can create a `pandas` series using the +`pd.Series()` function. For example, to create the series `region` as shown +in {numref}`fig:02-series`, you can write the following. ```{code-cell} ipython3 import pandas as pd + region = pd.Series(["Toronto", "Montreal", "Vancouver", "Calgary", "Ottawa"]) region ``` + + +++ {"tags": []} ```{figure} img/wrangling/pandas_dataframe_series.png :name: fig:02-series -:figclass: caption-hack +:figclass: figure Example of a `pandas` series whose type is string. ``` -+++ {"tags": ["remove-cell"]} - -### What is a vector? - -In R, **vectors** \index{vector}\index{atomic vector|see{vector}} are objects that can contain one or more elements. The vector -elements are ordered, and they must all be of the same **data type**; -R has several different basic data types, as shown in {numref}`tab:datatype-table`. -Figure \@ref(fig:02-vector) provides an example of a vector where all of the elements are -of character type. -You can create vectors in R using the `c` function \index{c function} (`c` stands for "concatenate"). For -example, to create the vector `region` as shown in Figure -\@ref(fig:02-vector), you would write: - -``` {r} -year <- c("Toronto", "Montreal", "Vancouver", "Calgary", "Ottawa") -year -``` - -> **Note:** Technically, these objects are called "atomic vectors." In this book -> we have chosen to call them "vectors," which is how they are most commonly -> referred to in the R community. 
To be totally precise, "vector" is an umbrella term that -> encompasses both atomic vector and list objects in R. But this creates a -> confusing situation where the term "vector" could -> mean "atomic vector" *or* "the umbrella term for atomic vector and list," -> depending on context. Very confusing indeed! So to keep things simple, in -> this book we *always* use the term "vector" to refer to "atomic vector." -> We encourage readers who are enthusiastic to learn more to read the -> Vectors chapter of *Advanced R* [@wickham2019advanced]. - -``` {r 02-vector, echo = FALSE, message = FALSE, warning = FALSE, fig.cap = "Example of a vector whose type is character.", fig.retina = 2, out.width = "100%"} -image_read("img/data_frame_slides_cdn/data_frame_slides_cdn.007.jpeg") %>% - image_crop("3632x590") -``` - -+++ ```{code-cell} ipython3 :tags: [remove-cell] @@ -265,76 +179,30 @@ image_read("img/data_frame_slides_cdn/data_frame_slides_cdn.007.jpeg") %>% ```{table} Basic data types in Python :name: tab:datatype-table -| English name | Type name | Type Category | Description | Example | -| :-------------------- | :--------- | :------------- | :-------------------------------------------- | :----------------------------------------- | -| integer | `int` | Numeric Type | positive/negative whole numbers | `42` | -| floating point number | `float` | Numeric Type | real number in decimal form | `3.14159` | -| boolean | `bool` | Boolean Values | true or false | `True` | -| string | `str` | Sequence Type | text | `"Can I have a cheezburger?"` | -| list | `list` | Sequence Type | a collection of objects - mutable & ordered | `['Ali', 'Xinyi', 'Miriam']` | -| tuple | `tuple` | Sequence Type | a collection of objects - immutable & ordered | `('Thursday', 6, 9, 2018)` | -| dictionary | `dict` | Mapping Type | mapping of key-value pairs | `{'name':'DSCI', 'code':100, 'credits':2}` | -| none | `NoneType` | Null Object | represents no value | `None` | +| Data type | Abbreviation | 
Description | Example | +| :-------------------- | :----------- | :-------------------------------------------- | :----------------------------------------- | +| integer | `int` | positive/negative/zero whole numbers | `42` | +| floating point number | `float` | real number in decimal form | `3.14159` | +| boolean | `bool` | true or false | `True` | +| string | `str` | text | `"Hello World"` | +| none | `NoneType` | represents no value | `None` | ``` +++ -It is important in Python to make sure you represent your data with the correct type. -Many of the `pandas` functions we use in this book treat -the various data types differently. You should use integers and float types -(which both fall under the "numeric" umbrella type) to represent numbers and perform -arithmetic. Strings are used to represent data that should -be thought of as "text", such as words, names, paths, URLs, and more. -There are other basic data types in Python, such as *set* -and *complex*, but we do not use these in this textbook. - -```{code-cell} ipython3 -:tags: [remove-cell] - -# It is important in R to make sure you represent your data with the correct type. -# Many of the `tidyverse` functions we use in this book treat -# the various data types differently. You should use integers and double types -# (which both fall under the "numeric" umbrella type) to represent numbers and perform -# arithmetic. Doubles are more common than integers in R, though; for instance, a double data type is the -# default when you create a vector of numbers using `c()`, and when you read in -# whole numbers via `read_csv`. Characters are used to represent data that should -# be thought of as "text", such as words, names, paths, URLs, and more. Factors help us -# encode variables that represent *categories*; a factor variable takes one of a discrete -# set of values known as *levels* (one for each category). The levels can be ordered or unordered. 
Even though
-# factors can sometimes *look* like characters, they are not used to represent
-# text, words, names, and paths in the way that characters are; in fact, R
-# internally stores factors using integers! There are other basic data types in R, such as *raw*
-# and *complex*, but we do not use these in this textbook.
-```
-
-### What is a list?
-
-```{index} list
-```
-
-Lists are built-in objects in Python that have multiple, ordered elements.
-`pandas` series can be treated as lists with labels (indices).
-
-```{code-cell} ipython3
-:tags: [remove-cell]
-
-# Lists \index{list} are also objects in R that have multiple, ordered elements.
-# Vectors and lists differ by the requirement of element type
-# consistency. All elements within a single vector must be of the same type (e.g.,
-# all elements are characters), whereas elements within a single list can be of
-# different types (e.g., characters, integers, logicals, and even other lists).
-```
-
-+++ {"tags": ["remove-cell"]}
-
-```{figure} img/data_frame_slides_cdn/data_frame_slides_cdn.008.jpeg
-:name: fig:02-vec-vs-list
-:figclass: caption-hack
-
-A vector versus a list.
-```

+It is important in Python to make sure you represent your data with the correct type.
+Many of the `pandas` functions we use in this book treat
+the various data types differently. You should use `int` and `float` types
+to represent numbers and perform arithmetic. The `int` type is for integers that have no decimal point,
+while the `float` type is for numbers that have a decimal point.
+The `bool` type is for boolean variables, which can only take on one of two values: `True` or `False`.
+The `str` type is used to represent data that should
+be thought of as "text", such as words, names, paths, URLs, and more.
+A `NoneType` is a special type in Python that is used to indicate no value; this can occur,
+for example, when you have missing data.
+There are other basic data types in Python, but we will generally +not use these in this textbook. -+++ ### What does this have to do with data frames? @@ -343,41 +211,26 @@ A vector versus a list. ```{index} data frame; definition ``` -A data frame is really just series stuck together that follows two rules: - -1. Each element itself is a series. -2. Each element (series) must have the same length. - -Not all columns in a data frame need to be of the same type. +A data frame is really just a collection of series that are stuck together, +where each series corresponds to one column and all must have the same length. +But not all columns in a data frame need to be of the same type. {numref}`fig:02-dataframe` shows a data frame where -the columns are series of different types. +the columns are series of different types. But each element *within* +one column should usually be the same type, since the values for a single variable +are usually all of the same type. For example, if the variable is the name of a city, +that name should be a string, whereas if the variable is a year, that should be an +integer. So even though series let you put different types in them, it is most common +(and good practice!) to have just one type per column. +++ {"tags": []} ```{figure} img/wrangling/pandas_dataframe_series-3.png :name: fig:02-dataframe -:figclass: caption-hack +:figclass: figure -Data frame and vector types. +Data frame and series types. ``` -```{code-cell} ipython3 -:tags: [remove-cell] - -# A data frame \index{data frame!definition} is really a special kind of list that follows two rules: - -# 1. Each element itself must either be a vector or a list. -# 2. Each element (vector or list) must have the same length. - -# Not all columns in a data frame need to be of the same type. -# Figure \@ref(fig:02-dataframe) shows a data frame where -# the columns are vectors of different types. 
-# But remember: because the columns in this example are *vectors*, -# the elements must be the same data type *within each column.* -# On the other hand, if our data frame had *list* columns, there would be no such requirement. -# It is generally much more common to use *vector* columns, though, -# as the values for a single variable are usually all of the same type. -``` ```{index} type ``` @@ -386,46 +239,72 @@ Data frame and vector types. > For example we can check the class of the Canadian languages data set, > `can_lang`, we worked with in the previous chapters and we see it is a `pandas.core.frame.DataFrame`. -```{code-cell} ipython3 -:tags: [remove-cell] - -# The functions from the `tidyverse` package that we use often give us a -# special class of data frame called a *tibble*. Tibbles have some additional \index{tibble} -# features and benefits over the built-in data frame object. These include the -# ability to add useful attributes (such as grouping, which we will discuss later) -# and more predictable type preservation when subsetting. -# Because a tibble is just a data frame with some added features, -# we will collectively refer to both built-in R data frames and -# tibbles as data frames in this book. - -# > **Note:** You can use the function `class` \index{class} on a data object to assess whether a data -# > frame is a built-in R data frame or a tibble. If the data object is a data -# > frame, `class` will return `"data.frame"`. If the data object is a -# > tibble it will return `"tbl_df" "tbl" "data.frame"`. You can easily convert -# > built-in R data frames to tibbles using the `tidyverse` `as_tibble` function. -# > For example we can check the class of the Canadian languages data set, -# > `can_lang`, we worked with in the previous chapters and we see it is a tibble. 
-``` ```{code-cell} ipython3 can_lang = pd.read_csv("data/can_lang.csv") type(can_lang) ``` -Lists, Series and DataFrames are basic types of *data structure* in Python, which -are core to most data analyses. We summarize them in -{numref}`tab:datastructure-table`. There are several other data structures in the Python programming -language (*e.g.,* matrices), but these are beyond the scope of this book. +### Data structures in Python -+++ +The `Series` and `DataFrame` types are *data structures* in Python, which +are core to most data analyses. +The functions from `pandas` that we use often give us back a `DataFrame` +or a `Series` depending on the operation. Because +`Series` are essentially simple `DataFrames`, we will refer +to both `DataFrames` and `Series` as "data frames" in the text. +There are other types that represent data structures in Python. +We summarize the most common ones in {numref}`tab:datastruc-table`. ```{table} Basic data structures in Python -:name: tab:datastructure-table +:name: tab:datastruc-table | Data Structure | Description | -| --- |------------ | -| list | An 1D ordered collection of values that can store multiple data types at once. | -| Series | An 1D ordered collection of values *with labels* that can store multiple data types at once. | -| DataFrame | A 2D labeled data structure with columns of potentially different types. | +| --- | ----------- | +| list | An ordered collection of values that can store multiple data types at once. | +| dict | A labeled data structure where `keys` are paired with `values` | +| Series | An ordered collection of values *with labels* that can store multiple data types at once. | +| DataFrame | A labeled data structure with `Series` columns of potentially different types. | +``` + +A `list` is an ordered collection of values. To create a list, we put the contents of the list in between +square brackets `[]`, where each item of the list is separated by a comma. 
A `list` can contain values
+of different types. The example below contains six `str` entries.
+
+```{code-cell} ipython3
+cities = ["Toronto", "Vancouver", "Montreal", "Calgary", "Ottawa", "Winnipeg"]
+cities
+```
+A list can directly be converted to a pandas `Series`.
+```{code-cell} ipython3
+cities_series = pd.Series(cities)
+cities_series
+```
+
+A `dict`, or dictionary, contains pairs of "keys" and "values."
+You use a key to look up its corresponding value. Dictionaries are created
+using curly brackets `{}`. Each entry starts with the
+key on the left, followed by a colon symbol `:`, and then the value.
+A dictionary can have multiple key-value pairs, each separated by a comma.
+Keys can take a wide variety of types (`int` and `str` are commonly used), and values can take any type;
+the key-value pairs in a dictionary can all be of different types, too.
+In the example below,
+we create a dictionary that has two keys: `"cities"` and `"population"`.
+The values associated with each are lists.
+```{code-cell} ipython3
+population_in_2016 = {
+    "cities": ["Toronto", "Vancouver", "Montreal", "Calgary", "Ottawa", "Winnipeg"],
+    "population": [2235145, 1027613, 1823281, 544870, 571146, 321484]
+}
+population_in_2016
+```
+A dictionary can be converted to a data frame. Keys
+become the column names, and the values become the entries in
+those columns. Dictionaries on their own are quite simple objects; it is preferable to work with a data frame
+because then we have access to the built-in functionality in
+`pandas` (e.g. `loc[]`, `[]`, and many functions that we will discuss in the upcoming sections)!
+```{code-cell} ipython3
+population_in_2016 = pd.DataFrame(population_in_2016)
+population_in_2016
 ```

 +++

@@ -435,9 +314,10 @@ language (*e.g.,* matrices), but these are beyond the scope of this book.

 ```{index} tidy data; definition
 ```

-There are many ways a tabular data set can be organized.
This chapter will focus -on introducing the **tidy data** format of organization and how to make your raw -(and likely messy) data tidy. A tidy data frame satisfies +There are many ways a tabular data set can be organized. The data frames we +have looked at so far have all been using the **tidy data** format of +organization. This chapter will focus on introducing the tidy data format and +how to make your raw (and likely messy) data tidy. A tidy data frame satisfies the following three criteria {cite:p}`wickham2014tidy`: - each row is a single observation, @@ -445,14 +325,14 @@ the following three criteria {cite:p}`wickham2014tidy`: - each value is a single cell (i.e., its entry in the data frame is not shared with another value). -{numref}`fig:02-tidy-image` demonstrates a tidy data set that satisfies these +{numref}`fig:02-tidy-image` demonstrates a tidy data set that satisfies these three criteria. +++ {"tags": []} ```{figure} img/tidy_data/tidy_data.001-cropped.jpeg :name: fig:02-tidy-image -:figclass: caption-hack +:figclass: figure Tidy data satisfies three criteria. ``` @@ -464,8 +344,8 @@ Tidy data satisfies three criteria. There are many good reasons for making sure your data are tidy as a first step in your analysis. The most important is that it is a single, consistent format that nearly every function -in the `pandas` recognizes. No matter what the variables and observations -in your data represent, as long as the data frame +in the `pandas` recognizes. No matter what the variables and observations +in your data represent, as long as the data frame is tidy, you can manipulate it, plot it, and analyze it using the same tools. If your data is *not* tidy, you will have to write special bespoke code in your analysis that will not only be error-prone, but hard for others to understand. @@ -486,23 +366,23 @@ below! 
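To make the three criteria concrete, here is a minimal sketch of a tidy data frame (the population figures are illustrative): each row is one observation (a city in a census year), each column is a single variable, and each cell holds exactly one value.

```python
import pandas as pd

# A small tidy data frame: one observation per row, one variable per
# column, one value per cell. The population figures are illustrative.
tidy_example = pd.DataFrame({
    "city": ["Toronto", "Toronto", "Vancouver", "Vancouver"],
    "year": [2011, 2016, 2011, 2016],
    "population": [2615060, 2731571, 603502, 631486],
})

# Because the layout is tidy, one expression works across all observations.
largest = tidy_example["population"].max()
```

Because the data is tidy, a single expression like the `max` above answers a question about every observation at once, with no special-case code.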
+++ -### Tidying up: going from wide to long using `.melt` +### Tidying up: going from wide to long using `melt` ```{index} pandas.DataFrame; melt ``` -One task that is commonly performed to get data into a tidy format -is to combine values that are stored in separate columns, +One task that is commonly performed to get data into a tidy format +is to combine values that are stored in separate columns, but are really part of the same variable, into one. -Data is often stored this way -because this format is sometimes more intuitive for human readability +Data is often stored this way +because this format is sometimes more intuitive for human readability and understanding, and humans create data sets. -In {numref}`fig:02-wide-to-long`, -the table on the left is in an untidy, "wide" format because the year values -(2006, 2011, 2016) are stored as column names. -And as a consequence, -the values for population for the various cities -over these years are also split across several columns. +In {numref}`fig:02-wide-to-long`, +the table on the left is in an untidy, "wide" format because the year values +(2006, 2011, 2016) are stored as column names. +And as a consequence, +the values for population for the various cities +over these years are also split across several columns. For humans, this table is easy to read, which is why you will often find data stored in this wide format. However, this format is difficult to work with @@ -518,19 +398,24 @@ greatly simplified once the data is tidied. Another problem with data in this format is that we don't know what the numbers under each year actually represent. Do those numbers represent -population size? Land area? It's not clear. -To solve both of these problems, -we can reshape this data set to a tidy data format +population size? Land area? It's not clear. +To solve both of these problems, +we can reshape this data set to a tidy data format by creating a column called "year" and a column called "population." 
This transformation—which makes the data "longer"—is shown as the right table in -{numref}`fig:02-wide-to-long`. +{numref}`fig:02-wide-to-long`. Note that the number of entries in our data frame +can change in this transformation. The "untidy" data has 5 rows and 3 columns for +a total of 15 entries, whereas the "tidy" data on the right has 15 rows and 2 columns +for a total of 30 entries. +++ {"tags": []} ```{figure} img/pivot_functions/pivot_functions.001.jpeg :name: fig:02-wide-to-long -:figclass: caption-hack +:figclass: figure + + Melting data from a wide to long data format. ``` @@ -540,63 +425,66 @@ Melting data from a wide to long data format. ```{index} Canadian languages ``` -We can achieve this effect in Python using the `.melt` function from the `pandas` package. -The `.melt` function combines columns, -and is usually used during tidying data -when we need to make the data frame longer and narrower. -To learn how to use `.melt`, we will work through an example with the +We can achieve this effect in Python using the `melt` function from the `pandas` package. +The `melt` function combines columns, +and is usually used during tidying data +when we need to make the data frame longer and narrower. +To learn how to use `melt`, we will work through an example with the `region_lang_top5_cities_wide.csv` data set. This data set contains the -counts of how many Canadians cited each language as their mother tongue for five +counts of how many Canadians cited each language as their mother tongue for five major Canadian cities (Toronto, Montréal, Vancouver, Calgary and Edmonton) from -the 2016 Canadian census. -To get started, +the 2016 Canadian census. +To get started, we will use `pd.read_csv` to load the (untidy) data. ```{code-cell} ipython3 +:tags: ["output_scroll"] lang_wide = pd.read_csv("data/region_lang_top5_cities_wide.csv") lang_wide ``` -What is wrong with the untidy format above? 
-The table on the left in {numref}`fig:img-pivot-longer-with-table` +What is wrong with the untidy format above? +The table on the left in {numref}`fig:img-pivot-longer-with-table` represents the data in the "wide" (messy) format. -From a data analysis perspective, this format is not ideal because the values of -the variable *region* (Toronto, Montréal, Vancouver, Calgary and Edmonton) +From a data analysis perspective, this format is not ideal because the values of +the variable *region* (Toronto, Montréal, Vancouver, Calgary and Edmonton) are stored as column names. Thus they are not easily accessible to the data analysis functions we will apply to our data set. Additionally, the *mother tongue* variable values are spread across multiple columns, which will prevent us from doing any desired visualization or statistical tasks until we combine them into one column. For -instance, suppose we want to know the languages with the highest number of +instance, suppose we want to know the languages with the highest number of Canadians reporting it as their mother tongue among all five regions. This -question would be tough to answer with the data in its current format. -We *could* find the answer with the data in this format, +question would be tough to answer with the data in its current format. +We *could* find the answer with the data in this format, though it would be much easier to answer if we tidy our -data first. If mother tongue were instead stored as one column, -as shown in the tidy data on the right in +data first. If mother tongue were instead stored as one column, +as shown in the tidy data on the right in {numref}`fig:img-pivot-longer-with-table`, -we could simply use one line of code (`df["mother_tongue"].max()`) +we could simply use one line of code (`df["mother_tongue"].max()`) to get the maximum value. 
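Before working with the full data set, here is a minimal sketch of this transformation on a toy table (the numbers are made up for illustration): the year columns are melted into a single pair of `year` and `population` columns, after which a one-line computation becomes possible.

```python
import pandas as pd

# Toy wide table: the years 2011 and 2016 are stored as column names
# (the population numbers are made up for illustration).
wide = pd.DataFrame({
    "city": ["Toronto", "Montréal"],
    "2011": [2134000, 1770000],
    "2016": [2235145, 1823281],
})

# melt combines the year columns into (year, population) value pairs
tidy = wide.melt(id_vars=["city"], var_name="year", value_name="population")

# the maximum population is now a single-column computation
max_pop = tidy["population"].max()
```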
+++ {"tags": []} ```{figure} img/wrangling/pandas_melt_wide-long.png :name: fig:img-pivot-longer-with-table -:figclass: caption-hack +:figclass: figure -Going from wide to long with the `.melt` function. +Going from wide to long with the `melt` function. ``` +++ -{numref}`fig:img-pivot-longer` details the arguments that we need to specify -in the `.melt` function to accomplish this data transformation. +{numref}`fig:img-pivot-longer` details the arguments that we need to specify +in the `melt` function to accomplish this data transformation. +++ {"tags": []} +**(FIGURE UPDATE NEEDED TO MATCH THE CODE BELOW)** + ```{figure} img/wrangling/pandas_melt_args_labels.png :name: fig:img-pivot-longer -:figclass: caption-hack +:figclass: figure Syntax for the `melt` function. ``` @@ -609,29 +497,29 @@ Syntax for the `melt` function. ```{index} see: :; column range ``` -We use `.melt` to combine the Toronto, Montréal, +We use `melt` to combine the Toronto, Montréal, Vancouver, Calgary, and Edmonton columns into a single column called `region`, and create a column called `mother_tongue` that contains the count of how many Canadians report each language as their mother tongue for each metropolitan -area. We specify `value_vars` to be all -the columns between Toronto and Edmonton: +area ```{code-cell} ipython3 +:tags: ["output_scroll"] lang_mother_tidy = lang_wide.melt( id_vars=["category", "language"], - value_vars=["Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton"], var_name="region", value_name="mother_tongue", ) - lang_mother_tidy ``` > **Note**: In the code above, the call to the -> `.melt` function is split across several lines. This is allowed in -> certain cases; for example, when calling a function as above, as long as the -> line ends with a comma `,` Python knows to keep reading on the next line. -> Splitting long lines like this across multiple lines is encouraged +> `melt` function is split across several lines. 
Recall from +> the {ref}`intro` chapter that this is allowed in +> certain cases. For example, when calling a function as above, the input +> arguments are between parentheses `()` and Python knows to keep reading on +> the next line. Each line ends with a comma `,` making it easier to read. +> Splitting long lines like this across multiple lines is encouraged > as it helps significantly with code readability. Generally speaking, you should > limit each line of code to about 80 characters. @@ -648,7 +536,7 @@ been met: +++ (pivot-wider)= -### Tidying up: going from long to wide using `.pivot` +### Tidying up: going from long to wide using `pivot` ```{index} pandas.DataFrame; pivot ``` @@ -656,17 +544,17 @@ been met: Suppose we have observations spread across multiple rows rather than in a single row. For example, in {numref}`fig:long-to-wide`, the table on the left is in an untidy, long format because the `count` column contains three variables -(population, commuter, and incorporated count) and information about each observation -(here, population, commuter, and incorporated counts for a region) is split across three rows. -Remember: one of the criteria for tidy data +(population, commuter, and incorporated count) and information about each observation +(here, population, commuter, and incorporated counts for a region) is split across three rows. +Remember: one of the criteria for tidy data is that each observation must be in a single row. Using data in this format—where two or more variables are mixed together in a single column—makes it harder to apply many usual `pandas` functions. -For example, finding the maximum number of commuters +For example, finding the maximum number of commuters would require an additional step of filtering for the commuter values before the maximum can be computed. -In comparison, if the data were tidy, +In comparison, if the data were tidy, all we would have to do is compute the maximum value for the commuter column. 
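A small sketch (with made-up counts) shows why the extra filtering step is needed when two different variables share a single column:

```python
import pandas as pd

# Long-format frame where the "count" column mixes two variables
# (population and commuter counts); the numbers are made up.
long_df = pd.DataFrame({
    "region": ["A", "A", "B", "B"],
    "type": ["population", "commuters", "population", "commuters"],
    "count": [1000, 200, 5000, 800],
})

# untidy: we must filter for the commuter rows before taking the maximum
max_commuters = long_df[long_df["type"] == "commuters"]["count"].max()
```

If the commuter counts had their own column instead, the filtering step would disappear entirely.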
To reshape this untidy data set to a tidy (and in this case, wider) format, we need to create columns called "population", "commuters", and "incorporated." @@ -676,62 +564,64 @@ This is illustrated in the right table of {numref}`fig:long-to-wide`. ```{figure} img/pivot_functions/pivot_functions.002.jpeg :name: fig:long-to-wide -:figclass: caption-hack +:figclass: figure Going from long to wide data. ``` +++ -To tidy this type of data in Python, we can use the `.pivot` function. -The `.pivot` function generally increases the number of columns (widens) -and decreases the number of rows in a data set. -To learn how to use `.pivot`, -we will work through an example -with the `region_lang_top5_cities_long.csv` data set. -This data set contains the number of Canadians reporting +To tidy this type of data in Python, we can use the `pivot` function. +The `pivot` function generally increases the number of columns (widens) +and decreases the number of rows in a data set. +To learn how to use `pivot`, +we will work through an example +with the `region_lang_top5_cities_long.csv` data set. +This data set contains the number of Canadians reporting the primary language at home and work for five major cities (Toronto, Montréal, Vancouver, Calgary and Edmonton). ```{code-cell} ipython3 +:tags: ["output_scroll"] lang_long = pd.read_csv("data/region_lang_top5_cities_long.csv") lang_long ``` -What makes the data set shown above untidy? -In this example, each observation is a language in a region. -However, each observation is split across multiple rows: -one where the count for `most_at_home` is recorded, -and the other where the count for `most_at_work` is recorded. -Suppose the goal with this data was to +What makes the data set shown above untidy? +In this example, each observation is a language in a region. +However, each observation is split across multiple rows: +one where the count for `most_at_home` is recorded, +and the other where the count for `most_at_work` is recorded. 
+Suppose the goal with this data was to visualize the relationship between the number of -Canadians reporting their primary language at home and work. +Canadians reporting their primary language at home and work. Doing that would be difficult with this data in its current form, since these two variables are stored in the same column. {numref}`fig:img-pivot-wider-table` shows how this data -will be tidied using the `.pivot` function. +will be tidied using the `pivot` function. +++ {"tags": []} ```{figure} img/wrangling/pandas_pivot_long-wide.png :name: fig:img-pivot-wider-table -:figclass: caption-hack +:figclass: figure -Going from long to wide with the `.pivot` function. +Going from long to wide with the `pivot` function. ``` +++ -{numref}`fig:img-pivot-wider` details the arguments that we need to specify -in the `.pivot` function. +{numref}`fig:img-pivot-wider` details the arguments that we need to specify in the `pivot` function. + +**TODO make figure match code below** +++ {"tags": []} ```{figure} img/wrangling/pandas_pivot_args_labels.png :name: fig:img-pivot-wider -:figclass: caption-hack +:figclass: figure -Syntax for the `.pivot` function. +Syntax for the `pivot` function. ``` +++ @@ -739,8 +629,11 @@ Syntax for the `.pivot` function. We will apply the function as detailed in {numref}`fig:img-pivot-wider`. ```{code-cell} ipython3 +:tags: ["output_scroll"] lang_home_tidy = lang_long.pivot( - index=["region", "category", "language"], columns=["type"], values=["count"] + index=["region", "category", "language"], + columns=["type"], + values=["count"] ).reset_index() lang_home_tidy.columns = [ @@ -753,11 +646,30 @@ lang_home_tidy.columns = [ lang_home_tidy ``` +In the first step, note that we added a call to `reset_index`. When `pivot` is called with +multiple column names passed to the `index`, those entries become the "name" of each row that +would be used when you filter rows with `[]` or `loc` rather than just simple numbers. This +can be confusing... 
What `reset_index` does is set us back to the usual expected behaviour
+where each row is "named" with an integer. This is a subtle point, but the main take-away is that
+when you call `pivot`, it is a good idea to call `reset_index` afterwards.
+
+The second operation we applied is to rename the columns. When we perform the `pivot`
+operation, it keeps the original column name `"count"` and adds the `"type"` as a second column name.
+Having two names for a column can be confusing! So we rename the columns, giving each one a single name.
+
+We can print out some useful information about our data frame using the `info` function.
+In the first row it tells us the `type` of `lang_home_tidy` (it is a `pandas` `DataFrame`). The second
+row tells us how many rows there are: 1070, and to index those rows, you can use numbers between
+0 and 1069 (remember that Python starts counting at 0!). Next, there is a printout about the data
+columns. Here there are 5 columns total. The little table it prints out tells you the name of each
+column, the number of non-null values (e.g. the number of entries that are not missing values), and
+the type of the entries. Finally, the last two rows summarize the types of each column and how much
+memory the data frame is using on your computer.

```{code-cell} ipython3
-lang_home_tidy.dtypes
+lang_home_tidy.info()
```

-The data above is now tidy! We can go through the three criteria again to check
+The data is now tidy! We can go through the three criteria again to check
that this data is a tidy data set.

1. All the statistical variables are their own columns in the data frame (i.e.,
@@ -768,43 +680,45 @@ that this data is a tidy data set.
 frame is not shared with another value).

You might notice that we have the same number of columns in the tidy data set as
-we did in the messy one. Therefore `.pivot` didn't really "widen" the data.
+we did in the messy one. Therefore `pivot` didn't really "widen" the data.
This is just because the original `type` column only had
-two categories in it. If it had more than two, `.pivot` would have created
+two categories in it. If it had more than two, `pivot` would have created
 more columns, and we would see the data set "widen."

+
 +++

 (str-split)=
-### Tidying up: using `.str.split` to deal with multiple delimiters
+### Tidying up: using `str.split` to deal with multiple delimiters

 ```{index} pandas.Series; str.split, delimiter
 ```

-Data are also not considered tidy when multiple values are stored in the same
+Data are also not considered tidy when multiple values are stored in the same
 cell. The data set we show below is even messier than the ones we dealt with
 above: the `Toronto`, `Montréal`, `Vancouver`, `Calgary` and `Edmonton` columns
 contain the number of Canadians reporting their primary language at home and
-work in one column separated by the delimiter (`/`). The column names are the
+work in one column separated by the delimiter (`/`). The column names are the
 values of a variable, *and* each value does not have its own cell! To turn this
 messy data into tidy data, we'll have to fix these issues.

 ```{code-cell} ipython3
+:tags: ["output_scroll"]
 lang_messy = pd.read_csv("data/region_lang_top5_cities_messy.csv")
 lang_messy
 ```

-First we’ll use `.melt` to create two columns, `region` and `value`,
-similar to what we did previously.
+First we’ll use `melt` to create two columns, `region` and `value`,
+similar to what we did previously.
The new `region` column will contain the region names,
-and the new column `value` will be a temporary holding place for the
-data that we need to further separate, i.e., the
+and the new column `value` will be a temporary holding place for the
+data that we need to further separate, i.e., the
number of Canadians reporting their primary language at home and work.
```{code-cell} ipython3 +:tags: ["output_scroll"] lang_messy_longer = lang_messy.melt( id_vars=["category", "language"], - value_vars=["Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton"], var_name="region", value_name="value", ) @@ -812,40 +726,73 @@ lang_messy_longer = lang_messy.melt( lang_messy_longer ``` -Next we'll use `.str.split` to split the `value` column into two columns. -One column will contain only the counts of Canadians -that speak each language most at home, -and the other will contain the counts of Canadians -that speak each language most at work for each region. +Next we'll split the `value` column into two columns. +In basic Python, if we wanted to split the string `"50/0"` into two numbers `["50", "0"]` +we would use the `split` method on the string, and specify that the split should be made +on the slash character `"/"`. +```{code-cell} ipython3 +"50/0".split("/") +``` + +The `pandas` package provides similar functions that we can access +by using the `str` method. So, to split all of the entries for an entire +column in a data frame, we would use the `str.split` method. +Once we use this method, +one column will contain only the counts of Canadians +that speak each language most at home, +and the other will contain the counts of Canadians +that speak each language most at work for each region. {numref}`fig:img-separate` -outlines what we need to specify to use `.str.split`. +outlines what we need to specify to use `str.split`. +++ {"tags": []} ```{figure} img/wrangling/str-split_args_labels.png :name: fig:img-separate -:figclass: caption-hack +:figclass: figure -Syntax for the `.str.split` function. +Syntax for the `str.split` function. ``` +We will do this in multiple steps. First, we create a new object +that contains two columns. We will set the `expand` argument to `True` +to tell `pandas` that we want to expand the output into two columns. 
+
 
```{code-cell} ipython3
-tidy_lang = (
-    pd.concat(
-        (lang_messy_longer, lang_messy_longer["value"].str.split("/", expand=True)),
-        axis=1,
-    )
-    .rename(columns={0: "most_at_home", 1: "most_at_work"})
-    .drop(columns=["value"])
-)
+split_counts = lang_messy_longer["value"].str.split("/", expand=True)
+split_counts
+```
+Since we only operated on the `value` column, the `split_counts` data frame
+doesn't have the rest of the columns (`language`, `region`, etc.)
+that were in our original data frame. We don't want to lose this information, so
+we will concatenate (combine) the original data frame with `split_counts` using
+the `concat` function from `pandas`. The `concat` function *concatenates* data frames
+along an axis. By default, it concatenates the data frames vertically along `axis=0`, yielding a single
+*taller* data frame. Since we want to concatenate our old columns to our
+new `split_counts` data frame (to obtain a *wider* data frame), we will specify `axis=1`.
+```{code-cell} ipython3
+:tags: ["output_scroll"]
+tidy_lang = pd.concat(
+    [lang_messy_longer, split_counts],
+    axis=1,
+)
 tidy_lang
 ```
 
+Next, we will rename our newly created columns (currently called
+`0` and `1`) to the more meaningful names `"most_at_home"` and `"most_at_work"`,
+and drop the `value` column from our data frame using the `drop` method.
+
 ```{code-cell} ipython3
-tidy_lang.dtypes
+:tags: ["output_scroll"]
+tidy_lang = (
+    tidy_lang.rename(columns={0: "most_at_home", 1: "most_at_work"})
+    .drop(columns=["value"])
+)
+tidy_lang
 ```
 
-
+Note that we could have chained these steps together to make our code more compact.
 Is this data set now tidy? If we recall the three criteria for tidy data:
 
 - each row is a single observation,
@@ -853,57 +800,36 @@ Is this data set now tidy? If we recall the three criteria for tidy data:
 - each value is a single cell.
 
 We can see that this data now satisfies all three criteria, making it easier to
Notice in the table, all of the variables are -"object" data types. Object data types are columns of strings or columns with mixed types. In the previous example in Section {ref}`pivot-wider`, the -`most_at_home` and `most_at_work` variables were `int64` (integer)—you can -verify this by calling `df.dtypes`—which is a type -of numeric data. This change is due to the delimiter (`/`) when we read in this -messy data set. Python read these columns in as string types, and by default, -`.str.split` will return columns as object data types. - -It makes sense for `region`, `category`, and `language` to be stored as a -object type. However, suppose we want to apply any functions that treat the -`most_at_home` and `most_at_work` columns as a number (e.g., finding rows -above a numeric threshold of a column). -In that case, -it won't be possible to do if the variable is stored as a `object`. -Fortunately, the `pandas.to_numeric` function provides a natural way to fix problems -like this: it will convert the columns to the best numeric data types. - +analyze. But we aren't done yet! Although we can't see it in the data frame above, all of the variables are actually +"object" data types. We can check this using the `info` method. ```{code-cell} ipython3 -:tags: [remove-cell] - -# We can see that this data now satisfies all three criteria, making it easier to -# analyze. But we aren't done yet! Notice in the table above that the word -# `` appears beneath each of the column names. The word under the column name -# indicates the data type of each column. Here all of the variables are -# "character" data types. Recall, character data types are letter(s) or digits(s) -# surrounded by quotes. In the previous example in Section \@ref(pivot-wider), the -# `most_at_home` and `most_at_work` variables were `` (double)—you can -# verify this by looking at the tables in the previous sections—which is a type -# of numeric data. 
This change is due to the delimiter (`/`) when we read in this -# messy data set. R read these columns in as character types, and by default, -# `separate` will return columns as character data types. - -# It makes sense for `region`, `category`, and `language` to be stored as a -# character (or perhaps factor) type. However, suppose we want to apply any functions that treat the -# `most_at_home` and `most_at_work` columns as a number (e.g., finding rows -# above a numeric threshold of a column). -# In that case, -# it won't be possible to do if the variable is stored as a `character`. -# Fortunately, the `separate` function provides a natural way to fix problems -# like this: we can set `convert = TRUE` to convert the `most_at_home` -# and `most_at_work` columns to the correct data type. +tidy_lang.info() ``` +Object columns in `pandas` data frames are columns of strings or columns with +mixed types. In the previous example in the section on {ref}`pivot-wider`, the +`most_at_home` and `most_at_work` variables were `int64` (integer), which is a type of numeric data. +This change is due to the delimiter (`/`) when we read in this messy data set. +Python read these columns in as string types, and by default, `str.split` will +return columns with the `object` data type. + +It makes sense for `region`, `category`, and `language` to be stored as an +`object` type. However, suppose we want to apply any functions that treat the +`most_at_home` and `most_at_work` columns as a number (e.g., finding rows +above a numeric threshold of a column). +That won't be possible if the variable is stored as a `object`. +Fortunately, the `pandas.to_numeric` function provides a natural way to fix problems +like this: it will convert the columns to the best numeric data types. 
+ ```{code-cell} ipython3 +:tags: ["output_scroll"] tidy_lang["most_at_home"] = pd.to_numeric(tidy_lang["most_at_home"]) tidy_lang["most_at_work"] = pd.to_numeric(tidy_lang["most_at_work"]) tidy_lang ``` ```{code-cell} ipython3 -tidy_lang.dtypes +tidy_lang.info() ``` Now we see `most_at_home` and `most_at_work` columns are of `int64` data types, @@ -911,122 +837,35 @@ indicating they are integer data types (i.e., numbers)! +++ -(loc-iloc)= -## Using `.loc[]` and `.iloc[]` to extract a range of columns - -```{index} pandas.DataFrame; loc[] -``` - -Now that the `tidy_lang` data is indeed *tidy*, we can start manipulating it -using the powerful suite of functions from the `pandas`. -For the first example, recall `.loc[]` from Chapter {ref}`intro`, -which lets us create a subset of columns from a data frame. -Suppose we wanted to select only the columns `language`, `region`, -`most_at_home` and `most_at_work` from the `tidy_lang` data set. Using what we -learned in Chapter {ref}`intro`, we would pass all of these column names into the square brackets: - -```{code-cell} ipython3 -selected_columns = tidy_lang.loc[:, ["language", "region", "most_at_home", "most_at_work"]] -selected_columns -``` - -```{index} pandas.DataFrame; iloc[], column range -``` - -Here we wrote out the names of each of the columns. However, this method is -time-consuming, especially if you have a lot of columns! Another approach is to -index with integers. `.iloc[]` make it easier for -us to select columns. For instance, we can use `.iloc[]` to choose a -range of columns rather than typing each column name out. To do this, we use the -colon (`:`) operator to denote the range. For example, to get all the columns in -the `tidy_lang` data frame from `language` to `most_at_work`, we pass `:` before the comma indicating we want to retrieve all rows, and `1:` after the comma indicating we want only columns from index 1 (*i.e.* `language`) and afterwords. 
- -```{code-cell} ipython3 -:tags: [remove-cell] - -# Here we wrote out the names of each of the columns. However, this method is -# time-consuming, especially if you have a lot of columns! Another approach is to -# use a "select helper". Select helpers are operators that make it easier for -# us to select columns. For instance, we can use a select helper to choose a -# range of columns rather than typing each column name out. To do this, we use the -# colon (`:`) operator to denote the range. For example, to get all the columns in \index{column range} -# the `tidy_lang` data frame from `language` to `most_at_work` we pass -# `language:most_at_work` as the second argument to the `select` function. -``` - -```{code-cell} ipython3 -column_range = tidy_lang.iloc[:, 1:] -column_range -``` - -Notice that we get the same output as we did above, -but with less (and clearer!) code. This type of operator -is especially handy for large data sets. - -```{index} pandas.Series; str.startswith -``` - -Suppose instead we wanted to extract columns that followed a particular pattern -rather than just selecting a range. For example, let's say we wanted only to select the -columns `most_at_home` and `most_at_work`. There are other functions that allow -us to select variables based on their names. In particular, we can use the `.str.startswith` method -to choose only the columns that start with the word "most": - -```{code-cell} ipython3 -tidy_lang.loc[:, tidy_lang.columns.str.startswith('most')] -``` - -```{index} pandas.Series; str.contains -``` - -We could also have chosen the columns containing an underscore `_` by using the -`.str.contains("_")`, since we notice -the columns we want contain underscores and the others don't. - -```{code-cell} ipython3 -tidy_lang.loc[:, tidy_lang.columns.str.contains('_')] -``` - -There are many different functions that help with selecting -variables based on certain criteria. 
-The additional resources section at the end of this chapter -provides a comprehensive resource on these functions. - -```{code-cell} ipython3 -:tags: [remove-cell] - -# There are many different `select` helpers that select -# variables based on certain criteria. -# The additional resources section at the end of this chapter -# provides a comprehensive resource on `select` helpers. -``` - -## Using `df[]` to extract rows +## Using `[]` to extract rows or columns -Next, we revisit the `df[]` from Chapter {ref}`intro`, -which lets us create a subset of rows from a data frame. -Recall the argument to the `df[]`: -column names or a logical statement evaluated to either `True` or `False`; -`df[]` works by returning the rows where the logical statement evaluates to `True`. -This section will highlight more advanced usage of the `df[]` function. +Now that the `tidy_lang` data is indeed *tidy*, we can start manipulating it +using the powerful suite of functions from the `pandas`. +We revisit the `[]` from the chapter on {ref}`intro`, +which lets us create a subset of rows from a data frame. +Recall the argument to `[]`: +a list of column names, or a logical statement that evaluates to either `True` or `False`, +where `[]` returns the rows where the logical statement evaluates to `True`. +This section will highlight more advanced usage of the `[]` function. In particular, this section provides an in-depth treatment of the variety of logical statements -one can use in the `df[]` to select subsets of rows. +one can use in the `[]` to select subsets of rows. +++ ### Extracting rows that have a certain value with `==` Suppose we are only interested in the subset of rows in `tidy_lang` corresponding to the official languages of Canada (English and French). -We can extract these rows by using the *equivalency operator* (`==`) -to compare the values of the `category` column -with the value `"Official languages"`. 
-With these arguments, `df[]` returns a data frame with all the columns -of the input data frame -but only the rows we asked for in the logical statement, i.e., +We can extract these rows by using the *equivalency operator* (`==`) +to compare the values of the `category` column +with the value `"Official languages"`. +With these arguments, `[]` returns a data frame with all the columns +of the input data frame +but only the rows we asked for in the logical statement, i.e., those where the `category` column holds the value `"Official languages"`. We name this data frame `official_langs`. ```{code-cell} ipython3 +:tags: ["output_scroll"] official_langs = tidy_lang[tidy_lang["category"] == "Official languages"] official_langs ``` @@ -1034,30 +873,34 @@ official_langs ### Extracting rows that do not have a certain value with `!=` What if we want all the other language categories in the data set *except* for -those in the `"Official languages"` category? We can accomplish this with the `!=` +those in the `"Official languages"` category? We can accomplish this with the `!=` operator, which means "not equal to". So if we want to find all the rows where the `category` does *not* equal `"Official languages"` we write the code below. ```{code-cell} ipython3 +:tags: ["output_scroll"] tidy_lang[tidy_lang["category"] != "Official languages"] ``` (filter-and)= ### Extracting rows satisfying multiple conditions using `&` -Suppose now we want to look at only the rows -for the French language in Montréal. -To do this, we need to filter the data set -to find rows that satisfy multiple conditions simultaneously. +Suppose now we want to look at only the rows +for the French language in Montréal. +To do this, we need to filter the data set +to find rows that satisfy multiple conditions simultaneously. We can do this with the ampersand symbol (`&`), which -is interpreted by Python as "and". 
-We write the code as shown below to filter the `official_langs` data frame -to subset the rows where `region == "Montréal"` -*and* the `language == "French"`. +is interpreted by Python as "and". +We write the code as shown below to filter the `official_langs` data frame +to subset the rows where `region == "Montréal"` +*and* `language == "French"`. ```{code-cell} ipython3 -tidy_lang[(tidy_lang["region"] == "Montréal") & (tidy_lang["language"] == "French")] +tidy_lang[ + (tidy_lang["region"] == "Montréal") & + (tidy_lang["language"] == "French") +] ``` +++ {"tags": []} @@ -1065,37 +908,39 @@ tidy_lang[(tidy_lang["region"] == "Montréal") & (tidy_lang["language"] == "Fren ### Extracting rows satisfying at least one condition using `|` Suppose we were interested in only those rows corresponding to cities in Alberta -in the `official_langs` data set (Edmonton and Calgary). +in the `official_langs` data set (Edmonton and Calgary). We can't use `&` as we did above because `region` -cannot be both Edmonton *and* Calgary simultaneously. -Instead, we can use the vertical pipe (`|`) logical operator, -which gives us the cases where one condition *or* -another condition *or* both are satisfied. +cannot be both Edmonton *and* Calgary simultaneously. +Instead, we can use the vertical pipe (`|`) logical operator, +which gives us the cases where one condition *or* +another condition *or* both are satisfied. In the code below, we ask Python to return the rows where the `region` columns are equal to "Calgary" *or* "Edmonton". ```{code-cell} ipython3 official_langs[ - (official_langs["region"] == "Calgary") | (official_langs["region"] == "Edmonton") + (official_langs["region"] == "Calgary") | + (official_langs["region"] == "Edmonton") ] ``` -### Extracting rows with values in a list using `.isin()` +### Extracting rows with values in a list using `isin` -Next, suppose we want to see the populations of our five cities. 
-Let's read in the `region_data.csv` file
-that comes from the 2016 Canadian census,
-as it contains statistics for number of households, land area, population
+Next, suppose we want to see the populations of our five cities.
+Let's read in the `region_data.csv` file
+that comes from the 2016 Canadian census,
+as it contains statistics for number of households, land area, population
and number of dwellings for different regions.

```{code-cell} ipython3
+:tags: ["output_scroll"]
region_data = pd.read_csv("data/region_data.csv")
region_data
```

-To get the population of the five cities
-we can filter the data set using the `.isin` method.
-The `.isin` method is used to see if an element belongs to a list.
+To get the population of the five cities
+we can filter the data set using the `isin` method.
+The `isin` method is used to see if an element belongs to a list.
Here we are filtering for rows where the value in the `region` column
matches any of the five cities we are interested in: Toronto, Montréal,
Vancouver, Calgary, and Edmonton.
@@ -1106,7 +951,7 @@ five_cities = region_data[region_data["region"].isin(city_names)]
five_cities
```

-> **Note:** What's the difference between `==` and `.isin`? Suppose we have two
+> **Note:** What's the difference between `==` and `isin`? Suppose we have two
> Series, `seriesA` and `seriesB`. If you type `seriesA == seriesB` into Python it
> will compare the series element by element. Python checks if the first element of
> `seriesA` equals the first element of `seriesB`, the second element of
@@ -1114,7 +959,7 @@ five_cities
> `seriesA.isin(seriesB)` compares the first element of `seriesA` to all the
> elements in `seriesB`. Then the second element of `seriesA` is compared
> to all the elements in `seriesB`, and so on. Notice the difference between `==` and
-> `.isin` in the example below.
+> `isin` in the example below.
```{code-cell} ipython3 pd.Series(["Vancouver", "Toronto"]) == pd.Series(["Toronto", "Vancouver"]) @@ -1124,25 +969,6 @@ pd.Series(["Vancouver", "Toronto"]) == pd.Series(["Toronto", "Vancouver"]) pd.Series(["Vancouver", "Toronto"]).isin(pd.Series(["Toronto", "Vancouver"])) ``` -```{code-cell} ipython3 -:tags: [remove-cell] - -# > **Note:** What's the difference between `==` and `%in%`? Suppose we have two -# > vectors, `vectorA` and `vectorB`. If you type `vectorA == vectorB` into R it -# > will compare the vectors element by element. R checks if the first element of -# > `vectorA` equals the first element of `vectorB`, the second element of -# > `vectorA` equals the second element of `vectorB`, and so on. On the other hand, -# > `vectorA %in% vectorB` compares the first element of `vectorA` to all the -# > elements in `vectorB`. Then the second element of `vectorA` is compared -# > to all the elements in `vectorB`, and so on. Notice the difference between `==` and -# > `%in%` in the example below. -# > -# >``` {r} -# >c("Vancouver", "Toronto") == c("Toronto", "Vancouver") -# >c("Vancouver", "Toronto") %in% c("Toronto", "Vancouver") -# >``` -``` - ### Extracting rows above or below a threshold using `>` and `<` ```{code-cell} ipython3 @@ -1152,1262 +978,832 @@ glue("census_popn", "{0:,.0f}".format(35151728)) glue("most_french", "{0:,.0f}".format(2669195)) ``` -We saw in Section {ref}`filter-and` that -{glue:text}`most_french` people reported -speaking French in Montréal as their primary language at home. -If we are interested in finding the official languages in regions -with higher numbers of people who speak it as their primary language at home -compared to French in Montréal, then we can use `df[]` to obtain rows -where the value of `most_at_home` is greater than -{glue:text}`most_french`. +We saw in the section on {ref}`filter-and` that +{glue:text}`most_french` people reported +speaking French in Montréal as their primary language at home. 
+If we are interested in finding the official languages in regions
+with higher numbers of people who speak them as their primary language at home
+compared to French in Montréal, then we can use `[]` to obtain rows
+where the value of `most_at_home` is greater than
+{glue:text}`most_french`. We use the `>` symbol to look for values *above* a threshold,
+and the `<` symbol to look for values *below* a threshold. The `>=` and `<=`
+symbols similarly look for *equal to or above* a threshold and *equal to or below* a threshold.

```{code-cell} ipython3
official_langs[official_langs["most_at_home"] > 2669195]
```

-This operation returns a data frame with only one row, indicating that when
-considering the official languages,
-only English in Toronto is reported by more people
-as their primary language at home
+This operation returns a data frame with only one row, indicating that when
+considering the official languages,
+only English in Toronto is reported by more people
+as their primary language at home
than French in Montréal according to the 2016 Canadian census.

-+++ {"tags": []}
+### Extracting rows using `query`

-(pandas-assign)=
-## Using `.assign` to modify or add columns
+You can also extract rows above, below, equal to, or not equal to a threshold using the
+`query` method. For example, the following gives us the same result as when we used
+`official_langs[official_langs["most_at_home"] > 2669195]`.

-+++
+```{code-cell} ipython3
+official_langs.query("most_at_home > 2669195")
+```

-### Using `.assign` to modify columns
+The query (criteria we are using to select values) is input as a string. The `query` method
+is less often used than the earlier approaches we introduced, but it can come in handy
+to make long chains of filtering operations a bit easier to read.

-```{index} pandas.DataFrame; df[]
+(loc-iloc)=
+## Using `loc[]` to filter rows and select columns
+```{index} pandas.DataFrame; loc[] ``` -In Section {ref}`str-split`, -when we first read in the `"region_lang_top5_cities_messy.csv"` data, -all of the variables were "object" data types. -During the tidying process, -we used the `pandas.to_numeric` function -to convert the `most_at_home` and `most_at_work` columns -to the desired integer (i.e., numeric class) data types and then used `df[]` to overwrite columns. -But suppose we didn't use the `df[]`, -and needed to modify the columns some other way. -Below we create such a situation -so that we can demonstrate how to use `.assign` -to change the column types of a data frame. -`.assign` is a useful function to modify or create new data frame columns. +The `[]` operation is only used when you want to filter rows or select columns; +it cannot be used to do both operations at the same time. This is where `loc[]` +comes in. For the first example, recall `loc[]` from Chapter {ref}`intro`, +which lets us create a subset of columns from a data frame. +Suppose we wanted to select only the columns `language`, `region`, +`most_at_home` and `most_at_work` from the `tidy_lang` data set. Using what we +learned in the chapter on {ref}`intro`, we would pass all of these column names into the square brackets. 
```{code-cell} ipython3 -lang_messy = pd.read_csv("data/region_lang_top5_cities_messy.csv") -lang_messy_longer = lang_messy.melt( - id_vars=["category", "language"], - value_vars=["Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton"], - var_name="region", - value_name="value", -) -tidy_lang_obj = ( - pd.concat( - (lang_messy_longer, lang_messy_longer["value"].str.split("/", expand=True)), - axis=1, - ) - .rename(columns={0: "most_at_home", 1: "most_at_work"}) - .drop(columns=["value"]) -) -official_langs_obj = tidy_lang_obj[tidy_lang_obj["category"] == "Official languages"] - -official_langs_obj +:tags: ["output_scroll"] +selected_columns = tidy_lang.loc[:, ["language", "region", "most_at_home", "most_at_work"]] +selected_columns ``` +We pass `:` before the comma indicating we want to retrieve all rows, and the list indicates +the columns that we want. + +Note that we could obtain the same result by stating that we would like all of the columns +from `language` through `most_at_work`. Instead of passing a list of all of the column +names that we want, we can ask for the range of columns `"language":"most_at_work"`, which +you can read as "The columns from `language` to `most_at_work`". ```{code-cell} ipython3 -official_langs_obj.dtypes +:tags: ["output_scroll"] +selected_columns = tidy_lang.loc[:, "language":"most_at_work"] +selected_columns ``` -To use the `.assign` method, again we first specify the object to be the data set, -and in the following arguments, -we specify the name of the column we want to modify or create -(here `most_at_home` and `most_at_work`), an `=` sign, -and then the function we want to apply (here `pandas.to_numeric`). -In the function we want to apply, -we refer to the column upon which we want it to act -(here `most_at_home` and `most_at_work`). 
-In our example, we are naming the columns the same
-names as columns that already exist in the data frame
-("most\_at\_home", "most\_at\_work")
-and this will cause `.assign` to *overwrite* those columns
-(also referred to as modifying those columns *in-place*).
-If we were to give the columns a new name,
-then `.assign` would create new columns with the names we specified.
-`.assign`'s general syntax is detailed in {numref}`fig:img-assign`.
-
-+++ {"tags": []}
-
-```{figure} img/wrangling/pandas_assign_args_labels.png
-:name: fig:img-assign
-:figclass: caption-hack
+Similarly, you can ask for all of the columns including and after `language` by doing the following:

-Syntax for the `.assign` function.
+```{code-cell} ipython3
+:tags: ["output_scroll"]
+selected_columns = tidy_lang.loc[:, "language":]
+selected_columns
```

-+++
+By not putting anything after the `:`, Python reads this as "from `language` until the last column".
+Although the notation for selecting a range using `:` is convenient because less code is required,
+it must be used carefully. If you were to re-order columns or add a column to the data frame, the
+output would change. Using a list is more explicit and less prone to potential confusion.

-Below we use `.assign` to convert the columns `most_at_home` and `most_at_work`
-to numeric data types in the `official_langs` data set as described in
-{numref}`fig:img-assign`:
+Suppose instead we wanted to extract columns that followed a particular pattern
+rather than just selecting a range. For example, let's say we wanted only to select the
+columns `most_at_home` and `most_at_work`. There are other functions that allow
+us to select variables based on their names.
In particular, we can use the `.str.startswith` method +to choose only the columns that start with the word "most": ```{code-cell} ipython3 -official_langs_numeric = official_langs_obj.assign( - most_at_home=pd.to_numeric(official_langs_obj["most_at_home"]), - most_at_work=pd.to_numeric(official_langs_obj["most_at_work"]), -) - -official_langs_numeric +tidy_lang.loc[:, tidy_lang.columns.str.startswith('most')] ``` -```{code-cell} ipython3 -official_langs_numeric.dtypes +```{index} pandas.Series; str.contains ``` -Now we see that the `most_at_home` and `most_at_work` columns are both `int64` (which is a numeric data type)! +We could also have chosen the columns containing an underscore `_` by using the +`.str.contains("_")`, since we notice +the columns we want contain underscores and the others don't. -+++ +```{code-cell} ipython3 +tidy_lang.loc[:, tidy_lang.columns.str.contains('_')] +``` -### Using `.assign` to create new columns +There are many different functions that help with selecting +variables based on certain criteria. +The additional resources section at the end of this chapter +provides a comprehensive resource on these functions. ```{code-cell} ipython3 :tags: [remove-cell] -number_most_home = int( - official_langs[ - (official_langs["language"] == "English") - & (official_langs["region"] == "Toronto") - ]["most_at_home"] -) - -toronto_popn = int(region_data[region_data["region"] == "Toronto"]["population"]) - -glue("number_most_home", "{0:,.0f}".format(number_most_home)) -glue("toronto_popn", "{0:,.0f}".format(toronto_popn)) -glue("prop_eng_tor", "{0:.2f}".format(number_most_home / toronto_popn)) +# There are many different `select` helpers that select +# variables based on certain criteria. +# The additional resources section at the end of this chapter +# provides a comprehensive resource on `select` helpers. 
``` -We can see in the table that -{glue:text}`number_most_home` people reported -speaking English in Toronto as their primary language at home, according to -the 2016 Canadian census. What does this number mean to us? To understand this -number, we need context. In particular, how many people were in Toronto when -this data was collected? From the 2016 Canadian census profile, the population -of Toronto was reported to be -{glue:text}`toronto_popn` people. -The number of people who report that English is their primary language at home -is much more meaningful when we report it in this context. -We can even go a step further and transform this count to a relative frequency -or proportion. -We can do this by dividing the number of people reporting a given language -as their primary language at home by the number of people who live in Toronto. -For example, -the proportion of people who reported that their primary language at home -was English in the 2016 Canadian census was {glue:text}`prop_eng_tor` -in Toronto. - -Let's use `.assign` to create a new column in our data frame -that holds the proportion of people who speak English -for our five cities of focus in this chapter. -To accomplish this, we will need to do two tasks -beforehand: - -1. Create a list containing the population values for the cities. -2. Filter the `official_langs` data frame -so that we only keep the rows where the language is English. - -To create a list containing the population values for the five cities -(Toronto, Montréal, Vancouver, Calgary, Edmonton), -we will use the `[]` (recall that we can also use `list()` to create a list): - -```{code-cell} ipython3 -city_pops = [5928040, 4098927, 2463431, 1392609, 1321426] -city_pops +## Using `iloc[]` to extract a range of columns +```{index} pandas.DataFrame; iloc[], column range ``` - -And next, we will filter the `official_langs` data frame -so that we only keep the rows where the language is English. 
-We will name the new data frame we get from this `english_langs`: +Another approach for selecting columns is to use `iloc[]`, +which provides the ability to index with integers rather than the names of the columns. +For example, the column names of the `tidy_lang` data frame are +`['category', 'language', 'region', 'most_at_home', 'most_at_work']`. +Using `iloc[]`, you can ask for the `language` column by requesting the +column at index `1` (remember that Python starts counting at `0`, so the second item `'language'` +has index `1`!). ```{code-cell} ipython3 -english_langs = official_langs[official_langs["language"] == "English"] -english_langs +column = tidy_lang.iloc[:, 1] +column ``` -Finally, we can use `.assign` to create a new column, -named `most_at_home_proportion`, that will have value that corresponds to -the proportion of people reporting English as their primary -language at home. -We will compute this by dividing the column by our vector of city populations. +You can also ask for multiple columns, just like we did with `[]`. We pass `:` before +the comma, indicating we want to retrieve all rows, and `1:` after the comma +indicating we want columns after and including index 1 (*i.e.* `language`). ```{code-cell} ipython3 -english_langs = english_langs.assign( - most_at_home_proportion=english_langs["most_at_home"] / city_pops -) - -english_langs +column_range = tidy_lang.iloc[:, 1:] +column_range ``` -In the computation above, we had to ensure that we ordered the `city_pops` vector in the -same order as the cities were listed in the `english_langs` data frame. -This is because Python will perform the division computation we did by dividing -each element of the `most_at_home` column by each element of the -`city_pops` list, matching them up by position. -Failing to do this would have resulted in the incorrect math being performed. +The `iloc[]` method is less commonly used, and needs to be used with care. 
+For example, it is easy to +accidentally put in the wrong integer index! If you did not correctly remember +that the `language` column was index `1`, and used `2` instead, your code +would end up having a bug that might be quite hard to track down. -> **Note:** In more advanced data wrangling, -> one might solve this problem in a less error-prone way though using -> a technique called "joins". -> We link to resources that discuss this in the additional -> resources at the end of this chapter. +```{index} pandas.Series; str.startswith +``` -+++ ++++ {"tags": []} - +## Aggregating data +++ -## Combining functions by chaining the methods +### Calculating summary statistics on individual columns -```{index} chaining methods +```{index} summarize ``` -In Python, we often have to call multiple methods in a sequence to process a data -frame. The basic ways of doing this can become quickly unreadable if there are -many steps. For example, suppose we need to perform three operations on a data -frame called `data`: - -1) add a new column `new_col` that is double another `old_col`, -2) filter for rows where another column, `other_col`, is more than 5, and -3) select only the new column `new_col` for those rows. - -One way of performing these three steps is to just write -multiple lines of code, storing temporary objects as you go: +As a part of many data analyses, we need to calculate a summary value for the +data (a *summary statistic*). +Examples of summary statistics we might want to calculate +are the number of observations, the average/mean value for a column, +the minimum value, etc. +Oftentimes, +this summary statistic is calculated from the values in a data frame column, +or columns, as shown in {numref}`fig:summarize`. 
-```{code-cell} ipython3 -:tags: [remove-cell] ++++ {"tags": []} -# ## Combining functions using the pipe operator, `|>` +```{figure} img/summarize/summarize.001.jpeg +:name: fig:summarize +:figclass: figure -# In R, we often have to call multiple functions in a sequence to process a data -# frame. The basic ways of doing this can become quickly unreadable if there are -# many steps. For example, suppose we need to perform three operations on a data -# frame called `data`: \index{pipe}\index{aaapipesymb@\vert{}>|see{pipe}} +Calculating summary statistics on one or more column(s) in `pandas` generally +creates a series or data frame containing the summary statistic(s) for each column +being summarized. The darker, top row of each table represents column headers. ``` -```{code-cell} ipython3 -:tags: [remove-cell] ++++ -data = pd.DataFrame({"old_col": [1, 2, 5, 0], "other_col": [1, 10, 3, 6]}) -``` +We will start by showing how to compute the minimum and maximum number of Canadians reporting a particular +language as their primary language at home. First, a reminder of what `region_lang` looks like: ```{code-cell} ipython3 -:tags: [remove-output] - -output_1 = data.assign(new_col=data["old_col"] * 2) -output_2 = output_1[output_1["other_col"] > 5] -output = output_2.loc[:, "new_col"] +:tags: ["output_scroll"] +region_lang = pd.read_csv("data/region_lang.csv") +region_lang ``` -This is difficult to understand for multiple reasons. The reader may be tricked -into thinking the named `output_1` and `output_2` objects are important for some -reason, while they are just temporary intermediate computations. Further, the -reader has to look through and find where `output_1` and `output_2` are used in -each subsequent line. - -+++ - -Chaining the sequential functions solves this problem, resulting in cleaner and -easier-to-follow code. 
-The code below accomplishes the same thing as the previous -two code blocks: +We use `.min` to calculate the minimum +and `.max` to calculate maximum number of Canadians +reporting a particular language as their primary language at home, +for any region. ```{code-cell} ipython3 -:tags: [remove-output] - -output = ( - data.assign(new_col=data["old_col"] * 2) - .query("other_col > 5") - .loc[:, "new_col"] -) +region_lang["most_at_home"].min() ``` ```{code-cell} ipython3 -:tags: [remove-cell] - -# ``` {r eval = F} -# output <- select(filter(mutate(data, new_col = old_col * 2), -# other_col > 5), -# new_col) -# ``` -# Code like this can also be difficult to understand. Functions compose (reading -# from left to right) in the *opposite order* in which they are computed by R -# (above, `mutate` happens first, then `filter`, then `select`). It is also just a -# really long line of code to read in one go. - -# The *pipe operator* (`|>`) solves this problem, resulting in cleaner and -# easier-to-follow code. `|>` is built into R so you don't need to load any -# packages to use it. -# You can think of the pipe as a physical pipe. It takes the output from the -# function on the left-hand side of the pipe, and passes it as the first argument -# to the function on the right-hand side of the pipe. -# The code below accomplishes the same thing as the previous -# two code blocks: -``` - -> **Note:** You might also have noticed that we split the function calls across -> lines, similar to when we did this earlier in the chapter -> for long function calls. Again, this is allowed and recommended, especially when -> the chained function calls create a long line of code. Doing this makes -> your code more readable. When you do this, it is important to use parentheses -> to tell Python that your code is continuing onto the next line. 
+region_lang["most_at_home"].max() +``` ```{code-cell} ipython3 :tags: [remove-cell] - -# > **Note:** You might also have noticed that we split the function calls across -# > lines after the pipe, similar to when we did this earlier in the chapter -# > for long function calls. Again, this is allowed and recommended, especially when -# > the piped function calls create a long line of code. Doing this makes -# > your code more readable. When you do this, it is important to end each line -# > with the pipe operator `|>` to tell R that your code is continuing onto the -# > next line. - -# > **Note:** In this textbook, we will be using the base R pipe operator syntax, `|>`. -# > This base R `|>` pipe operator was inspired by a previous version of the pipe -# > operator, `%>%`. The `%>%` pipe operator is not built into R -# > and is from the `magrittr` R package. -# > The `tidyverse` metapackage imports the `%>%` pipe operator via `dplyr` -# > (which in turn imports the `magrittr` R package). -# > There are some other differences between `%>%` and `|>` related to -# > more advanced R uses, such as sharing and distributing code as R packages, -# > however, these are beyond the scope of this textbook. -# > We have this note in the book to make the reader aware that `%>%` exists -# > as it is still commonly used in data analysis code and in many data science -# > books and other resources. -# > In most cases these two pipes are interchangeable and either can be used. 
- -# \index{pipe}\index{aaapipesymbb@\%>\%|see{pipe}} -``` - -### Chaining `df[]` and `.loc` - -+++ - -Let's work with the tidy `tidy_lang` data set from Section {ref}`str-split`, -which contains the number of Canadians reporting their primary language at home -and work for five major cities -(Toronto, Montréal, Vancouver, Calgary, and Edmonton): - -```{code-cell} ipython3 -tidy_lang +glue("lang_most_people", "{0:,.0f}".format(int(region_lang["most_at_home"].max()))) ``` -Suppose we want to create a subset of the data with only the languages and -counts of each language spoken most at home for the city of Vancouver. To do -this, we can use the `df[]` and `.loc`. First, we use `df[]` to -create a data frame called `van_data` that contains only values for Vancouver. - +From this we see that there are some languages in the data set that no one speaks +as their primary language at home. We also see that the most commonly spoken +primary language at home is spoken by +{glue:text}`lang_most_people` people. If instead we wanted to know the +total number of people in the survey, we could use the `sum` summary statistic method. ```{code-cell} ipython3 -van_data = tidy_lang[tidy_lang["region"] == "Vancouver"] -van_data +region_lang["most_at_home"].sum() ``` -We then use `.loc` on this data frame to keep only the variables we want: +Other handy summary statistics include the `mean`, `median` and `std` for +computing the mean, median, and standard deviation of observations, respectively. +We can also compute multiple statistics at once using `agg` to "aggregate" results. +For example, if we wanted to +compute both the `min` and `max` at once, we could use `agg` with the argument `['min', 'max']`. +Note that `agg` outputs a `Series` object. 
```{code-cell} ipython3 -van_data_selected = van_data.loc[:, ["language", "most_at_home"]] -van_data_selected +region_lang["most_at_home"].agg(["min", "max"]) ``` -Although this is valid code, there is a more readable approach we could take by -chaining the operations. With chaining, we do not need to create an intermediate -object to store the output from `df[]`. Instead, we can directly call `.loc` upon the -output of `df[]`: +The `pandas` package also provides the `describe` method, +which is a handy function that computes many common summary statistics at once; it +gives us a *summary* of a variable. ```{code-cell} ipython3 -van_data_selected = tidy_lang[tidy_lang["region"] == "Vancouver"].loc[ - :, ["language", "most_at_home"] -] - -van_data_selected +region_lang["most_at_home"].describe() ``` -```{code-cell} ipython3 -:tags: [remove-cell] +In addition to the summary methods we introduced earlier, the `describe` method +outputs a `count` (the total number of observations, or rows, in our data frame), +as well as the 25th, 50th, and 75th percentiles. +{numref}`tab:basic-summary-statistics` provides an overview of some of the useful +summary statistics that you can compute with `pandas`. -# But wait...Why do the `select` and `filter` function calls -# look different in these two examples? -# Remember: when you use the pipe, -# the output of the first function is automatically provided -# as the first argument for the function that comes after it. -# Therefore you do not specify the first argument in that function call. -# In the code above, -# the first line is just the `tidy_lang` data frame with a pipe. -# The pipe passes the left-hand side (`tidy_lang`) to the first argument of the function on the right (`filter`), -# so in the `filter` function you only see the second argument (and beyond). -# Then again after `filter` there is a pipe, which passes the result of the `filter` step -# to the first argument of the `select` function. 
+```{table} Basic summary statistics
+:name: tab:basic-summary-statistics
+| Function | Description |
+| -------- | ----------- |
+| `count` | The number of observations (rows) |
+| `mean` | The mean of the observations |
+| `median` | The median value of the observations |
+| `std` | The standard deviation of the observations |
+| `max` | The largest value in a column |
+| `min` | The smallest value in a column |
+| `sum` | The sum of all observations |
+| `agg` | Aggregate multiple statistics together |
+| `describe` | A summary of many of the above statistics at once |
```
-As you can see, both of these approaches—with and without chaining—give us the same output, but the second
-approach is clearer and more readable.
-
+++
-
-### Chaining more than two functions
-
+++
-Chaining can be used with any method in Python.
-Additionally, we can chain together more than two functions.
-For example, we can chain together three functions to:
-- extract rows (`df[]`) to include only those where the counts of the language most spoken at home are greater than 10,000,
-- extract only the columns (`.loc`) corresponding to `region`, `language` and `most_at_home`, and
-- sort the data frame rows in order (`.sort_values`) by counts of the language most spoken at home
-from smallest to largest.

+> **Note:** In `pandas`, the value `NaN` is often used to denote missing data.
+> By default, when `pandas` calculates summary statistics (e.g., `max`, `min`, `sum`, etc.),
+> it ignores these values. If you look at the documentation for these functions, you will
+> see an input variable `skipna`, which by default is set to `skipna=True`. This means that
+> `pandas` will skip `NaN` values when computing statistics.

-```{index} pandas.DataFrame; sort_values
-```
-
-As we saw in Chapter {ref}`intro`,
-we can use the `.sort_values` function
-to order the rows in the data frame by the values of one or more columns.
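To see the effect of `skipna` on a small scale, here is a sketch using a tiny made-up `Series` (toy numbers, not the census data itself):

```python
import numpy as np
import pandas as pd

# A tiny stand-in column with one missing entry
counts = pd.Series([10, np.nan, 30], name="most_at_home")

# By default skipna=True, so the NaN entry is ignored
total = counts.sum()  # 40.0

# With skipna=False, any NaN makes the result NaN
total_with_nan = counts.sum(skipna=False)

print(total, total_with_nan)
```

The same `skipna` argument is accepted by `max`, `min`, `mean`, and the other summary methods in the table above.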
-Here we pass the column name `most_at_home` to sort the data frame rows by the values in that column, in ascending order. +### Calculating summary statistics on data frames +What if you want to calculate summary statistics on an entire data frame? Well, +it turns out that the functions in {numref}`tab:basic-summary-statistics` +can be applied to a whole data frame! +For example, we can ask for the number of rows that each column has using `count`. ```{code-cell} ipython3 -large_region_lang = ( - tidy_lang[tidy_lang["most_at_home"] > 10000] - .loc[:, ["region", "language", "most_at_home"]] - .sort_values(by="most_at_home") -) - -large_region_lang +region_lang.count() ``` - +Not surprisingly, they are all the same. We could also ask for the `mean`, but +some of the columns in `region_lang` contain string data with words like `"Vancouver"` +and `"Halifax"`---for these columns there is no way for `pandas` to compute the mean. +So we provide the keyword `numeric_only=True` so that it only computes the mean of columns with numeric values. This +is also needed if you want the `sum` or `std`. ```{code-cell} ipython3 -:tags: [remove-cell] - -# You will notice above that we passed `tidy_lang` as the first argument of the `filter` function. -# We can also pipe the data frame into the same sequence of functions rather than -# using it as the first argument of the first function. These two choices are equivalent, -# and we get the same result. -# ``` {r} -# large_region_lang <- tidy_lang |> -# filter(most_at_home > 10000) |> -# select(region, language, most_at_home) |> -# arrange(most_at_home) - -# large_region_lang -# ``` -``` - -Now that we've shown you chaining as an alternative to storing -temporary objects and composing code, does this mean you should *never* store -temporary objects or compose code? Not necessarily! -There are times when you will still want to do these things. 
-For example, you might store a temporary object before feeding it into a plot function
-so you can iteratively change the plot without having to
-redo all of your data transformations.
-Additionally, chaining many functions can be overwhelming and difficult to debug;
-you may want to store a temporary object midway through to inspect your result
-before moving on with further steps.
-
-+++
-
-## Aggregating data with `.assign`, `.agg` and `.apply`
-
-+++
-
-### Calculating summary statistics on whole columns
-
-```{index} summarize
-```
-
-As a part of many data analyses, we need to calculate a summary value for the
-data (a *summary statistic*).
-Examples of summary statistics we might want to calculate
-are the number of observations, the average/mean value for a column,
-the minimum value, etc.
-Oftentimes,
-this summary statistic is calculated from the values in a data frame column,
-or columns, as shown in {numref}`fig:summarize`.
-
-+++ {"tags": []}
-
-```{figure} img/summarize/summarize.001.jpeg
-:name: fig:summarize
-:figclass: caption-hack
-
-Calculating summary statistics on one or more column(s). In its simplest use case, it creates a new data frame with a single row containing the summary statistic(s) for each column being summarized. The darker, top row of each table represents the column headers.
+region_lang.mean(numeric_only=True)
```
-
-+++
-
-We can use `.assign` as mentioned in Section {ref}`pandas-assign` along with proper summary functions to create a aggregated column.
-
-First a reminder of what `region_lang` looks like:
-
+If we ask for the `min` or the `max`, `pandas` will give you the smallest or largest number
+for columns with numeric values. For columns with text, it will return the
+value that comes first alphabetically for `min` and last alphabetically for `max`. Again,
+if you only want the minimum and maximum value for
+numeric columns, you can provide `numeric_only=True`.
```{code-cell} ipython3
-:tags: [remove-cell]
-
-# A useful `dplyr` function for calculating summary statistics is `summarize`,
-# where the first argument is the data frame and subsequent arguments
-# are the summaries we want to perform.
-# Here we show how to use the `summarize` function to calculate the minimum
-# and maximum number of Canadians
-# reporting a particular language as their primary language at home.
-# First a reminder of what `region_lang` looks like:
+region_lang.max()
```
-
```{code-cell} ipython3
-region_lang = pd.read_csv("data/region_lang.csv")
-region_lang
+region_lang.min()
```
-We apply `min` to calculate the minimum
-and `max` to calculate maximum number of Canadians
-reporting a particular language as their primary language at home,
-for any region, and `.assign` a column name to each:
-
-```{code-cell} ipython3
-:tags: [remove-cell]
+Similarly, if there are only some columns for which you would like to get summary statistics,
+you can first use `loc[]` and then ask for the summary statistic. An example of this is illustrated in {numref}`fig:summarize-across`.
+Later, we will talk about how you can also use a more general function, `apply`, to accomplish this.

-pd.DataFrame(region_lang["most_at_home"].agg(["min", "max"])).T
+```{figure} img/summarize/summarize.003.jpeg
+:name: fig:summarize-across
+:figclass: figure

-# pd.DataFrame(region_lang["most_at_home"].agg(["min", "max"])).T.rename(
-#     columns={"min": "min_most_at_home", "max": "max_most_at_home"}
-# )
+`loc[]` or `apply` is useful for efficiently calculating summary statistics on
+many columns at once. The darker, top row of each table represents the column
+headers.
```
+Let's say that we want to know
+the mean and standard deviation of all of the columns between `"mother_tongue"` and `"lang_known"`.
+We use `loc[]` to specify the columns and then `agg` to ask for both the `mean` and `std`.
```{code-cell} ipython3
-:tags: []
-
-lang_summary = pd.DataFrame()
-lang_summary = lang_summary.assign(min_most_at_home=[min(region_lang["most_at_home"])])
-lang_summary = lang_summary.assign(max_most_at_home=[max(region_lang["most_at_home"])])
-lang_summary
+region_lang.loc[:, "mother_tongue":"lang_known"].agg(["mean", "std"])
```
-```{code-cell} ipython3
-:tags: [remove-cell]
-glue("lang_most_people", "{0:,.0f}".format(int(lang_summary["max_most_at_home"])))
-```
-From this we see that there are some languages in the data set that no one speaks
-as their primary language at home. We also see that the most commonly spoken
-primary language at home is spoken by
-{glue:text}`lang_most_people`
-people.
+## Performing operations on groups of rows using `groupby`

+++

-### Calculating summary statistics when there are `NaN`s
-
-```{index} missing data
+```{index} pandas.DataFrame; groupby
```
+What happens if we want to know how languages vary by region? In this case,
+we need a new tool that lets us group rows by region. This can be achieved
+using the `groupby` function in `pandas`. Pairing summary functions
+with `groupby` lets you summarize values for subgroups within a data set,
+as illustrated in {numref}`fig:summarize-groupby`.
+For example, we can use `groupby` to group the regions of the `region_lang` data
+frame and then calculate the minimum and maximum number of Canadians
+reporting the language as the primary language at home
+for each of the regions in the data set.
+
++++ {"tags": []}

-```{index} see: NaN; missing data
+```{figure} img/summarize/summarize.002.jpeg
+:name: fig:summarize-groupby
+:figclass: figure
+
+A summary statistic function paired with `groupby` is useful for calculating that statistic
+on one or more column(s) for each group. It
+creates a new data frame with one row for each group
+and one column for each summary statistic. The darker, top row of each table
+represents the column headers.
The gray, blue, and green colored rows +correspond to the rows that belong to each of the three groups being +represented in this cartoon example. ``` -In `pandas` DataFrame, the value `NaN` is often used to denote missing data. -Many of the base python statistical summary functions -(e.g., `max`, `min`, `sum`, etc) will return `NaN` -when applied to columns containing `NaN` values. -Usually that is not what we want to happen; -instead, we would usually like Python to ignore the missing entries -and calculate the summary statistic using all of the other non-`NaN` values -in the column. -Fortunately `pandas` provides many equivalent methods (e.g., `.max`, `.min`, `.sum`, etc) to -these summary functions while providing an extra argument `skipna` that lets -us tell the function what to do when it encounters `NaN` values. -In particular, if we specify `skipna=True` (default), the function will ignore -missing values and return a summary of all the non-missing entries. -We show an example of this below. ++++ -First we create a new version of the `region_lang` data frame, -named `region_lang_na`, that has a seemingly innocuous `NaN` -in the first row of the `most_at_home` column: +The `groupby` function takes at least one argument—the columns to use in the +grouping. Here we use only one column for grouping (`region`). ```{code-cell} ipython3 -:tags: [remove-cell] - -# In data frames in R, the value `NA` is often used to denote missing data. -# Many of the base R statistical summary functions -# (e.g., `max`, `min`, `mean`, `sum`, etc) will return `NA` -# when applied to columns containing `NA` values. \index{missing data}\index{NA|see{missing data}} -# Usually that is not what we want to happen; -# instead, we would usually like R to ignore the missing entries -# and calculate the summary statistic using all of the other non-`NA` values -# in the column. 
-# Fortunately many of these functions provide an argument `na.rm` that lets -# us tell the function what to do when it encounters `NA` values. -# In particular, if we specify `na.rm = TRUE`, the function will ignore -# missing values and return a summary of all the non-missing entries. -# We show an example of this combined with `summarize` below. +region_lang.groupby("region")["most_at_home"].agg(["min", "max"]) ``` +Notice that `groupby` converts a `DataFrame` object to a `DataFrameGroupBy` +object, which contains information about the groups of the data frame. We can +then apply aggregating functions to the `DataFrameGroupBy` object. This can be handy if you would like to perform multiple operations and assign +each output to its own object. ```{code-cell} ipython3 -:tags: [remove-cell] - -region_lang_na = region_lang.copy() -region_lang_na.loc[0, "most_at_home"] = np.nan +region_lang.groupby("region") ``` +You can also pass multiple column names to `groupby`. For example, if we wanted to +know about how the different categories of languages (Aboriginal, Non-Official & +Non-Aboriginal, and Official) are spoken at home in different regions, we would pass a +list including `region` and `category` to `groupby`. ```{code-cell} ipython3 -region_lang_na +region_lang.groupby(["region", "category"])["most_at_home"].agg(["min", "max"]) ``` -Now if we apply the Python built-in summary function as above, -we see that we no longer get the minimum and maximum returned, -but just an `NaN` instead! 
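As a small self-contained sketch of the `groupby` pattern above (a made-up miniature of `region_lang`, not the census data):

```python
import pandas as pd

# Toy data frame with the same kinds of columns as region_lang
df = pd.DataFrame({
    "region":       ["Toronto", "Toronto", "Montréal", "Montréal"],
    "category":     ["Official languages"] * 4,
    "language":     ["English", "French", "English", "French"],
    "most_at_home": [100, 5, 40, 80],
})

# One row per group, one column per requested statistic
summary = df.groupby("region")["most_at_home"].agg(["min", "max"])
print(summary)
```

The result is indexed by the grouping column, so `summary.loc["Toronto", "max"]` pulls out a single group's statistic.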
-
+You can also ask for grouped summary statistics on the whole data frame.
```{code-cell} ipython3
-lang_summary_na = pd.DataFrame()
-lang_summary_na = lang_summary_na.assign(
-    min_most_at_home=[min(region_lang_na["most_at_home"])]
-)
-lang_summary_na = lang_summary_na.assign(
-    max_most_at_home=[max(region_lang_na["most_at_home"])]
-)
-lang_summary_na
+:tags: ["output_scroll"]
+region_lang.groupby("region").agg(["min", "max"])
```
-We can fix this by using the `pandas` Series methods (*i.e.* `.min` and `.max`) with `skipna=True` as explained above:
-
+If you want to ask for only some columns, for example
+the columns between `"most_at_home"` and `"lang_known"`,
+you might think about first applying `groupby` and then `loc`;
+but `groupby` returns a `DataFrameGroupBy` object, which does not
+work with `loc`. The other option is to do things the other way around:
+first use `loc`, then use `groupby`.
+This usually does work, but you have to be careful! For example,
+in our case, if we try using `loc` and then `groupby`, we get an error.
```{code-cell} ipython3
-lang_summary_na = pd.DataFrame()
-lang_summary_na = lang_summary_na.assign(
-    min_most_at_home=[region_lang_na["most_at_home"].min(skipna=True)]
-)
-lang_summary_na = lang_summary_na.assign(
-    max_most_at_home=[region_lang_na["most_at_home"].max(skipna=True)]
-)
-lang_summary_na
+:tags: [remove-output]
+region_lang.loc[:, "most_at_home":"lang_known"].groupby("region").max()
+```
+```
+KeyError: 'region'
+```
+This is because when we use `loc` we selected only the columns between
+`"most_at_home"` and `"lang_known"`, which doesn't include `"region"`!
+Instead, we need to call `loc` with a list of column names that
+includes `region`, and then use `groupby`.
+```{code-cell} ipython3 +:tags: ["output_scroll"] +region_lang.loc[ + :, + ["region", "mother_tongue", "most_at_home", "most_at_work", "lang_known"] +].groupby("region").max() ``` - -### Calculating summary statistics for groups of rows +++ -```{index} pandas.DataFrame; groupby -``` - -A common pairing with summary functions is `.groupby`. Pairing these functions -together can let you summarize values for subgroups within a data set, -as illustrated in {numref}`fig:summarize-groupby`. -For example, we can use `.groupby` to group the regions of the `tidy_lang` data frame and then calculate the minimum and maximum number of Canadians -reporting the language as the primary language at home -for each of the regions in the data set. +## Apply functions across multiple columns with `apply` -```{code-cell} ipython3 -:tags: [remove-cell] +### Apply a function to each column with `apply` -# A common pairing with `summarize` is `group_by`. Pairing these functions \index{group\_by} -# together can let you summarize values for subgroups within a data set, -# as illustrated in Figure \@ref(fig:summarize-groupby). -# For example, we can use `group_by` to group the regions of the `tidy_lang` data frame and then calculate the minimum and maximum number of Canadians -# reporting the language as the primary language at home -# for each of the regions in the data set. +An alternative to aggregating on a data frame +for applying a function to many columns is the `apply` method. +Let's again find the maximum value of each column of the +`region_lang` data frame, but using `apply` with the `max` function this time. +We focus on the two arguments of `apply`: +the function that you would like to apply to each column, and the `axis` along +which the function will be applied (`0` for columns, `1` for rows). +Note that `apply` does not have an argument +to specify *which* columns to apply the function to. 
+Therefore, we will use the `loc[]` before calling `apply` +to choose the columns for which we want the maximum. -# (ref:summarize-groupby) `summarize` and `group_by` is useful for calculating summary statistics on one or more column(s) for each group. It creates a new data frame—with one row for each group—containing the summary statistic(s) for each column being summarized. It also creates a column listing the value of the grouping variable. The darker, top row of each table represents the column headers. The gray, blue, and green colored rows correspond to the rows that belong to each of the three groups being represented in this cartoon example. +```{code-cell} ipython3 +region_lang.loc[:, "most_at_home":"most_at_work"].apply(max) ``` +We can use `apply` for much more than summary statistics. +Sometimes we need to apply a function to many columns in a data frame. +For example, we would need to do this when converting units of measurements across many columns. +We illustrate such a data transformation in {numref}`fig:mutate-across`. + +++ {"tags": []} -```{figure} img/summarize/summarize.002.jpeg -:name: fig:summarize-groupby -:figclass: caption-hack +```{figure} img/summarize/summarize.005.jpeg +:name: fig:mutate-across +:figclass: figure -Calculating summary statistics on one or more column(s) for each group. It creates a new data frame—with one row for each group—containing the summary statistic(s) for each column being summarized. It also creates a column listing the value of the grouping variable. The darker, top row of each table represents the column headers. The gray, blue, and green colored rows correspond to the rows that belong to each of the three groups being represented in this cartoon example. +`apply` is useful for applying functions across many columns. The darker, top row of each table represents the column headers. ``` +++ -The `.groupby` function takes at least one argument—the columns to use in the -grouping. 
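A minimal sketch of column-wise `apply` on a made-up data frame (`axis=0`, the default, hands each column to the function):

```python
import pandas as pd

# Hypothetical counts; the column names only mimic region_lang
df = pd.DataFrame({
    "most_at_home": [1, 5, 3],
    "most_at_work": [2, 9, 4],
})

# Each column is passed to max as a Series; the result is one value per column
col_max = df.apply(max)
print(col_max)
```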
-Here we use only one column for grouping (`region`), but more than one
-can also be used. To do this, pass a list of column names to the `by` argument.
+For example,
+imagine that we wanted to convert all the numeric columns
+in the `region_lang` data frame from `int64` type to `int32` type
+using the `astype` method.
+When we revisit the `region_lang` data frame,
+we can see that this would be the columns from `mother_tongue` to `lang_known`.

```{code-cell} ipython3
-region_summary = pd.DataFrame()
-region_summary = region_summary.assign(
-    min_most_at_home=region_lang.groupby(by="region")["most_at_home"].min(),
-    max_most_at_home=region_lang.groupby(by="region")["most_at_home"].max()
-).reset_index()
-
-region_summary.columns = ["region", "min_most_at_home", "max_most_at_home"]
-region_summary
+:tags: ["output_scroll"]
+region_lang
```
-`pandas` also has a convenient method `.agg` (shorthand for `.aggregate`) that allows us to apply multiple aggregate functions in one line of code. We just need to pass in a list of function names to `.agg` as shown below.
+```{index} pandas.DataFrame; apply, pandas.DataFrame; loc[]
+```
+To accomplish such a task, we can use `apply`.
+As we did above,
+we again use `loc[]` to specify the columns,
+and then `apply` to specify the function we want to apply to these columns.
+Now, we need a way to tell `apply` what function to perform on each column
+so that we can convert them from `int64` to `int32`. We will use what is called
+a `lambda` function in Python; `lambda` functions are just regular functions,
+except that you don't need to give them a name.
+That means you can pass them as an argument into `apply` easily!
+Let's consider a simple example of a `lambda` function that
+multiplies a number by two.
```{code-cell} ipython3 -region_summary = ( - region_lang.groupby(by="region")["most_at_home"].agg(["min", "max"]).reset_index() -) -region_summary.columns = ["region", "min_most_at_home", "max_most_at_home"] -region_summary +lambda x: 2*x ``` - -Notice that `.groupby` converts a `DataFrame` object to a `DataFrameGroupBy` object, which contains information about the groups of the dataframe. We can then apply aggregating functions to the `DataFrameGroupBy` object. - +We define a `lambda` function in the following way. We start with the syntax `lambda`, which is a special word +that tells Python "what follows is +a function." Following this, we then state the name of the arguments of the function. +In this case, we just have one argument named `x`. After the list of arguments, we put a +colon `:`. And finally after the colon are the instructions: take the value provided and multiply it by 2. +Let's call our shiny new `lambda` function with the argument `2` (so the output should be `4`). +Just like a regular function, we pass its argument between parentheses `()` symbols. ```{code-cell} ipython3 -:tags: [remove-cell] - -# Notice that `group_by` on its own doesn't change the way the data looks. -# In the output below, the grouped data set looks the same, -# and it doesn't *appear* to be grouped by `region`. -# Instead, `group_by` simply changes how other functions work with the data, -# as we saw with `summarize` above. +(lambda x: 2*x)(2) ``` +> **Note:** Because we didn't give the `lambda` function a name, we have to surround it with +> parentheses too if we want to call it. Otherwise, if we wrote something like `lambda x: 2*x(2)`, Python would get confused +> and think that `(2)` was part of the instructions that comprise the `lambda` function. +> As long as we don't want to call the `lambda` function ourselves, we don't need those parentheses. For example, +> we can pass a `lambda` function as an argument to `apply` without any parentheses. 
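As a quick sketch, a `lambda` can be handed straight to `apply`, which then calls it once per column (toy data frame, made-up values):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [10, 20]})

# The lambda receives each column as a Series; no parentheses around
# the lambda are needed because we never call it ourselves
doubled = df.apply(lambda col: 2 * col)
print(doubled)
```

Because the lambda returns a Series for each column, `apply` assembles the results back into a data frame of the same shape.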
+Returning to our example, let's use `apply` to convert the columns `"mother_tongue":"lang_known"` +to `int32`. To accomplish this we create a `lambda` function that takes one argument---a single column +of the data frame, which we will name `col`---and apply the `astype` method to it. +Then the `apply` method will use that `lambda` function on every column we specify via `loc[]`. ```{code-cell} ipython3 -region_lang.groupby("region") +region_lang_nums = region_lang.loc[:, "mother_tongue":"lang_known"].apply(lambda col: col.astype("int32")) +region_lang_nums.info() ``` +You can now see that the columns from `mother_tongue` to `lang_known` are type `int32`. +You can also see that `apply` returns a data frame with the same number of columns and rows +as the input data frame. The only thing `apply` does is use the `lambda` function argument +on each of the specified columns. -### Calculating summary statistics on many columns +### Apply a function row-wise with `apply` -+++ +What if you want to apply a function across columns but within one row? +We illustrate such a data transformation in {numref}`fig:rowwise`. -Sometimes we need to summarize statistics across many columns. -An example of this is illustrated in {numref}`fig:summarize-across`. -In such a case, using summary functions alone means that we have to -type out the name of each column we want to summarize. -In this section we will meet two strategies for performing this task. -First we will see how we can do this using `.iloc[]` to slice the columns before applying summary functions. -Then we will also explore how we can use a more general iteration function, -`.apply`, to also accomplish this. ++++ {"tags": []} -```{code-cell} ipython3 -:tags: [remove-cell] +```{figure} img/summarize/summarize.004.jpeg +:name: fig:rowwise +:figclass: figure -# Sometimes we need to summarize statistics across many columns. -# An example of this is illustrated in Figure \@ref(fig:summarize-across). 
-# In such a case, using `summarize` alone means that we have to -# type out the name of each column we want to summarize. -# In this section we will meet two strategies for performing this task. -# First we will see how we can do this using `summarize` + `across`. -# Then we will also explore how we can use a more general iteration function, -# `map`, to also accomplish this. +`apply` is useful for applying functions across columns within one row. The +darker, top row of each table represents the column headers. ``` -+++ {"tags": []} ++++ -```{figure} img/summarize/summarize.003.jpeg -:name: fig:summarize-across -:figclass: caption-hack +For instance, suppose we want to know the maximum value between `mother_tongue`, +and `lang_known` for each language and region +in the `region_lang_nums` data set. +In other words, we want to apply the `max` function *row-wise.* +In order to tell `apply` that we want to work row-wise (as opposed to acting on each column +individually, which is the default behavior), we just specify the argument `axis=1`. +For example, in the case of the `max` function, this tells Python that we would like +the `max` within each row of the input, as opposed to being applied on each column. -`.iloc[]` or `.apply` is useful for efficiently calculating summary statistics on many columns at once. The darker, top row of each table represents the column headers. +```{code-cell} ipython3 +region_lang_nums.apply(max, axis=1) ``` -+++ +We see that we get a column, which is the maximum value between `mother_tongue`, +`most_at_home`, `most_at_work` and `lang_known` for each language +and region. It is often the case that we want to include a column result +from using `apply` row-wise as a new column in the data frame, so that we can make +plots or continue our analysis. To make this happen, +we will use `assign` to create a new column. This is discussed in the next section. 
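A minimal sketch of the row-wise behavior on a made-up miniature of `region_lang_nums`:

```python
import pandas as pd

# Two toy rows; column names only mimic the real data set
df = pd.DataFrame({
    "mother_tongue": [10, 200],
    "lang_known":    [50, 30],
})

# axis=1 hands each row (as a Series) to max, giving one value per row
row_max = df.apply(max, axis=1)
print(row_max)
```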
-#### Aggregating on a data frame for calculating summary statistics on many columns +(pandas-assign)= +## Using `assign` to modify or add columns -+++ -```{index} column range +```{index} pandas.DataFrame; [] ``` -Recall that in the Section {ref}`loc-iloc`, we can use `.iloc[]` to extract a range of columns with indices. Here we demonstrate finding the maximum value -of each of the numeric -columns of the `region_lang` data set through pairing `.iloc[]` and `.max`. This means that the -summary methods (*e.g.* `.min`, `.max`, `.sum` etc.) can be used for data frames as well. +### Using `assign` to create new columns + +When we compute summary statistics with `agg` or apply functions using `apply` +those give us new data frames. But what if we want to append that information +to an existing data frame? This is where we make use of the `assign` method. +For example, say we wanted the maximum values of the `region_lang_nums` data frame, +and to create a new data frame consisting of all the columns of `region_lang` as well as that additional column. +To do this, we will (1) compute the maximum of those columns using `apply`, +and (2) use `assign` to assign values to create a new column in the `region_lang` data frame. +Note that `assign` does not by default modify the data frame itself; it creates a copy +with the new column added to it. +To use the `assign` method, we specify one argument for each column we want to create. +In this case we want to create one new column named `maximum`, so the argument +to `assign` begins with `maximum = `. +Then after the `=`, we specify what the contents of that new column +should be. In this case we use `apply` just as we did in the previous section to give us the maximum values. +Remember to specify `axis=1` in the `apply` method so that we compute the row-wise maximum value. 
```{code-cell} ipython3 -pd.DataFrame(region_lang.iloc[:, 3:].max(axis=0)).T +:tags: ["output_scroll"] +region_lang.assign( + maximum = region_lang_nums.apply(max, axis=1) +) ``` +This gives us a new data frame that looks like the `region_lang` data frame, +except that it has an additional column named `maximum`. +The `maximum` column contains +the maximum value between `mother_tongue`, +`most_at_home`, `most_at_work` and `lang_known` for each language +and region, just as we specified! + ```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# To summarize statistics across many columns, we can use the -# `summarize` function we have just recently learned about. -# However, in such a case, using `summarize` alone means that we have to -# type out the name of each column we want to summarize. -# To do this more efficiently, we can pair `summarize` with `across` \index{across} -# and use a colon `:` to specify a range of columns we would like \index{column range} -# to perform the statistical summaries on. -# Here we demonstrate finding the maximum value -# of each of the numeric -# columns of the `region_lang` data set. - -# ``` {r 02-across-data} -# region_lang |> -# summarize(across(mother_tongue:lang_known, max)) -# ``` - -# > **Note:** Similar to when we use base R statistical summary functions -# > (e.g., `max`, `min`, `mean`, `sum`, etc) with `summarize` alone, -# > the use of the `summarize` + `across` functions paired -# > with base R statistical summary functions -# > also return `NA`s when we apply them to columns that -# > contain `NA`s in the data frame. \index{missing data} -# > -# > To avoid this, again we need to add the argument `na.rm = TRUE`, -# > but in this case we need to use it a little bit differently. 
-# > In this case, we need to add a `,` and then `na.rm = TRUE`, -# > after specifying the function we want `summarize` + `across` to apply, -# > as illustrated below: -# > -# > ``` {r} -# > region_lang_na |> -# > summarize(across(mother_tongue:lang_known, max, na.rm = TRUE)) -# > ``` -``` - -(apply-summary)= -#### `.apply` for calculating summary statistics on many columns +:tags: [remove-cell] -+++ +number_most_home = int( + official_langs[ + (official_langs["language"] == "English") & + (official_langs["region"] == "Toronto") + ]["most_at_home"] +) + +toronto_popn = int(region_data[region_data["region"] == "Toronto"]["population"]) -```{index} pandas.DataFrame; apply +glue("number_most_home", "{0:,.0f}".format(number_most_home)) +glue("toronto_popn", "{0:,.0f}".format(toronto_popn)) +glue("prop_eng_tor", "{0:.2f}".format(number_most_home / toronto_popn)) ``` -An alternative to aggregating on a dataframe -for applying a function to many columns is the `.apply` method. -Let's again find the maximum value of each column of the -`region_lang` data frame, but using `.apply` with the `max` function this time. -We focus on the two arguments of `.apply`: -the function that you would like to apply to each column, and the `axis` along which the function will be applied (`0` for columns, `1` for rows). -Note that `.apply` does not have an argument -to specify *which* columns to apply the function to. -Therefore, we will use the `.iloc[]` before calling `.apply` -to choose the columns for which we want the maximum. +As another example, we might ask the question: "What proportion of +the population reported English as their primary language at home in the 2016 census?" +For example, in Toronto, {glue:text}`number_most_home` people reported +speaking English as their primary language at home, and the +population of Toronto was reported to be +{glue:text}`toronto_popn` people. 
+So the proportion of people reporting English
+as their primary language in Toronto in the 2016 census was {glue:text}`prop_eng_tor`.
+How could we figure this out starting from the `region_lang` data frame?
+First, we need to filter the `region_lang` data frame
+so that we only keep the rows where the language is English.
+We will also restrict our attention to the five major cities
+in the `five_cities` data frame: Toronto, Montréal, Vancouver, Calgary, and Edmonton.
+We will filter to keep only those rows pertaining to the English language
+and pertaining to the five aforementioned cities. To combine these two logical statements
+we will use the `&` symbol,
+filter the rows with the `[]` operation,
+and name the new data frame `english_lang`.
```{code-cell} ipython3
+:tags: ["output_scroll"]
+english_lang = region_lang[
+    (region_lang["language"] == "English") &
+    (region_lang["region"].isin(five_cities["region"]))
+  ]
+english_lang
```
+
+Okay, now we have a data frame that pertains only to the English language
+and the five cities mentioned earlier.
+In order to compute the proportion of the population speaking English in each of these cities,
+we need to add the population data from the `five_cities` data frame.
```{code-cell} ipython3
----
-jupyter:
-  source_hidden: true
-tags: [remove-cell]
----
-# An alternative to `summarize` and `across`
-# for applying a function to many columns is the `map` family of functions. \index{map}
-# Let's again find the maximum value of each column of the
-# `region_lang` data frame, but using `map` with the `max` function this time.
-# `map` takes two arguments:
-# an object (a vector, data frame or list) that you want to apply the function to,
-# and the function that you would like to apply to each column.
-# Note that `map` does not have an argument
-# to specify *which* columns to apply the function to.
-# Therefore, we will use the `select` function before calling `map` -# to choose the columns for which we want the maximum. -``` - -```{code-cell} ipython3 -pd.DataFrame(region_lang.iloc[:, 3:].apply(max, axis=0)).T -``` - -```{index} missing data -``` - -> **Note:** Similar to when we use base Python statistical summary functions -> (e.g., `max`, `min`, `sum`, etc.) when there are `NaN`s, -> `.apply` functions paired with base Python statistical summary functions -> also return `NaN` values when we apply them to columns that -> contain `NaN` values. -> -> To avoid this, again we need to use the `pandas` variants of summary functions (*i.e.* -> `.max`, `.min`, `.sum`, etc.) with `skipna=True`. -> When we use this with `.apply`, we do this by constructing a anonymous function that calls -> the `.max` method with `skipna=True`, as illustrated below: - -```{code-cell} ipython3 -pd.DataFrame( - region_lang_na.iloc[:, 3:].apply(lambda col: col.max(skipna=True), axis=0) -).T -``` - -The `.apply` function is generally quite useful for solving many problems -involving repeatedly applying functions in Python. -Additionally, a variant of `.apply` is `.applymap`, -which can be used to apply functions element-wise. -To learn more about these functions, see the additional resources -section at the end of this chapter. - -+++ {"jp-MarkdownHeadingCollapsed": true, "tags": ["remove-cell"]} - - - -+++ {"tags": []} - -## Apply functions across many columns with `.apply` +five_cities +``` +The data frame above shows that the populations of the five cities in 2016 were +5928040 (Toronto), 4098927 (Montréal), 2463431 (Vancouver), 1392609 (Calgary), and 1321426 (Edmonton). +We will add this information to our data frame in a new column named `city_pops` by using `assign`. +Once again we specify the new column name (`city_pops`) as the argument, followed by the equal symbol `=`, +and finally the data in the column. 
+Note that the order of the rows in the `english_lang` data frame is Montréal, Toronto, Calgary, Edmonton, Vancouver. +So we will create a column called `city_pops` where we list the populations of those cities in that +order, and add it to our data frame. +Also note that we write `english_lang = ` on the left so that the newly created data frame overwrites our +old `english_lang` data frame; remember that by default, like other `pandas` functions, `assign` does not +modify the original data frame directly! +```{code-cell} ipython3 +:tags: ["output_scroll"] +english_lang = english_lang.assign( + city_pops=[4098927, + 5928040, + 1392609, + 1321426, + 2463431 + ]) +english_lang +``` +> **Note**: Inserting data manually in this way is generally very error-prone and is not recommended. +> We do it here to demonstrate another usage of `assign` that does not involve `apply`. +> But in more advanced data wrangling, +> one would solve this problem in a less error-prone way using +> the `merge` function, which lets you combine two data frames. We will show you an +> example using `merge` at the end of the chapter! + +Now we have a new column with the population for each city. Finally, we calculate the +proportion of people who speak English the most at home by taking the ratio of the columns +`most_at_home` and `city_pops`. We will again add this to our data frame using `assign`. +```{code-cell} ipython3 +:tags: ["output_scroll"] +english_lang.assign( + proportion=english_lang["most_at_home"]/english_lang["city_pops"] + ) +``` -Sometimes we need to apply a function to many columns in a data frame. -For example, we would need to do this when converting units of measurements across many columns. -We illustrate such a data transformation in {numref}`fig:mutate-across`. -+++ {"tags": []} -```{figure} img/summarize/summarize.005.jpeg -:name: fig:mutate-across -:figclass: caption-hack -`.apply` is useful for applying functions across many columns.
The darker, top row of each table represents the column headers. -``` +### Using `assign` to modify columns -+++ -For example, -imagine that we wanted to convert all the numeric columns -in the `region_lang` data frame from `int64` type to `int32` type -using the `.as_type` function. -When we revisit the `region_lang` data frame, -we can see that this would be the columns from `mother_tongue` to `lang_known`. +In the section on {ref}`str-split`, +when we first read in the `"region_lang_top5_cities_messy.csv"` data, +all of the variables were "object" data types. +During the tidying process, +we used the `pandas.to_numeric` function +to convert the `most_at_home` and `most_at_work` columns +to the desired integer (i.e., numeric class) data types and then used `[]` to overwrite columns. +We can do the same thing using `assign`. + +Below we use `assign` to convert the columns `most_at_home` and `most_at_work` +to numeric data types in the `official_langs` data set as described in +{numref}`fig:img-assign`. In our example, we are naming the columns the same +names as columns that already exist in the data frame +(`"most_at_home"`, `"most_at_work"`) +and this will cause `assign` to *overwrite* those columns +(also referred to as modifying those columns *in-place*). +If we were to give the columns a new name, +then `assign` would create new columns with the names we specified. +The syntax is detailed in {numref}`fig:img-assign`. ```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# For example, -# imagine that we wanted to convert all the numeric columns -# in the `region_lang` data frame from double type to integer type -# using the `as.integer` function. -# When we revisit the `region_lang` data frame, -# we can see that this would be the columns from `mother_tongue` to `lang_known`. 
-``` +:tags: ["output_scroll"] +official_langs_numeric = official_langs.assign( + most_at_home=pd.to_numeric(official_langs["most_at_home"]), + most_at_work=pd.to_numeric(official_langs["most_at_work"]), +) -```{code-cell} ipython3 -region_lang +official_langs_numeric ``` -```{index} pandas.DataFrame; apply, pandas.DataFrame; iloc[] -``` ++++ {"tags": []} -To accomplish such a task, we can use `.apply`. -This works in a similar way for column selection, -as we saw when we used in Section {ref}`apply-summary` earlier. -As we did above, -we again use `.iloc` to specify the columns -as well as the `.apply` to specify the function we want to apply on these columns. -However, a key difference here is that we are not using aggregating function here, -which means that we get back a data frame with the same number of rows. +```{figure} img/wrangling/pandas_assign_args_labels.png +:name: fig:img-assign +:figclass: figure -```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# To accomplish such a task, we can use `mutate` paired with `across`. \index{across} -# This works in a similar way for column selection, -# as we saw when we used `summarize` + `across` earlier. -# As we did above, -# we again use `across` to specify the columns using `select` syntax -# as well as the function we want to apply on the specified columns. -# However, a key difference here is that we are using `mutate`, -# which means that we get back a data frame with the same number of rows. +Syntax for the `assign` function. ``` -```{code-cell} ipython3 -region_lang.dtypes -``` ++++ -```{code-cell} ipython3 -region_lang_int32 = region_lang.iloc[:, 3:].apply(lambda col: col.astype('int32'), axis=0) -region_lang_int32 = pd.concat((region_lang.iloc[:, :3], region_lang_int32), axis=1) -region_lang_int32 -``` ```{code-cell} ipython3 -region_lang_int32.dtypes +official_langs_numeric.info() ``` -We see that we get back a data frame -with the same number of columns and rows. 
-The only thing that changes is the transformation we applied -to the specified columns (here `mother_tongue` to `lang_known`). +Now we see that the `most_at_home` and `most_at_work` columns are both `int64` (which is a numeric data type)! +Note that we were careful here and created a new data frame object `official_langs_numeric`. Since `assign` has +the power to overwrite the entries of a column, it is a good idea to create a new data frame object so that if +you make a mistake, you can start again from the original data frame. +++ -## Apply functions across columns within one row with `.apply` - -What if you want to apply a function across columns but within one row? -We illustrate such a data transformation in {numref}`fig:rowwise`. - -+++ {"tags": []} - -```{figure} img/summarize/summarize.004.jpeg -:name: fig:rowwise -:figclass: caption-hack - -`.apply` is useful for applying functions across columns within one row. The darker, top row of each table represents the column headers. -``` - -+++ -For instance, suppose we want to know the maximum value between `mother_tongue`, -`most_at_home`, `most_at_work` -and `lang_known` for each language and region -in the `region_lang` data set. -In other words, we want to apply the `max` function *row-wise.* -Before we use `.apply`, we will again use `.iloc` to select only the count columns -so we can see all the columns in the data frame's output easily in the book. -So for this demonstration, the data set we are operating on looks like this: +### Using `assign` to create a new data frame ```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# For instance, suppose we want to know the maximum value between `mother_tongue`, -# `most_at_home`, `most_at_work` -# and `lang_known` for each language and region -# in the `region_lang` data set. 
-# In other words, we want to apply the `max` function *row-wise.* -# We will use the (aptly named) `rowwise` function in combination with `mutate` -# to accomplish this task. - -# Before we apply `rowwise`, we will `select` only the count columns \index{rowwise} -# so we can see all the columns in the data frame's output easily in the book. -# So for this demonstration, the data set we are operating on looks like this: -``` +:tags: [remove-cell] -```{code-cell} ipython3 -region_lang.iloc[:, 3:] +english_lang = region_lang[region_lang["language"] == "English"] +five_cities = ["Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton"] +english_lang = english_lang[english_lang["region"].isin(five_cities)] +english_lang ``` -Now we use `.apply` with argument `axis=1`, to tell Python that we would like -the `max` function to be applied across, and within, a row, -as opposed to being applied on a column -(which is the default behavior of `.apply`): - +Sometimes you want to create a new data frame. You can use `assign` to create a data frame from scratch. +Lets return to the example of wanting to compute the proportions of people who speak English +most at home in Toronto, Montréal, Vancouver, Calgary, Edmonton. Before adding new columns, we filtered +our `region_lang` to create the `english_lang` data frame containing only English speakers in the five cities +of interest. ```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# Now we apply `rowwise` before `mutate`, to tell R that we would like -# the mutate function to be applied across, and within, a row, -# as opposed to being applied on a column -# (which is the default behavior of `mutate`): +:tags: ["output_scroll"] +english_lang ``` +We then wanted to add the populations of these cities as a column using `assign` +(Toronto: 5928040, Montréal: 4098927, Vancouver: 2463431, +Calgary: 1392609, and Edmonton: 1321426). 
We had to be careful to add those populations in the +right order, and it could be easy to make a mistake this way. An alternative approach, which we demonstrate here, +is to (1) create a new, empty data frame, (2) use `assign` to assign the city names and populations in that +data frame, and (3) use `merge` to combine the two data frames, recognizing that the "regions" are the same. +We create a new, empty data frame by calling `pd.DataFrame` with no arguments. +We then use `assign` to add the city names in a column called `"region"` +and their populations in a column called `"population"`. ```{code-cell} ipython3 -region_lang_rowwise = region_lang.assign( - maximum=region_lang.iloc[:, 3:].apply(max, axis=1) +city_populations = pd.DataFrame().assign( + region=["Toronto", "Montréal", "Vancouver", "Calgary", "Edmonton"], + population=[5928040, 4098927, 2463431, 1392609, 1321426] ) - -region_lang_rowwise +city_populations ``` - -We see that we get an additional column added to the data frame, -named `maximum`, which is the maximum value between `mother_tongue`, -`most_at_home`, `most_at_work` and `lang_known` for each language -and region. - +This new data frame has the same `region` column as the `english_lang` data frame. The order of +the cities is different, but that is okay! We can use the `merge` function in `pandas` to say +we would like to combine the two data frames by matching the `region` between them. The argument +`on="region"` tells pandas we would like to use the `region` column to match up the entries.
-# Notice if we used `mutate` without `rowwise`, -# we would have computed the maximum value across *all* rows -# rather than the maximum value for *each* row. -# Below we show what would have happened had we not used -# `rowwise`. In particular, the same maximum value is reported -# in every single row; this code does not provide the desired result. - -# ```{r} -# region_lang |> -# select(mother_tongue:lang_known) |> -# mutate(maximum = max(c(mother_tongue, -# most_at_home, -# most_at_home, -# lang_known))) -# ``` ``` +:tags: ["output_scroll"] +english_lang = english_lang.merge(city_populations, on="region") +english_lang ``` +You can see that the populations for each city are correct (e.g. Montréal: 4098927, Toronto: 5928040), +and we could proceed with our analysis from here. ## Summary -Cleaning and wrangling data can be a very time-consuming process. However, +Cleaning and wrangling data can be a very time-consuming process. However, it is a critical step in any data analysis. We have explored many different -functions for cleaning and wrangling data into a tidy format. -{numref}`tab:summary-functions-table` summarizes some of the key wrangling -functions we learned in this chapter. In the following chapters, you will -learn how you can take this tidy data and do so much more with it to answer your +functions for cleaning and wrangling data into a tidy format. +{numref}`tab:summary-functions-table` summarizes some of the key wrangling +functions we learned in this chapter. In the following chapters, you will +learn how you can take this tidy data and do so much more with it to answer your burning data science questions!
+++ -```{table} Summary of wrangling functions +```{table} Summary of wrangling functions :name: tab:summary-functions-table | Function | Description | -| --- | ----------- | -| `.agg` | calculates aggregated summaries of inputs | -| `.apply` | allows you to apply function(s) to multiple columns/rows | -| `.assign` | adds or modifies columns in a data frame | -| `.groupby` | allows you to apply function(s) to groups of rows | -| `.iloc` | subsets columns/rows of a data frame using integer indices | -| `.loc` | subsets columns/rows of a data frame using labels | -| `.melt` | generally makes the data frame longer and narrower | -| `.pivot` | generally makes a data frame wider and decreases the number of rows | -| `.str.split` | splits up a string column into multiple columns | -``` - -```{code-cell} ipython3 ---- -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# ## Summary - -# Cleaning and wrangling data can be a very time-consuming process. However, -# it is a critical step in any data analysis. We have explored many different -# functions for cleaning and wrangling data into a tidy format. -# Table \@ref(tab:summary-functions-table) summarizes some of the key wrangling -# functions we learned in this chapter. In the following chapters, you will -# learn how you can take this tidy data and do so much more with it to answer your -# burning data science questions! 
- -# \newpage - -# Table: (#tab:summary-functions-table) Summary of wrangling functions - -# | Function | Description | -# | --- | ----------- | -# | `across` | allows you to apply function(s) to multiple columns | -# | `filter` | subsets rows of a data frame | -# | `group_by` | allows you to apply function(s) to groups of rows | -# | `mutate` | adds or modifies columns in a data frame | -# | `map` | general iteration function | -# | `pivot_longer` | generally makes the data frame longer and narrower | -# | `pivot_wider` | generally makes a data frame wider and decreases the number of rows | -# | `rowwise` | applies functions across columns within one row | -# | `separate` | splits up a character column into multiple columns | -# | `select` | subsets columns of a data frame | -# | `summarize` | calculates summaries of inputs | +| --- | ----------- | +| `agg` | calculates aggregated summaries of inputs | +| `apply` | allows you to apply function(s) to multiple columns/rows | +| `assign` | adds or modifies columns in a data frame | +| `groupby` | allows you to apply function(s) to groups of rows | +| `iloc` | subsets columns/rows of a data frame using integer indices | +| `loc` | subsets columns/rows of a data frame using labels | +| `melt` | generally makes the data frame longer and narrower | +| `merge` | combines two data frames | +| `pivot` | generally makes a data frame wider and decreases the number of rows | +| `str.split` | splits up a string column into multiple columns | ``` ## Exercises -Practice exercises for the material covered in this chapter -can be found in the accompanying -[worksheets repository](https://github.com/UBC-DSCI/data-science-a-first-intro-worksheets#readme) +Practice exercises for the material covered in this chapter +can be found in the accompanying +[worksheets repository](https://github.com/UBC-DSCI/data-science-a-first-intro-python-worksheets#readme) in the "Cleaning and wrangling data" row.
You can launch an interactive version of the worksheet in your browser by clicking the "launch binder" button. You can also preview a non-interactive version of the worksheet by clicking "view worksheet." If you instead decide to download the worksheet and run it on your own machine, make sure to follow the instructions for computer setup -found in Chapter {ref}`move-to-your-own-machine`. This will ensure that the automated feedback +found in the chapter on {ref}`move-to-your-own-machine`. This will ensure that the automated feedback and guidance that the worksheets provide will function as intended. +++ {"tags": []} -## Additional resources +## Additional resources - The [`pandas` package documentation](https://pandas.pydata.org/docs/reference/index.html) is another resource to learn more about the functions in this @@ -2417,58 +1813,15 @@ and guidance that the worksheets provide will function as intended. - *Python for Data Analysis* {cite:p}`mckinney2012python` has a few chapters related to data wrangling that go into more depth than this book. For example, the [data wrangling chapter](https://wesmckinney.com/book/data-wrangling.html) covers tidy data, - `.melt` and `.pivot`, but also covers missing values - and additional wrangling functions (like `.stack`). The [data + `melt` and `pivot`, but also covers missing values + and additional wrangling functions (like `stack`). The [data aggregation chapter](https://wesmckinney.com/book/data-aggregation.html) covers - `.groupby`, aggregating functions, `.apply`, etc. + `groupby`, aggregating functions, `apply`, etc. - You will occasionally encounter a case where you need to iterate over items in a data frame, but none of the above functions are flexible enough to do what you want. In that case, you may consider using [a for loop](https://wesmckinney.com/book/python-basics.html#control_for) {cite:p}`mckinney2012python`. 
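To make the iteration pattern in that last bullet concrete, here is a minimal sketch of a row-wise `for` loop using `itertuples`; the tiny data frame and its numbers are invented purely for illustration:

```python
import pandas as pd

# A small, invented data frame purely for illustration.
counts = pd.DataFrame({
    "language": ["English", "French", "Mandarin"],
    "most_at_home": [3836770, 620510, 594030],
})

# itertuples yields one named tuple per row; this loop totals a column,
# which you would normally do with counts["most_at_home"].sum() instead.
total = 0
for row in counts.itertuples():
    total += row.most_at_home

print(total)
```

That said, reach for a loop like this only when no vectorized `pandas` operation fits; built-in aggregations such as `counts["most_at_home"].sum()` are both faster and less error-prone.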
-```{code-cell} ipython3 ---- -jp-MarkdownHeadingCollapsed: true -jupyter: - source_hidden: true -tags: [remove-cell] ---- -# ## Additional resources - -# - As we mentioned earlier, `tidyverse` is actually an *R -# meta package*: it installs and loads a collection of R packages that all -# follow the tidy data philosophy we discussed above. One of the `tidyverse` -# packages is `dplyr`—a data wrangling workhorse. You have already met many -# of `dplyr`'s functions -# (`select`, `filter`, `mutate`, `arrange`, `summarize`, and `group_by`). -# To learn more about these functions and meet a few more useful -# functions, we recommend you check out Chapters 5-9 of the [STAT545 online notes](https://stat545.com/). -# of the data wrangling, exploration, and analysis with R book. -# - The [`dplyr` R package documentation](https://dplyr.tidyverse.org/) [@dplyr] is -# another resource to learn more about the functions in this -# chapter, the full set of arguments you can use, and other related functions. -# The site also provides a very nice cheat sheet that summarizes many of the -# data wrangling functions from this chapter. -# - Check out the [`tidyselect` R package page](https://tidyselect.r-lib.org/index.html) -# [@tidyselect] for a comprehensive list of `select` helpers. -# These helpers can be used to choose columns in a data frame when paired with the `select` function -# (and other functions that use the `tidyselect` syntax, such as `pivot_longer`). -# The [documentation for `select` helpers](https://tidyselect.r-lib.org/reference/select_helpers.html) -# is a useful reference to find the helper you need for your particular problem. -# - *R for Data Science* [@wickham2016r] has a few chapters related to -# data wrangling that go into more depth than this book. 
For example, the -# [tidy data chapter](https://r4ds.had.co.nz/tidy-data.html) covers tidy data, -# `pivot_longer`/`pivot_wider` and `separate`, but also covers missing values -# and additional wrangling functions (like `unite`). The [data -# transformation chapter](https://r4ds.had.co.nz/transform.html) covers -# `select`, `filter`, `arrange`, `mutate`, and `summarize`. And the [`map` -# functions chapter](https://r4ds.had.co.nz/iteration.html#the-map-functions) -# provides more about the `map` functions. -# - You will occasionally encounter a case where you need to iterate over items -# in a data frame, but none of the above functions are flexible enough to do -# what you want. In that case, you may consider using [a for -# loop](https://r4ds.had.co.nz/iteration.html#iteration). -``` ## References @@ -2476,4 +1829,4 @@ tags: [remove-cell] ```{bibliography} :filter: docname in docnames -``` \ No newline at end of file +```