From 47a58c5d42c038efea9d938b368b4f62e222db21 Mon Sep 17 00:00:00 2001 From: Baran Toppare Date: Sun, 24 Oct 2021 23:19:30 +0200 Subject: [PATCH] text change --- Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb | 2 +- Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb | 2 +- Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb b/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb index 3bc78f7f..5cfc576c 100644 --- a/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb +++ b/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb @@ -1130,7 +1130,7 @@ "\n", "> In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers \"Yes, I did cheat\" if the coin flip lands heads, and \"No, I did not cheat\", if the coin flip lands tails. This way, the interviewer does not know if a \"Yes\" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. \n", "\n", - "I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of liars. " + "I call this the Privacy Algorithm. 
One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC to dig through this noisy model, and find a posterior distribution for the true frequency of cheaters. " ] }, { diff --git a/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb b/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb index 3eae5093..35725ca3 100644 --- a/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb +++ b/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC3.ipynb @@ -1200,7 +1200,7 @@ "\n", "> In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers \"Yes, I did cheat\" if the coin flip lands heads, and \"No, I did not cheat\", if the coin flip lands tails. This way, the interviewer does not know if a \"Yes\" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. \n", "\n", - "I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. 
We can use PyMC3 to dig through this noisy model, and find a posterior distribution for the true frequency of liars. " + "I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use PyMC3 to dig through this noisy model, and find a posterior distribution for the true frequency of cheaters. " ] }, { diff --git a/Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb b/Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb index 6f04a856..1ebdcde8 100644 --- a/Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb +++ b/Chapter2_MorePyMC/Ch2_MorePyMC_TFP.ipynb @@ -2082,7 +2082,7 @@ "\n", "> In the interview process for each student, the student flips a coin, hidden from the interviewer. The student agrees to answer honestly if the coin comes up heads. Otherwise, if the coin comes up tails, the student (secretly) flips the coin again, and answers \"Yes, I did cheat\" if the coin flip lands heads, and \"No, I did not cheat\", if the coin flip lands tails. This way, the interviewer does not know if a \"Yes\" was the result of a guilty plea, or a Heads on a second coin toss. Thus privacy is preserved and the researchers receive honest answers. \n", "\n", - "I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. 
But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use TFP to dig through this noisy model, and find a posterior distribution for the true frequency of liars. " + "I call this the Privacy Algorithm. One could of course argue that the interviewers are still receiving false data since some *Yes*'s are not confessions but instead randomness, but an alternative perspective is that the researchers are discarding approximately half of their original dataset since half of the responses will be noise. But they have gained a systematic data generation process that can be modeled. Furthermore, they do not have to incorporate (perhaps somewhat naively) the possibility of deceitful answers. We can use TFP to dig through this noisy model, and find a posterior distribution for the true frequency of cheaters. " ] }, {
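The passage changed in this patch describes a randomized-response scheme: each student answers honestly on a first heads, and otherwise gives a forced answer decided by a second flip. As a sketch (not part of the notebooks being patched, and using plain Python rather than PyMC/PyMC3/TFP), the data-generating process and the resulting identity P(Yes) = 0.5 · p_cheat + 0.25 can be simulated like this; the function name and the true rate 0.3 are illustrative choices, not values from the source:

```python
import random

def privacy_algorithm(p_cheat, n, rng):
    """Simulate the Privacy Algorithm from the passage: the first coin
    decides whether to answer honestly; on tails, a second coin forces
    the answer ("Yes" on heads, "No" on tails)."""
    yes = 0
    for _ in range(n):
        cheated = rng.random() < p_cheat
        if rng.random() < 0.5:       # first flip heads: answer honestly
            yes += cheated
        elif rng.random() < 0.5:     # first flip tails, second heads: forced "Yes"
            yes += 1
    return yes / n

# Half the answers are honest, a quarter are forced "Yes", so
# P(Yes) = 0.5 * p_cheat + 0.25, which inverts to
# p_cheat = 2 * (P(Yes) - 0.25).
rng = random.Random(42)
p_yes = privacy_algorithm(0.3, 100_000, rng)
p_cheat_est = 2 * (p_yes - 0.25)
```

The notebooks instead place a prior on `p_cheat` and let the sampler invert this relationship, which also yields uncertainty around the point estimate; the algebraic inversion above is just the intuition behind why the noisy "Yes" counts still identify the cheating frequency.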