Commit f0fcc81

Author: chdamianos

Fixed log-loss latex formula in Chapter 5

1 parent e43a82a · commit f0fcc81

3 files changed: +3 −3 lines changed

3 files changed

+3
-3
lines changed

Diff for: Chapter5_LossFunctions/Ch5_LossFunctions_PyMC2.ipynb

+1 −1
@@ -54,7 +54,7 @@
 "Other popular loss functions include:\n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = \\mathbb{1}_{ \\hat{\\theta} \\neq \\theta }$ is the zero-one loss often used in machine learning classification algorithms.\n",
-"- $L( \\theta, \\hat{\\theta} ) = -\\hat{\\theta}\\log( \\theta ) - (1-\\hat{ \\theta})\\log( 1 - \\theta ), \\; \\; \\hat{\\theta} \\in {0,1}, \\; \\theta \\in [0,1]$, called the *log-loss*, also used in machine learning. \n",
+"- $L( \\theta, \\hat{\\theta} ) = -\\theta\\log( \\hat{\\theta} ) - (1-\\theta)\\log( 1 - \\hat{\\theta} ), \\; \\; \\theta \\in {0,1}, \\; \\hat{\\theta} \\in [0,1]$, called the *log-loss*, also used in machine learning. \n",
 "\n",
 "Historically, loss functions have been motivated from 1) mathematical convenience, and 2) they are robust to application, i.e., they are objective measures of loss. The first reason has really held back the full breadth of loss functions. With computers being agnostic to mathematical convenience, we are free to design our own loss functions, which we take full advantage of later in this Chapter.\n",
 "\n",

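For reference (not part of the commit), a minimal Python sketch of the two losses as they read after the fix. The function names and the eps clamp are illustrative, with theta the true value in {0, 1} and theta_hat the estimate in [0, 1]:

import numpy as np

def zero_one_loss(theta, theta_hat):
    # 1 if the estimate misses the true value, else 0
    return int(theta_hat != theta)

def log_loss(theta, theta_hat, eps=1e-12):
    # Corrected orientation: theta is the true binary value,
    # theta_hat is the probability estimate.
    theta_hat = np.clip(theta_hat, eps, 1.0 - eps)  # keep log() finite
    return -theta * np.log(theta_hat) - (1 - theta) * np.log(1 - theta_hat)

print(log_loss(1, 0.9))  # small loss for a confident, correct estimate
print(log_loss(1, 0.1))  # large loss for a confident, wrong estimate

The swap matters because, in the book's convention, plain θ is the true parameter and θ̂ is its estimate; the corrected line lets the binary true value weight the log of the probability estimate, which is the standard machine-learning form of the log-loss.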
Diff for: Chapter5_LossFunctions/Ch5_LossFunctions_PyMC3.ipynb

+1 −1
@@ -60,7 +60,7 @@
 "Other popular loss functions include:\n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = \\mathbb{1}_{ \\hat{\\theta} \\neq \\theta }$ is the zero-one loss often used in machine learning classification algorithms.\n",
-"- $L( \\theta, \\hat{\\theta} ) = -\\hat{\\theta}\\log( \\theta ) - (1-\\hat{ \\theta})\\log( 1 - \\theta ), \\; \\; \\hat{\\theta} \\in {0,1}, \\; \\theta \\in [0,1]$, called the *log-loss*, also used in machine learning. \n",
+"- $L( \\theta, \\hat{\\theta} ) = -\\theta\\log( \\hat{\\theta} ) - (1- \\theta)\\log( 1 - \\hat{\\theta} ), \\; \\; \\theta \\in {0,1}, \\; \\hat{\\theta} \\in [0,1]$, called the *log-loss*, also used in machine learning. \n",
 "\n",
 "Historically, loss functions have been motivated from 1) mathematical convenience, and 2) they are robust to application, i.e., they are objective measures of loss. The first reason has really held back the full breadth of loss functions. With computers being agnostic to mathematical convenience, we are free to design our own loss functions, which we take full advantage of later in this Chapter.\n",
 "\n",

Diff for: Chapter5_LossFunctions/Ch5_LossFunctions_TFP.ipynb

+1 −1
@@ -280,7 +280,7 @@
 "Other popular loss functions include:\n",
 "\n",
 "- $L( \\theta, \\hat{\\theta} ) = \\mathbb{1}_{ \\hat{\\theta} \\neq \\theta }$ is the zero-one loss often used in machine learning classification algorithms.\n",
-"- $L( \\theta, \\hat{\\theta} ) = -\\hat{\\theta}\\log( \\theta ) - (1-\\hat{ \\theta})\\log( 1 - \\theta ), \\; \\; \\hat{\\theta} \\in {0,1}, \\; \\theta \\in [0,1]$, called the *log-loss*, also used in machine learning. \n",
+"- $L( \\theta, \\hat{\\theta} ) = -\\theta\\log( \\hat{\\theta} ) - (1- \\theta)\\log( 1 - \\hat{\\theta} ), \\; \\; \\theta \\in {0,1}, \\; \\hat{\\theta} \\in [0,1]$, called the *log-loss*, also used in machine learning. \n",
 "\n",
 "Historically, loss functions have been motivated from 1) mathematical convenience, and 2) they are robust to application, i.e., they are objective measures of loss. The first reason has really held back the full breadth of loss functions. With computers being agnostic to mathematical convenience, we are free to design our own loss functions, which we take full advantage of later in this Chapter.\n",
 "\n",
