diff --git a/examples/gaussian_processes/GP-Births.ipynb b/examples/gaussian_processes/GP-Births.ipynb index fcd2966c4..8b548e733 100644 --- a/examples/gaussian_processes/GP-Births.ipynb +++ b/examples/gaussian_processes/GP-Births.ipynb @@ -997,7 +997,7 @@ "id": "0e8f5c41", "metadata": {}, "source": [ - "The motivation is that we have around $7.3$K data points and we want to consider the in between data points distance in the normalized (log) scale. That is why we consider the ratio `7_000 / time_str`. Note that we want to capture the long term trend, so we want to consider a length scale that is larger than the data points distance. We increase the order of magnitude by dividing by $10$. Finally, as we are setting the prior on the normalized log-scale (because that's what the GP is seeing) we take a log-transform." + "The motivation is that we have around $7.3$K data points and we want to consider the distance between data points on the normalized scale. That is why we consider the ratio `7_000 / time_str`. Note that we want to capture the long-term trend, so we want to consider a length scale that is larger than the distance between data points. We increase the order of magnitude by dividing by $10$. Finally, since a {class}`~pymc.distributions.continuous.LogNormal` distribution has positive support and is a common choice for length scales, we take a log-transform of the resulting quantity `700 / time_str` to ensure the median of the prior is close to this value." ] }, { diff --git a/examples/gaussian_processes/GP-Births.myst.md b/examples/gaussian_processes/GP-Births.myst.md index 0b6ed7ef1..32490daa0 100644 --- a/examples/gaussian_processes/GP-Births.myst.md +++ b/examples/gaussian_processes/GP-Births.myst.md @@ -347,7 +347,7 @@ ax.set_title( ); ``` -The motivation is that we have around $7.3$K data points and we want to consider the in between data points distance in the normalized (log) scale. That is why we consider the ratio `7_000 / time_str`. 
Note that we want to capture the long term trend, so we want to consider a length scale that is larger than the data points distance. We increase the order of magnitude by dividing by $10$. Finally, as we are setting the prior on the normalized log-scale (because that's what the GP is seeing) we take a log-transform. +The motivation is that we have around $7.3$K data points and we want to consider the distance between data points on the normalized scale. That is why we consider the ratio `7_000 / time_str`. Note that we want to capture the long-term trend, so we want to consider a length scale that is larger than the distance between data points. We increase the order of magnitude by dividing by $10$. Finally, since a {class}`~pymc.distributions.continuous.LogNormal` distribution has positive support and is a common choice for length scales, we take a log-transform of the resulting quantity `700 / time_str` to ensure the median of the prior is close to this value. +++
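As a reviewer's note, the prior construction described in the changed text can be sketched numerically. This is a minimal sketch only: the `time_str` value below is a placeholder (in the notebook it is the scale used to normalize the time axis), and the LogNormal sigma is an arbitrary illustrative choice. The point is that setting the LogNormal's location parameter to `log(700 / time_str)` places the prior's median at `700 / time_str` on the normalized scale.

```python
import numpy as np

# Placeholder for the normalization scale of the time axis (assumption for
# illustration; the notebook derives it from the ~7.3K daily observations).
time_str = 2_000.0

# Target length scale on the normalized axis: ~7_000 points, with the order
# of magnitude increased by dividing by 10, i.e. 700 / time_str.
target = 7_000 / time_str / 10

# Log-transform the target to get the LogNormal location parameter; the
# prior median is then exp(mu) == target.
mu = np.log(target)
rng = np.random.default_rng(0)
samples = np.exp(rng.normal(loc=mu, scale=0.5, size=10_000))
print(np.median(samples))  # median of the samples is close to 700 / time_str
```

Because a LogNormal variable is strictly positive, this keeps all prior mass on valid length-scale values while centering it (in the median sense) on the intended magnitude.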