Commit 8a68a5c

minor text improvements (#7361)
1 parent 5e82394 commit 8a68a5c


docs/source/learn/core_notebooks/pymc_overview.ipynb

Lines changed: 5 additions & 5 deletions
@@ -71,7 +71,7 @@
 "\n",
 "### Generating data\n",
 "\n",
-"We can simulate some artificial data from this model using only NumPy's {mod}`~numpy.random` module, and then use PyMC to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond the PyMC model structure."
+"We can simulate some artificial data from this model using only NumPy's {mod}`~numpy.random` module, and then use PyMC to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond to the PyMC model structure."
 ]
 },
 {
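The hunk above concerns simulating data with NumPy's random module and later recovering the parameters with PyMC. A minimal sketch of what such a generator might look like; the linear-regression form, parameter values, and seed here are illustrative assumptions, not quoted from the notebook:

```python
import numpy as np

# Illustrative data generator (assumed model: Y = alpha + beta1*X1 + beta2*X2 + noise)
rng = np.random.default_rng(8927)
size = 100

# "True" parameter values that a PyMC model would later try to recover
alpha, sigma = 1.0, 1.0
beta = [1.0, 2.5]

# Predictor variables
X1 = rng.normal(size=size)
X2 = rng.normal(size=size) * 0.2

# Outcome generated to mirror the model structure exactly
Y = alpha + beta[0] * X1 + beta[1] * X2 + rng.normal(size=size) * sigma
```

Generating the data this way makes the inference problem well-specified: the fitted posterior can be checked against the known values of `alpha`, `beta`, and `sigma`.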
@@ -124,7 +124,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Here is what the simulated data look like. We use the `pylab` module from the plotting library matplotlib. "
+"Here is what the simulated data look like. We use the plotting library matplotlib to visualize the data. "
 ]
 },
 {
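The edited line above refers to plotting the simulated data with matplotlib. A small sketch of such a visualization, with the data regenerated inline; the panel layout and variable names are assumptions for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np

# Regenerate illustrative data (assumed model, as above)
rng = np.random.default_rng(8927)
X1 = rng.normal(size=100)
X2 = rng.normal(size=100) * 0.2
Y = 1.0 + X1 + 2.5 * X2 + rng.normal(size=100)

# One panel per predictor, outcome on the shared y-axis
fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
axes[0].scatter(X1, Y, alpha=0.6)
axes[0].set(xlabel="X1", ylabel="Y")
axes[1].scatter(X2, Y, alpha=0.6)
axes[1].set(xlabel="X2")
fig.savefig("simulated_data.png")
```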
@@ -2957,7 +2957,7 @@
 "\n",
 "You may have heard of regularization from machine learning or classical statistics applications, where methods like the lasso or ridge regression shrink parameters towards zero by applying a penalty to the size of the regression parameters. In a Bayesian context, we apply an appropriate prior distribution to the regression coefficients. One such prior is the *hierarchical regularized horseshoe*, which uses two regularization strategies, one global and a set of local parameters, one for each coefficient. The key to making this work is by selecting a long-tailed distribution as the shrinkage priors, which allows some to be nonzero, while pushing the rest towards zero.\n",
 "\n",
-"The horeshoe prior for each regression coefficient $\\beta_i$ looks like this:\n",
+"The horseshoe prior for each regression coefficient $\\beta_i$ looks like this:\n",
 "\n",
 "$$\\beta_i \\sim N\\left(0, \\tau^2 \\cdot \\tilde{\\lambda}_i^2\\right)$$\n",
 "\n",
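The hunk above describes the regularized horseshoe prior $\beta_i \sim N(0, \tau^2 \cdot \tilde{\lambda}_i^2)$ with one global and many local shrinkage parameters. A forward simulation of draws from that prior, using only NumPy; the half-Cauchy scales and the slab width `c2` are illustrative assumptions, not values from the notebook:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 1000  # number of regression coefficients

# Global shrinkage tau and long-tailed local shrinkage lambda_i,
# both drawn here as |Cauchy| (i.e. HalfCauchy(1)) for illustration
tau = np.abs(rng.standard_cauchy())
lam = np.abs(rng.standard_cauchy(size=D))
c2 = 4.0  # slab width, an assumed value

# Regularized local scales: lambda_tilde_i^2 = c^2 lam_i^2 / (c^2 + tau^2 lam_i^2)
lam_tilde2 = c2 * lam**2 / (c2 + tau**2 * lam**2)

# beta_i ~ N(0, tau^2 * lambda_tilde_i^2)
beta = rng.normal(0.0, tau * np.sqrt(lam_tilde2))
```

Because the local scales are long-tailed, a few `beta[i]` escape shrinkage and stay large while most are pulled toward zero, which is exactly the sparsity behaviour the prose describes.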
@@ -3980,7 +3980,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"If your logp can not be expressed in PyTensor, you can decorate the function with `as_op` as follows: `@as_op(itypes=[at.dscalar], otypes=[at.dscalar])`. Note, that this will create a blackbox Python function that will be much slower and not provide the gradients necessary for e.g. NUTS."
+"If your logp cannot be expressed in PyTensor, you can decorate the function with `as_op` as follows: `@as_op(itypes=[at.dscalar], otypes=[at.dscalar])`. Note, that this will create a blackbox Python function that will be much slower and not provide the gradients necessary for e.g. NUTS."
 ]
 },
 {
@@ -4079,7 +4079,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.9"
+"version": "3.11.9"
 },
 "toc": {
 "base_numbering": 1,

0 commit comments