
Commit 6f82a26

Search-and-replace a bunch of strings to port to v4.

Parent: f033fb2


97 files changed: +2135 -2135 lines changed

examples/case_studies/BEST.ipynb (+7 -7)
@@ -16,7 +16,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Running on PyMC3 v3.11.0\n"
+"Running on PyMC v3.11.0\n"
 ]
 }
 ],
@@ -25,9 +25,9 @@
 "import matplotlib.pyplot as plt\n",
 "import numpy as np\n",
 "import pandas as pd\n",
-"import pymc3 as pm\n",
+"import pymc as pm\n",
 "\n",
-"print(f\"Running on PyMC3 v{pm.__version__}\")"
+"print(f\"Running on PyMC v{pm.__version__}\")"
 ]
 },
 {
@@ -222,7 +222,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Since PyMC3 parameterizes the Student-T in terms of precision, rather than standard deviation, we must transform the standard deviations before specifying our likelihoods."
+"Since PyMC parameterizes the Student-T in terms of precision, rather than standard deviation, we must transform the standard deviations before specifying our likelihoods."
 ]
 },
 {
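Note: for readers following the port, a minimal sketch of the precision transform this cell refers to, assuming `StudentT`'s `lam` (precision) parameterization with lambda = sigma**-2; the variable names and observed values here are illustrative, not the notebook's:

```python
import pymc as pm

with pm.Model():
    group1_mean = pm.Normal("group1_mean", mu=0, sigma=10)
    group1_std = pm.Uniform("group1_std", lower=0.1, upper=10)
    nu = pm.Exponential("nu_minus_one", 1 / 29.0) + 1

    # Transform standard deviation to precision: lambda = sigma**-2
    lambda_1 = group1_std**-2
    pm.StudentT("group1", nu=nu, mu=group1_mean, lam=lambda_1, observed=[5.0, 5.5, 6.1])
```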
@@ -323,7 +323,7 @@
 ],
 "source": [
 "with model:\n",
-" trace = pm.sample(2000, return_inferencedata=True)"
+" trace = pm.sample(2000)"
 ]
 },
 {
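Note: dropping `return_inferencedata=True` throughout this commit reflects a v4 behaviour change: `pm.sample` returns an `arviz.InferenceData` object by default, so the flag is redundant. A quick illustration, assuming PyMC >= 4:

```python
import arviz as az
import pymc as pm

with pm.Model():
    x = pm.Normal("x")
    # In v4, sampling returns InferenceData by default; no flag needed.
    trace = pm.sample(2000)

assert isinstance(trace, az.InferenceData)
```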
@@ -574,7 +574,7 @@
 "source": [
 "The original pymc2 implementation was written by Andrew Straw and can be found here: https://github.com/strawlab/best\n",
 "\n",
-"Ported to PyMC3 by [Thomas Wiecki](https://twitter.com/twiecki) (c) 2015, updated by Chris Fonnesbeck."
+"Ported to PyMC by [Thomas Wiecki](https://twitter.com/twiecki) (c) 2015, updated by Chris Fonnesbeck."
 ]
 },
 {
@@ -596,7 +596,7 @@
 "numpy : 1.19.2\n",
 "matplotlib: 3.3.2\n",
 "arviz : 0.11.2\n",
-"pymc3 : 3.11.0\n",
+"pymc : 3.11.0\n",
 "\n",
 "Watermark: 2.2.0\n",
 "\n"

examples/case_studies/LKJ.ipynb (+10 -10)
@@ -11,7 +11,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"While the [inverse-Wishart distribution](https://en.wikipedia.org/wiki/Inverse-Wishart_distribution) is the conjugate prior for the covariance matrix of a multivariate normal distribution, it is [not very well-suited](https://github.com/pymc-devs/pymc3/issues/538#issuecomment-94153586) to modern Bayesian computational methods. For this reason, the [LKJ prior](http://www.sciencedirect.com/science/article/pii/S0047259X09000876) is recommended when modeling the covariance matrix of a multivariate normal distribution.\n",
+"While the [inverse-Wishart distribution](https://en.wikipedia.org/wiki/Inverse-Wishart_distribution) is the conjugate prior for the covariance matrix of a multivariate normal distribution, it is [not very well-suited](https://github.com/pymc-devs/pymc/issues/538#issuecomment-94153586) to modern Bayesian computational methods. For this reason, the [LKJ prior](http://www.sciencedirect.com/science/article/pii/S0047259X09000876) is recommended when modeling the covariance matrix of a multivariate normal distribution.\n",
 "\n",
 "To illustrate modelling covariance with the LKJ distribution, we first generate a two-dimensional normally-distributed sample data set."
 ]
@@ -25,21 +25,21 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Running on PyMC3 v3.11.2\n"
+"Running on PyMC v3.11.2\n"
 ]
 }
 ],
 "source": [
 "import arviz as az\n",
 "import numpy as np\n",
-"import pymc3 as pm\n",
+"import pymc as pm\n",
 "import seaborn as sns\n",
 "\n",
 "from matplotlib import pyplot as plt\n",
 "from matplotlib.lines import Line2D\n",
 "from matplotlib.patches import Ellipse\n",
 "\n",
-"print(f\"Running on PyMC3 v{pm.__version__}\")"
+"print(f\"Running on PyMC v{pm.__version__}\")"
 ]
 },
 {
@@ -135,7 +135,7 @@
 "\n",
 "The LKJ distribution provides a prior on the correlation matrix, $\mathbf{C} = \textrm{Corr}(x_i, x_j)$, which, combined with priors on the standard deviations of each component, [induces](http://www3.stat.sinica.edu.tw/statistica/oldpdf/A10n416.pdf) a prior on the covariance matrix, $\Sigma$. Since inverting $\Sigma$ is numerically unstable and inefficient, it is computationally advantageous to use the [Cholesky decompositon](https://en.wikipedia.org/wiki/Cholesky_decomposition) of $\Sigma$, $\Sigma = \mathbf{L} \mathbf{L}^{\top}$, where $\mathbf{L}$ is a lower-triangular matrix. This decompositon allows computation of the term $(\mathbf{x} - \mu)^{\top} \Sigma^{-1} (\mathbf{x} - \mu)$ using back-substitution, which is more numerically stable and efficient than direct matrix inversion.\n",
 "\n",
-"PyMC3 supports LKJ priors for the Cholesky decomposition of the covariance matrix via the [LKJCholeskyCov](../api/distributions/multivariate.rst) distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\mathbf{x}$, and the PyMC3 distribution of the component standard deviations, respectively. It also has a hyperparamter `eta`, which controls the amount of correlation between components of $\mathbf{x}$. The LKJ distribution has the density $f(\mathbf{C}\ |\ \eta) \propto |\mathbf{C}|^{\eta - 1}$, so $\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\eta \to \infty$.\n",
+"PyMC supports LKJ priors for the Cholesky decomposition of the covariance matrix via the [LKJCholeskyCov](../api/distributions/multivariate.rst) distribution. This distribution has parameters `n` and `sd_dist`, which are the dimension of the observations, $\mathbf{x}$, and the PyMC distribution of the component standard deviations, respectively. It also has a hyperparamter `eta`, which controls the amount of correlation between components of $\mathbf{x}$. The LKJ distribution has the density $f(\mathbf{C}\ |\ \eta) \propto |\mathbf{C}|^{\eta - 1}$, so $\eta = 1$ leads to a uniform distribution on correlation matrices, while the magnitude of correlations between components decreases as $\eta \to \infty$.\n",
 "\n",
 "In this example, we model the standard deviations with $\textrm{Exponential}(1.0)$ priors, and the correlation matrix as $\mathbf{C} \sim \textrm{LKJ}(\eta = 2)$."
 ]
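Note: as context for the cell above, a sketch of the `LKJCholeskyCov` usage pattern it describes, assuming the `compute_corr=True` API that unpacks the Cholesky factor, correlations, and standard deviations; the data and hyperparameters are illustrative, not the notebook's:

```python
import numpy as np
import pymc as pm

# Illustrative 2-D data; the notebook generates its own sample.
data = np.random.default_rng(42).normal(size=(100, 2))

with pm.Model():
    # Exponential(1.0) priors on the component standard deviations; eta=2
    # mildly favours smaller correlations between components.
    chol, corr, stds = pm.LKJCholeskyCov(
        "chol", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0, shape=2), compute_corr=True
    )
    mu = pm.Normal("mu", mu=0.0, sigma=1.5, shape=2)
    # Parameterizing MvNormal by the Cholesky factor avoids inverting Sigma.
    pm.MvNormal("obs", mu=mu, chol=chol, observed=data)
```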
@@ -267,7 +267,7 @@
 "text": [
 "Auto-assigning NUTS sampler...\n",
 "Initializing NUTS using adapt_diag...\n",
-"WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.\n",
+"WARNING (aesara.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.\n",
 "Multiprocess sampling (4 chains in 4 jobs)\n",
 "NUTS: [μ, chol]\n"
 ]
@@ -544,7 +544,6 @@
 " trace = pm.sample(\n",
 " random_seed=RANDOM_SEED,\n",
 " init=\"adapt_diag\",\n",
-" return_inferencedata=True,\n",
 " idata_kwargs={\"dims\": {\"chol_stds\": [\"axis\"], \"chol_corr\": [\"axis\", \"axis_bis\"]}},\n",
 " )\n",
 "az.summary(trace, var_names=\"~chol\", round_to=2)"
@@ -761,14 +761,14 @@
 "Python version : 3.8.10\n",
 "IPython version : 7.25.0\n",
 "\n",
-"theano: 1.1.2\n",
+"aesara: 1.1.2\n",
 "xarray: 0.17.0\n",
 "\n",
 "matplotlib: 3.3.4\n",
 "arviz : 0.11.2\n",
 "seaborn : 0.11.1\n",
 "numpy : 1.21.0\n",
-"pymc3 : 3.11.2\n",
+"pymc : 3.11.2\n",
 "\n",
 "Watermark: 2.2.0\n",
 "\n"
@@ -777,7 +777,7 @@
 ],
 "source": [
 "%load_ext watermark\n",
-"%watermark -n -u -v -iv -w -p theano,xarray"
+"%watermark -n -u -v -iv -w -p aesara,xarray"
 ]
 }
 ],

examples/case_studies/bayesian_ab_testing.ipynb (+16 -16)
@@ -10,7 +10,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"Running on PyMC3 v3.11.2\n"
+"Running on PyMC v3.11.2\n"
 ]
 }
 ],
@@ -22,12 +22,12 @@
 "import matplotlib.pyplot as plt\n",
 "import numpy as np\n",
 "import pandas as pd\n",
-"import pymc3 as pm\n",
-"import pymc3.math as pmm\n",
+"import pymc as pm\n",
+"import pymc.math as pmm\n",
 "\n",
 "from scipy.stats import bernoulli, expon\n",
 "\n",
-"print(f\"Running on PyMC3 v{pm.__version__}\")"
+"print(f\"Running on PyMC v{pm.__version__}\")"
 ]
 },
 {
@@ -98,13 +98,13 @@
 "\n",
 "With this, we can sample from the joint posterior of $\theta_A, \theta_B$. \n",
 "\n",
-"You may have noticed that the Beta distribution is the conjugate prior for the Binomial, so we don't need MCMC sampling to estimate the posterior (the exact solution can be found in the VWO paper). We'll still demonstrate how sampling can be done with PyMC3 though, and doing this makes it easier to extend the model with different priors, dependency assumptions, etc.\n",
+"You may have noticed that the Beta distribution is the conjugate prior for the Binomial, so we don't need MCMC sampling to estimate the posterior (the exact solution can be found in the VWO paper). We'll still demonstrate how sampling can be done with PyMC though, and doing this makes it easier to extend the model with different priors, dependency assumptions, etc.\n",
 "\n",
 "Finally, remember that our outcome of interest is whether B is better than A. A common measure in practice for whether B is better than is the _relative uplift in conversion rates_, i.e. the percentage difference of $\theta_B$ over $\theta_A$:\n",
 "\n",
 "$$\mathrm{reluplift}_B = \theta_B / \theta_A - 1$$\n",
 "\n",
-"We'll implement this model setup in PyMC3 below."
+"We'll implement this model setup in PyMC below."
 ]
 },
 {
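Note: to make the model setup concrete before the class-based implementation the notebook builds, a self-contained sketch under illustrative priors and counts (not the notebook's `ConversionModelTwoVariant`):

```python
import pymc as pm

# Illustrative data: visitors and conversions per variant.
visitors = {"A": 1000, "B": 1000}
conversions = {"A": 100, "B": 120}

with pm.Model():
    theta_a = pm.Beta("theta_a", alpha=100, beta=100)
    theta_b = pm.Beta("theta_b", alpha=100, beta=100)
    pm.Binomial("conv_a", n=visitors["A"], p=theta_a, observed=conversions["A"])
    pm.Binomial("conv_b", n=visitors["B"], p=theta_b, observed=conversions["B"])
    # Relative uplift of B over A, as defined in the text.
    pm.Deterministic("reluplift_b", theta_b / theta_a - 1)
    trace = pm.sample(draws=5000, chains=2, cores=1)
```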
@@ -181,7 +181,7 @@
 "id": "8e1f6ca4",
 "metadata": {},
 "source": [
-"Note that we can pass in arbitrary values for the observed data in these prior predictive checks. PyMC3 will not use that data when sampling from the prior predictive distribution."
+"Note that we can pass in arbitrary values for the observed data in these prior predictive checks. PyMC will not use that data when sampling from the prior predictive distribution."
 ]
 },
 {
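Note: a small demonstration of the point above, assuming `pm.sample_prior_predictive`; the observed value is a placeholder and does not influence the prior draws:

```python
import pymc as pm

with pm.Model():
    p = pm.Beta("p", alpha=2, beta=2)
    # observed=5 is ignored by prior predictive sampling;
    # only the model structure matters here.
    pm.Binomial("obs", n=10, p=p, observed=5)
    prior = pm.sample_prior_predictive(samples=500)
```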
@@ -404,9 +404,9 @@
 " generated = generate_binomial_data(variants, true_rates, samples_per_variant)\n",
 " data = [BinomialData(**generated[v].to_dict()) for v in variants]\n",
 " with ConversionModelTwoVariant(priors=weak_prior).create_model(data):\n",
-" trace_weak = pm.sample(draws=5000, return_inferencedata=True, cores=1, chains=2)\n",
+" trace_weak = pm.sample(draws=5000, cores=1, chains=2)\n",
 " with ConversionModelTwoVariant(priors=strong_prior).create_model(data):\n",
-" trace_strong = pm.sample(draws=5000, return_inferencedata=True, cores=1, chains=2)\n",
+" trace_strong = pm.sample(draws=5000, cores=1, chains=2)\n",
 "\n",
 " true_rel_uplift = true_rates[1] / true_rates[0] - 1\n",
 "\n",
@@ -884,7 +884,7 @@
 " generated = generate_binomial_data(variants, true_rates, samples_per_variant)\n",
 " data = [BinomialData(**generated[v].to_dict()) for v in variants]\n",
 " with ConversionModel(priors).create_model(data=data, comparison_method=comparison_method):\n",
-" trace = pm.sample(draws=5000, return_inferencedata=True, chains=2, cores=1)\n",
+" trace = pm.sample(draws=5000, chains=2, cores=1)\n",
 "\n",
 " n_plots = len(variants)\n",
 " fig, axs = plt.subplots(nrows=n_plots, ncols=1, figsize=(3 * n_plots, 7), sharex=True)\n",
@@ -1439,7 +1439,7 @@
 " with RevenueModel(conversion_rate_prior, mean_purchase_prior).create_model(\n",
 " data, comparison_method\n",
 " ):\n",
-" trace = pm.sample(draws=5000, return_inferencedata=True, chains=2, cores=1)\n",
+" trace = pm.sample(draws=5000, chains=2, cores=1)\n",
 "\n",
 " n_plots = len(variants)\n",
 " fig, axs = plt.subplots(nrows=n_plots, ncols=1, figsize=(3 * n_plots, 7), sharex=True)\n",
@@ -1895,9 +1895,9 @@
 "* How do we plan the length and size of A/B tests using power analysis, if we're using Bayesian models to analyse the results?\n",
 "* Outside of the conversion rates (bernoulli random variables for each visitor), many value distributions in online software cannot be fit with nice densities like Normal, Gamma, etc. How do we model these?\n",
 "\n",
-"Various textbooks and online resources dive into these areas in more detail. [Doing Bayesian Data Analysis](http://doingbayesiandataanalysis.blogspot.com/) by John Kruschke is a great resource, and has been translated to PyMC3 here: https://github.com/JWarmenhoven/DBDA-python.\n",
+"Various textbooks and online resources dive into these areas in more detail. [Doing Bayesian Data Analysis](http://doingbayesiandataanalysis.blogspot.com/) by John Kruschke is a great resource, and has been translated to PyMC here: https://github.com/JWarmenhoven/DBDA-python.\n",
 "\n",
-"We also plan to create more PyMC3 tutorials on these topics, so stay tuned!\n",
+"We also plan to create more PyMC tutorials on these topics, so stay tuned!\n",
 "\n",
 "---\n",
 "\n",
@@ -1924,10 +1924,10 @@
 "Python version : 3.8.6\n",
 "IPython version : 7.23.1\n",
 "\n",
-"theano: 1.1.2\n",
+"aesara: 1.1.2\n",
 "xarray: 0.18.0\n",
 "\n",
-"pymc3 : 3.11.2\n",
+"pymc : 3.11.2\n",
 "arviz : 0.11.2\n",
 "matplotlib: 3.4.2\n",
 "pandas : 1.2.4\n",
@@ -1940,7 +1940,7 @@
 ],
 "source": [
 "%load_ext watermark\n",
-"%watermark -n -u -v -iv -w -p theano,xarray"
+"%watermark -n -u -v -iv -w -p aesara,xarray"
 ]
 }
 ],
