Commit ebb7488 (1 parent: ccf03b3)

Fix RST bugs Luciano Paz caught.

5 files changed: +67 −67 lines changed

pymc3/backends/base.py

Lines changed: 1 addition & 1 deletion

@@ -390,7 +390,7 @@ def add_values(self, vals, overwrite=False) -> None:
 
         Returns
         -------
-        Nothing.
+        None.
         """
         for k, v in vals.items():
             new_var = 1

pymc3/data.py

Lines changed: 21 additions & 21 deletions

@@ -99,23 +99,23 @@ class Minibatch(tt.TensorVariable):
     ----------
     data : :class:`ndarray`
         initial data
-    batch_size : `int` or `List[int|tuple(size, random_seed)]`
+    batch_size : ``int`` or ``List[int|tuple(size, random_seed)]``
         batch size for inference, random seed is needed
         for child random generators
-    dtype : `str`
+    dtype : ``str``
         cast data to specific type
     broadcastable : tuple[bool]
-        change broadcastable pattern that defaults to `(False, ) * ndim`
-    name : `str`
+        change broadcastable pattern that defaults to ``(False, ) * ndim``
+    name : ``str``
         name for tensor, defaults to "Minibatch"
-    random_seed : `int`
+    random_seed : ``int``
         random seed that is used by default
-    update_shared_f : `callable`
+    update_shared_f : ``callable``
         returns :class:`ndarray` that will be carefully
         stored to underlying shared variable
         you can use it to change source of
         minibatches programmatically
-    in_memory_size : `int` or `List[int|slice|Ellipsis]`
+    in_memory_size : ``int`` or ``List[int|slice|Ellipsis]``
         data size for storing in theano.shared
 
     Attributes

@@ -130,7 +130,7 @@ class Minibatch(tt.TensorVariable):
     Below is a common use case of Minibatch within the variational inference.
     Importantly, we need to make PyMC3 "aware" of minibatch being used in inference.
     Otherwise, we will get the wrong :math:`logp` for the model.
-    To do so, we need to pass the `total_size` parameter to the observed node, which correctly scales
+    To do so, we need to pass the ``total_size`` parameter to the observed node, which correctly scales
     the density of the model logp that is affected by Minibatch. See more in examples below.
 
     Examples

@@ -141,16 +141,16 @@ class Minibatch(tt.TensorVariable):
     if we want 1d slice of size 10 we do
     >>> x = Minibatch(data, batch_size=10)
 
-    Note that your data is cast to `floatX` if it is not integer type
-    But you still can add the `dtype` kwarg for :class:`Minibatch`
+    Note that your data is cast to ``floatX`` if it is not integer type
+    But you still can add the ``dtype`` kwarg for :class:`Minibatch`
 
     in case we want 10 sampled rows and columns
-    `[(size, seed), (size, seed)]` it is
+    ``[(size, seed), (size, seed)]`` it is
     >>> x = Minibatch(data, batch_size=[(10, 42), (10, 42)], dtype='int32')
     >>> assert str(x.dtype) == 'int32'
 
     or simpler with default random seed = 42
-    `[size, size]`
+    ``[size, size]``
     >>> x = Minibatch(data, batch_size=[10, 10])
 
     x is a regular :class:`TensorVariable` that supports any math

@@ -166,17 +166,17 @@ class Minibatch(tt.TensorVariable):
     >>> with model:
     ...     approx = pm.fit()
 
-    Notable thing is that :class:`Minibatch` has `shared`, `minibatch`, attributes
+    Notable thing is that :class:`Minibatch` has ``shared``, ``minibatch``, attributes
     you can call later
     >>> x.set_value(np.random.laplace(size=(100, 100)))
 
     and minibatches will be then from new storage
-    it directly affects `x.shared`.
+    it directly affects ``x.shared``.
     the same thing would be but less convenient
     >>> x.shared.set_value(pm.floatX(np.random.laplace(size=(100, 100))))
 
     programmatic way to change storage is as follows
-    I import `partial` for simplicity
+    I import ``partial`` for simplicity
     >>> from functools import partial
     >>> datagen = partial(np.random.laplace, size=(100, 100))
     >>> x = Minibatch(datagen(), batch_size=10, update_shared_f=datagen)

@@ -197,7 +197,7 @@ class Minibatch(tt.TensorVariable):
     for shared variable. Feel free to use that if needed.
 
     Suppose you need some replacements in the graph, e.g. change minibatch to testdata
-    >>> node = x ** 2  # arbitrary expressions on minibatch `x`
+    >>> node = x ** 2  # arbitrary expressions on minibatch ``x``
     >>> testdata = pm.floatX(np.random.laplace(size=(1000, 10)))
 
     Then you should create a dict with replacements

@@ -214,20 +214,20 @@ class Minibatch(tt.TensorVariable):
     For more complex slices some more code is needed that can seem not so clear
     >>> moredata = np.random.rand(10, 20, 30, 40, 50)
 
-    default `total_size` that can be passed to `PyMC3` random node
-    is then `(10, 20, 30, 40, 50)` but can be less verbose in some cases
+    default ``total_size`` that can be passed to ``PyMC3`` random node
+    is then ``(10, 20, 30, 40, 50)`` but can be less verbose in some cases
 
-    1) Advanced indexing, `total_size = (10, Ellipsis, 50)`
+    1) Advanced indexing, ``total_size = (10, Ellipsis, 50)``
     >>> x = Minibatch(moredata, [2, Ellipsis, 10])
 
     We take slice only for the first and last dimension
     >>> assert x.eval().shape == (2, 20, 30, 40, 10)
 
-    2) Skipping particular dimension, `total_size = (10, None, 30)`
+    2) Skipping particular dimension, ``total_size = (10, None, 30)``
     >>> x = Minibatch(moredata, [2, None, 20])
     >>> assert x.eval().shape == (2, 20, 20, 40, 50)
 
-    3) Mixing that all, `total_size = (10, None, 30, Ellipsis, 50)`
+    3) Mixing that all, ``total_size = (10, None, 30, Ellipsis, 50)``
     >>> x = Minibatch(moredata, [2, None, 20, Ellipsis, 10])
     >>> assert x.eval().shape == (2, 20, 20, 40, 10)
     """

pymc3/distributions/continuous.py

Lines changed: 6 additions & 6 deletions

@@ -1550,7 +1550,7 @@ def logcdf(self, value):
     References
     ----------
     .. [Machler2012] Martin Mächler (2012).
-        "Accurately computing :math: `\log(1-\exp(-\mid a \mid))` Assessed by the Rmpfr
+        "Accurately computing :math:`\log(1-\exp(-\mid a \mid))` Assessed by the Rmpfr
         package"
 
     Parameters

@@ -1762,7 +1762,7 @@ class Lognormal(PositiveContinuous):
 
     .. code-block:: python
 
-        # Example to show that we pass in only `sigma` or `tau` but not both.
+        # Example to show that we pass in only ``sigma`` or ``tau`` but not both.
         with pm.Model():
             x = pm.Lognormal('x', mu=2, sigma=30)

@@ -1913,7 +1913,7 @@ class StudentT(Continuous):
         plt.show()
 
     ======== ========================
-    Support  :math:`x \in \mathbb{R}`
+    Support  :math:``x \in \mathbb{R}``
     ======== ========================
 
     Parameters

@@ -3723,7 +3723,7 @@ class Gumbel(Continuous):
 
     ======== ==========================================
     Support   :math:`x \in \mathbb{R}`
-    Mean      :math:`\mu + \beta\gamma`, where \gamma is the Euler-Mascheroni constant
+    Mean      :math:`\mu + \beta\gamma`, where :math:`\gamma` is the Euler-Mascheroni constant
     Variance  :math:`\frac{\pi^2}{6} \beta^2`
     ======== ==========================================

@@ -4213,7 +4213,7 @@ class Interpolated(BoundedContinuous):
     interpolated density is any way normalized to make the total probability
     equal to $1$.
 
-    Both parameters `x_points` and values `pdf_points` are not variables, but
+    Both parameters ``x_points`` and values ``pdf_points`` are not variables, but
     plain array-like objects, so they are constant and cannot be sampled.
 
     ======== ===========================================

@@ -4225,7 +4225,7 @@ class Interpolated(BoundedContinuous):
     x_points : array-like
         A monotonically growing list of values
     pdf_points : array-like
-        Probability density function evaluated on lattice `x_points`
+        Probability density function evaluated on lattice ``x_points``
     """
 
     def __init__(self, x_points, pdf_points, *args, **kwargs):
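The corrected Gumbel table can be checked numerically: a Gumbel(μ, β) distribution has mean μ + βγ, with γ the Euler-Mascheroni constant, and variance π²β²/6. A quick sanity check with NumPy (the parameter values here are arbitrary):

```python
import numpy as np

mu, beta = 2.0, 3.0
rng = np.random.default_rng(42)
samples = rng.gumbel(loc=mu, scale=beta, size=1_000_000)

# gamma, the Euler-Mascheroni constant, is available as np.euler_gamma
mean_theory = mu + beta * np.euler_gamma        # about 3.73
var_theory = (np.pi ** 2 / 6) * beta ** 2       # about 14.80

print(samples.mean(), mean_theory)
print(samples.var(), var_theory)
```

With a million draws, the sample moments land within a few hundredths of the theoretical values.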

pymc3/distributions/timeseries.py

Lines changed: 1 addition & 1 deletion

@@ -316,7 +316,7 @@ class EulerMaruyama(distribution.Continuous):
     sde_fn : callable
         function returning the drift and diffusion coefficients of SDE
     sde_pars : tuple
-        parameters of the SDE, passed as `*args` to `sde_fn`
+        parameters of the SDE, passed as ``*args`` to ``sde_fn``
     """
     def __init__(self, dt, sde_fn, sde_pars, *args, **kwds):
         super().__init__(*args, **kwds)
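The docstring above says ``sde_pars`` is unpacked as ``*args`` into ``sde_fn``. A minimal sketch of what such a callable might look like, using an invented Ornstein-Uhlenbeck-style SDE dX = -λX dt + σ dW (the function name matches the docstring; the parameters are illustrative, not PyMC3's):

```python
import numpy as np

def sde_fn(x, lam, sigma):
    """Drift and diffusion of a hypothetical OU-style SDE:
    dX = -lam * X * dt + sigma * dW."""
    return -lam * x, sigma

sde_pars = (0.5, 1.0)  # (lam, sigma), illustrative values
drift, diffusion = sde_fn(2.0, *sde_pars)  # tuple unpacked as *args

# One standard Euler-Maruyama update step, the scheme the class is named for
dt, rng = 0.1, np.random.default_rng(0)
x_next = 2.0 + drift * dt + diffusion * np.sqrt(dt) * rng.normal()
```

At x = 2.0 with λ = 0.5, the drift is -1.0 and the diffusion is 1.0.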
