@@ -99,23 +99,23 @@ class Minibatch(tt.TensorVariable):
     ----------
     data : :class:`ndarray`
         initial data
-    batch_size : `int` or `List[int|tuple(size, random_seed)]`
+    batch_size : ``int`` or ``List[int|tuple(size, random_seed)]``
         batch size for inference; a random seed is needed
         for child random generators
-    dtype : `str`
+    dtype : ``str``
         cast data to a specific type
     broadcastable : tuple[bool]
-        change broadcastable pattern that defaults to `(False, ) * ndim`
-    name : `str`
+        change broadcastable pattern that defaults to ``(False, ) * ndim``
+    name : ``str``
         name for the tensor, defaults to "Minibatch"
-    random_seed : `int`
+    random_seed : ``int``
         random seed that is used by default
-    update_shared_f : `callable`
+    update_shared_f : ``callable``
         returns an :class:`ndarray` that will be carefully
         stored in the underlying shared variable;
         you can use it to change the source of
         minibatches programmatically
-    in_memory_size : `int` or `List[int|slice|Ellipsis]`
+    in_memory_size : ``int`` or ``List[int|slice|Ellipsis]``
         data size for storing in theano.shared

     Attributes
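For orientation, here is a minimal sketch of how these parameters combine; the array shape, seed, and dtype below are illustrative, not taken from the diff:

>>> import numpy as np
>>> data = np.random.rand(100, 100)     # initial data
>>> x = Minibatch(data, batch_size=10,  # 1d random slice of size 10
...               dtype='float64',      # explicit cast instead of floatX
...               random_seed=42)       # seed for the slicing generators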
@@ -130,7 +130,7 @@ class Minibatch(tt.TensorVariable):
     Below is a common use case of Minibatch within variational inference.
     Importantly, we need to make PyMC3 "aware" that a minibatch is being used in inference.
     Otherwise, we will get the wrong :math:`logp` for the model.
-    To do so, we need to pass the `total_size` parameter to the observed node, which correctly scales
+    To do so, we need to pass the ``total_size`` parameter to the observed node, which correctly scales
     the density of the model logp that is affected by Minibatch. See more in the examples below.

     Examples
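A sketch of the scaling just described, with illustrative prior choices (the docstring develops a fuller version of this example further down):

>>> x = Minibatch(np.random.rand(100, 100), batch_size=10)
>>> with pm.Model() as model:
...     mu = pm.Flat('mu')
...     sd = pm.HalfNormal('sd')
...     # total_size declares the full data size so the minibatch logp is rescaled
...     lik = pm.Normal('lik', mu, sd, observed=x, total_size=(100, 100))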
@@ -141,16 +141,16 @@ class Minibatch(tt.TensorVariable):
     if we want a 1d slice of size 10 we do
     >>> x = Minibatch(data, batch_size=10)

-    Note that your data is cast to `floatX` if it is not integer type
-    But you still can add the `dtype` kwarg for :class:`Minibatch`
+    Note that your data is cast to ``floatX`` if it is not of integer type,
+    but you can still pass the ``dtype`` kwarg to :class:`Minibatch`

     in case we want 10 sampled rows and columns, the spec is
-    `[(size, seed), (size, seed)]` it is
+    ``[(size, seed), (size, seed)]``
     >>> x = Minibatch(data, batch_size=[(10, 42), (10, 42)], dtype='int32')
     >>> assert str(x.dtype) == 'int32'

     or, more simply, with the default random seed of 42
-    `[size, size]`
+    ``[size, size]``
     >>> x = Minibatch(data, batch_size=[10, 10])

     x is a regular :class:`TensorVariable` that supports any math operations
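For instance, ordinary tensor math works directly on the minibatch (a trivial illustration):

>>> y = x.sum(0)           # column sums of the current slice
>>> z = tt.exp(x).mean()   # any Theano op applies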
@@ -166,17 +166,17 @@ class Minibatch(tt.TensorVariable):
     >>> with model:
     ...     approx = pm.fit()

-    Notable thing is that :class:`Minibatch` has `shared`, `minibatch`, attributes
+    Notably, :class:`Minibatch` has ``shared`` and ``minibatch`` attributes
     that you can use later
     >>> x.set_value(np.random.laplace(size=(100, 100)))

     and minibatches will then come from the new storage
-    it directly affects `x.shared`.
+    it directly affects ``x.shared``.
     the same thing can be done, less conveniently, with
     >>> x.shared.set_value(pm.floatX(np.random.laplace(size=(100, 100))))

     a programmatic way to change the storage is as follows
-    I import `partial` for simplicity
+    here I import ``partial`` for simplicity
     >>> from functools import partial
     >>> datagen = partial(np.random.laplace, size=(100, 100))
     >>> x = Minibatch(datagen(), batch_size=10, update_shared_f=datagen)
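Once constructed this way, fresh data can be pulled from ``datagen`` into the shared storage on demand; a sketch, assuming this PyMC3 version's ``update_shared`` helper:

>>> x.update_shared()   # calls update_shared_f and stores the result in x.shared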
@@ -197,7 +197,7 @@ class Minibatch(tt.TensorVariable):
     for the shared variable. Feel free to use that if needed.

     Suppose you need some replacements in the graph, e.g. to change the minibatch to test data
-    >>> node = x ** 2  # arbitrary expressions on minibatch `x`
+    >>> node = x ** 2  # arbitrary expressions on minibatch ``x``
     >>> testdata = pm.floatX(np.random.laplace(size=(1000, 10)))

     Then you should create a dict with replacements
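One way the replacement dict and the subsequent cloning might look, assuming Theano's ``clone`` is used for the substitution:

>>> replacements = {x: testdata}
>>> rnode = theano.clone(node, replacements)
>>> assert (testdata ** 2 == rnode.eval()).all()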
@@ -214,20 +214,20 @@ class Minibatch(tt.TensorVariable):
     For more complex slices, some extra code is needed and it can seem less clear
     >>> moredata = np.random.rand(10, 20, 30, 40, 50)

-    default `total_size` that can be passed to `PyMC3` random node
-    is then `(10, 20, 30, 40, 50)` but can be less verbose in some cases
+    the default ``total_size`` that can be passed to a ``PyMC3`` random node
+    is then ``(10, 20, 30, 40, 50)``, but it can be less verbose in some cases

-    1) Advanced indexing, `total_size = (10, Ellipsis, 50)`
+    1) Advanced indexing, ``total_size = (10, Ellipsis, 50)``
     >>> x = Minibatch(moredata, [2, Ellipsis, 10])

     We take a slice only along the first and last dimensions
     >>> assert x.eval().shape == (2, 20, 30, 40, 10)

-    2) Skipping particular dimension, `total_size = (10, None, 30)`
+    2) Skipping a particular dimension, ``total_size = (10, None, 30)``
     >>> x = Minibatch(moredata, [2, None, 20])
     >>> assert x.eval().shape == (2, 20, 20, 40, 50)

-    3) Mixing that all, `total_size = (10, None, 30, Ellipsis, 50)`
+    3) Mixing all of the above, ``total_size = (10, None, 30, Ellipsis, 50)``
     >>> x = Minibatch(moredata, [2, None, 20, Ellipsis, 10])
     >>> assert x.eval().shape == (2, 20, 20, 40, 10)
     """