.. _libdoc_tensor_conv:

==========================================================
:mod:`conv` -- Ops for convolutional neural nets
==========================================================

.. module:: conv
   :platform: Unix, Windows
   :synopsis: ops for signal processing
.. moduleauthor:: LISA


The recommended user interfaces are the following; a short usage
sketch follows the list:

- :func:`pytensor.tensor.conv.conv2d` for 2d convolution
- :func:`pytensor.tensor.conv.conv3d` for 3d convolution
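
A minimal usage sketch of the :func:`conv2d` interface (the shapes,
variable names, and random test data below are illustrative, not part
of the API):

.. code-block:: python

    import numpy as np

    import pytensor
    import pytensor.tensor as pt
    from pytensor.tensor.conv import conv2d

    # Symbolic 4d inputs: (batch, channels, rows, cols) for the images and
    # (output channels, input channels, kernel rows, kernel cols) for the filters.
    images = pt.tensor4("images")
    filters = pt.tensor4("filters")

    out = conv2d(images, filters, border_mode="valid")
    f = pytensor.function([images, filters], out)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(2, 3, 8, 8)).astype(pytensor.config.floatX)
    w = rng.normal(size=(4, 3, 3, 3)).astype(pytensor.config.floatX)
    print(f(x, w).shape)  # (2, 4, 6, 6): "valid" mode shrinks each side by k - 1

:func:`conv3d` follows the same pattern with 5d tensors
(``(batch, channels, depth, rows, cols)``).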

PyTensor will automatically use the fastest implementation in many cases.
On the CPU, the implementation is a GEMM-based one.

This auto-tuning has the inconvenience that the first call is much
slower, because each candidate implementation is tried and timed. If you
benchmark, it is therefore important to exclude the first call from your
timing.
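
For example, a hedged timing sketch (the compiled function and the data
are placeholders; only the warm-up pattern matters):

.. code-block:: python

    import timeit

    import numpy as np

    import pytensor
    import pytensor.tensor as pt
    from pytensor.tensor.conv import conv2d

    images, filters = pt.tensor4("images"), pt.tensor4("filters")
    f = pytensor.function([images, filters], conv2d(images, filters))

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 3, 32, 32)).astype(pytensor.config.floatX)
    w = rng.normal(size=(16, 3, 5, 5)).astype(pytensor.config.floatX)

    f(x, w)  # first call: pays the one-time setup cost, so discard it
    per_call = timeit.timeit(lambda: f(x, w), number=50) / 50
    print(f"mean time per call: {per_call:.6f} s")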

Implementation Details
======================

This section gives more implementation details. Most of the time you do
not need to read it: PyTensor will select an implementation for you.


- Implemented operators for neural network 2D / image convolution:

  - :func:`conv.conv2d <pytensor.tensor.conv.conv2d>`: the old 2d
    convolution. DO NOT USE ANYMORE.

    For each element in a batch, it first creates a
    `Toeplitz <http://en.wikipedia.org/wiki/Toeplitz_matrix>`_ matrix in a CUDA kernel.
    Then, it performs a ``gemm`` call to multiply this Toeplitz matrix by the filters
    (hence the name: MM stands for matrix multiplication).
    It needs extra memory for the Toeplitz matrix, which is a 2D matrix of shape
    ``(no of channels * filter width * filter height, output width * output height)``;
    see the NumPy sketch after this list.

  - :func:`CorrMM <pytensor.tensor.nnet.corr.CorrMM>`:
    a CPU-only 2d correlation implementation taken from
    `caffe's cpp implementation <https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp>`_.
    It does not flip the kernel.

- Implemented operators for neural network 3D / video convolution:

  - :func:`Corr3dMM <pytensor.tensor.nnet.corr3d.Corr3dMM>`:
    a CPU-only 3d correlation implementation based on
    the 2d version (:func:`CorrMM <pytensor.tensor.nnet.corr.CorrMM>`).
    It does not flip the kernel. Since it provides a gradient, you can use
    it as a replacement for ``nnet.conv3d``; for convolutions done on the
    CPU, ``nnet.conv3d`` will be replaced by ``Corr3dMM``.

  - :func:`conv3d2d <pytensor.tensor.nnet.conv3d2d.conv3d>`:
    another conv3d implementation that reuses conv2d with data reshaping.
    It is faster than conv3d in some corner cases, and it flips the kernel.

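The Toeplitz ("im2col") layout used by the GEMM-based operators above
can be sketched in plain NumPy for a single image; the ``im2col`` helper
below is illustrative only, not part of PyTensor. The last line also
shows the flip convention mentioned above: the ``Corr*MM`` ops correlate
(no flip), and flipping the kernel along its spatial axes turns
correlation into convolution.

.. code-block:: python

    import numpy as np

    def im2col(img, kh, kw):
        """Unroll (channels, h, w) into (channels * kh * kw, out_h * out_w)."""
        c, h, w = img.shape
        out_h, out_w = h - kh + 1, w - kw + 1
        cols = np.empty((c * kh * kw, out_h * out_w), dtype=img.dtype)
        for i in range(out_h):
            for j in range(out_w):
                # One receptive field per output position, flattened to a column.
                cols[:, i * out_w + j] = img[:, i:i + kh, j:j + kw].ravel()
        return cols

    rng = np.random.default_rng(0)
    img = rng.normal(size=(3, 8, 8))    # (channels, height, width)
    k = rng.normal(size=(4, 3, 3, 3))   # (out channels, in channels, kh, kw)

    cols = im2col(img, 3, 3)            # shape (3 * 3 * 3, 6 * 6) = (27, 36),
                                        # matching the memory formula quoted above
    corr = (k.reshape(4, -1) @ cols).reshape(4, 6, 6)  # one gemm = correlation
    conv = (np.flip(k, axis=(2, 3)).reshape(4, -1) @ cols).reshape(4, 6, 6)
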
.. autofunction:: pytensor.tensor.conv.conv2d
.. autofunction:: pytensor.tensor.conv.conv2d_transpose
.. autofunction:: pytensor.tensor.conv.conv3d
.. autofunction:: pytensor.tensor.conv.conv3d2d.conv3d
.. autofunction:: pytensor.tensor.conv.conv.conv2d


.. automodule:: pytensor.tensor.conv.abstract_conv
    :members: