Commit 3e55a20

Armavica authored and twiecki committed
Enable sphinx-lint pre-commit hook
1 parent a873597 commit 3e55a20

12 files changed (+110 −105 lines)

.pre-commit-config.yaml (+5)

@@ -21,6 +21,11 @@ repos:
           pytensor/tensor/variable\.py|
         )$
       - id: check-merge-conflict
+  - repo: https://github.com/sphinx-contrib/sphinx-lint
+    rev: v1.0.0
+    hooks:
+      - id: sphinx-lint
+        args: ["."]
   - repo: https://github.com/astral-sh/ruff-pre-commit
     rev: v0.6.5
     hooks:
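
To try the new hook locally, a minimal sketch (assuming ``pre-commit`` is already installed in your environment) looks like this:

    pre-commit install                      # set up the git hook once per clone
    pre-commit run sphinx-lint --all-files  # lint the repository right away

The second command exercises the hook without waiting for a commit; the ``args: ["."]`` entry above points sphinx-lint at the repository root.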

doc/extending/creating_a_c_op.rst (+1 −1)

@@ -152,7 +152,7 @@ This distance between consecutive elements of an array over a given dimension,
 is called the stride of that dimension.
 
 
-Accessing NumPy :class`ndarray`\s' data and properties
+Accessing NumPy :class:`ndarray`'s data and properties
 ------------------------------------------------------
 
 The following macros serve to access various attributes of NumPy :class:`ndarray`\s.

doc/extending/creating_a_numba_jax_op.rst (+17 −17)

@@ -4,7 +4,7 @@ Adding JAX, Numba and Pytorch support for `Op`\s
 PyTensor is able to convert its graphs into JAX, Numba and Pytorch compiled functions. In order to do
 this, each :class:`Op` in an PyTensor graph must have an equivalent JAX/Numba/Pytorch implementation function.
 
-This tutorial will explain how JAX, Numba and Pytorch implementations are created for an :class:`Op`. 
+This tutorial will explain how JAX, Numba and Pytorch implementations are created for an :class:`Op`.
 
 Step 1: Identify the PyTensor :class:`Op` you'd like to implement
 ------------------------------------------------------------------------
@@ -60,7 +60,7 @@ could also have any data type (e.g. floats, ints), so our implementation
 must be able to handle all the possible data types.
 
 It also tells us that there's only one return value, that it has a data type
-determined by :meth:`x.type()` i.e., the data type of the original tensor.
+determined by :meth:`x.type` i.e., the data type of the original tensor.
 This implies that the result is necessarily a matrix.
 
 Some class may have a more complex behavior. For example, the :class:`CumOp`\ :class:`Op`
@@ -116,7 +116,7 @@ Here's an example for :class:`DimShuffle`:
 
 .. tab-set::
 
-.. tab-item:: JAX 
+.. tab-item:: JAX
 
 .. code:: python
 
@@ -134,7 +134,7 @@ Here's an example for :class:`DimShuffle`:
 res = jnp.copy(res)
 
 return res
-
+
 .. tab-item:: Numba
 
 .. code:: python
@@ -465,7 +465,7 @@ Step 4: Write tests
 .. tab-item:: JAX
 
 Test that your registered `Op` is working correctly by adding tests to the
-appropriate test suites in PyTensor (e.g. in ``tests.link.jax``). 
+appropriate test suites in PyTensor (e.g. in ``tests.link.jax``).
 The tests should ensure that your implementation can
 handle the appropriate types of inputs and produce outputs equivalent to `Op.perform`.
 Check the existing tests for the general outline of these kinds of tests. In
@@ -478,7 +478,7 @@ Step 4: Write tests
 Here's a small example of a test for :class:`CumOp` above:
 
 .. code:: python
-
+
 import numpy as np
 import pytensor.tensor as pt
 from pytensor.configdefaults import config
@@ -514,22 +514,22 @@ Step 4: Write tests
 .. code:: python
 
 import pytest
-
+
 def test_jax_CumOp():
 """Test JAX conversion of the `CumOp` `Op`."""
 a = pt.matrix("a")
 a.tag.test_value = np.arange(9, dtype=config.floatX).reshape((3, 3))
-
+
 with pytest.raises(NotImplementedError):
 out = pt.cumprod(a, axis=1)
 fgraph = FunctionGraph([a], [out])
 compare_jax_and_py(fgraph, [get_test_value(i) for i in fgraph.inputs])
-
-
+
+
 .. tab-item:: Numba
 
 Test that your registered `Op` is working correctly by adding tests to the
-appropriate test suites in PyTensor (e.g. in ``tests.link.numba``). 
+appropriate test suites in PyTensor (e.g. in ``tests.link.numba``).
 The tests should ensure that your implementation can
 handle the appropriate types of inputs and produce outputs equivalent to `Op.perform`.
 Check the existing tests for the general outline of these kinds of tests. In
@@ -542,7 +542,7 @@ Step 4: Write tests
 Here's a small example of a test for :class:`CumOp` above:
 
 .. code:: python
-
+
 from tests.link.numba.test_basic import compare_numba_and_py
 from pytensor.graph import FunctionGraph
 from pytensor.compile.sharedvalue import SharedVariable
@@ -561,11 +561,11 @@ Step 4: Write tests
 if not isinstance(i, SharedVariable | Constant)
 ],
 )
-
+
 
 
 .. tab-item:: Pytorch
-
+
 Test that your registered `Op` is working correctly by adding tests to the
 appropriate test suites in PyTensor (``tests.link.pytorch``). The tests should ensure that your implementation can
 handle the appropriate types of inputs and produce outputs equivalent to `Op.perform`.
@@ -579,7 +579,7 @@ Step 4: Write tests
 Here's a small example of a test for :class:`CumOp` above:
 
 .. code:: python
-
+
 import numpy as np
 import pytest
 import pytensor.tensor as pt
@@ -592,7 +592,7 @@ Step 4: Write tests
 ["float64", "int64"],
 )
 @pytest.mark.parametrize(
-"axis", 
+"axis",
 [None, 1, (0,)],
 )
 def test_pytorch_CumOp(axis, dtype):
@@ -650,4 +650,4 @@ as reported in issue `#654 <https://github.com/pymc-devs/pytensor/issues/654>`_.
 All jitted functions now must have constant shape, which means a graph like the
 one of :class:`Eye` can never be translated to JAX, since it's fundamentally a
 function with dynamic shapes. In other words, only PyTensor graphs with static shapes
-can be translated to JAX at the moment. 
+can be translated to JAX at the moment.

doc/extending/type.rst (+1 −1)

@@ -333,7 +333,7 @@ returns eitehr a new transferred variable (which can be the same as
 the input if no transfer is necessary) or returns None if the transfer
 can't be done.
 
-Then register that function by calling :func:`register_transfer()`
+Then register that function by calling :func:`register_transfer`
 with it as argument.
 
 An example

doc/library/compile/io.rst (+1 −1)

@@ -36,7 +36,7 @@ The ``inputs`` argument to ``pytensor.function`` is a list, containing the ``Var
 ``self.<name>``. The default value is ``None``.
 
 ``value``: literal or ``Container``. The initial/default value for this
-input. If update is`` None``, this input acts just like
+input. If update is ``None``, this input acts just like
 an argument with a default value in Python. If update is not ``None``,
 changes to this
 value will "stick around", whether due to an update or a user's

doc/library/config.rst (+1 −1)

@@ -226,7 +226,7 @@ import ``pytensor`` and print the config variable, as in:
 in the future.
 
 The ``'numpy+floatX'`` setting attempts to mimic NumPy casting rules,
-although it prefers to use ``float32` `numbers instead of ``float64`` when
+although it prefers to use ``float32`` numbers instead of ``float64`` when
 ``config.floatX`` is set to ``'float32'`` and the associated data is not
 explicitly typed as ``float64`` (e.g. regular Python floats). Note that
 ``'numpy+floatX'`` is not currently behaving exactly as planned (it is a

doc/library/tensor/basic.rst (+34 −34)

@@ -908,8 +908,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis or axes along which to compute the maximum
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: maximum of *x* along *axis*
 
 axis can be:
@@ -922,8 +922,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis along which to compute the index of the maximum
 :Parameter: *keepdims* - (boolean) If this is set to True, the axis which is reduced is
-left in the result as a dimension with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as a dimension with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: the index of the maximum value along a given axis
 
 if ``axis == None``, `argmax` over the flattened tensor (like NumPy)
@@ -933,8 +933,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis along which to compute the maximum and its index
 :Parameter: *keepdims* - (boolean) If this is set to True, the axis which is reduced is
-left in the result as a dimension with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as a dimension with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: the maximum value along a given axis and its index.
 
 if ``axis == None``, `max_and_argmax` over the flattened tensor (like NumPy)
@@ -944,8 +944,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis or axes along which to compute the minimum
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: minimum of *x* along *axis*
 
 `axis` can be:
@@ -958,8 +958,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis along which to compute the index of the minimum
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: the index of the minimum value along a given axis
 
 if ``axis == None``, `argmin` over the flattened tensor (like NumPy)
@@ -980,8 +980,8 @@ Reductions
 This default dtype does _not_ depend on the value of "acc_dtype".
 
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 
 :Parameter: *acc_dtype* - The dtype of the internal accumulator.
 If None (default), we use the dtype in the list below,
@@ -1015,8 +1015,8 @@ Reductions
 This default dtype does _not_ depend on the value of "acc_dtype".
 
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 
 :Parameter: *acc_dtype* - The dtype of the internal accumulator.
 If None (default), we use the dtype in the list below,
@@ -1031,16 +1031,16 @@ Reductions
 as we need to handle 3 different cases: without zeros in the
 input reduced group, with 1 zero or with more zeros.
 
-This could slow you down, but more importantly, we currently
-don't support the second derivative of the 3 cases. So you
-cannot take the second derivative of the default prod().
+This could slow you down, but more importantly, we currently
+don't support the second derivative of the 3 cases. So you
+cannot take the second derivative of the default prod().
 
-To remove the handling of the special cases of 0 and so get
-some small speed up and allow second derivative set
-``no_zeros_in_inputs`` to ``True``. It defaults to ``False``.
+To remove the handling of the special cases of 0 and so get
+some small speed up and allow second derivative set
+``no_zeros_in_inputs`` to ``True``. It defaults to ``False``.
 
-**It is the user responsibility to make sure there are no zeros
-in the inputs. If there are, the grad will be wrong.**
+**It is the user responsibility to make sure there are no zeros
+in the inputs. If there are, the grad will be wrong.**
 
 :Returns: product of every term in *x* along *axis*
 
@@ -1058,13 +1058,13 @@ Reductions
 done in float64 (acc_dtype would be float64 by default),
 but that result will be casted back in float32.
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Parameter: *acc_dtype* - The dtype of the internal accumulator of the
 inner summation. This will not necessarily be the dtype of the
 output (in particular if it is a discrete (int/uint) dtype, the
 output will be in a float type). If None, then we use the same
-rules as :func:`sum()`.
+rules as :func:`sum`.
 :Returns: mean value of *x* along *axis*
 
 `axis` can be:
@@ -1077,8 +1077,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis or axes along which to compute the variance
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: variance of *x* along *axis*
 
 `axis` can be:
@@ -1091,8 +1091,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis or axes along which to compute the standard deviation
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: variance of *x* along *axis*
 
 `axis` can be:
@@ -1105,8 +1105,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis or axes along which to apply 'bitwise and'
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: bitwise and of *x* along *axis*
 
 `axis` can be:
@@ -1119,8 +1119,8 @@ Reductions
 :Parameter: *x* - symbolic Tensor (or compatible)
 :Parameter: *axis* - axis or axes along which to apply bitwise or
 :Parameter: *keepdims* - (boolean) If this is set to True, the axes which are reduced are
-left in the result as dimensions with size one. With this option, the result
-will broadcast correctly against the original tensor.
+left in the result as dimensions with size one. With this option, the result
+will broadcast correctly against the original tensor.
 :Returns: bitwise or of *x* along *axis*
 
 `axis` can be:
@@ -1745,7 +1745,7 @@ Linear Algebra
 when indexed, so that each returned argument has the same shape.
 The dimensions and number of the output arrays are equal to the
 number of indexing dimensions. If the step length is not a complex
-number, then the stop is not inclusive. 
+number, then the stop is not inclusive.
 
 Example:

doc/library/tensor/conv.rst (+1 −1)

@@ -8,4 +8,4 @@
 .. moduleauthor:: LISA, PyMC Developers, PyTensor Developers
 
 .. automodule:: pytensor.tensor.conv
-   :members: 
+   :members:

doc/optimizations.rst (+3 −3)

@@ -262,8 +262,8 @@ Optimization o4 o3 o2
 local_remove_all_assert
 This is an unsafe optimization.
 For the fastest possible PyTensor, this optimization can be enabled by
-setting ``optimizer_including=local_remove_all_assert`` which will
-remove all assertions in the graph for checking user inputs are valid.
+setting ``optimizer_including=local_remove_all_assert`` which will
+remove all assertions in the graph for checking user inputs are valid.
 Use this optimization if you are sure everything is valid in your graph.
 
-See :ref:`unsafe_rewrites` 
+See :ref:`unsafe_rewrites`
