Commit b85a999

Remove TODOs

1 parent 2d86a3f commit b85a999

File tree

1 file changed (+0, -4 lines)


RFC.md

Lines changed: 0 additions & 4 deletions
@@ -280,8 +280,6 @@ Note that none of the code in this implementation makes use of NumPy. We are
 writing `torch_np.ndarray` above to make more explicit our intents, but there
 shouldn't be any ambiguity.
 
-**OBS(Lezcano)**: `DTypeLike` should be `Optional[DTypeLike]`
-
 **Implementing out**: In PyTorch, the `out` kwarg is, as the name says, a
 keyword-only argument. It is for this reason that, in PrimTorch, we were able
 to implement it as [a decorator](https://github.com/pytorch/pytorch/blob/ce4df4cc596aa10534ac6d54912f960238264dfd/torch/_prims_common/wrappers.py#L187-L282).
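For readers unfamiliar with the decorator linked above, here is a minimal sketch of the idea: because `out` is keyword-only, it can be factored out into a wrapper that computes the result and copies it into the caller's tensor. All names here are illustrative; the real PrimTorch decorator additionally validates shapes and dtypes.

```python
import torch

def out_wrapper(fn):
    # Hypothetical sketch of an ``out=`` decorator: run the wrapped
    # function, then copy the result into ``out`` when one is supplied.
    def wrapped(*args, out=None, **kwargs):
        result = fn(*args, **kwargs)
        if out is None:
            return result
        out.copy_(result)  # in-place copy into the caller's tensor
        return out
    return wrapped

@out_wrapper
def add(a, b):
    return a + b

buf = torch.empty(3)
res = add(torch.ones(3), torch.ones(3), out=buf)
```

Note that `out` stays keyword-only in the wrapper's signature, matching the PyTorch convention that makes this factoring possible in the first place.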
@@ -326,8 +324,6 @@ CPU. We expect GPU coverage to be as good as the coverage we have with CPU
 matching GPU. If the original tensors are on GPU, the whole execution should
 be performed on the GPU.
 
-**TODO(Lezcano)**. We should probably test CUDA on the tests.
-
 **Gradients**. We have not tested gradient tracking either as we are still to
 find some good examples on which to test it, but it should be a simple
 corollary of all this effort. If the original tensors fed into the function do

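The gradient claim in the hunk above can be illustrated with a toy example. Assuming a compatibility function implemented purely with torch operations (the function below is a hypothetical stand-in, not part of the RFC's API), autograd tracks through it with no extra work:

```python
import torch

def np_style_sum(a, axis=None):
    # Hypothetical torch_np-style wrapper built only from torch ops,
    # so gradient tracking follows as a corollary.
    return a.sum() if axis is None else a.sum(dim=axis)

x = torch.ones(3, requires_grad=True)
y = np_style_sum(x)
y.backward()
# x.grad holds the gradient of the sum with respect to each element
```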
0 commit comments
