RFC.md: 0 additions & 4 deletions
@@ -280,8 +280,6 @@ Note that none of the code in this implementation makes use of NumPy. We are
writing `torch_np.ndarray` above to make more explicit our intents, but there
shouldn't be any ambiguity.

- **OBS(Lezcano)**: `DTypeLike` should be `Optional[DTypeLike]`
-
**Implementing out**: In PyTorch, the `out` kwarg is, as the name says, a
keyword-only argument. It is for this reason that, in PrimTorch, we were able
to implement it as [a decorator](https://github.com/pytorch/pytorch/blob/ce4df4cc596aa10534ac6d54912f960238264dfd/torch/_prims_common/wrappers.py#L187-L282).
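For context on the decorator approach referenced above, here is a minimal sketch of how a keyword-only `out=` argument can be layered on top of an out-of-place function. This is an illustration only, not the PrimTorch decorator itself; the name `out_wrapper` and the `copy_`-based semantics are assumptions made for this example.

```python
import functools
import torch

def out_wrapper(fn):
    # Hypothetical sketch: wrap an out-of-place function so it also accepts
    # a keyword-only `out=` tensor, copying the result into it when given.
    @functools.wraps(fn)
    def wrapped(*args, out=None, **kwargs):
        result = fn(*args, **kwargs)
        if out is None:
            return result
        out.copy_(result)  # write the result into the caller-provided tensor
        return out
    return wrapped

@out_wrapper
def add(a, b):
    return torch.add(a, b)

# `out` must be passed by keyword, mirroring the PyTorch/NumPy convention.
buf = torch.empty(3)
add(torch.ones(3), torch.ones(3), out=buf)
```

The sketch only shows the keyword-only plumbing; the PrimTorch decorator linked above is considerably more thorough.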
@@ -326,8 +324,6 @@ CPU. We expect GPU coverage to be as good as the coverage we have with CPU
matching GPU. If the original tensors are on GPU, the whole execution should
be performed on the GPU.

- **TODO(Lezcano)**. We should probably test CUDA on the tests.
-
**Gradients**. We have not tested gradient tracking either as we are still to
find some good examples on which to test it, but it should be a simple
corollary of all this effort. If the original tensors fed into the function do