Commit a583300

mchoi8739, ahsan-z-khan, Payton Staub, shreyapandit, and Basil Beirouti committed
documentation: minor fixes for smddp 1.4.0 doc (aws#2996)
Co-authored-by: Ahsan Khan <[email protected]>
Co-authored-by: Payton Staub <[email protected]>
Co-authored-by: Shreya Pandit <[email protected]>
Co-authored-by: Basil Beirouti <[email protected]>
Co-authored-by: Mufaddal Rohawala <[email protected]>
Co-authored-by: Mohamed Ali Jamaoui <[email protected]>
Co-authored-by: ci <ci>
Co-authored-by: Jeniya Tabassum <[email protected]>
Co-authored-by: sreedes <[email protected]>
Co-authored-by: Navin Soni <[email protected]>
Co-authored-by: Miyoung <[email protected]>
Co-authored-by: Ameen Khan <[email protected]>
Co-authored-by: Zhankui Lu <[email protected]>
Co-authored-by: Xiaoguang Chen <[email protected]>
Co-authored-by: Jonathan Guinegagne <[email protected]>
Co-authored-by: Yifei Zhu <[email protected]>
Co-authored-by: Qingzi-Lan <[email protected]>
Co-authored-by: Ben Crabtree <[email protected]>
Co-authored-by: Dewen Qi <[email protected]>
Co-authored-by: qidewenwhen <[email protected]>
Co-authored-by: Xinghan Chen <[email protected]>
Co-authored-by: Tulio Casagrande <[email protected]>
Co-authored-by: HappyAmazonian <[email protected]>
1 parent 90c1b6b commit a583300

File tree

1 file changed (+3, -14 lines)


doc/api/training/smd_data_parallel_release_notes/smd_data_parallel_change_log.rst

@@ -43,8 +43,9 @@ SageMaker Distributed Data Parallel 1.4.0 Release Notes
 
 **Improvements**
 
-* Support AllReduce Large Tensors
-* we support the following new arguments in the `PyTorch DDP class <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_.
+* Support for AllReduce Large Tensors
+* Support for the following new arguments in the `PyTorch DDP class
+  <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel>`_.
 
 * ``broadcast_buffers``
 * ``find_unused_parameters``
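For context, the two arguments named in the hunk above are standard options on ``torch.nn.parallel.DistributedDataParallel``. A minimal sketch (not part of this commit) of how they might be passed when SMDDP 1.4.0 is used as a PyTorch process-group backend, assuming a SageMaker GPU training job with the ``smdistributed.dataparallel`` library installed and the usual distributed environment variables set:

.. code:: python

   import torch
   import torch.distributed as dist

   # Importing this module registers the "smddp" backend with torch.distributed.
   import smdistributed.dataparallel.torch.torch_smddp

   dist.init_process_group(backend="smddp")

   model = torch.nn.Linear(10, 10).to("cuda")

   # Per the release notes, these standard torch DDP arguments are now
   # supported when running on the smddp backend.
   ddp_model = torch.nn.parallel.DistributedDataParallel(
       model,
       broadcast_buffers=True,        # re-sync buffers (e.g., BatchNorm stats) each forward pass
       find_unused_parameters=False,  # set True only if parts of the graph may be skipped
   )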
@@ -70,18 +71,6 @@ This version passed benchmark testing and is migrated to the following AWS Deep
 
    763104351884.dkr.ecr.<region>.amazonaws.com/pytorch-training:1.10.2-gpu-py38-cu113-ubuntu20.04-sagemaker
 
-- PyTorch 1.10.0 DLC
-
-  .. code::
-
-    763104351884.dkr.ecr.<region>.amazonaws.com/pytorch-training:1.10.0-gpu-py38-cu113-ubuntu20.04-sagemaker
-
-- PyTorch 1.9.1 DLC
-
-  .. code::
-
-    763104351884.dkr.ecr.<region>.amazonaws.com/pytorch-training:1.9.1-gpu-py38-cu111-ubuntu20.04
-
 
 ----
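As a usage note (also not part of this commit), a hypothetical sketch of launching a data-parallel training job on the remaining PyTorch 1.10.2 DLC through the SageMaker Python SDK's ``PyTorch`` estimator; the entry point, IAM role, region in the image URI, and instance settings below are placeholders:

.. code:: python

   from sagemaker.pytorch import PyTorch

   estimator = PyTorch(
       entry_point="train.py",  # placeholder training script
       role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder IAM role
       image_uri=(
           "763104351884.dkr.ecr.us-east-1.amazonaws.com/"
           "pytorch-training:1.10.2-gpu-py38-cu113-ubuntu20.04-sagemaker"
       ),
       instance_count=2,
       instance_type="ml.p4d.24xlarge",  # SMDDP supports a limited set of GPU instance types
       # Enables the SageMaker distributed data parallel (SMDDP) library.
       distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
   )
   estimator.fit()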
