
Commit 92730f3

Revert "documentation: release notes for smdistributed.dataparallel v1.1.1 (#2280)"
This reverts commit 7fec6c1.
1 parent: 7fec6c1 · commit: 92730f3

File tree: 2 files changed (+5, −23 lines)


doc/api/training/sdp_versions/latest.rst (+1, −1)

```diff
@@ -1,5 +1,5 @@
 
-Version 1.1.1 (Latest)
+Version 1.1.0 (Latest)
 ======================
 
 .. toctree::
```

doc/api/training/smd_data_parallel_release_notes/smd_data_parallel_change_log.md (+4, −22)

```diff
@@ -1,41 +1,23 @@
-# Sagemaker Distributed Data Parallel 1.1.1 Release Notes
-
-* New Features
-* Bug Fixes
-* Known Issues
-
-*New Features:*
-
-* Adds support for PyTorch 1.8.1
-
-*Bug Fixes:*
-
-* Fixes a bug that was causing gradients from one of the worker nodes to be added twice resulting in incorrect `all_reduce` results under some conditions.
-
-*Known Issues:*
-
-* SageMaker distributed data parallel still is not efficient when run using a single node. For the best performance, use multi-node distributed training with `smdistributed.dataparallel`. Use a single node only for experimental runs while preparing your training pipeline.
-
 # Sagemaker Distributed Data Parallel 1.1.0 Release Notes
 
 * New Features
 * Bug Fixes
 * Improvements
 * Known Issues
 
-*New Features:*
+New Features:
 
 * Adds support for PyTorch 1.8.0 with CUDA 11.1 and CUDNN 8
 
-*Bug Fixes:*
+Bug Fixes:
 
 * Fixes crash issue when importing `smdataparallel` before PyTorch
 
-*Improvements:*
+Improvements:
 
 * Update `smdataparallel` name in python packages, descriptions, and log outputs
 
-*Known Issues:*
+Known Issues:
 
 * SageMaker DataParallel is not efficient when run using a single node. For the best performance, use multi-node distributed training with `smdataparallel`. Use a single node only for experimental runs while preparing your training pipeline.
 
```
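Note on the "Bug Fixes" entry retained above: the crash fixed in 1.1.0 occurred when `smdataparallel` was imported before PyTorch. Below is a minimal sketch of the import pattern, with `torch` imported first; the toy model and device setup are hypothetical placeholders, not part of this commit.

```python
# Import PyTorch before the smdistributed.dataparallel bindings.
# (Release 1.1.0 fixes the crash when the order was reversed, but
# importing torch first remains the documented pattern.)
import torch

import smdistributed.dataparallel.torch.distributed as dist
from smdistributed.dataparallel.torch.parallel.distributed import (
    DistributedDataParallel as DDP,
)

# Initialize the smdataparallel process group and pin each worker
# process to its local GPU.
dist.init_process_group()
torch.cuda.set_device(dist.get_local_rank())

# Hypothetical toy model; any torch.nn.Module works here.
model = torch.nn.Linear(10, 1).cuda()
model = DDP(model)
```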
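On the "Known Issues" entry: the multi-node recommendation corresponds to the `instance_count` setting when launching a job through the SageMaker Python SDK. A hedged sketch of such a launcher follows, assuming a hypothetical `train.py` entry point and a placeholder IAM role; the `distribution` argument is the switch that enables `smdataparallel`.

```python
from sagemaker.pytorch import PyTorch

# Hypothetical launcher: per the known issue above, use two or more
# instances; keep single-node runs for experimentation only.
estimator = PyTorch(
    entry_point="train.py",  # hypothetical training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder IAM role
    framework_version="1.8.0",  # matches the PyTorch version in the release notes
    py_version="py36",
    instance_count=2,  # multi-node, not single-node
    instance_type="ml.p3.16xlarge",
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit()
```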