Revert "documentation: release notes for smdistributed.dataparallel v… #2281

Closed · wants to merge 1 commit
doc/api/training/sdp_versions/latest.rst (2 changes: 1 addition & 1 deletion)
@@ -1,5 +1,5 @@
 
-Version 1.1.1 (Latest)
+Version 1.1.0 (Latest)
 ======================
 
 .. toctree::
@@ -1,41 +1,23 @@
-# Sagemaker Distributed Data Parallel 1.1.1 Release Notes
-
-* New Features
-* Bug Fixes
-* Known Issues
-
-*New Features:*
-
-* Adds support for PyTorch 1.8.1
-
-*Bug Fixes:*
-
-* Fixes a bug that was causing gradients from one of the worker nodes to be added twice resulting in incorrect `all_reduce` results under some conditions.
-
-*Known Issues:*
-
-* SageMaker distributed data parallel still is not efficient when run using a single node. For the best performance, use multi-node distributed training with `smdistributed.dataparallel`. Use a single node only for experimental runs while preparing your training pipeline.
-
 # Sagemaker Distributed Data Parallel 1.1.0 Release Notes
 
 * New Features
 * Bug Fixes
 * Improvements
 * Known Issues
 
-*New Features:*
+New Features:
 
 * Adds support for PyTorch 1.8.0 with CUDA 11.1 and CUDNN 8
 
-*Bug Fixes:*
+Bug Fixes:
 
 * Fixes crash issue when importing `smdataparallel` before PyTorch
 
-*Improvements:*
+Improvements:
 
 * Update `smdataparallel` name in python packages, descriptions, and log outputs
 
-*Known Issues:*
+Known Issues:
 
 * SageMaker DataParallel is not efficient when run using a single node. For the best performance, use multi-node distributed training with `smdataparallel`. Use a single node only for experimental runs while preparing your training pipeline.
 
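The notes being reverted here center on the `smdistributed.dataparallel` PyTorch integration: the known-issue bullet recommends multi-node training, and the 1.1.0 bug fix concerns importing `smdataparallel` before PyTorch. As a rough illustration of what a training script on this version line looks like, here is a minimal sketch assuming the module paths SageMaker documented for the v1.1.x PyTorch API (`smdistributed.dataparallel.torch.distributed` and its `DistributedDataParallel` wrapper); it runs only inside a SageMaker multi-GPU training job where the package is preinstalled, and the toy model and hyperparameters are placeholders:

```python
# Sketch of an smdistributed.dataparallel training loop (v1.1.x-era
# PyTorch API; module paths are assumptions based on the docs for
# this release line, not something confirmed by this PR).
# PyTorch is imported first: the 1.1.0 notes above fix a crash that
# occurred when smdataparallel was imported before PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

import smdistributed.dataparallel.torch.distributed as dist
from smdistributed.dataparallel.torch.parallel.distributed import (
    DistributedDataParallel as DDP,
)

dist.init_process_group()           # one process per GPU, launched by SageMaker

local_rank = dist.get_local_rank()  # pin each process to its own GPU
torch.cuda.set_device(local_rank)

model = DDP(nn.Linear(10, 1).cuda())  # gradients are all_reduce'd across workers
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(10):
    inputs = torch.randn(32, 10).cuda()   # placeholder batch
    targets = torch.randn(32, 1).cuda()
    loss = F.mse_loss(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()                 # the all_reduce runs during backward;
    optimizer.step()                # the 1.1.1 fix addressed double-added
                                    # gradients in that reduction
```

Per the known issue repeated in both versions' notes, a script like this only pays off across two or more instances; a single node is for pipeline smoke tests, not performance.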