Change 'distribution' to 'distributions' in documentation #503

Merged: 5 commits, Dec 7, 2018. The diff below shows changes from 1 commit.
6 changes: 3 additions & 3 deletions src/sagemaker/mxnet/README.rst
@@ -207,15 +207,15 @@ If you were previously relying on the default save method, you can now import one

save(args.model_dir, model)

-Lastly, if you were relying on the container launching a parameter server for use with distributed training, you must now set ``distribution`` to the following dictionary when creating an MXNet estimator:
+Lastly, if you were relying on the container launching a parameter server for use with distributed training, you must now set ``distributions`` to the following dictionary when creating an MXNet estimator:

.. code:: python

from sagemaker.mxnet import MXNet

estimator = MXNet('path-to-distributed-training-script.py',
                  ...,
-                 distribution={'parameter_server': {'enabled': True}})
+                 distributions={'parameter_server': {'enabled': True}})


Using third-party libraries
@@ -321,7 +321,7 @@ The following are optional arguments. When you create an ``MXNet`` object, you c
framework_version and py_version. Refer to: `SageMaker MXNet Docker Containers
<#sagemaker-mxnet-docker-containers>`_ for details on what the Official images support
and where to find the source code to build your custom image.
-- ``distribution`` For versions 1.3 and above only.
+- ``distributions`` For versions 1.3 and above only.
Specifies information for how to run distributed training.
To launch a parameter server during training, set this argument to:

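For context on the renamed argument, here is a minimal sketch of an MXNet estimator that uses it. Everything other than the 'distributions' dictionary (the IAM role ARN, instance count and type, framework version, and S3 input path) is a placeholder assumption about SDK v1-era constructor arguments, not something taken from this diff.

    # A minimal sketch, not the project's official example: the role ARN, instance
    # settings, framework version, and S3 input below are placeholders; only the
    # 'distributions' dictionary comes from the README change above.
    from sagemaker.mxnet import MXNet

    estimator = MXNet(
        entry_point='path-to-distributed-training-script.py',
        role='arn:aws:iam::111111111111:role/SageMakerRole',  # placeholder IAM role
        train_instance_count=2,              # more than one instance, so a parameter server matters
        train_instance_type='ml.m4.xlarge',
        framework_version='1.3.0',           # the rename applies to versions 1.3 and above
        py_version='py3',
        distributions={'parameter_server': {'enabled': True}},  # renamed from 'distribution'
    )

    # Each instance runs the entry point script; the enabled parameter server
    # coordinates parameter updates across instances.
    estimator.fit('s3://my-bucket/my-training-data')  # placeholder S3 input
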
2 changes: 1 addition & 1 deletion src/sagemaker/mxnet/estimator.py
@@ -67,7 +67,7 @@ def __init__(self, entry_point, source_dir=None, hyperparameters=None, py_versio
Examples:
123.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0
custom-image:latest.
-distribution (dict): A dictionary with information on how to run distributed training
+distributions (dict): A dictionary with information on how to run distributed training
(default: None).
**kwargs: Additional kwargs passed to the :class:`~sagemaker.estimator.Framework` constructor.
"""
2 changes: 1 addition & 1 deletion src/sagemaker/tensorflow/estimator.py
@@ -199,7 +199,7 @@ def __init__(self, training_steps=None, evaluation_steps=None, checkpoint_path=N
custom-image:latest.
script_mode (bool): If set to True the estimator will use the Script Mode containers (default: False).
This will be ignored if py_version is set to 'py3'.
-distribution (dict): A dictionary with information on how to run distributed training
+distributions (dict): A dictionary with information on how to run distributed training
(default: None). Currently we only support distributed training with parameter servers. To enable it
use the following setup:
{
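The same shape applies to the TensorFlow estimator touched by the last hunk. A comparable sketch is below; again, the role ARN, instance settings, framework version, and S3 input are placeholder assumptions, and per the docstring py_version='py3' selects the Script Mode containers.

    # A comparable sketch for the TensorFlow estimator; as above, everything except
    # the 'distributions' dictionary shape is a placeholder assumption.
    from sagemaker.tensorflow import TensorFlow

    estimator = TensorFlow(
        entry_point='train.py',              # placeholder training script
        role='arn:aws:iam::111111111111:role/SageMakerRole',  # placeholder IAM role
        train_instance_count=2,
        train_instance_type='ml.p3.2xlarge',
        framework_version='1.11.0',          # placeholder version
        py_version='py3',                    # per the docstring, 'py3' uses Script Mode containers
        distributions={'parameter_server': {'enabled': True}},  # renamed from 'distribution'
    )

    estimator.fit('s3://my-bucket/my-training-data')  # placeholder S3 input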