
doc: remove 'train_*' prefix from estimator parameters #1689


Merged · 3 commits · Jul 9, 2020
Changes from 1 commit
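
This PR updates every documentation example to use the renamed estimator arguments: ``train_instance_count`` becomes ``instance_count`` and ``train_instance_type`` becomes ``instance_type``. As a quick orientation, a minimal sketch of an estimator written with the new names (the entry point, role, and S3 path below are placeholders, not taken from this PR):

.. code:: python

    from sagemaker.pytorch import PyTorch

    # New-style arguments: instance_count/instance_type replace the old
    # train_instance_count/train_instance_type keyword arguments.
    pytorch_estimator = PyTorch(
        entry_point="train.py",            # placeholder training script
        role="SageMakerRole",              # placeholder IAM role
        instance_type="ml.p3.2xlarge",
        instance_count=1,
        framework_version="1.5.0",
        py_version="py3",
    )
    pytorch_estimator.fit("s3://my_bucket/my_training_data/")
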
2 changes: 1 addition & 1 deletion doc/algorithms/linear_learner.rst
@@ -8,7 +8,7 @@ The Amazon SageMaker LinearLearner algorithm.
     :undoc-members:
     :show-inheritance:
     :inherited-members:
-    :exclude-members: image_uri, train_instance_count, train_instance_type, predictor_type, binary_classifier_model_selection_criteria, target_recall, target_precision, positive_example_weight_mult, epochs, use_bias, num_models, parameter, num_calibration_samples, calibration, init_method, init_scale, init_sigma, init_bias, optimizer, loss, wd, l1, momentum, learning_rate, beta_1, beta_2, bias_lr_mult, use_lr_scheduler, lr_scheduler_step, lr_scheduler_factor, lr_scheduler_minimum_lr, lr_scheduler_minimum_lr, mini_batch_size, feature_dim, bias_wd_mult, MAX_DEFAULT_BATCH_SIZE
+    :exclude-members: image_uri, instance_count, instance_type, predictor_type, binary_classifier_model_selection_criteria, target_recall, target_precision, positive_example_weight_mult, epochs, use_bias, num_models, parameter, num_calibration_samples, calibration, init_method, init_scale, init_sigma, init_bias, optimizer, loss, wd, l1, momentum, learning_rate, beta_1, beta_2, bias_lr_mult, use_lr_scheduler, lr_scheduler_step, lr_scheduler_factor, lr_scheduler_minimum_lr, lr_scheduler_minimum_lr, mini_batch_size, feature_dim, bias_wd_mult, MAX_DEFAULT_BATCH_SIZE

 .. autoclass:: sagemaker.LinearLearnerModel
     :members:
28 changes: 14 additions & 14 deletions doc/amazon_sagemaker_debugger.rst
@@ -54,8 +54,8 @@ The ``DebuggerHookConfig`` accepts one or more objects of type ``CollectionConfi

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         debugger_hook_config=debugger_hook_config
     )
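
For context on the hunk above: the ``debugger_hook_config`` argument is typically built from ``DebuggerHookConfig`` and ``CollectionConfig`` objects in ``sagemaker.debugger``. A minimal sketch, assuming a placeholder S3 output path:

.. code:: python

    from sagemaker.debugger import CollectionConfig, DebuggerHookConfig

    # Save the built-in "losses" collection every 100 steps.
    debugger_hook_config = DebuggerHookConfig(
        s3_output_path="s3://my-bucket/debugger-output",  # placeholder path
        collection_configs=[
            CollectionConfig(name="losses", parameters={"save_interval": "100"})
        ],
    )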

@@ -215,8 +215,8 @@ Sample Usages

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         rules=[Rule.sagemaker(vanishing_gradient())]
     )

@@ -232,8 +232,8 @@ In the example above, Amazon SageMaker pulls the collection configuration best s

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         rules=[Rule.sagemaker(vanishing_gradient()), Rule.sagemaker(weight_update_ratio())]
     )

@@ -269,8 +269,8 @@ Here we modify the ``weight_update_ratio`` rule to store a custom collection rat

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         rules=[
             Rule.sagemaker(vanishing_gradient()),
             wur_with_customization
@@ -317,8 +317,8 @@ To evaluate the custom rule against the training:

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         rules=[
             custom_gradient_rule
         ]
@@ -344,8 +344,8 @@ To enable the debugging hook to emit TensorBoard data, you need to specify the n

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         tensorboard_output_config=tensorboard_output_config
     )
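
The ``tensorboard_output_config`` object passed above also comes from ``sagemaker.debugger``; a minimal sketch with a placeholder S3 path:

.. code:: python

    from sagemaker.debugger import TensorBoardOutputConfig

    # Where the hook should publish TensorBoard event files.
    tensorboard_output_config = TensorBoardOutputConfig(
        s3_output_path="s3://my-bucket/tensorboard-output"  # placeholder path
    )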

@@ -392,8 +392,8 @@ To disable the hook initialization, you can do so by specifying ``False`` for va

     estimator = TensorFlow(
         role=role,
-        train_instance_count=1,
-        train_instance_type=train_instance_type,
+        instance_count=1,
+        instance_type=instance_type,
         debugger_hook_config=False
     )

10 changes: 5 additions & 5 deletions doc/frameworks/chainer/using_chainer.rst
@@ -138,8 +138,8 @@ directories ('train' and 'test').
 .. code:: python

     chainer_estimator = Chainer('chainer-train.py',
-        train_instance_type='ml.p3.2xlarge',
-        train_instance_count=1,
+        instance_type='ml.p3.2xlarge',
+        instance_count=1,
         framework_version='5.0.0',
         py_version='py3',
         hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})
@@ -191,7 +191,7 @@ Chainer allows you to train a model on multiple nodes using ChainerMN_, which di
 In order to run distributed Chainer training on SageMaker, your training script should use a ``chainermn`` Communicator
 object to coordinate training between multiple hosts.

-SageMaker runs your script with ``mpirun`` if ``train_instance_count`` is greater than two.
+SageMaker runs your script with ``mpirun`` if ``instance_count`` is greater than two.
 The following optional arguments modify how MPI runs your distributed training script.

 - ``use_mpi`` Boolean that overrides whether to run your training script with MPI.
@@ -221,8 +221,8 @@ operation.

     # Train my estimator
     chainer_estimator = Chainer(entry_point='train_and_deploy.py',
-        train_instance_type='ml.p3.2xlarge',
-        train_instance_count=1,
+        instance_type='ml.p3.2xlarge',
+        instance_count=1,
         framework_version='5.0.0',
         py_version='py3')
     chainer_estimator.fit('s3://my_bucket/my_training_data/')
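
Building on the two hunks above, distributed Chainer training only requires raising ``instance_count`` (optionally forcing MPI with the ``use_mpi`` override described earlier). A minimal sketch using the renamed arguments; the role and data path are placeholders:

.. code:: python

    from sagemaker.chainer import Chainer

    # Two instances launch an MPI-based distributed training job.
    chainer_estimator = Chainer(
        entry_point="chainer-train.py",
        role="SageMakerRole",            # placeholder IAM role
        instance_type="ml.p3.2xlarge",
        instance_count=2,
        use_mpi=True,
        framework_version="5.0.0",
        py_version="py3",
    )
    chainer_estimator.fit("s3://my_bucket/my_training_data/")
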
8 changes: 4 additions & 4 deletions doc/frameworks/mxnet/using_mxnet.rst
@@ -181,8 +181,8 @@ The following code sample shows how you train a custom MXNet script "train.py".
 .. code:: python

     mxnet_estimator = MXNet('train.py',
-        train_instance_type='ml.p2.xlarge',
-        train_instance_count=1,
+        instance_type='ml.p2.xlarge',
+        instance_count=1,
         framework_version='1.6.0',
         py_version='py3',
         hyperparameters={'batch-size': 100,
@@ -233,8 +233,8 @@ If you use the ``MXNet`` estimator to train the model, you can call ``deploy`` t
     mxnet_estimator = MXNet('train.py',
         framework_version='1.6.0',
         py_version='py3',
-        train_instance_type='ml.p2.xlarge',
-        train_instance_count=1)
+        instance_type='ml.p2.xlarge',
+        instance_count=1)
     mxnet_estimator.fit('s3://my_bucket/my_training_data/')

     # Deploy my estimator to an Amazon SageMaker Endpoint and get a Predictor
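
The ``deploy`` call itself sits just below the fold; as an illustrative sketch of what it typically looks like (the serving instance type here is an assumption, not taken from the diff):

.. code:: python

    # Deploy the trained model behind a real-time endpoint and get a Predictor back.
    predictor = mxnet_estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.m4.xlarge",
    )
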
10 changes: 5 additions & 5 deletions doc/frameworks/pytorch/using_pytorch.rst
@@ -152,8 +152,8 @@ directories ('train' and 'test').
 .. code:: python

     pytorch_estimator = PyTorch('pytorch-train.py',
-        train_instance_type='ml.p3.2xlarge',
-        train_instance_count=1,
+        instance_type='ml.p3.2xlarge',
+        instance_count=1,
         framework_version='1.5.0',
         py_version='py3',
         hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})
@@ -201,7 +201,7 @@ Distributed PyTorch Training
 ============================

 You can run a multi-machine, distributed PyTorch training using the PyTorch Estimator. By default, PyTorch objects will
-submit single-machine training jobs to SageMaker. If you set ``train_instance_count`` to be greater than one, multi-machine
+submit single-machine training jobs to SageMaker. If you set ``instance_count`` to be greater than one, multi-machine
 training jobs will be launched when ``fit`` is called. When you run multi-machine training, SageMaker will import your
 training script and run it on each host in the cluster.
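
To make the paragraph above concrete, a minimal multi-machine sketch (entry point, role, and data path are placeholders):

.. code:: python

    from sagemaker.pytorch import PyTorch

    # instance_count > 1 makes fit() launch a multi-machine training job;
    # SageMaker runs the training script on every host in the cluster.
    pytorch_estimator = PyTorch(
        entry_point="pytorch-train.py",
        role="SageMakerRole",            # placeholder IAM role
        instance_type="ml.p3.2xlarge",
        instance_count=2,
        framework_version="1.5.0",
        py_version="py3",
    )
    pytorch_estimator.fit("s3://my_bucket/my_training_data/")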

@@ -246,8 +246,8 @@ operation.

     # Train my estimator
     pytorch_estimator = PyTorch(entry_point='train_and_deploy.py',
-        train_instance_type='ml.p3.2xlarge',
-        train_instance_count=1,
+        instance_type='ml.p3.2xlarge',
+        instance_count=1,
         framework_version='1.5.0',
         py_version='py3')
     pytorch_estimator.fit('s3://my_bucket/my_training_data/')
12 changes: 6 additions & 6 deletions doc/frameworks/rl/using_rl.rst
@@ -31,8 +31,8 @@ You can then create an ``RLEstimator`` with keyword arguments to point to this s
         toolkit_version='0.11.1',
         framework=RLFramework.TENSORFLOW,
         role='SageMakerRole',
-        train_instance_type='ml.p3.2xlarge',
-        train_instance_count=1)
+        instance_type='ml.p3.2xlarge',
+        instance_count=1)

After that, you simply tell the estimator to start a training job:

@@ -81,9 +81,9 @@ these in the constructor, either positionally or as keyword arguments.
   endpoints use this role to access training data and model artifacts.
   After the endpoint is created, the inference code might use the IAM
   role, if accessing AWS resource.
-- ``train_instance_count`` Number of Amazon EC2 instances to use for
+- ``instance_count`` Number of Amazon EC2 instances to use for
   training.
-- ``train_instance_type`` Type of EC2 instance to use for training, for
+- ``instance_type`` Type of EC2 instance to use for training, for
   example, 'ml.m4.xlarge'.

You must as well include either:
@@ -158,8 +158,8 @@ In case if ``image_uri`` was specified it would use provided image for the deplo
         toolkit_version='0.11.0',
         framework=RLFramework.MXNET,
         role='SageMakerRole',
-        train_instance_type='ml.c4.2xlarge',
-        train_instance_count=1)
+        instance_type='ml.c4.2xlarge',
+        instance_count=1)

     rl_estimator.fit()

4 changes: 2 additions & 2 deletions doc/frameworks/sklearn/using_sklearn.rst
@@ -134,7 +134,7 @@ directories ('train' and 'test').
 .. code:: python

     sklearn_estimator = SKLearn('sklearn-train.py',
-        train_instance_type='ml.m4.xlarge',
+        instance_type='ml.m4.xlarge',
         framework_version='0.20.0',
         hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})
     sklearn_estimator.fit({'train': 's3://my-data-bucket/path/to/my/training/data',
@@ -198,7 +198,7 @@ operation.

     # Train my estimator
     sklearn_estimator = SKLearn(entry_point='train_and_deploy.py',
-        train_instance_type='ml.m4.xlarge',
+        instance_type='ml.m4.xlarge',
         framework_version='0.20.0')
     sklearn_estimator.fit('s3://my_bucket/my_training_data/')

16 changes: 10 additions & 6 deletions doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
@@ -22,17 +22,21 @@ estimator object to create a SageMaker Endpoint:

     from sagemaker.tensorflow import TensorFlow

-    estimator = TensorFlow(entry_point='tf-train.py', ..., train_instance_count=1,
-                           train_instance_type='ml.c4.xlarge', framework_version='1.11')
+    estimator = TensorFlow(
+        entry_point="tf-train.py",
+        ...,
+        instance_count=1,
+        instance_type="ml.c4.xlarge",
+        framework_version="2.2",
+        py_version="py37",
+    )

     estimator.fit(inputs)

-    predictor = estimator.deploy(initial_instance_count=1,
-                                 instance_type='ml.c5.xlarge',
-                                 endpoint_type='tensorflow-serving')
+    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")


-The code block above deploys a SageMaker Endpoint with one instance of the type 'ml.c5.xlarge'.
+The code block above deploys a SageMaker Endpoint with one instance of the type "ml.c5.xlarge".
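
Once the endpoint is live, the returned predictor can be invoked directly; a minimal sketch with made-up input:

.. code:: python

    # Input shape depends on the SavedModel's serving signature; this is illustrative only.
    result = predictor.predict([[6.4, 3.2, 4.5, 1.5]])
    print(result)  # typically a dict like {"predictions": [...]}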

What happens when deploy is called
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2 changes: 1 addition & 1 deletion doc/frameworks/tensorflow/upgrade_from_legacy.rst
@@ -105,7 +105,7 @@ the difference in code would be as follows:
         source_dir="code",
         framework_version="1.10.0",
         py_version="py2",
-        train_instance_type="ml.m4.xlarge",
+        instance_type="ml.m4.xlarge",
         image_uri="520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow:1.10.0-cpu-py2",
         hyperparameters={
             "training_steps": 100,