
Commit a0f1a78

doc: remove 'train_*' prefix from estimator parameters (#1689)
1 parent d26d3f6 commit a0f1a78

14 files changed: +157 -128 lines
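For orientation, a minimal before/after sketch of what the rename means for user code (the script name, role, and instance values are illustrative and not taken from this commit; a given SDK version accepts only one of the two spellings):

    from sagemaker.tensorflow import TensorFlow

    role = "SageMakerRole"  # placeholder execution role, as in the RL examples below

    # Before: legacy parameter names with the 'train_' prefix
    estimator = TensorFlow(
        entry_point="train.py",            # hypothetical training script
        role=role,
        framework_version="2.2",
        py_version="py37",
        train_instance_count=1,
        train_instance_type="ml.m4.xlarge",
    )

    # After: the same call with the renamed parameters
    estimator = TensorFlow(
        entry_point="train.py",
        role=role,
        framework_version="2.2",
        py_version="py37",
        instance_count=1,
        instance_type="ml.m4.xlarge",
    )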

doc/algorithms/linear_learner.rst

+1 -1
@@ -8,7 +8,7 @@ The Amazon SageMaker LinearLearner algorithm.
     :undoc-members:
     :show-inheritance:
     :inherited-members:
-    :exclude-members: image_uri, train_instance_count, train_instance_type, predictor_type, binary_classifier_model_selection_criteria, target_recall, target_precision, positive_example_weight_mult, epochs, use_bias, num_models, parameter, num_calibration_samples, calibration, init_method, init_scale, init_sigma, init_bias, optimizer, loss, wd, l1, momentum, learning_rate, beta_1, beta_2, bias_lr_mult, use_lr_scheduler, lr_scheduler_step, lr_scheduler_factor, lr_scheduler_minimum_lr, lr_scheduler_minimum_lr, mini_batch_size, feature_dim, bias_wd_mult, MAX_DEFAULT_BATCH_SIZE
+    :exclude-members: image_uri, instance_count, instance_type, predictor_type, binary_classifier_model_selection_criteria, target_recall, target_precision, positive_example_weight_mult, epochs, use_bias, num_models, parameter, num_calibration_samples, calibration, init_method, init_scale, init_sigma, init_bias, optimizer, loss, wd, l1, momentum, learning_rate, beta_1, beta_2, bias_lr_mult, use_lr_scheduler, lr_scheduler_step, lr_scheduler_factor, lr_scheduler_minimum_lr, lr_scheduler_minimum_lr, mini_batch_size, feature_dim, bias_wd_mult, MAX_DEFAULT_BATCH_SIZE

 .. autoclass:: sagemaker.LinearLearnerModel
     :members:

doc/amazon_sagemaker_debugger.rst

+14 -14
@@ -54,8 +54,8 @@ The ``DebuggerHookConfig`` accepts one or more objects of type ``CollectionConfi
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     debugger_hook_config=debugger_hook_config
 )
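As a side note, a hedged sketch of how a ``debugger_hook_config`` object like the one passed above might be assembled from ``CollectionConfig`` entries (the S3 path and collection settings are placeholders, not part of this commit):

    from sagemaker.debugger import CollectionConfig, DebuggerHookConfig

    # Placeholder output location and collection settings, for illustration only
    debugger_hook_config = DebuggerHookConfig(
        s3_output_path="s3://my-bucket/debugger-output",   # hypothetical bucket
        collection_configs=[
            CollectionConfig(
                name="weights",
                parameters={"save_interval": "100"},
            ),
        ],
    )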
@@ -215,8 +215,8 @@ Sample Usages
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     rules=[Rule.sagemaker(vanishing_gradient())]
 )
@@ -232,8 +232,8 @@ In the example above, Amazon SageMaker pulls the collection configuration best s
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     rules=[Rule.sagemaker(vanishing_gradient()), Rule.sagemaker(weight_update_ratio())]
 )
@@ -269,8 +269,8 @@ Here we modify the ``weight_update_ratio`` rule to store a custom collection rat
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     rules=[
         Rule.sagemaker(vanishing_gradient()),
         wur_with_customization
@@ -317,8 +317,8 @@ To evaluate the custom rule against the training:
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     rules=[
         custom_gradient_rule
     ]
@@ -344,8 +344,8 @@ To enable the debugging hook to emit TensorBoard data, you need to specify the n
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     tensorboard_output_config=tensorboard_output_config
 )
@@ -392,8 +392,8 @@ To disable the hook initialization, you can do so by specifying ``False`` for va
 estimator = TensorFlow(
     role=role,
-    train_instance_count=1,
-    train_instance_type=train_instance_type,
+    instance_count=1,
+    instance_type=instance_type,
     debugger_hook_config=False
 )

doc/frameworks/chainer/using_chainer.rst

+5 -5
@@ -138,8 +138,8 @@ directories ('train' and 'test').
 .. code:: python

     chainer_estimator = Chainer('chainer-train.py',
-                                train_instance_type='ml.p3.2xlarge',
-                                train_instance_count=1,
+                                instance_type='ml.p3.2xlarge',
+                                instance_count=1,
                                 framework_version='5.0.0',
                                 py_version='py3',
                                 hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})
@@ -191,7 +191,7 @@ Chainer allows you to train a model on multiple nodes using ChainerMN_, which di
 In order to run distributed Chainer training on SageMaker, your training script should use a ``chainermn`` Communicator
 object to coordinate training between multiple hosts.

-SageMaker runs your script with ``mpirun`` if ``train_instance_count`` is greater than two.
+SageMaker runs your script with ``mpirun`` if ``instance_count`` is greater than two.
 The following are optional arguments modify how MPI runs your distributed training script.

 - ``use_mpi`` Boolean that overrides whether to run your training script with MPI.
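As a rough illustration of the renamed parameters in this distributed setting (the ``use_mpi`` keyword follows the option listed above; the script name, role, and instance values are placeholders, not from this commit):

    from sagemaker.chainer import Chainer

    # Hypothetical multi-node ChainerMN job; values are illustrative
    chainer_estimator = Chainer('chainer-train.py',
                                role='SageMakerRole',
                                instance_type='ml.p3.2xlarge',
                                instance_count=2,   # more than one instance for distributed training
                                use_mpi=True,       # per the option described above
                                framework_version='5.0.0',
                                py_version='py3')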
@@ -221,8 +221,8 @@ operation.
     # Train my estimator
     chainer_estimator = Chainer(entry_point='train_and_deploy.py',
-                                train_instance_type='ml.p3.2xlarge',
-                                train_instance_count=1,
+                                instance_type='ml.p3.2xlarge',
+                                instance_count=1,
                                 framework_version='5.0.0',
                                 py_version='py3')
     chainer_estimator.fit('s3://my_bucket/my_training_data/')

doc/frameworks/mxnet/using_mxnet.rst

+4 -4
@@ -181,8 +181,8 @@ The following code sample shows how you train a custom MXNet script "train.py".
 .. code:: python

     mxnet_estimator = MXNet('train.py',
-                            train_instance_type='ml.p2.xlarge',
-                            train_instance_count=1,
+                            instance_type='ml.p2.xlarge',
+                            instance_count=1,
                             framework_version='1.6.0',
                             py_version='py3',
                             hyperparameters={'batch-size': 100,
@@ -233,8 +233,8 @@ If you use the ``MXNet`` estimator to train the model, you can call ``deploy`` t
     mxnet_estimator = MXNet('train.py',
                             framework_version='1.6.0',
                             py_version='py3',
-                            train_instance_type='ml.p2.xlarge',
-                            train_instance_count=1)
+                            instance_type='ml.p2.xlarge',
+                            instance_count=1)
     mxnet_estimator.fit('s3://my_bucket/my_training_data/')

     # Deploy my estimator to an Amazon SageMaker Endpoint and get a Predictor

doc/frameworks/pytorch/using_pytorch.rst

+5 -5
@@ -152,8 +152,8 @@ directories ('train' and 'test').
 .. code:: python

     pytorch_estimator = PyTorch('pytorch-train.py',
-                                train_instance_type='ml.p3.2xlarge',
-                                train_instance_count=1,
+                                instance_type='ml.p3.2xlarge',
+                                instance_count=1,
                                 framework_version='1.5.0',
                                 py_version='py3',
                                 hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})
@@ -201,7 +201,7 @@ Distributed PyTorch Training
 ============================

 You can run a multi-machine, distributed PyTorch training using the PyTorch Estimator. By default, PyTorch objects will
-submit single-machine training jobs to SageMaker. If you set ``train_instance_count`` to be greater than one, multi-machine
+submit single-machine training jobs to SageMaker. If you set ``instance_count`` to be greater than one, multi-machine
 training jobs will be launched when ``fit`` is called. When you run multi-machine training, SageMaker will import your
 training script and run it on each host in the cluster.

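A brief sketch of the multi-machine case described above, with placeholder values (the script name, role, and instance settings are illustrative, not from this commit):

    from sagemaker.pytorch import PyTorch

    # instance_count greater than one makes fit() launch a multi-machine training job
    pytorch_estimator = PyTorch('pytorch-train.py',
                                role='SageMakerRole',
                                instance_type='ml.p3.2xlarge',
                                instance_count=2,
                                framework_version='1.5.0',
                                py_version='py3')
    pytorch_estimator.fit('s3://my_bucket/my_training_data/')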
@@ -246,8 +246,8 @@ operation.
     # Train my estimator
     pytorch_estimator = PyTorch(entry_point='train_and_deploy.py',
-                                train_instance_type='ml.p3.2xlarge',
-                                train_instance_count=1,
+                                instance_type='ml.p3.2xlarge',
+                                instance_count=1,
                                 framework_version='1.5.0',
                                 py_version='py3')
     pytorch_estimator.fit('s3://my_bucket/my_training_data/')

doc/frameworks/rl/using_rl.rst

+6 -6
@@ -31,8 +31,8 @@ You can then create an ``RLEstimator`` with keyword arguments to point to this s
                             toolkit_version='0.11.1',
                             framework=RLFramework.TENSORFLOW,
                             role='SageMakerRole',
-                            train_instance_type='ml.p3.2xlarge',
-                            train_instance_count=1)
+                            instance_type='ml.p3.2xlarge',
+                            instance_count=1)

 After that, you simply tell the estimator to start a training job:
@@ -81,9 +81,9 @@ these in the constructor, either positionally or as keyword arguments.
   endpoints use this role to access training data and model artifacts.
   After the endpoint is created, the inference code might use the IAM
   role, if accessing AWS resource.
-- ``train_instance_count`` Number of Amazon EC2 instances to use for
+- ``instance_count`` Number of Amazon EC2 instances to use for
   training.
-- ``train_instance_type`` Type of EC2 instance to use for training, for
+- ``instance_type`` Type of EC2 instance to use for training, for
   example, 'ml.m4.xlarge'.

 You must as well include either:
@@ -158,8 +158,8 @@ In case if ``image_uri`` was specified it would use provided image for the deplo
                                toolkit_version='0.11.0',
                                framework=RLFramework.MXNET,
                                role='SageMakerRole',
-                               train_instance_type='ml.c4.2xlarge',
-                               train_instance_count=1)
+                               instance_type='ml.c4.2xlarge',
+                               instance_count=1)

     rl_estimator.fit()
doc/frameworks/sklearn/using_sklearn.rst

+2 -2
@@ -134,7 +134,7 @@ directories ('train' and 'test').
 .. code:: python

     sklearn_estimator = SKLearn('sklearn-train.py',
-                                train_instance_type='ml.m4.xlarge',
+                                instance_type='ml.m4.xlarge',
                                 framework_version='0.20.0',
                                 hyperparameters = {'epochs': 20, 'batch-size': 64, 'learning-rate': 0.1})
     sklearn_estimator.fit({'train': 's3://my-data-bucket/path/to/my/training/data',
@@ -198,7 +198,7 @@ operation.
     # Train my estimator
     sklearn_estimator = SKLearn(entry_point='train_and_deploy.py',
-                                train_instance_type='ml.m4.xlarge',
+                                instance_type='ml.m4.xlarge',
                                 framework_version='0.20.0')
     sklearn_estimator.fit('s3://my_bucket/my_training_data/')

doc/frameworks/tensorflow/deploying_tensorflow_serving.rst

+10 -6
@@ -22,17 +22,21 @@ estimator object to create a SageMaker Endpoint:
     from sagemaker.tensorflow import TensorFlow

-    estimator = TensorFlow(entry_point='tf-train.py', ..., train_instance_count=1,
-                           train_instance_type='ml.c4.xlarge', framework_version='1.11')
+    estimator = TensorFlow(
+        entry_point="tf-train.py",
+        ...,
+        instance_count=1,
+        instance_type="ml.c4.xlarge",
+        framework_version="2.2",
+        py_version="py37",
+    )

     estimator.fit(inputs)

-    predictor = estimator.deploy(initial_instance_count=1,
-                                 instance_type='ml.c5.xlarge',
-                                 endpoint_type='tensorflow-serving')
+    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")

-The code block above deploys a SageMaker Endpoint with one instance of the type 'ml.c5.xlarge'.
+The code block above deploys a SageMaker Endpoint with one instance of the type "ml.c5.xlarge".

 What happens when deploy is called
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

doc/frameworks/tensorflow/upgrade_from_legacy.rst

+1 -1
@@ -105,7 +105,7 @@ the difference in code would be as follows:
         source_dir="code",
         framework_version="1.10.0",
         py_version="py2",
-        train_instance_type="ml.m4.xlarge",
+        instance_type="ml.m4.xlarge",
         image_uri="520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow:1.10.0-cpu-py2",
         hyperparameters={
             "training_steps": 100,
