Commit 4a30f56

mfranklin1 authored and laurenyu committed

Convert file to ASCII from UTF-8 to avoid compile failure on Windows (#15)

* Convert file to ascii from utf-8 to avoid compile (and install) failure on windows
* Replace all ansi 93 and 94 'smart' quote characters with regular ansi 22 octal quote character
* Replace characters 92 hex
* Replace character 91

1 parent bb677bc commit 4a30f56

File tree

1 file changed: +31 -31 lines changed

README.rst (+31 -31)
@@ -78,7 +78,7 @@ Building Sphinx docs
 
     make html
 
-You can edit the templates for any of the pages in the docs by editing the .rst files in the “doc” directory and then running “``make html``” again.
+You can edit the templates for any of the pages in the docs by editing the .rst files in the "doc" directory and then running "``make html``" again.
 
 
 SageMaker Python SDK Overview
@@ -109,7 +109,7 @@ With MXNet Estimators, you can train and host MXNet models on Amazon SageMaker.
 Training with MXNet
 ~~~~~~~~~~~~~~~~~~~
 
-Training MXNet models using ``MXNet`` Estimators is a two-step process. First, you prepare your training script, then second, you run this on SageMaker via an ``MXNet`` Estimator. You should prepare your script in a separate source file than the notebook, terminal session, or source file you’re using to submit the script to SageMaker via an ``MXNet`` Estimator.
+Training MXNet models using ``MXNet`` Estimators is a two-step process. First, you prepare your training script, then second, you run this on SageMaker via an ``MXNet`` Estimator. You should prepare your script in a separate source file than the notebook, terminal session, or source file you're using to submit the script to SageMaker via an ``MXNet`` Estimator.
 
 Suppose that you already have an MXNet training script called
 ``mxnet-train.py``. You can run this script in SageMaker as follows:
@@ -122,7 +122,7 @@ Suppose that you already have an MXNet training script called
 
 Where the s3 url is a path to your training data, within Amazon S3. The constructor keyword arguments define how SageMaker runs your training script and are discussed, in detail, in a later section.
 
-In the following sections, we’ll discuss how to prepare a training script for execution on SageMaker, then how to run that script on SageMaker using an ``MXNet`` Estimator.
+In the following sections, we'll discuss how to prepare a training script for execution on SageMaker, then how to run that script on SageMaker using an ``MXNet`` Estimator.
 
 Preparing the MXNet training script
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -135,13 +135,13 @@ When you run your script on SageMaker via the ``MXNet`` Estimator, SageMaker inj
   to SageMaker TrainingJob that runs your MXNet training script. You
   can use this to pass hyperparameters to your training script.
 - ``input_data_config (dict[string,dict])``: The SageMaker TrainingJob
-  InputDataConfig object, that’s set when the SageMaker TrainingJob is
+  InputDataConfig object, that's set when the SageMaker TrainingJob is
   created. This is discussed in more detail below.
 - ``channel_input_dirs (dict[string,string])``: A collection of
   directories containing training data. When you run training, you can
-  partition your training data into different logical “channels”.
-  Depending on your problem, some common channel ideas are: “train”,
-  “test”, “evaluation” or ‘images’,”labels”.
+  partition your training data into different logical "channels".
+  Depending on your problem, some common channel ideas are: "train",
+  "test", "evaluation" or "images',"labels".
 - ``output_data_dir (str)``: A directory where your training script can
   write data that will be moved to s3 after training is complete.
 - ``num_gpus (int)``: The number of GPU devices available on your
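The ``channel_input_dirs`` argument changed in the hunk above is just a mapping from logical channel name to a data directory inside the container. A minimal sketch in plain Python — the container paths and channel names below are illustrative assumptions, not values guaranteed by SageMaker:

```python
# Hypothetical channel_input_dirs mapping a training script might receive;
# the container paths below are assumptions for illustration only.
channel_input_dirs = {
    "train": "/opt/ml/input/data/train",
    "test": "/opt/ml/input/data/test",
}

def channel_dir(channels, name):
    """Look up one logical channel's data directory, failing loudly if the
    channel was not configured for this training job."""
    try:
        return channels[name]
    except KeyError:
        raise KeyError("channel %r not found; available: %s"
                       % (name, sorted(channels)))

print(channel_dir(channel_input_dirs, "train"))  # -> /opt/ml/input/data/train
```

A training script would typically read its dataset files out of each of these directories rather than hard-coding paths.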
@@ -161,7 +161,7 @@ A training script that takes advantage of all arguments would have the following
               num_gpus, num_cpus, hosts, current_host):
         pass
 
-You don’t have to use all the arguments, arguments you don’t care about can be ignored by including ``**kwargs``.
+You don't have to use all the arguments, arguments you don't care about can be ignored by including ``**kwargs``.
 
 .. code:: python
 
@@ -170,7 +170,7 @@ You don’t have to use all the arguments, arguments you don’t care about can
         pass
 
 **Note: Writing a training script that imports correctly**
-When SageMaker runs your training script, it imports it as a Python module and then invokes ``train`` on the imported module. Consequently, you should not include any statements that won’t execute successfully in SageMaker when your module is imported. For example, don’t attempt to open any local files in top-level statements in your training script.
+When SageMaker runs your training script, it imports it as a Python module and then invokes ``train`` on the imported module. Consequently, you should not include any statements that won't execute successfully in SageMaker when your module is imported. For example, don't attempt to open any local files in top-level statements in your training script.
 
 If you want to run your training script locally via the Python interpreter, look at using a ``___name__ == '__main__'`` guard, discussed in more detail here: https://stackoverflow.com/questions/419163/what-does-if-name-main-do .
 
@@ -182,7 +182,7 @@ You can import both ``mxnet`` and ``numpy`` in your training script. When your s
 Running an MXNet training script in SageMaker
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-You run MXNet training scripts on SageMaker by creating ``MXNet`` Estimators. SageMaker training of your script is invoked when you call ``fit`` on an ``MXNet`` Estimator. The following code sample shows how you train a custom MXNet script “train.py”.
+You run MXNet training scripts on SageMaker by creating ``MXNet`` Estimators. SageMaker training of your script is invoked when you call ``fit`` on an ``MXNet`` Estimator. The following code sample shows how you train a custom MXNet script "train.py".
 
 .. code:: python
 
@@ -211,7 +211,7 @@ The following are required arguments to the ``MXNet`` constructor. When you crea
 - ``train_instance_count`` Number of Amazon EC2 instances to use for
   training.
 - ``train_instance_type`` Type of EC2 instance to use for training, for
-  example, ‘ml.c4.xlarge’.
+  example, 'ml.c4.xlarge'.
 
 Optional arguments
 ''''''''''''''''''
@@ -231,12 +231,12 @@ The following are optional arguments. When you create an ``MXNet`` object, you c
   model training code.
 - ``train_volume_size`` Size in GB of the EBS volume to use for storing
   input data during training. Must be large enough to store training
-  data if input_mode=‘File’ is used (which is the default).
+  data if input_mode='File' is used (which is the default).
 - ``train_max_run`` Timeout in hours for training, after which Amazon
   SageMaker terminates the job regardless of its current status.
 - ``input_mode`` The input mode that the algorithm supports. Valid
-  modes: ‘File’ - Amazon SageMaker copies the training dataset from the
-  s3 location to a directory in the Docker container. ‘Pipe’ - Amazon
+  modes: 'File' - Amazon SageMaker copies the training dataset from the
+  s3 location to a directory in the Docker container. 'Pipe' - Amazon
   SageMaker streams data directly from s3 to the container via a Unix
   named pipe.
 - ``output_path`` s3 location where you want the training result (model
@@ -292,7 +292,7 @@ Just as you enable training by defining a ``train`` function in your training sc
 
 SageMaker provides a default implementation of ``save`` that works with MXNet Module API ``Module`` objects. If your training script does not define a ``save`` function, then the default ``save`` function will be invoked on the return-value of your ``train`` function.
 
-The following script demonstrates how to return a model from train, that’s compatible with the default ``save`` function.
+The following script demonstrates how to return a model from train, that's compatible with the default ``save`` function.
 
 .. code:: python
 
@@ -325,7 +325,7 @@ After your training job is complete, your model data will available in the s3 ``
 MXNet Module serialization in SageMaker
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-If you train function returns a ``Module`` object, it will be serialized by the default Module serialization system, unless you’ve specified a custom ``save`` function.
+If you train function returns a ``Module`` object, it will be serialized by the default Module serialization system, unless you've specified a custom ``save`` function.
 
 The default serialization system generates three files:
 
@@ -369,7 +369,7 @@ After your ``train`` function completes, SageMaker will invoke ``save`` with the
 
 **Note: How to save Gluon models with SageMaker**
 
-If your train function returns a Gluon API ``net`` object as its model, you’ll need to write your own ``save`` function. You will want to serialize the ``net`` parameters. Saving ``net`` parameters is covered in the `Serialization section <http://gluon.mxnet.io/chapter03_deep-neural-networks/serialization.html>`__ of the collaborative Gluon deep-learning book `“The Straight Dope” <http://gluon.mxnet.io/index.html>`__.
+If your train function returns a Gluon API ``net`` object as its model, you'll need to write your own ``save`` function. You will want to serialize the ``net`` parameters. Saving ``net`` parameters is covered in the `Serialization section <http://gluon.mxnet.io/chapter03_deep-neural-networks/serialization.html>`__ of the collaborative Gluon deep-learning book `"The Straight Dope" <http://gluon.mxnet.io/index.html>`__.
 
 Deploying MXNet models
 ~~~~~~~~~~~~~~~~~~~~~~
@@ -408,9 +408,9 @@ As with MXNet training, you configure the MXNet model server by defining functio
 Model loading
 ^^^^^^^^^^^^^
 
-Before a model can be served, it must be loaded. The SageMaker model server loads your model by invoking a ``model_fn`` function on your training script. If you don’t provide a ``model_fn`` function, SageMaker will use a default ``model_fn`` function. The default function works with MXNet Module model objects, saved via the default ``save`` function.
+Before a model can be served, it must be loaded. The SageMaker model server loads your model by invoking a ``model_fn`` function on your training script. If you don't provide a ``model_fn`` function, SageMaker will use a default ``model_fn`` function. The default function works with MXNet Module model objects, saved via the default ``save`` function.
 
-If you wrote a custom ``save`` function then you may need to write a custom ``model_fn`` function. If your save function serializes ``Module`` objects under the same format as the default ``save`` function, then you won’t need to write a custom model_fn function. If you do write a ``model_fn`` function must have the following signature:
+If you wrote a custom ``save`` function then you may need to write a custom ``model_fn`` function. If your save function serializes ``Module`` objects under the same format as the default ``save`` function, then you won't need to write a custom model_fn function. If you do write a ``model_fn`` function must have the following signature:
 
 .. code:: python
 
@@ -482,11 +482,11 @@ Input processing
 
 When an InvokeEndpoint operation is made against an Endpoint running a SageMaker MXNet model server, the model server receives two pieces of information:
 
-- The request Content-Type, for example “application/json”
+- The request Content-Type, for example "application/json"
 - The request data body, a byte array which is at most 5 MB (5 \* 1024
   \* 1024 bytes) in size.
 
-The SageMaker MXNet model server will invoke an “input_fn” function in your training script, passing in this information. If you define an ``input_fn`` function definition, it should return an object that can be passed to ``predict_fn`` and have the following signature:
+The SageMaker MXNet model server will invoke an "input_fn" function in your training script, passing in this information. If you define an ``input_fn`` function definition, it should return an object that can be passed to ``predict_fn`` and have the following signature:
 
 .. code:: python
 
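The content-type dispatch this hunk describes can be sketched without MXNet. The stand-in below returns nested Python lists where the real default ``input_fn`` would build an ``NDArrayIter``; its behavior is a simplified assumption, not the library's exact code:

```python
import json

def input_fn(request_body, request_content_type):
    """Simplified input_fn sketch: deserialize JSON or CSV bytes into
    nested Python lists (a stand-in for an MXNet NDArrayIter)."""
    if request_content_type == "application/json":
        data = json.loads(request_body)
        if not isinstance(data, list):
            raise ValueError("JSON request body must be a single list")
        return data
    if request_content_type == "text/csv":
        # One row per line; each line break bounds the first dimension.
        return [[float(v) for v in line.split(",")]
                for line in request_body.decode("utf-8").splitlines() if line]
    raise ValueError("unsupported content type: %r" % request_content_type)

print(input_fn(b"1,2\n3,4\n", "text/csv"))  # -> [[1.0, 2.0], [3.0, 4.0]]
```

Rejecting unknown content types with an error mirrors the server's two supported formats.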
@@ -496,7 +496,7 @@ Where ``request_body`` is a byte buffer, ``request_content_type`` is a Python st
 
 The SageMaker MXNet model server provides a default implementation of ``input_fn``. This function deserializes JSON or CSV encoded data into an MXNet ``NDArrayIter`` `(external API docs) <https://mxnet.incubator.apache.org/api/python/io.html#mxnet.io.NDArrayIter>`__ multi-dimensional array iterator. This works with the default ``predict_fn`` implementation, which expects an ``NDArrayIter`` as input.
 
-Default json deserialization requires ``request_body`` contain a single json list. Sending multiple json objects within the same ``request_body`` is not supported. The list must have a dimensionality compatible with the MXNet ``net`` or ``Module`` object. Specifically, after the list is loaded, it’s either padded or split to fit the first dimension of the model input shape. The list’s shape must be identical to the model’s input shape, for all dimensions after the first.
+Default json deserialization requires ``request_body`` contain a single json list. Sending multiple json objects within the same ``request_body`` is not supported. The list must have a dimensionality compatible with the MXNet ``net`` or ``Module`` object. Specifically, after the list is loaded, it's either padded or split to fit the first dimension of the model input shape. The list's shape must be identical to the model's input shape, for all dimensions after the first.
 
 Default csv deserialization requires ``request_body`` contain one or more lines of CSV numerical data. The data is loaded into a two-dimensional array, where each line break defines the boundaries of the first dimension. This two-dimensional array is then re-shaped to be compatible with the shape expected by the model object. Specifically, the first dimension is kept unchanged, but the second dimension is reshaped to be consistent with the shape of all dimensions in the model, following the first dimension.
 
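The "padded or split to fit the first dimension" step for JSON lists can be pictured with a toy helper. Here truncation stands in for the batch-splitting the real server performs, so this is an illustrative assumption rather than the server's actual logic:

```python
def fit_first_dim(rows, n):
    """Force a list of rows to exactly n entries along the first dimension:
    truncate when too long (a stand-in for batch-splitting), pad by
    repeating the last row when too short."""
    if not rows:
        raise ValueError("rows must be non-empty")
    if len(rows) >= n:
        return rows[:n]
    return rows + [rows[-1]] * (n - len(rows))

print(fit_first_dim([[1], [2], [3]], 2))  # -> [[1], [2]]
print(fit_first_dim([[1]], 3))            # -> [[1], [1], [1]]
```

All dimensions after the first are left untouched, matching the rule that the list's trailing shape must already equal the model's input shape.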
@@ -566,7 +566,7 @@ The ``output_fn`` has the following signature:
 Where ``prediction`` is the result of invoking ``predict_fn`` and
 ``content_type`` is the InvokeEndpoint requested response content-type. The function should return a byte array of data serialized to content_type.
 
-The default implementation expects ``prediction`` to be an ``NDArray`` and can serialize the result to either JSON or CSV. It accepts response content types of “application/json” and “text/csv”.
+The default implementation expects ``prediction`` to be an ``NDArray`` and can serialize the result to either JSON or CSV. It accepts response content types of "application/json" and "text/csv".
 
 Distributed MXNet training
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
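A plain-Python sketch of such an ``output_fn``, with nested lists standing in for the ``NDArray`` — a simplified assumption, not the model server's implementation:

```python
import json

def output_fn(prediction, content_type):
    """Serialize a nested-list 'prediction' to the requested content type,
    returning bytes as the model server expects."""
    if content_type == "application/json":
        return json.dumps(prediction).encode("utf-8")
    if content_type == "text/csv":
        lines = [",".join(str(v) for v in row) for row in prediction]
        return "\n".join(lines).encode("utf-8")
    raise ValueError("unsupported accept type: %r" % content_type)

print(output_fn([[1, 2], [3, 4]], "text/csv"))  # -> b'1,2\n3,4'
```

Note the symmetry with ``input_fn``: the same two content types are supported, and anything else is rejected.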
@@ -642,20 +642,20 @@ The MXNetModel constructor takes the following arguments:
   custom code will be uploaded to. If not specified, will use the
   SageMaker default bucket created by sagemaker.Session.
 - ``sagemaker_session (sagemaker.Session):`` The SageMaker Session
-  object, used for SageMaker interaction“”"
+  object, used for SageMaker interaction"""
 
 Your model data must be a .tar.gz file in S3. SageMaker Training Job model data is saved to .tar.gz files in S3, however if you have local data you want to deploy, you can prepare the data yourself.
 
-Assuming you have a local directory containg your model data named “my_model” you can tar and gzip compress the file and upload to S3 using the following commands:
+Assuming you have a local directory containg your model data named "my_model" you can tar and gzip compress the file and upload to S3 using the following commands:
 
 ::
 
     tar -czf model.tar.gz my_model
     aws s3 cp model.tar.gz s3://my-bucket/my-path/model.tar.gz
 
-This uploads the contents of my_model to a gzip compressed tar file to S3 in the bucket “my-bucket”, with the key “my-path/model.tar.gz”.
+This uploads the contents of my_model to a gzip compressed tar file to S3 in the bucket "my-bucket", with the key "my-path/model.tar.gz".
 
-To run this command, you’ll need the aws cli tool installed. Please refer to our `FAQ <#FAQ>`__ for more information on installing this.
+To run this command, you'll need the aws cli tool installed. Please refer to our `FAQ <#FAQ>`__ for more information on installing this.
 
 MXNet Training Examples
 ~~~~~~~~~~~~~~~~~~~~~~~
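The ``tar -czf`` step in the hunk above can equally be done from Python with the standard library's ``tarfile`` module. This sketch builds a throwaway "my_model" directory rather than assuming real model data exists:

```python
import os
import tarfile
import tempfile

def package_model(model_dir, out_path):
    """Gzip-compress model_dir into out_path, mirroring
    'tar -czf model.tar.gz my_model' (archive root = directory basename)."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(model_dir, arcname=os.path.basename(model_dir))
    return out_path

# Demo with a temporary stand-in for the "my_model" directory.
workdir = tempfile.mkdtemp()
model_dir = os.path.join(workdir, "my_model")
os.makedirs(model_dir)
with open(os.path.join(model_dir, "weights.bin"), "wb") as f:
    f.write(b"\x00")

archive = package_model(model_dir, os.path.join(workdir, "model.tar.gz"))
print(tarfile.is_tarfile(archive))  # -> True
```

Uploading the resulting archive to S3 would still be a separate step (for example with the aws cli command shown in the diff).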
@@ -1059,7 +1059,7 @@ The following are required arguments to the TensorFlow constructor.
 - ``train_instance_count (int)`` Number of Amazon EC2 instances to use for
   training.
 - ``train_instance_type (str)`` Type of EC2 instance to use for training, for
-  example, ‘ml.c4.xlarge’.
+  example, 'ml.c4.xlarge'.
 - ``training_steps (int)`` Perform this many steps of training. ``None``, means train forever.
 - ``evaluation_steps (int)`` Perform this many steps of evaluation. ``None``, means
   that evaluation runs until input from ``eval_input_fn`` is exhausted (or another exception is raised).
@@ -1441,15 +1441,15 @@ FAQ
 I want to train a SageMaker Estimator with local data, how do I do this?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-You’ll need to upload the data to S3 before training. You can use the AWS Command Line Tool (the aws cli) to achieve this.
+You'll need to upload the data to S3 before training. You can use the AWS Command Line Tool (the aws cli) to achieve this.
 
-If you don’t have the aws cli, you can install it using pip:
+If you don't have the aws cli, you can install it using pip:
 
 ::
 
     pip install awscli --upgrade --user
 
-If you don’t have pip or want to learn more about installing the aws cli, please refer to the official `Amazon aws cli installation guide <http://docs.aws.amazon.com/cli/latest/userguide/installing.html>`__.
+If you don't have pip or want to learn more about installing the aws cli, please refer to the official `Amazon aws cli installation guide <http://docs.aws.amazon.com/cli/latest/userguide/installing.html>`__.
 
 Once you have the aws cli installed, you can upload a directory of files to S3 with the following command:
 