Commit 46dd0bc

Merge branch 'dev' into registerbyo

2 parents a818fa4 + 17fe93e

File tree

1 file changed: +3 -3 lines changed


doc/frameworks/pytorch/using_pytorch.rst (+3 -3)
@@ -80,7 +80,7 @@ with the following:
 
     # ... load from args.train and args.test, train a model, write model to args.model_dir.
 
-Because the SageMaker imports your training script, you should put your training code in a main guard
+Because SageMaker imports your training script, you should put your training code in a main guard
 (``if __name__=='__main__':``) if you are using the same script to host your model, so that SageMaker does not
 inadvertently run your training code at the wrong point in execution.
 
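The main-guard advice in the hunk above can be sketched as a minimal training script. This is a hypothetical illustration, not code from the commit: the argument names and the ``SM_CHANNEL_TRAIN`` / ``SM_MODEL_DIR`` environment variables follow common SageMaker conventions, and ``train`` is a placeholder routine.

```python
# Hypothetical minimal SageMaker training script illustrating the main guard.
# Argument names and SM_* environment variables are assumptions based on
# common SageMaker conventions, not taken from the diff itself.
import argparse
import os


def train(train_dir, model_dir):
    """Placeholder training routine: read data from train_dir, write an artifact to model_dir."""
    os.makedirs(model_dir, exist_ok=True)
    with open(os.path.join(model_dir, "model.txt"), "w") as f:
        f.write("trained")


if __name__ == "__main__":
    # Because SageMaker imports this script when hosting the model, the
    # training code below only runs when the script is executed directly,
    # never on import.
    parser = argparse.ArgumentParser()
    parser.add_argument("--train", default=os.environ.get("SM_CHANNEL_TRAIN", "/tmp/train"))
    parser.add_argument("--model-dir", default=os.environ.get("SM_MODEL_DIR", "/tmp/model"))
    args = parser.parse_args()
    train(args.train, args.model_dir)
```

Without the guard, importing the module for serving would re-run the training code, which is exactly the failure mode the doc fix describes.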

@@ -177,7 +177,7 @@ fit Required Arguments
 case, the S3 objects rooted at the ``my-training-data`` prefix will
 be available in the default ``train`` channel. A dict from
 string channel names to S3 URIs. In this case, the objects rooted at
-each S3 prefix will available as files in each channel directory.
+each S3 prefix will be available as files in each channel directory.
 
 For example:
 
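The two input forms described in the hunk above can be sketched as follows. The bucket name and channel names here are illustrative assumptions, and the ``estimator.fit`` calls are shown only in comments because constructing an estimator requires the SageMaker SDK and AWS credentials.

```python
# Sketch of the two input forms accepted by estimator.fit(), per the passage
# above. Bucket and prefix names are hypothetical.

# Form 1: a single S3 URI. Objects rooted at this prefix land in the
# default "train" channel.
single_input = "s3://my-bucket/my-training-data"

# Form 2: a dict from channel names to S3 URIs. Objects rooted at each
# prefix are available as files in that channel's directory.
channel_inputs = {
    "train": "s3://my-bucket/my-training-data/train",
    "test": "s3://my-bucket/my-training-data/test",
}

# With the SageMaker Python SDK these would be used as, e.g.:
#   estimator.fit(single_input)     # default "train" channel
#   estimator.fit(channel_inputs)   # one directory per named channel
#
# Inside the training container, each channel is typically exposed via an
# environment variable such as SM_CHANNEL_TRAIN or SM_CHANNEL_TEST that
# points at the local directory holding that channel's files.
```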
@@ -391,7 +391,7 @@ If you are using PyTorch Elastic Inference 1.5.1, you should provide ``model_fn`
 The client-side Elastic Inference framework is CPU-only, even though inference still happens in a CUDA context on the server. Thus, the default ``model_fn`` for Elastic Inference loads the model to CPU. Tracing models may lead to tensor creation on a specific device, which may cause device-related errors when loading a model onto a different device. Providing an explicit ``map_location=torch.device('cpu')`` argument forces all tensors to CPU.
 
 For more information on the default inference handler functions, please refer to:
-`SageMaker PyTorch Default Inference Handler <https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/default_inference_handler.py>`_.
+`SageMaker PyTorch Default Inference Handler <https://github.com/aws/sagemaker-pytorch-inference-toolkit/blob/master/src/sagemaker_pytorch_serving_container/default_pytorch_inference_handler.py>`_.
 
 Serve a PyTorch Model
 ---------------------
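The ``map_location`` behavior discussed in the hunk above can be sketched as a user-provided ``model_fn``. This is a hedged illustration, not the toolkit's actual default handler: it assumes a TorchScript model saved as ``model.pt``, and the filename is an assumption.

```python
# A sketch of a model_fn that loads a traced model onto CPU, as the passage
# recommends for Elastic Inference. The model filename ("model.pt") is an
# assumption; adjust it to match your saved artifact.
import os

import torch


def model_fn(model_dir):
    """Load a TorchScript model onto CPU, regardless of the device it was traced on."""
    model = torch.jit.load(
        os.path.join(model_dir, "model.pt"),
        # Forces all tensors to CPU, avoiding device-related errors when the
        # model was traced on a GPU but is served from a CPU context.
        map_location=torch.device("cpu"),
    )
    model.eval()
    return model
```

Without the explicit ``map_location``, ``torch.jit.load`` tries to restore tensors onto the devices recorded at trace time, which fails on a CPU-only client.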
