doc/frameworks/pytorch/using_pytorch.rst (+3 −3)
@@ -80,7 +80,7 @@ with the following:

     # ... load from args.train and args.test, train a model, write model to args.model_dir.

-Because the SageMaker imports your training script, you should put your training code in a main guard
+Because SageMaker imports your training script, you should put your training code in a main guard
 (``if __name__=='__main__':``) if you are using the same script to host your model, so that SageMaker does not
 inadvertently run your training code at the wrong point in execution.
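To make the main-guard advice concrete, here is a minimal sketch of such a training script (the argument names and ``SM_*`` environment-variable fallbacks follow common SageMaker conventions; the training body is a placeholder, not a real implementation):

```python
import argparse
import os


def parse_args(argv=None):
    # Defaults fall back to SageMaker's SM_* environment variables so the
    # script also runs outside a training job; parse_known_args tolerates
    # any extra arguments the launcher passes along.
    parser = argparse.ArgumentParser()
    parser.add_argument('--model-dir', default=os.environ.get('SM_MODEL_DIR', '/opt/ml/model'))
    parser.add_argument('--train', default=os.environ.get('SM_CHANNEL_TRAIN', ''))
    parser.add_argument('--test', default=os.environ.get('SM_CHANNEL_TEST', ''))
    args, _ = parser.parse_known_args(argv)
    return args


def train(args):
    # Placeholder for the real work: load data from args.train and
    # args.test, fit a model, and serialize it under args.model_dir.
    return args.model_dir


if __name__ == '__main__':
    # Runs only when the script is executed directly; when SageMaker
    # imports the script to host the model, nothing below executes.
    train(parse_args())
```

Because the training call sits under the guard, importing this module only defines ``parse_args`` and ``train``; the training loop cannot fire during model hosting.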
@@ -177,7 +177,7 @@ fit Required Arguments
 case, the S3 objects rooted at the ``my-training-data`` prefix will
 be available in the default ``train`` channel. A dict from
 string channel names to S3 URIs. In this case, the objects rooted at
-each S3 prefix will available as files in each channel directory.
+each S3 prefix will be available as files in each channel directory.

 For example:
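The two calling conventions described in this hunk can be illustrated with a small helper (this is a sketch of the documented behavior, not SageMaker's actual implementation, which accepts richer input types as well):

```python
def normalize_channels(inputs):
    # A bare S3 URI becomes the default 'train' channel; a dict of
    # channel name -> S3 prefix is passed through unchanged. Each
    # channel's objects then appear as files in that channel directory.
    if isinstance(inputs, str):
        return {'train': inputs}
    return dict(inputs)
```

Under this sketch, ``fit('s3://my-bucket/my-training-data')`` populates only the ``train`` channel, while ``fit({'train': 's3://my-bucket/train', 'test': 's3://my-bucket/test'})`` populates one channel per key.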
@@ -391,7 +391,7 @@ If you are using PyTorch Elastic Inference 1.5.1, you should provide ``model_fn`
 The client-side Elastic Inference framework is CPU-only, even though inference still happens in a CUDA context on the server. Thus, the default ``model_fn`` for Elastic Inference loads the model to CPU. Tracing models may lead to tensor creation on a specific device, which may cause device-related errors when loading a model onto a different device. Providing an explicit ``map_location=torch.device('cpu')`` argument forces all tensors to CPU.

 For more information on the default inference handler functions, please refer to:
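A sketch of a ``model_fn`` that applies the ``map_location`` fix described in this hunk (the artifact name ``model.pt`` is an assumption; adapt it to however the model was saved):

```python
import os

import torch


def model_fn(model_dir):
    # Force every tensor onto CPU during deserialization; a model traced
    # on GPU would otherwise try to recreate its tensors on a CUDA device
    # that the CPU-only Elastic Inference client does not expose.
    path = os.path.join(model_dir, 'model.pt')  # assumed artifact name
    model = torch.jit.load(path, map_location=torch.device('cpu'))
    return model.eval()
```

``torch.jit.load`` accepts ``map_location`` directly, so no per-tensor remapping is needed after loading.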