doc/using_pytorch.rst (+69 −68)
@@ -90,7 +90,7 @@ Note that SageMaker doesn't support argparse actions. If you want to use, for ex
 you need to specify `type` as `bool` in your script and provide an explicit `True` or `False` value for this hyperparameter
 when instantiating PyTorch Estimator.
 
-For more on training environment variables, please visit `SageMaker Containers <https://github.com/aws/sagemaker-containers>`_.
+For more on training environment variables, see the `SageMaker Training Toolkit <https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md>`_.
 
 Save the Model
 --------------
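The boolean-hyperparameter caveat in the hunk above can be sketched as follows. ``str2bool`` is an illustrative helper, not part of the SageMaker SDK; the key fact is that hyperparameter values reach the training script as strings:

```python
import argparse


def str2bool(value):
    """Illustrative parser for a boolean hyperparameter passed as a string."""
    if value.lower() in ("true", "1"):
        return True
    if value.lower() in ("false", "0"):
        return False
    raise argparse.ArgumentTypeError("expected a boolean, got %r" % value)


parser = argparse.ArgumentParser()
# Note: a bare type=bool would treat the non-empty string "False" as truthy,
# which is why an explicit True/False value (or a parser like this) is needed.
parser.add_argument("--use-cuda", type=str2bool, default=False)

args = parser.parse_args(["--use-cuda", "True"])
print(args.use_cuda)
```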
@@ -115,7 +115,7 @@ to a certain filesystem path called ``model_dir``. This value is accessible thro
         with open(os.path.join(args.model_dir, 'model.pth'), 'wb') as f:
             torch.save(model.state_dict(), f)
 
-After your training job is complete, SageMaker will compress and upload the serialized model to S3, and your model data
+After your training job is complete, SageMaker compresses and uploads the serialized model to S3, and your model data
 will be available in the S3 ``output_path`` you specified when you created the PyTorch Estimator.
 
 If you are using Elastic Inference, you must convert your models to the TorchScript format and use ``torch.jit.save`` to save the model.
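The save step in this hunk writes under ``model_dir``, which SageMaker exposes inside the training container through the ``SM_MODEL_DIR`` environment variable. A minimal stdlib sketch of resolving that path (``/opt/ml/model`` is the conventional default location in the container):

```python
import os

# SageMaker sets SM_MODEL_DIR inside the training container; /opt/ml/model
# is the conventional default path when running outside of SageMaker.
model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")

# Everything written under model_dir is compressed and uploaded to the S3
# output_path after the training job completes.
model_path = os.path.join(model_dir, "model.pth")
print(model_path)
```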
@@ -566,12 +566,76 @@ The function should return a byte array of data serialized to content_type.
 The default implementation expects ``prediction`` to be a torch.Tensor and can serialize the result to JSON, CSV, or NPY.
 It accepts response content types of "application/json", "text/csv", and "application/x-npy".
 
-Working with Existing Model Data and Training Jobs
 You have to create a directory structure and place your model files in the correct location.
+The ``PyTorchModel`` constructor packs the files into a ``tar.gz`` file and uploads it to S3.
+
+The directory structure where you saved your PyTorch model should look something like the following:
+
+**Note:** This directory structure is for PyTorch versions 1.2 and higher.
+For the directory structure for versions 1.1 and lower,
+see `For versions 1.1 and lower <#for-versions-1.1-and-lower>`_.
+
+::
+
+    | my_model
+    | |--model.pth
+    |
+    | code
+    | |--inference.py
+    | |--requirements.txt
+
+Where ``requirements.txt`` is an optional file that specifies dependencies on third-party libraries.
+
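The layout above, and the packing that ``PyTorchModel`` performs, can be sketched with the standard library. This is an illustrative reconstruction under temporary paths, not the SDK's actual implementation:

```python
import os
import tarfile
import tempfile

# Build the documented layout: model.pth at the top level, inference code
# (plus the optional requirements.txt) under code/.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "code"))
for rel in ("model.pth",
            os.path.join("code", "inference.py"),
            os.path.join("code", "requirements.txt")):
    open(os.path.join(root, rel), "w").close()

# Pack it the way the docs describe: a tar.gz archive to be uploaded to S3.
archive = os.path.join(root, "model.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(os.path.join(root, "model.pth"), arcname="model.pth")
    tar.add(os.path.join(root, "code"), arcname="code")

with tarfile.open(archive) as tar:
    names = sorted(tar.getnames())
print(names)
```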
+Create a ``PyTorchModel`` object
 --------------------------------
 
+Now call the :class:`sagemaker.pytorch.model.PyTorchModel` constructor to create a model object, and then call its ``deploy()`` method to deploy your model for inference.
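A sketch of that two-step flow. The S3 URI, IAM role, and instance type below are placeholders, exact constructor arguments vary by SDK version, and running this requires AWS credentials:

```python
from sagemaker.pytorch import PyTorchModel

# Placeholder values: substitute your own model artifact, role, and
# entry-point script packed as described above.
pytorch_model = PyTorchModel(
    model_data="s3://my-bucket/model.tar.gz",
    role="my-sagemaker-role",
    entry_point="inference.py",
    framework_version="1.2.0",
)

# deploy() creates a SageMaker endpoint and returns a Predictor for inference.
predictor = pytorch_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```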