With PyTorch Estimators and Models, you can train and host PyTorch models on Amazon SageMaker.
Supported versions of PyTorch: ``0.4.0``, ``1.0.0``, ``1.1.0``, ``1.2.0``, ``1.3.1``.
Supported versions of PyTorch for Elastic Inference: ``1.3.1``.
We recommend that you use the latest supported version, because that's where we focus most of our development efforts.
You can visit the PyTorch repository at https://github.com/pytorch/pytorch.
You use the SageMaker PyTorch model server to host your PyTorch model when you call ``deploy`` on a PyTorch
Estimator. The model server runs inside a SageMaker Endpoint, which your call to ``deploy`` creates.
You can access the name of the Endpoint through the ``name`` property on the returned ``Predictor``.
PyTorch on Amazon SageMaker has support for `Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`_, which allows for inference acceleration to a hosted endpoint for a fraction of the cost of using a full GPU instance.
To attach an Elastic Inference accelerator to your endpoint, pass the accelerator type as the ``accelerator_type`` argument in your ``deploy`` call.
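Put together, this might look like the following sketch with the SageMaker Python SDK. The entry point, IAM role, S3 path, and instance types are placeholder values; substitute your own. ``framework_version`` is set to ``1.3.1``, the version listed above as supporting Elastic Inference.

```python
# A minimal sketch, assuming placeholder resource names: deploying a
# trained PyTorch Estimator with an Elastic Inference accelerator attached.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",              # hypothetical training script
    role="MySageMakerRole",              # hypothetical IAM role
    framework_version="1.3.1",           # version with Elastic Inference support
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",
)
estimator.fit("s3://my-bucket/my-training-data")  # hypothetical S3 location

# Passing accelerator_type to deploy attaches an Elastic Inference
# accelerator to the hosted endpoint, so a smaller CPU instance can
# serve accelerated inference.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.c5.large",
    accelerator_type="ml.eia2.medium",
)
```

Running this requires AWS credentials and the resources named above, so it is shown here as an illustration rather than an executable script.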