From fce039a1e69ece9bb6c15646aa4bc6f696f3d719 Mon Sep 17 00:00:00 2001
From: Deng
Date: Tue, 10 Mar 2020 11:33:08 -0700
Subject: [PATCH 1/2] doc: pytorch eia

---
 README.rst | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/README.rst b/README.rst
index 3edb9869..abf4b881 100644
--- a/README.rst
+++ b/README.rst
@@ -25,6 +25,7 @@ Table of Contents
 
 #. `Getting Started <#getting-started>`__
 #. `Building your Image <#building-your-image>`__
+#. `Amazon Elastic Inference with PyTorch in SageMaker <#amazon-elastic-inference-with-pytorch-in-sagemaker>`__
 #. `Running the tests <#running-the-tests>`__
 
 Getting Started
@@ -142,6 +143,36 @@ If you want to build "final" Docker images, then use:
 
     # GPU
     docker build -t preprod-pytorch:1.0.0-gpu-py3 -f docker/1.0.0/final/Dockerfile.gpu --build-arg py_version=3 .
+Amazon Elastic Inference with PyTorch in SageMaker
+--------------------------------------------------
+`Amazon Elastic Inference `__ allows you to attach
+low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep
+learning inference by up to 75%. Currently, Amazon Elastic Inference supports TensorFlow, Apache MXNet, PyTorch,
+and ONNX models.
+
+PyTorch with Amazon Elastic Inference in SageMaker is supported in the public SageMaker PyTorch serving containers.
+
+* For information on how to use the Python SDK to create an endpoint with Amazon Elastic Inference and PyTorch in SageMaker, see `Deploying PyTorch Models`__.
+* For information on how Amazon Elastic Inference works, see `How EI Works `__.
+* For more information about using Amazon Elastic Inference in SageMaker, see `Amazon SageMaker Elastic Inference `__.
+
+Building the SageMaker Elastic Inference PyTorch Serving container
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Amazon Elastic Inference is designed to be used with AWS-enhanced versions of TensorFlow Serving, Apache MXNet, or PyTorch serving.
+The SageMaker PyTorch containers with Amazon Elastic Inference support are built using the
+same instructions listed `above <#building-your-image>`__ with the
+EIA Dockerfiles, which are named ``Dockerfile.eia`` and can be found in the same ``docker/`` directory.
+
+Example:
+
+::
+
+    # PyTorch 1.3.1, Python 3, EI
+    $ cp dist/sagemaker_pytorch_inference-*.tar.gz dist/sagemaker_pytorch_inference.tar.gz
+    $ docker build -t preprod-pytorch-serving-eia:1.3.1-cpu-py3 -f docker/1.3.1/py3/Dockerfile.eia .
+
+
+* Currently, only PyTorch serving 1.3.1 is supported for Elastic Inference.
 
 Running the tests
 -----------------

From 930c046f11cf07783bfc86901d1d5d3c35d1ea80 Mon Sep 17 00:00:00 2001
From: Deng
Date: Tue, 10 Mar 2020 11:34:31 -0700
Subject: [PATCH 2/2] fix twine error

---
 README.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.rst b/README.rst
index abf4b881..fe8b0ca3 100644
--- a/README.rst
+++ b/README.rst
@@ -152,7 +152,7 @@ and ONNX models.
 
 PyTorch with Amazon Elastic Inference in SageMaker is supported in the public SageMaker PyTorch serving containers.
 
-* For information on how to use the Python SDK to create an endpoint with Amazon Elastic Inference and PyTorch in SageMaker, see `Deploying PyTorch Models`__.
+* For information on how to use the Python SDK to create an endpoint with Amazon Elastic Inference and PyTorch in SageMaker, see `Deploying PyTorch Models `__.
 * For information on how Amazon Elastic Inference works, see `How EI Works `__.
 * For more information about using Amazon Elastic Inference in SageMaker, see `Amazon SageMaker Elastic Inference `__.
 
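A minimal sketch of the endpoint creation referenced by the ``Deploying PyTorch Models`` bullet above, using the SageMaker Python SDK of this era (the v1 API). The S3 model path, IAM role, entry-point script, and instance/accelerator types are placeholder values, not settings taken from this repository:

::

    import numpy as np
    from sagemaker.pytorch import PyTorchModel

    # Package a trained model for the SageMaker PyTorch serving container.
    model = PyTorchModel(
        model_data='s3://my-bucket/model.tar.gz',  # placeholder: your model artifact
        role='SageMakerRole',                      # placeholder: your IAM role name
        entry_point='inference.py',                # placeholder: your serving script
        framework_version='1.3.1',                 # the EI-supported version noted above
    )

    # Passing accelerator_type attaches an Elastic Inference accelerator to the
    # endpoint, so the model is served from an EI-enabled image like the one
    # built from Dockerfile.eia above.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type='ml.m5.large',
        accelerator_type='ml.eia2.medium',
    )

    # The PyTorch predictor serializes numpy arrays by default.
    output = predictor.predict(np.random.rand(1, 3, 224, 224).astype(np.float32))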