diff --git a/README.rst b/README.rst
index 3edb9869..fe8b0ca3 100644
--- a/README.rst
+++ b/README.rst
@@ -25,6 +25,7 @@ Table of Contents
#. `Getting Started <#getting-started>`__
#. `Building your Image <#building-your-image>`__
+#. `Amazon Elastic Inference with PyTorch in SageMaker <#amazon-elastic-inference-with-pytorch-in-sagemaker>`__
#. `Running the tests <#running-the-tests>`__
Getting Started
@@ -142,6 +143,36 @@ If you want to build "final" Docker images, then use:
# GPU
docker build -t preprod-pytorch:1.0.0-gpu-py3 -f docker/1.0.0/final/Dockerfile.gpu --build-arg py_version=3 .
+Amazon Elastic Inference with PyTorch in SageMaker
+--------------------------------------------------
+`Amazon Elastic Inference <https://aws.amazon.com/machine-learning/elastic-inference/>`__ allows you to attach
+low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep
+learning inference by up to 75%. Currently, Amazon Elastic Inference supports TensorFlow, Apache MXNet, PyTorch,
+and ONNX models.
+
+Using PyTorch with Amazon Elastic Inference in SageMaker is supported in the public SageMaker PyTorch serving containers.
+
+* For information on how to use the Python SDK to create an endpoint with Amazon Elastic Inference and PyTorch in SageMaker, see `Deploying PyTorch Models <https://sagemaker.readthedocs.io/en/stable/using_pytorch.html#deploy-pytorch-models>`__. A minimal deployment sketch follows this list.
+* For information on how Amazon Elastic Inference works, see `How EI Works <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html#ei-how-it-works>`__.
+* For more information about using Amazon Elastic Inference in SageMaker, see `Amazon SageMaker Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`__.
+
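+The following is a minimal sketch of creating such an endpoint with the SageMaker Python SDK. The model
+artifact path, IAM role, and entry point script are illustrative placeholders, not values from this repository;
+adapt them to your own account and model.
+
+::
+
+    from sagemaker.pytorch import PyTorchModel
+
+    # Hypothetical placeholders: point these at your own model artifact and IAM role.
+    model = PyTorchModel(
+        model_data='s3://my-bucket/model.tar.gz',
+        role='MySageMakerRole',
+        entry_point='inference.py',
+        framework_version='1.3.1',
+        py_version='py3',
+    )
+
+    # accelerator_type attaches an Elastic Inference accelerator to the endpoint.
+    predictor = model.deploy(
+        initial_instance_count=1,
+        instance_type='ml.m5.large',
+        accelerator_type='ml.eia2.medium',
+    )
+
+    # predictor.predict(data) then sends inference requests to the EI-backed endpoint.
+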
+Building the SageMaker Elastic Inference PyTorch Serving container
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Amazon Elastic Inference is designed to be used with AWS-enhanced versions of TensorFlow Serving, Apache MXNet, or PyTorch.
+The SageMaker PyTorch containers with Amazon Elastic Inference support are built using the
+same instructions listed `above <#building-your-image>`__ with the
+EIA Dockerfiles, which are all named ``Dockerfile.eia`` and can be found in the same ``docker/`` directory.
+
+Example:
+
+::
+
+ # PyTorch 1.3.1, Python 3, EI
+ $ cp dist/sagemaker_pytorch_inference-*.tar.gz dist/sagemaker_pytorch_inference.tar.gz
+ $ docker build -t preprod-pytorch-serving-eia:1.3.1-cpu-py3 -f docker/1.3.1/py3/Dockerfile.eia .
+
+
+* Currently, only PyTorch 1.3.1 is supported for serving with Elastic Inference.
Running the tests
-----------------