Eia fw versions #515

Merged · 6 commits · Nov 29, 2018 · changes shown from 4 commits

6 changes: 5 additions & 1 deletion README.rst
@@ -356,7 +356,9 @@ MXNet SageMaker Estimators

By using MXNet SageMaker ``Estimators``, you can train and host MXNet models on Amazon SageMaker.

Supported versions of MXNet: ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.
Supported versions of MXNet: ``1.3.0``, ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.

Supported versions of MXNet for Elastic Inference: ``1.3.0``.

We recommend that you use the latest supported version, because that's where we focus most of our development efforts.
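
For example, the version can be pinned through ``framework_version`` when constructing an estimator. A minimal sketch, with a hypothetical ``train.py`` entry point and a placeholder IAM role:

.. code:: python

    from sagemaker.mxnet import MXNet

    # entry point and role are placeholders; substitute your own
    mxnet_estimator = MXNet(entry_point='train.py',
                            role='SageMakerRole',
                            train_instance_count=1,
                            train_instance_type='ml.p2.xlarge',
                            framework_version='1.3.0')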

@@ -372,6 +374,8 @@ By using TensorFlow SageMaker ``Estimators``, you can train and host TensorFlow models on Amazon SageMaker.

Supported versions of TensorFlow: ``1.4.1``, ``1.5.0``, ``1.6.0``, ``1.7.0``, ``1.8.0``, ``1.9.0``, ``1.10.0``, ``1.11.0``.

Supported versions of TensorFlow for Elastic Inference: ``1.11.0``.

We recommend that you use the latest supported version, because that's where we focus most of our development efforts.
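
Likewise for TensorFlow, ``framework_version`` selects the version. A minimal sketch, again with a hypothetical entry point and a placeholder role:

.. code:: python

    from sagemaker.tensorflow import TensorFlow

    # entry point and role are placeholders; substitute your own
    tf_estimator = TensorFlow(entry_point='train.py',
                              role='SageMakerRole',
                              train_instance_count=1,
                              train_instance_type='ml.p2.xlarge',
                              framework_version='1.11.0')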

For more information, see `TensorFlow SageMaker Estimators and Models`_.
10 changes: 10 additions & 0 deletions src/sagemaker/mxnet/README.rst
@@ -6,6 +6,8 @@ With MXNet Estimators, you can train and host MXNet models on Amazon SageMaker.

Supported versions of MXNet: ``1.3.0``, ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.

Supported versions of MXNet for Elastic Inference: ``1.3.0``.

Training with MXNet
~~~~~~~~~~~~~~~~~~~

@@ -480,6 +482,14 @@ After calling ``fit``, you can call ``deploy`` on an ``MXNet`` Estimator to create a SageMaker Endpoint.

You use the SageMaker MXNet model server to host your MXNet model when you call ``deploy`` on an ``MXNet`` Estimator. The model server runs inside a SageMaker Endpoint, which your call to ``deploy`` creates. You can access the name of the Endpoint through the ``name`` property on the returned ``Predictor``.

MXNet on SageMaker supports `Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`_, which lets you attach inference acceleration to a hosted endpoint for a fraction of the cost of a full GPU instance. To attach an Elastic Inference accelerator to your endpoint, pass the accelerator type as the ``accelerator_type`` argument to your ``deploy`` call.

.. code:: python

    predictor = mxnet_estimator.deploy(instance_type='ml.m4.xlarge',
                                       initial_instance_count=1,
                                       accelerator_type='ml.eia1.medium')
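
Once the endpoint is up, requests go through ``predict`` as usual; the accelerator is transparent to the caller. A short usage sketch, where the input shape is hypothetical and depends on your model:

.. code:: python

    import numpy as np

    # hypothetical input shape; use whatever your model expects
    data = np.random.rand(1, 3, 224, 224)
    response = predictor.predict(data)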

The SageMaker MXNet Model Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2 changes: 2 additions & 0 deletions src/sagemaker/tensorflow/README.rst
@@ -7,6 +7,8 @@ models on SageMaker Hosting.

Supported versions of TensorFlow: ``1.4.1``, ``1.5.0``, ``1.6.0``, ``1.7.0``, ``1.8.0``, ``1.9.0``, ``1.10.0``, ``1.11.0``.

Supported versions of TensorFlow for Elastic Inference: ``1.11.0``.

Training with TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~

9 changes: 9 additions & 0 deletions src/sagemaker/tensorflow/deploying_tensorflow_serving.rst
@@ -34,6 +34,15 @@ estimator object to create a SageMaker Endpoint:

The code block above deploys a SageMaker Endpoint with one instance of type ``ml.c5.xlarge``.

TensorFlow Serving on SageMaker supports `Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`_, which lets you attach inference acceleration to a hosted endpoint for a fraction of the cost of a full GPU instance. To attach an Elastic Inference accelerator to your endpoint, pass the accelerator type as the ``accelerator_type`` argument to your ``deploy`` call.

.. code:: python

    predictor = estimator.deploy(initial_instance_count=1,
                                 instance_type='ml.c5.xlarge',
                                 accelerator_type='ml.eia1.medium',
                                 endpoint_type='tensorflow-serving-elastic-inference')
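
Inference calls are unchanged after deployment; the accelerator is transparent to the caller. A short usage sketch, assuming a model whose serving signature accepts a batch of float vectors:

.. code:: python

    # hypothetical input; the expected format depends on your SavedModel's signature
    result = predictor.predict({'instances': [[1.0, 2.0, 3.0]]})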

What happens when deploy is called
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
