From 1af767f2525e9854ab142c1d2620599d3d3816d4 Mon Sep 17 00:00:00 2001
From: Lauren Yu <6631887+laurenyu@users.noreply.github.com>
Date: Wed, 3 Jun 2020 11:33:54 -0700
Subject: [PATCH 1/2] doc: update TF documentation to reflect breaking changes
 and how to upgrade

---
 doc/frameworks/rl/using_rl.rst                     |   4 +-
 .../deploying_tensorflow_serving.rst               |  16 +-
 doc/frameworks/tensorflow/index.rst                |   6 +-
 .../tensorflow/upgrade_from_legacy.rst             | 254 ++++++++++++++++++
 doc/frameworks/tensorflow/using_tf.rst             | 133 +++------
 5 files changed, 311 insertions(+), 102 deletions(-)
 rename {src/sagemaker => doc/frameworks}/tensorflow/deploying_tensorflow_serving.rst (97%)
 create mode 100644 doc/frameworks/tensorflow/upgrade_from_legacy.rst

diff --git a/doc/frameworks/rl/using_rl.rst b/doc/frameworks/rl/using_rl.rst
index 5bba4f3dc9..39b012faad 100644
--- a/doc/frameworks/rl/using_rl.rst
+++ b/doc/frameworks/rl/using_rl.rst
@@ -217,7 +217,7 @@ which was run when you called ``fit``. This was the model you saved to ``model_d

-In case if ``image_name`` was specified it would use provided image for the deployment.
+If ``image_name`` was specified, the provided image is used for the deployment.

 ``deploy`` returns a ``sagemaker.mxnet.MXNetPredictor`` for MXNet or
-``sagemaker.tensorflow.serving.Predictor`` for TensorFlow.
+``sagemaker.tensorflow.TensorFlowPredictor`` for TensorFlow.

 ``predict`` returns the result of inference against your model.

@@ -241,7 +241,7 @@ In case if ``image_name`` was specified it would use provided image for the depl

     response = predictor.predict(data)

 For more information please see `The SageMaker MXNet Model Server `_
-and `Deploying to TensorFlow Serving Endpoints `_ documentation.
+and `Deploying to TensorFlow Serving Endpoints `_ documentation.


 Working with Existing Training Jobs

diff --git a/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst b/doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
similarity index 97%
rename from src/sagemaker/tensorflow/deploying_tensorflow_serving.rst
rename to doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
index 45b607ae94..43f04af757 100644
--- a/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst
+++ b/doc/frameworks/tensorflow/deploying_tensorflow_serving.rst
@@ -58,9 +58,9 @@ If you already have existing model artifacts in S3, you can skip training and de

 .. code:: python

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+  model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')

   predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')

@@ -68,9 +68,9 @@ Python-based TensorFlow serving on SageMaker has support for `Elastic Inference

 .. code:: python

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+  model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')

   predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', accelerator_type='ml.eia1.medium')

@@ -276,7 +276,7 @@ This customized Python code must be named ``inference.py`` and specified through

 .. code::

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(entry_point='inference.py',
-                model_data='s3://mybucket/model.tar.gz',
-                role='MySageMakerRole')
+  model = TensorFlowModel(entry_point='inference.py',
+                          model_data='s3://mybucket/model.tar.gz',
+                          role='MySageMakerRole')

@@ -429,7 +429,7 @@ processing. There are 2 ways to do this:

 .. code::

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(entry_point='inference.py',
-                source_dir='source/directory',
-                model_data='s3://mybucket/model.tar.gz',
-                role='MySageMakerRole')
+  model = TensorFlowModel(entry_point='inference.py',
+                          source_dir='source/directory',
+                          model_data='s3://mybucket/model.tar.gz',
+                          role='MySageMakerRole')

@@ -447,7 +447,7 @@ processing. There are 2 ways to do this:

 .. code::

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(entry_point='inference.py',
-                dependencies=['/path/to/folder/named/lib'],
-                model_data='s3://mybucket/model.tar.gz',
-                role='MySageMakerRole')
+  model = TensorFlowModel(entry_point='inference.py',
+                          dependencies=['/path/to/folder/named/lib'],
+                          model_data='s3://mybucket/model.tar.gz',
+                          role='MySageMakerRole')

@@ -546,7 +546,7 @@ For the remaining steps, let's return to python code using the SageMaker Python

 .. code:: python

-    from sagemaker.tensorflow.serving import Model, Predictor
+    from sagemaker.tensorflow import TensorFlowModel, TensorFlowPredictor

     # change this to the name or ARN of your SageMaker execution role
     role = 'SageMakerRole'

diff --git a/doc/frameworks/tensorflow/index.rst b/doc/frameworks/tensorflow/index.rst
index 9a6639abbc..46f535ba1d 100644
--- a/doc/frameworks/tensorflow/index.rst
+++ b/doc/frameworks/tensorflow/index.rst
@@ -1,6 +1,6 @@
-#############################
+##########
 TensorFlow
-#############################
+##########

 A managed environment for TensorFlow training and hosting on Amazon SageMaker

@@ -8,6 +8,8 @@ A managed environment for TensorFlow training and hosting on Amazon SageMaker
    :maxdepth: 1

    using_tf
+   deploying_tensorflow_serving
+   upgrade_from_legacy

 .. toctree::
    :maxdepth: 2

diff --git a/doc/frameworks/tensorflow/upgrade_from_legacy.rst b/doc/frameworks/tensorflow/upgrade_from_legacy.rst
new file mode 100644
index 0000000000..ff436b5c77
--- /dev/null
+++ b/doc/frameworks/tensorflow/upgrade_from_legacy.rst
@@ -0,0 +1,254 @@
+######################################
+Upgrade from Legacy TensorFlow Support
+######################################
+
+With v2 of the SageMaker Python SDK, support for legacy SageMaker TensorFlow images has been deprecated.
+This guide explains how to upgrade your SageMaker Python SDK usage.
+
+For more information about using TensorFlow with the SageMaker Python SDK, see `Use TensorFlow with the SageMaker Python SDK `_.
+
+.. contents::
+
+********************************************
+What Constitutes "Legacy TensorFlow Support"
+********************************************
+
+This guide is relevant if one of the following applies:
+
+#. You are using TensorFlow versions 1.4-1.10
+#. You are using TensorFlow versions 1.11-1.12 with Python 2, and
+
+   - you do *not* have ``script_mode=True`` when creating your estimator
+   - you are using ``sagemaker.tensorflow.model.TensorFlowModel`` and/or ``sagemaker.tensorflow.model.TensorFlowPredictor``
+
+#. You are using a pre-built SageMaker image whose URI looks like ``520713654638.dkr.ecr..amazonaws.com/sagemaker-tensorflow:``
+
+If one of the above applies, then keep reading.
+
+**************
+How to Upgrade
+**************
+
+We recommend that you use the latest supported version of TensorFlow because that's where we focus our development efforts.
+For information about supported versions of TensorFlow, see the `AWS documentation `_.
+
+For general information about using TensorFlow with the SageMaker Python SDK, see `Use TensorFlow with the SageMaker Python SDK `_.
+
+Training Script
+===============
+
+Newer versions of TensorFlow require your training script to be runnable as a command-line script, similar to what you might run outside of SageMaker. For more information, including how to adapt a locally-runnable script, see `Prepare a Training Script `_.
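+
+As a quick illustration, such a script typically reads its hyperparameters and SageMaker-provided paths from command-line arguments. The following is a minimal sketch (the ``--epochs`` hyperparameter is a hypothetical example, not a requirement):
+
+.. code:: python
+
+    import argparse
+    import os
+
+    if __name__ == "__main__":
+        parser = argparse.ArgumentParser()
+        # hyperparameters are passed to the script as command-line arguments
+        parser.add_argument("--epochs", type=int, default=10)
+        # SageMaker also passes the model directory and data channel locations
+        parser.add_argument("--model_dir", type=str)
+        parser.add_argument("--train", type=str, default=os.environ.get("SM_CHANNEL_TRAINING"))
+        args, _ = parser.parse_known_args()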
+
+In addition, your training script needs to save your model so that it can later be used for serving.
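+
+For ``tf.estimator``-based scripts, one way to do this is to export a SavedModel when training finishes. Here is a minimal sketch, where ``estimator`` and ``serving_input_fn`` stand in for your own objects, and ``SM_MODEL_DIR`` points to the directory that SageMaker uploads to S3:
+
+.. code:: python
+
+    import os
+
+    # export a SavedModel so that the trained model can be hosted later
+    estimator.export_savedmodel(
+        os.environ.get("SM_MODEL_DIR", "/opt/ml/model"),
+        serving_input_receiver_fn=serving_input_fn,
+    )
+
+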
If you have your own ``serving_input_fn`` implementation, then that can be passed to an exporter:
+
+.. code:: python
+
+    import tensorflow as tf
+
+    exporter = tf.estimator.LatestExporter("Servo", serving_input_receiver_fn=serving_input_fn)
+
+For an example of how to repackage your legacy TensorFlow training script for use with a newer version of TensorFlow,
+see `this example notebook `_.
+
+Inference Script
+================
+
+Newer versions of TensorFlow Serving require a different format for the inference script. Some key differences:
+
+- The script must be named ``inference.py``.
+- ``input_fn`` has been replaced by ``input_handler``.
+- ``output_fn`` has been replaced by ``output_handler``.
+
+As with the legacy versions, the pre-built SageMaker TensorFlow Serving images have default implementations for pre- and post-processing.
+
+For more information about implementing your own handlers, see `How to implement the pre- and/or post-processing handler(s) `_.
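+
+As a quick illustration, a handler pair might look like the following (this sketch passes JSON through unchanged; it is not the containers' exact default implementation):
+
+.. code:: python
+
+    def input_handler(data, context):
+        # transform the request body into what TensorFlow Serving expects
+        if context.request_content_type == "application/json":
+            return data.read().decode("utf-8")
+        raise ValueError("Unsupported content type: {}".format(context.request_content_type))
+
+    def output_handler(response, context):
+        # return the TensorFlow Serving response and a content type to the client
+        return response.content, context.accept_header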
+
+*****************************
+Continue with Legacy Versions
+*****************************
+
+While not recommended, you can still use a legacy TensorFlow version with v2 of the SageMaker Python SDK.
+In order to do so, you need to change how a few parameters are defined.
+
+Training
+========
+
+When creating an estimator, v2 requires the following changes:
+
+#. Explicitly specify the ECR image URI via ``image_name``.
+   To determine the URI, you can use :func:`sagemaker.fw_utils.create_image_uri`.
+#. Specify ``model_dir=False``.
+#. Use hyperparameters for ``training_steps``, ``evaluation_steps``, ``checkpoint_path``, and ``requirements_file``.
+
+For example, if using TF 1.10.0 with an ml.m4.xlarge instance in us-west-2,
+the difference in code would be as follows:
+
+.. code:: python
+
+    from sagemaker.tensorflow import TensorFlow
+
+    # v1
+    estimator = TensorFlow(
+        ...
+        source_dir="code",
+        framework_version="1.10.0",
+        train_instance_type="ml.m4.xlarge",
+        training_steps=100,
+        evaluation_steps=10,
+        checkpoint_path="s3://bucket/path",
+        requirements_file="requirements.txt",
+    )
+
+    # v2
+    estimator = TensorFlow(
+        ...
+        source_dir="code",
+        framework_version="1.10.0",
+        train_instance_type="ml.m4.xlarge",
+        image_name="520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow:1.10.0-cpu-py2",
+        hyperparameters={
+            "training_steps": 100,
+            "evaluation_steps": 10,
+            "checkpoint_path": "s3://bucket/path",
+            "sagemaker_requirements": "requirements.txt",
+        },
+        model_dir=False,
+    )
+
+Requirements File with Training
+-------------------------------
+
+To provide a requirements file, define a hyperparameter named "sagemaker_requirements" that contains the relative path to the requirements file from ``source_dir``.
+
+Inference
+=========
+
+Using a legacy TensorFlow version for endpoints and batch transform can be achieved with v2 of the SageMaker Python SDK with some minor changes to your code.
+
+From an Estimator
+-----------------
+
+If you are starting with a training job, you can call :func:`sagemaker.estimator.EstimatorBase.deploy` or :func:`sagemaker.tensorflow.estimator.TensorFlow.transformer` from your estimator for inference.
+
+To specify the number of model server workers, you need to set it through an environment variable named ``MODEL_SERVER_WORKERS``:
+
+.. code:: python
+
+    # v1
+    estimator.deploy(..., model_server_workers=4)
+
+    # v2
+    estimator.deploy(..., env={"MODEL_SERVER_WORKERS": "4"})
+
+From a Model
+------------
+
+If you are starting with a model, v2 requires the following changes:
+
+#. Use the :class:`sagemaker.model.FrameworkModel` class.
+#. Explicitly specify the ECR image URI via ``image``.
+   To determine the URI, you can use :func:`sagemaker.fw_utils.create_image_uri`.
+#. Use an environment variable for ``model_server_workers``.
+
+For example, if using TF 1.10.0 with a CPU instance in us-west-2,
+the difference in code would be as follows:
+
+.. code:: python
+
+    # v1
+    from sagemaker.tensorflow import TensorFlowModel
+
+    model = TensorFlowModel(
+        ...
+        py_version="py2",
+        framework_version="1.10.0",
+        model_server_workers=4,
+    )
+
+    # v2
+    from sagemaker.model import FrameworkModel
+
+    model = FrameworkModel(
+        ...
+        image="520713654638.dkr.ecr.us-west-2.amazonaws.com/sagemaker-tensorflow:1.10.0-cpu-py2",
+        env={"MODEL_SERVER_WORKERS": "4"},
+    )
+
+Requirements File with Inference
+--------------------------------
+
+To provide a requirements file, define an environment variable named ``SAGEMAKER_REQUIREMENTS`` that contains the relative path to the requirements file from ``source_dir``.
+
+From an estimator:
+
+.. code:: python
+
+    # for an endpoint
+    estimator.deploy(..., env={"SAGEMAKER_REQUIREMENTS": "requirements.txt"})
+
+    # for batch transform
+    estimator.transformer(..., env={"SAGEMAKER_REQUIREMENTS": "requirements.txt"})
+
+From a model:
+
+.. code:: python
+
+    from sagemaker.model import FrameworkModel
+
+    model = FrameworkModel(
+        ...
+        source_dir="code",
+        env={"SAGEMAKER_REQUIREMENTS": "requirements.txt"},
+    )
+
+
+Predictors
+----------
+
+If you want to use your model for endpoints, then you can use the :class:`sagemaker.predictor.RealTimePredictor` class instead of the legacy ``sagemaker.tensorflow.TensorFlowPredictor`` class:
+
+.. code:: python
+
+    from sagemaker.model import FrameworkModel
+    from sagemaker.predictor import RealTimePredictor
+
+    model = FrameworkModel(
+        ...
+        predictor_cls=RealTimePredictor,
+    )
+
+    predictor = model.deploy(...)
+
+If you are using protobuf prediction data, then you need to serialize and deserialize the data yourself.
+
+For example:
+
+.. code:: python
+
+    from google.protobuf import json_format
+    from protobuf_to_dict import protobuf_to_dict
+    from tensorflow.core.framework import tensor_pb2
+
+    # Serialize the prediction data
+    serialized_data = json_format.MessageToJson(data)
+
+    # Get the prediction result
+    result = predictor.predict(serialized_data)
+
+    # Deserialize the prediction result
+    protobuf_to_dict(json_format.Parse(result, tensor_pb2.TensorProto()))
+
+Otherwise, you can use the serializers and deserializers available in the SageMaker Python SDK or write your own.
+
+For example, if you want to use JSON serialization and deserialization:
+
+.. 
code:: python + + from sagemaker.predictor import json_deserializer, json_serializer + + predictor.content_type = "application/json" + predictor.serializer = json_serializer + predictor.accept = "application/json" + predictor.deserializer = json_deserializer + + predictor.predict(data) diff --git a/doc/frameworks/tensorflow/using_tf.rst b/doc/frameworks/tensorflow/using_tf.rst index 38ff81e511..ae3a56a183 100644 --- a/doc/frameworks/tensorflow/using_tf.rst +++ b/doc/frameworks/tensorflow/using_tf.rst @@ -1,23 +1,17 @@ -############################################## -Using TensorFlow with the SageMaker Python SDK -############################################## +############################################ +Use TensorFlow with the SageMaker Python SDK +############################################ With the SageMaker Python SDK, you can train and host TensorFlow models on Amazon SageMaker. -For information about supported versions of TensorFlow, see the `AWS documentation `__. +For information about supported versions of TensorFlow, see the `AWS documentation `_. We recommend that you use the latest supported version because that's where we focus our development efforts. For general information about using the SageMaker Python SDK, see :ref:`overview:Using the SageMaker Python SDK`. .. warning:: - We have added a new format of your TensorFlow training script with TensorFlow version 1.11. - This new way gives the user script more flexibility. - This new format is called Script Mode, as opposed to Legacy Mode, which is what we support with TensorFlow 1.11 and older versions. - In addition we are adding Python 3 support with Script Mode. - The last supported version of Legacy Mode will be TensorFlow 1.12. - Script Mode is available with TensorFlow version 1.11 and newer. - Make sure you refer to the correct version of this README when you prepare your script. - You can find the Legacy Mode README `here `_. + Support for TensorFlow versions 1.4-1.10 has been deprecated. + For information on how to upgrade, see `Upgrade from Legacy TensorFlow Support `_. .. contents:: @@ -33,14 +27,12 @@ To train a TensorFlow model by using the SageMaker Python SDK: .. |call fit| replace:: Call the estimator's ``fit`` method .. _call fit: #call-the-fit-method -1. `Prepare a training script <#prepare-a-script-mode-training-script>`_ +1. `Prepare a training script <#prepare-a-training-script>`_ 2. |create tf estimator|_ 3. |call fit|_ -Prepare a Script Mode Training Script -===================================== - -Your TensorFlow training script must be a Python 2.7-, 3.6- or 3.7-compatible source file. +Prepare a Training Script +========================= The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, including the following: @@ -141,18 +133,18 @@ For training, support for installing packages using ``requirements.txt`` varies - For TensorFlow 1.15.2 with Python 3.7 or newer, and TensorFlow 2.2 or newer: - Include a ``requirements.txt`` file in the same directory as your training script. - You must specify this directory using the ``source_dir`` argument when creating a TensorFlow estimator. 
-- For older versions of TensorFlow using Script Mode (TensorFlow 1.11-1.15.2, 2.0-2.1 with Python 2.7 or 3.6):
+- For TensorFlow versions 1.11-1.15.2, 2.0-2.1 with Python 2.7 or 3.6:
   - Write a shell script for your entry point that first calls ``pip install -r requirements.txt``, then runs your training script.
   - For an example of using shell scripts, see `this example notebook `__.
-- For older versions of TensorFlow using Legacy Mode:
-  - Specify the path to your ``requirements.txt`` file using the ``requirements_file`` argument.
+- For legacy versions of TensorFlow:
+  - See `Upgrade from Legacy TensorFlow Support `_.

 For serving, support for installing packages using ``requirements.txt`` varies by TensorFlow version as follows:

 - For TensorFlow 1.11 or newer:
   - Include a ``requirements.txt`` file in the ``code`` directory.
-- For older versions of TensorFlow:
-  - Specify the path to your ``requirements.txt`` file using the ``SAGEMAKER_REQUIREMENTS`` environment variable.
+- For legacy versions of TensorFlow:
+  - See `Upgrade from Legacy TensorFlow Support `_.

 A ``requirements.txt`` file is a text file that contains a list of items that are installed by using ``pip install``.
 You can also specify the version of an item to install.

@@ -164,27 +156,11 @@ Create an Estimator

 After you create your training script, create an instance of the :class:`sagemaker.tensorflow.TensorFlow` estimator.

-To use Script Mode, set at least one of these args
-
-- ``py_version='py3'``
-- ``script_mode=True``
-
-To use Python 3.7, please specify both of the args:
+To use Python 3.7, specify both of the following args:

 - ``py_version='py37'``
 - ``framework_version='1.15.2'``

-When using Script Mode, your training script needs to accept the following args:
-
-- ``model_dir``
-
-The following args are not permitted when using Script Mode:
-
-- ``checkpoint_path``
-- ``training_steps``
-- ``evaluation_steps``
-- ``requirements_file``
-
 .. code:: python

     from sagemaker.tensorflow import TensorFlow

@@ -202,32 +178,7 @@ For more information about the sagemaker.tensorflow.TensorFlow estimator, see `S

 Call the fit Method
 ===================

-You start your training script by calling the ``fit`` method on a ``TensorFlow`` estimator. ``fit`` takes
-both required and optional arguments.
-
-Required arguments
-------------------
-
-- ``inputs``: The S3 location(s) of datasets to be used for training. This can take one of two forms:
-
-    - ``str``: An S3 URI, for example ``s3://my-bucket/my-training-data``, which indicates the dataset's location.
-    - ``dict[str, str]``: A dictionary mapping channel names to S3 locations, for example ``{'train': 's3://my-bucket/my-training-data/train', 'test': 's3://my-bucket/my-training-data/test'}``
-    - ``sagemaker.session.s3_input``: channel configuration for S3 data sources that can provide additional information as well as the path to the training dataset. See `the API docs `_ for full details.
-
-Optional arguments
-------------------
-
-- ``wait (bool)``: Defaults to True, whether to block and wait for the
-  training script to complete before returning.
-  If set to False, it will return immediately, and can later be attached to.
-- ``logs (bool)``: Defaults to True, whether to show logs produced by training
-  job in the Python session. Only meaningful when wait is True.
-- ``run_tensorboard_locally (bool)``: Defaults to False. If set to True a Tensorboard command will be printed out.
-- ``job_name (str)``: Training job name. If not specified, the estimator generates a default job name,
-  based on the training image name and current timestamp.
-
-What happens when fit is called
--------------------------------
+You start a training job by calling the ``fit`` method on a ``TensorFlow`` estimator.
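+
+For example (the bucket names and channel layout here are placeholders for your own data locations):
+
+.. code:: python
+
+    # a single S3 URI is used as a single training channel
+    tf_estimator.fit("s3://my-bucket/my-training-data")
+
+    # or map channel names to S3 locations explicitly
+    tf_estimator.fit({"train": "s3://my-bucket/train", "test": "s3://my-bucket/test"})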

 Calling ``fit`` starts a SageMaker training job. The training job will execute the following.

@@ -254,6 +205,8 @@ After attaching, the estimator can be deployed as usual.

     tf_estimator = TensorFlow.attach(training_job_name=training_job_name)

+For more information about the options available for ``fit``, see the `API documentation `_.
+
 Distributed Training
 ====================

@@ -285,8 +238,8 @@ Training with Horovod

 Horovod is a distributed training framework based on MPI. Horovod is only available with TensorFlow version ``1.12`` or newer.
 You can find more details at `Horovod README `__.

-The container sets up the MPI environment and executes the ``mpirun`` command enabling you to run any Horovod
-training script with Script Mode.
+The container sets up the MPI environment and executes the ``mpirun`` command, enabling you to run any Horovod
+training script.

-Training with ``MPI`` is configured by specifying following fields in ``distributions``:
+Training with ``MPI`` is configured by specifying the following fields in ``distributions``:

@@ -294,7 +247,7 @@ Training with ``MPI`` is configured by specifying following fields in ``distribu
 - ``processes_per_host (int)``: Number of processes MPI should launch on each host. Note, this should not be greater
   than the available slots on the selected instance type. This flag should be set for the multi-cpu/gpu training.
-- ``custom_mpi_options (str)``: Any `mpirun` flag(s) can be passed in this field that will be added to the `mpirun`
+- ``custom_mpi_options (str)``: Any ``mpirun`` flag(s) can be passed in this field that will be added to the ``mpirun``
   command executed by SageMaker to launch distributed horovod training.
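+
+For example, a sketch of enabling MPI (the instance count and option values here are illustrative):
+
+.. code:: python
+
+    from sagemaker.tensorflow import TensorFlow
+
+    tf_estimator = TensorFlow(
+        ...
+        train_instance_count=2,
+        distributions={
+            "mpi": {
+                "enabled": True,
+                "processes_per_host": 2,
+                "custom_mpi_options": "-x NCCL_DEBUG=INFO",
+            }
+        },
+    )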

@@ -465,9 +418,9 @@ If you already have existing model artifacts in S3, you can skip training and de

 .. code:: python

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+  model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')

   predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')

@@ -475,9 +428,9 @@ Python-based TensorFlow serving on SageMaker has support for `Elastic Inference

 .. code:: python

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')
+  model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz', role='MySageMakerRole')

   predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', accelerator_type='ml.eia1.medium')

@@ -549,7 +502,7 @@ classify/regress requests) to get multiple prediction results in one request to

 If your application allows request grouping like this, it is **much** more efficient than making separate requests.

-See `Deploying to TensorFlow Serving Endpoints ` to learn how to deploy your model and make inference requests.
+See `Deploying to TensorFlow Serving Endpoints `_ to learn how to deploy your model and make inference requests.

 Run a Batch Transform Job
 =========================

@@ -762,20 +715,20 @@ This customized Python code must be named ``inference.py`` and specified through

 .. code::

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(entry_point='inference.py',
-                model_data='s3://mybucket/model.tar.gz',
-                role='MySageMakerRole')
+  model = TensorFlowModel(entry_point='inference.py',
+                          model_data='s3://mybucket/model.tar.gz',
+                          role='MySageMakerRole')

 How to implement the pre- and/or post-processing handler(s)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

 Your entry point file must be named ``inference.py`` and should implement
-    either a pair of ``input_handler`` and ``output_handler`` functions or
-    a single ``handler`` function.
-    Note that if ``handler`` function is implemented, ``input_handler``
-    and ``output_handler`` are ignored.
+either a pair of ``input_handler`` and ``output_handler`` functions or
+a single ``handler`` function.
+Note that if a ``handler`` function is implemented, ``input_handler``
+and ``output_handler`` are ignored.

 To implement pre- and/or post-processing handler(s), use the Context object that the Python service creates. The Context object is a namedtuple with the following attributes:

@@ -915,12 +868,12 @@ processing. There are 2 ways to do this:

 .. code::

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(entry_point='inference.py',
-                dependencies=['requirements.txt'],
-                model_data='s3://mybucket/model.tar.gz',
-                role='MySageMakerRole')
+  model = TensorFlowModel(entry_point='inference.py',
+                          dependencies=['requirements.txt'],
+                          model_data='s3://mybucket/model.tar.gz',
+                          role='MySageMakerRole')


 2. If you are working in a network-isolation situation or if you don't

@@ -933,12 +886,12 @@

 .. code::

-  from sagemaker.tensorflow.serving import Model
+  from sagemaker.tensorflow import TensorFlowModel

-  model = Model(entry_point='inference.py',
-                dependencies=['/path/to/folder/named/lib'],
-                model_data='s3://mybucket/model.tar.gz',
-                role='MySageMakerRole')
+  model = TensorFlowModel(entry_point='inference.py',
+                          dependencies=['/path/to/folder/named/lib'],
+                          model_data='s3://mybucket/model.tar.gz',
+                          role='MySageMakerRole')

 For more information, see: https://github.com/aws/sagemaker-tensorflow-serving-container#prepost-processing

From e9f429edf06f30d76448601f707978eea205ab97 Mon Sep 17 00:00:00 2001
From: Lauren Yu <6631887+laurenyu@users.noreply.github.com>
Date: Mon, 8 Jun 2020 10:12:31 -0700
Subject: [PATCH 2/2] clarify *trained* model

---
 doc/frameworks/tensorflow/upgrade_from_legacy.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/frameworks/tensorflow/upgrade_from_legacy.rst b/doc/frameworks/tensorflow/upgrade_from_legacy.rst
index ff436b5c77..7beec18dda 100644
--- a/doc/frameworks/tensorflow/upgrade_from_legacy.rst
+++ b/doc/frameworks/tensorflow/upgrade_from_legacy.rst
@@ -140,10 +140,10 @@ To specify the number of model server workers, you need to set it through an env

     # v2
     estimator.deploy(..., env={"MODEL_SERVER_WORKERS": "4"})

-From a Model
-------------
+From a Trained Model
+--------------------

-If you are starting with a model, v2 requires the following changes:
+If you are starting with a trained model, v2 requires the following changes:

 #. Use the :class:`sagemaker.model.FrameworkModel` class.
 #. Explicitly specify the ECR image URI via ``image``.