diff --git a/src/sagemaker/tensorflow/README.rst b/src/sagemaker/tensorflow/README.rst
index b7587155b9..dbaf5b3a32 100644
--- a/src/sagemaker/tensorflow/README.rst
+++ b/src/sagemaker/tensorflow/README.rst
@@ -647,6 +647,38 @@ When the ``deploy`` call finishes, the created SageMaker Endpoint is ready for p
 how to make predictions against the Endpoint, how to use different content-types in your
 requests, and how to extend the Web server functionality.
 
+Deploying directly from model artifacts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you already have model artifacts in S3, you can skip training and deploy them directly to an endpoint:
+
+.. code:: python
+
+  from sagemaker.tensorflow import TensorFlowModel
+
+  tf_model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz',
+                             role='MySageMakerRole',
+                             entry_point='entry.py',
+                             name='model_name')
+
+  predictor = tf_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
+
+If you need to install additional packages into the deployed runtime environment, include a pip `requirements file <https://pip.pypa.io/en/stable/user_guide/#requirements-files>`_
+in your ``source_dir`` and set the ``SAGEMAKER_REQUIREMENTS`` environment variable to its path relative to ``source_dir``:
+
+.. code:: python
+
+  from sagemaker.tensorflow import TensorFlowModel
+
+  tf_model = TensorFlowModel(model_data='s3://mybucket/model.tar.gz',
+                             role='MySageMakerRole',
+                             entry_point='entry.py',
+                             source_dir='my_src',  # directory containing the entry_point script and requirements file
+                             name='model_name',
+                             env={'SAGEMAKER_REQUIREMENTS': 'requirements.txt'})  # path relative to source_dir
+
+  predictor = tf_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
+
 Making predictions against a SageMaker Endpoint
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
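
Once ``deploy`` returns, the ``predictor`` from the blocks above can be invoked directly. The following is a minimal sketch, assuming a model whose serving signature accepts a flat list of floats (the sample input is hypothetical) and using the SDK's ``Session.delete_endpoint`` call for teardown:

.. code:: python

  import sagemaker

  # Hypothetical input: a flat list of four floats. Adjust to whatever
  # the deployed model's serving signature actually expects.
  result = predictor.predict([6.4, 3.2, 4.5, 1.5])
  print(result)

  # Endpoints incur charges while they run; delete the endpoint once done.
  sagemaker.Session().delete_endpoint(predictor.endpoint)

The predictor handles request serialization and response deserialization, so the round trip is a single call from the caller's point of view.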