Commit 784ff7a

Merge branch 'master' into fix-docs
2 parents: f738ea8 + d60f8d3


88 files changed: 8358 additions & 970 deletions

CHANGELOG.rst

Lines changed: 37 additions & 2 deletions
@@ -2,15 +2,50 @@
 CHANGELOG
 =========
 
-1.15.1dev
-=========
+1.16.1.post1
+============
+
+* Documentation: add documentation for Reinforcement Learning Estimator.
+* Documentation: update TensorFlow README for Script Mode
+
+1.16.1
+======
+
+* feature: update boto3 to version 1.9.55
+
+1.16.0
+======
 
+* feature: Add 0.10.1 coach version
+* feature: Add support for SageMaker Neo
+* feature: Estimators: Add RLEstimator to provide support for Reinforcement Learning
+* feature: Add support for Amazon Elastic Inference
+* feature: Add support for Algorithm Estimators and ModelPackages: includes support for AWS Marketplace
+* feature: Add SKLearn Estimator to provide support for SciKit Learn
+* feature: Add Amazon SageMaker Semantic Segmentation algorithm to the registry
+* feature: Add support for SageMaker Inference Pipelines
+* feature: Add support for SparkML serving container
+
+1.15.2
+======
+
+* bug-fix: Fix FileNotFoundError for entry_point without source_dir
+* doc-fix: Add missing feature 1.5.0 in change log
+* doc-fix: Add README for airflow
+
+1.15.1
+======
+
+* enhancement: Local Mode: add explicit pull for serving
 * feature: Estimators: dependencies attribute allows export of additional libraries into the container
 * feature: Add APIs to export Airflow transform and deploy config
+* bug-fix: Allow code_location argument to be S3 URI in training_config API
+* enhancement: Local Mode: add explicit pull for serving
 
 1.15.0
 ======
 
+* feature: Estimator: add script mode and Python 3 support for TensorFlow
 * bug-fix: Changes to use correct S3 bucket and time range for dataframes in TrainingJobAnalytics.
 * bug-fix: Local Mode: correctly handle the case where the model output folder doesn't exist yet
 * feature: Add APIs to export Airflow training, tuning and model config
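
The 1.15.1 entry above mentions the ``dependencies`` attribute on framework Estimators. As a brief, hedged sketch (the entry point, role, instance settings, and library directory below are illustrative placeholders rather than values from this commit), exporting an extra local library into the training container might look like this:

.. code:: python

    from sagemaker.mxnet import MXNet

    # 'my_shared_libs' is a hypothetical local directory that is copied into the
    # training container alongside the entry point, so 'train.py' can import from it.
    estimator = MXNet(entry_point='train.py',
                      dependencies=['my_shared_libs'],
                      role='SageMakerRole',
                      framework_version='1.3.0',
                      train_instance_count=1,
                      train_instance_type='ml.m4.xlarge')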

README.rst

Lines changed: 173 additions & 11 deletions
@@ -32,12 +32,18 @@ Table of Contents
 4. `TensorFlow SageMaker Estimators <#tensorflow-sagemaker-estimators>`__
 5. `Chainer SageMaker Estimators <#chainer-sagemaker-estimators>`__
 6. `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
-7. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
-8. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
-9. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
-10. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
-11. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
-12. `BYO Model <#byo-model>`__
+7. `SageMaker Reinforcement Learning Estimators <#sagemaker-reinforcement-learning-estimators>`__
+8. `SageMaker SparkML Serving <#sagemaker-sparkml-serving>`__
+9. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
+10. `Using SageMaker AlgorithmEstimators <#using-sagemaker-algorithmestimators>`__
+11. `Consuming SageMaker Model Packages <#consuming-sagemaker-model-packages>`__
+12. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
+13. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
+14. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
+15. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
+16. `BYO Model <#byo-model>`__
+17. `Inference Pipelines <#inference-pipelines>`__
+18. `SageMaker Workflow <#sagemaker-workflow>`__
 
 
 Installing the SageMaker Python SDK
@@ -138,6 +144,7 @@ The following sections of this document explain how to use the different estimators
 * `TensorFlow SageMaker Estimators and Models <#tensorflow-sagemaker-estimators>`__
 * `Chainer SageMaker Estimators and Models <#chainer-sagemaker-estimators>`__
 * `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
+* `SageMaker Reinforcement Learning Estimators <#sagemaker-reinforcement-learning-estimators>`__
 * `AWS SageMaker Estimators and Models <#aws-sagemaker-estimators>`__
 * `Custom SageMaker Estimators and Models <#byo-docker-containers-with-sagemaker-estimators>`__
 
@@ -341,15 +348,17 @@ Currently, the following algorithms support incremental training:
 
 - Image Classification
 - Object Detection
-- Semantics Segmentation
+- Semantic Segmentation
 
 
 MXNet SageMaker Estimators
 --------------------------
 
 By using MXNet SageMaker ``Estimators``, you can train and host MXNet models on Amazon SageMaker.
 
-Supported versions of MXNet: ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.
+Supported versions of MXNet: ``1.3.0``, ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.
+
+Supported versions of MXNet for Elastic Inference: ``1.3.0``
 
 We recommend that you use the latest supported version, because that's where we focus most of our development efforts.
 
@@ -365,6 +374,8 @@ By using TensorFlow SageMaker ``Estimators``, you can train and host TensorFlow models on Amazon SageMaker.
 
 Supported versions of TensorFlow: ``1.4.1``, ``1.5.0``, ``1.6.0``, ``1.7.0``, ``1.8.0``, ``1.9.0``, ``1.10.0``, ``1.11.0``.
 
+Supported versions of TensorFlow for Elastic Inference: ``1.11.0``.
+
 We recommend that you use the latest supported version, because that's where we focus most of our development efforts.
 
 For more information, see `TensorFlow SageMaker Estimators and Models`_.
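
The Elastic Inference notes added above only list supported versions. For illustration, here is a minimal sketch of attaching an accelerator at deploy time; the model artifact, inference script, role, and accelerator type are assumptions rather than values from this commit:

.. code:: python

    import sagemaker
    from sagemaker.mxnet import MXNetModel

    sess = sagemaker.Session()

    # Hypothetical model artifact and inference script; substitute your own.
    mxnet_model = MXNetModel(model_data='s3://my-bucket/model.tar.gz',
                             role='SageMakerRole',
                             entry_point='inference.py',
                             framework_version='1.3.0',
                             sagemaker_session=sess)

    # Request an Elastic Inference accelerator alongside the hosting instance.
    predictor = mxnet_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge',
                                   accelerator_type='ml.eia1.medium')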
@@ -373,7 +384,7 @@ For more information, see `TensorFlow SageMaker Estimators and Models`_.
 
 
 Chainer SageMaker Estimators
--------------------------------
+----------------------------
 
 By using Chainer SageMaker ``Estimators``, you can train and host Chainer models on Amazon SageMaker.
 
@@ -389,7 +400,7 @@ For more information about Chainer SageMaker ``Estimators``, see `Chainer SageMaker Estimators and Models`_.
 
 
 PyTorch SageMaker Estimators
--------------------------------
+----------------------------
 
 With PyTorch SageMaker ``Estimators``, you can train and host PyTorch models on Amazon SageMaker.
 
@@ -407,6 +418,55 @@ For more information about PyTorch SageMaker ``Estimators``, see `PyTorch SageMaker Estimators and Models`_.
 .. _PyTorch SageMaker Estimators and Models: src/sagemaker/pytorch/README.rst
 
 
+SageMaker Reinforcement Learning Estimators
+-------------------------------------------
+
+With Reinforcement Learning (RL) Estimators, you can use reinforcement learning to train models on Amazon SageMaker.
+
+Supported versions of Coach: ``0.10.1`` with TensorFlow, ``0.11.0`` with TensorFlow or MXNet.
+For more information about Coach, see https://github.com/NervanaSystems/coach
+
+Supported versions of Ray: ``0.5.3`` with TensorFlow.
+For more information about Ray, see https://github.com/ray-project/ray
+
+For more information about SageMaker RL ``Estimators``, see `SageMaker Reinforcement Learning Estimators`_.
+
+.. _SageMaker Reinforcement Learning Estimators: src/sagemaker/rl/README.rst
+
+
+SageMaker SparkML Serving
+-------------------------
+
+With SageMaker SparkML Serving, you can now perform predictions against a SparkML model in SageMaker.
+In order to host a SparkML model in SageMaker, it should be serialized with the ``MLeap`` library.
+
+For more information about MLeap, see https://github.com/combust/mleap.
+
+Supported major version of Spark: 2.2 (MLeap version 0.9.6)
+
+Here is an example of how to create an instance of the ``SparkMLModel`` class and use the ``deploy()`` method to create an
+endpoint that can be used to perform predictions against your trained SparkML model.
+
+.. code:: python
+
+    sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+    model_name = 'sparkml-model'
+    endpoint_name = 'sparkml-endpoint'
+    predictor = sparkml_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge', endpoint_name=endpoint_name)
+
+Once the model is deployed, we can invoke the endpoint with a ``CSV`` payload like this:
+
+.. code:: python
+
+    payload = 'field_1,field_2,field_3,field_4,field_5'
+    predictor.predict(payload)
+
+
+For more information about the different ``content-type`` and ``Accept`` formats as well as the structure of the
+``schema`` that SageMaker SparkML Serving recognizes, please see `SageMaker SparkML Serving Container`_.
+
+.. _SageMaker SparkML Serving Container: https://github.com/aws/sagemaker-sparkml-serving-container
+
 AWS SageMaker Estimators
 ------------------------
 Amazon SageMaker provides several built-in machine learning algorithms that you can use to solve a variety of problems.
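
The Reinforcement Learning section added above lists the supported toolkits without a snippet. As a hedged sketch (the training script, role, and instance settings are illustrative assumptions rather than values from this commit), launching an ``RLEstimator`` training job with Coach on TensorFlow might look like this:

.. code:: python

    from sagemaker.rl import RLEstimator, RLFramework, RLToolkit

    # Hypothetical Coach training script; Coach 0.11.0 on TensorFlow as listed above.
    estimator = RLEstimator(entry_point='train-coach.py',
                            toolkit=RLToolkit.COACH,
                            toolkit_version='0.11.0',
                            framework=RLFramework.TENSORFLOW,
                            role='SageMakerRole',
                            train_instance_count=1,
                            train_instance_type='ml.m4.xlarge')

    estimator.fit()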
@@ -420,6 +480,59 @@ For more information, see `AWS SageMaker Estimators and Models`_.
 
 .. _AWS SageMaker Estimators and Models: src/sagemaker/amazon/README.rst
 
+Using SageMaker AlgorithmEstimators
+-----------------------------------
+
+With the SageMaker Algorithm entities, you can create training jobs with just an ``algorithm_arn`` instead of
+a training image. There is a dedicated ``AlgorithmEstimator`` class that accepts ``algorithm_arn`` as a
+parameter; the rest of the arguments are similar to those of the other Estimator classes. This class also allows you to
+consume algorithms that you have subscribed to in the AWS Marketplace. The AlgorithmEstimator performs
+client-side validation on your inputs based on the algorithm's properties.
+
+Here is an example:
+
+.. code:: python
+
+    import sagemaker
+
+    algo = sagemaker.AlgorithmEstimator(
+        algorithm_arn='arn:aws:sagemaker:us-west-2:1234567:algorithm/some-algorithm',
+        role='SageMakerRole',
+        train_instance_count=1,
+        train_instance_type='ml.c4.xlarge')
+
+    train_input = algo.sagemaker_session.upload_data(path='/path/to/your/data')
+
+    algo.fit({'training': train_input})
+    algo.deploy(1, 'ml.m4.xlarge')
+
+    # When you are done using your endpoint
+    algo.delete_endpoint()
+
+
+Consuming SageMaker Model Packages
+----------------------------------
+
+SageMaker Model Packages are a way to specify and share information for how to create SageMaker Models.
+With a SageMaker Model Package that you have created or subscribed to in the AWS Marketplace,
+you can use the specified serving image and model data for Endpoints and Batch Transform jobs.
+
+To work with a SageMaker Model Package, use the ``ModelPackage`` class.
+
+Here is an example:
+
+.. code:: python
+
+    import sagemaker
+
+    model = sagemaker.ModelPackage(
+        role='SageMakerRole',
+        model_package_arn='arn:aws:sagemaker:us-west-2:123456:model-package/my-model-package')
+    model.deploy(1, 'ml.m4.xlarge', endpoint_name='my-endpoint')
+
+    # When you are done using your endpoint
+    model.sagemaker_session.delete_endpoint('my-endpoint')
+
 
 BYO Docker Containers with SageMaker Estimators
 -----------------------------------------------
@@ -434,7 +547,7 @@ Please refer to the full example in the examples repo:
     git clone https://github.com/awslabs/amazon-sagemaker-examples.git
 
 
-The example notebook is is located here:
+The example notebook is located here:
 ``advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb``
 
 
@@ -706,3 +819,52 @@ After that, invoke the ``deploy()`` method on the ``Model``:
 This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. You can now get inferences just like with any other model deployed on Amazon SageMaker.
 
 A full example is available in the `Amazon SageMaker examples repository <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/mxnet_mnist_byom>`__.
+
+
+Inference Pipelines
+-------------------
+You can create a Pipeline for real-time or batch inference comprising one or more model containers. This helps
+you deploy an ML pipeline behind a single endpoint, so that one API call performs pre-processing, model scoring,
+and post-processing on your data before returning the result as the response.
+
+For this, you have to create a ``PipelineModel``, which takes a list of ``Model`` objects. Calling ``deploy()`` on the
+``PipelineModel`` provides you with an endpoint that can be invoked to run a data point through the ML pipeline.
+
+.. code:: python
+
+    xgb_image = get_image_uri(sess.boto_region_name, 'xgboost', repo_version="latest")
+    xgb_model = Model(model_data='s3://path/to/model.tar.gz', image=xgb_image)
+    sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+
+    model_name = 'inference-pipeline-model'
+    endpoint_name = 'inference-pipeline-endpoint'
+    sm_model = PipelineModel(name=model_name, role=sagemaker_role, models=[sparkml_model, xgb_model])
+
+This defines a ``PipelineModel`` consisting of a SparkML model and an XGBoost model stacked sequentially. For more
+information about how to train an XGBoost model, please refer to the XGBoost notebook here_.
+
+.. _here: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#xgboost-sample-notebooks
+
+.. code:: python
+
+    sm_model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', endpoint_name=endpoint_name)
+
+This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. Whenever you make an inference
+request with this predictor, you should pass the data that the first container expects; the predictor returns the
+output from the last container.
+
+For comprehensive examples of how to use Inference Pipelines, please refer to the following notebooks:
+
+- `inference_pipeline_sparkml_xgboost_abalone.ipynb <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/inference_pipeline_sparkml_xgboost_abalone/inference_pipeline_sparkml_xgboost_abalone.ipynb>`__
+- `inference_pipeline_sparkml_blazingtext_dbpedia.ipynb <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/inference_pipeline_sparkml_blazingtext_dbpedia/inference_pipeline_sparkml_blazingtext_dbpedia.ipynb>`__
+
+
+SageMaker Workflow
+------------------
+
+You can use Apache Airflow to author, schedule, and monitor SageMaker workflows.
+
+For more information, see `SageMaker Workflow in Apache Airflow`_.
+
+.. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst
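
The Workflow section above points to the Airflow README without an inline example. As a rough sketch only (the estimator, script, S3 input, and the ``training_config`` helper shown here are assumptions based on the changelog's Airflow config export feature, not lines from this commit), exporting a training configuration for Airflow might look like this:

.. code:: python

    from sagemaker.mxnet import MXNet
    from sagemaker.workflow.airflow import training_config

    # Hypothetical estimator; any SageMaker estimator could be exported the same way.
    estimator = MXNet(entry_point='train.py',
                      role='SageMakerRole',
                      framework_version='1.3.0',
                      train_instance_count=1,
                      train_instance_type='ml.m4.xlarge')

    # Produces a dict that an Airflow SageMaker training operator can consume.
    config = training_config(estimator=estimator, inputs='s3://my-bucket/training-data')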

doc/conf.py

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ def __getattr__(cls, name):
 'numpy', 'scipy', 'scipy.sparse']
 sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)
 
-version = '1.15.0'
+version = '1.16.1.post1'
 project = u'sagemaker'
 
 # Add any Sphinx extension module names here, as strings. They can be extensions

doc/index.rst

Lines changed: 9 additions & 0 deletions
@@ -39,6 +39,15 @@ A managed environment for TensorFlow training and hosting on Amazon SageMaker
 
     sagemaker.tensorflow
 
+Reinforcement Learning
+----------------------
+A managed environment for Reinforcement Learning training and hosting on Amazon SageMaker
+
+.. toctree::
+    :maxdepth: 2
+
+    sagemaker.rl
+
 SageMaker First-Party Algorithms
 --------------------------------
 Amazon provides implementations of some common machine learning algorithms optimized for GPU architecture and massive datasets.

doc/pipeline.rst

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@
+PipelineModel
+-------------
+
+.. autoclass:: sagemaker.pipeline.PipelineModel
+    :members:
+    :undoc-members:
+    :show-inheritance:

doc/sagemaker.sparkml.rst

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+SparkML Serving
+===============
+
+SparkML Model
+-------------
+
+.. autoclass:: sagemaker.sparkml.model.SparkMLModel
+    :members:
+    :undoc-members:
+    :show-inheritance:
+
+SparkML Predictor
+-----------------
+
+.. autoclass:: sagemaker.sparkml.model.SparkMLPredictor
+    :members:
+    :undoc-members:
+    :show-inheritance:
