@@ -32,12 +32,18 @@ Table of Contents
4. `TensorFlow SageMaker Estimators <#tensorflow-sagemaker-estimators>`__
5. `Chainer SageMaker Estimators <#chainer-sagemaker-estimators>`__
6. `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
- 7. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
- 8. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
- 9. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
- 10. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
- 11. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
- 12. `BYO Model <#byo-model>`__
+ 7. `SageMaker Reinforcement Learning Estimators <#sagemaker-reinforcement-learning-estimators>`__
+ 8. `SageMaker SparkML Serving <#sagemaker-sparkml-serving>`__
+ 9. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
+ 10. `Using SageMaker AlgorithmEstimators <#using-sagemaker-algorithmestimators>`__
+ 11. `Consuming SageMaker Model Packages <#consuming-sagemaker-model-packages>`__
+ 12. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
+ 13. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
+ 14. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
+ 15. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
+ 16. `BYO Model <#byo-model>`__
+ 17. `Inference Pipelines <#inference-pipelines>`__
+ 18. `SageMaker Workflow <#sagemaker-workflow>`__

Installing the SageMaker Python SDK
@@ -138,6 +144,7 @@ The following sections of this document explain how to use the different estimat
* `TensorFlow SageMaker Estimators and Models <#tensorflow-sagemaker-estimators>`__
* `Chainer SageMaker Estimators and Models <#chainer-sagemaker-estimators>`__
* `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
+ * `SageMaker Reinforcement Learning Estimators <#sagemaker-reinforcement-learning-estimators>`__
* `AWS SageMaker Estimators and Models <#aws-sagemaker-estimators>`__
* `Custom SageMaker Estimators and Models <#byo-docker-containers-with-sagemaker-estimators>`__
@@ -341,15 +348,17 @@ Currently, the following algorithms support incremental training:

- Image Classification
- Object Detection
- - Semantics Segmentation
+ - Semantic Segmentation


MXNet SageMaker Estimators
--------------------------

By using MXNet SageMaker ``Estimators``, you can train and host MXNet models on Amazon SageMaker.

- Supported versions of MXNet: ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.
+ Supported versions of MXNet: ``1.3.0``, ``1.2.1``, ``1.1.0``, ``1.0.0``, ``0.12.1``.
+
+ Supported versions of MXNet for Elastic Inference: ``1.3.0``

We recommend that you use the latest supported version, because that's where we focus most of our development efforts.
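For orientation, here is a minimal sketch of that train-and-host pattern with the MXNet estimator; the training script, role name, and S3 paths are placeholders, not part of this change:

.. code:: python

    from sagemaker.mxnet import MXNet

    # Placeholder script, role, and S3 location; substitute your own.
    mxnet_estimator = MXNet(entry_point='mnist.py',
                            role='SageMakerRole',
                            framework_version='1.3.0',
                            train_instance_count=1,
                            train_instance_type='ml.p2.xlarge')

    # Train on data already uploaded to S3, then host the resulting model.
    mxnet_estimator.fit('s3://my-bucket/path/to/training/data')
    predictor = mxnet_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')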
@@ -365,6 +374,8 @@ By using TensorFlow SageMaker ``Estimators``, you can train and host TensorFlow

Supported versions of TensorFlow: ``1.4.1``, ``1.5.0``, ``1.6.0``, ``1.7.0``, ``1.8.0``, ``1.9.0``, ``1.10.0``, ``1.11.0``.

+ Supported versions of TensorFlow for Elastic Inference: ``1.11.0``.
+
We recommend that you use the latest supported version, because that's where we focus most of our development efforts.

For more information, see `TensorFlow SageMaker Estimators and Models`_.
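The corresponding TensorFlow workflow follows the same shape. A minimal sketch, assuming a legacy-mode training script plus placeholder role, step counts, and S3 paths:

.. code:: python

    from sagemaker.tensorflow import TensorFlow

    # Placeholder script, role, steps, and S3 location; substitute your own.
    tf_estimator = TensorFlow(entry_point='tf_mnist.py',
                              role='SageMakerRole',
                              framework_version='1.11.0',
                              training_steps=1000,
                              evaluation_steps=100,
                              train_instance_count=1,
                              train_instance_type='ml.p2.xlarge')

    tf_estimator.fit('s3://my-bucket/path/to/training/data')

    # If your SDK version supports Elastic Inference, deploy() may also accept an
    # accelerator_type argument (an assumption here, e.g. accelerator_type='ml.eia1.medium').
    predictor = tf_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')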
@@ -373,7 +384,7 @@ For more information, see `TensorFlow SageMaker Estimators and Models`_.


Chainer SageMaker Estimators
- -------------------------------
+ ----------------------------


By using Chainer SageMaker ``Estimators``, you can train and host Chainer models on Amazon SageMaker.
@@ -389,7 +400,7 @@ For more information about Chainer SageMaker ``Estimators``, see `Chainer SageM


PyTorch SageMaker Estimators
- -------------------------------
+ ----------------------------


With PyTorch SageMaker ``Estimators``, you can train and host PyTorch models on Amazon SageMaker.
@@ -407,6 +418,55 @@ For more information about PyTorch SageMaker ``Estimators``, see `PyTorch SageMa
.. _PyTorch SageMaker Estimators and Models: src/sagemaker/pytorch/README.rst


+ SageMaker Reinforcement Learning Estimators
+ -------------------------------------------
+
+ With Reinforcement Learning (RL) Estimators, you can use reinforcement learning to train models on Amazon SageMaker.
+
+ Supported versions of Coach: ``0.10.1`` with TensorFlow, ``0.11.0`` with TensorFlow or MXNet.
+ For more information about Coach, see https://github.com/NervanaSystems/coach
+
+ Supported versions of Ray: ``0.5.3`` with TensorFlow.
+ For more information about Ray, see https://github.com/ray-project/ray
+
+ For more information about SageMaker RL ``Estimators``, see `SageMaker Reinforcement Learning Estimators`_.
+
+ .. _SageMaker Reinforcement Learning Estimators: src/sagemaker/rl/README.rst
+
+
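A minimal training sketch with the RL estimator, assuming a Coach launcher script named ``coach_launcher.py`` and an IAM role (both illustrative, not part of this change):

.. code:: python

    from sagemaker.rl import RLEstimator, RLToolkit, RLFramework

    # Placeholder script and role; Coach 0.11.0 on TensorFlow matches the supported versions above.
    rl_estimator = RLEstimator(entry_point='coach_launcher.py',
                               toolkit=RLToolkit.COACH,
                               toolkit_version='0.11.0',
                               framework=RLFramework.TENSORFLOW,
                               role='SageMakerRole',
                               train_instance_count=1,
                               train_instance_type='ml.m4.xlarge')

    # Many RL jobs need no input channels; the environment is defined in the launcher script.
    rl_estimator.fit()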
+ SageMaker SparkML Serving
+ -------------------------
+
+ With SageMaker SparkML Serving, you can now perform predictions against a SparkML model in SageMaker.
+ In order to host a SparkML model in SageMaker, it should be serialized with the ``MLeap`` library.
+
+ For more information on MLeap, see https://github.com/combust/mleap .
+
+ Supported major version of Spark: 2.2 (MLeap version - 0.9.6)
+
+ Here is an example of how to create an instance of the ``SparkMLModel`` class and use the ``deploy()`` method to create an
+ endpoint that you can use to perform predictions against your trained SparkML model.
+
+ .. code:: python
+
+     sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+     model_name = 'sparkml-model'
+     endpoint_name = 'sparkml-endpoint'
+     predictor = sparkml_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge', endpoint_name=endpoint_name)
+
+ Once the model is deployed, you can invoke the endpoint with a ``CSV`` payload like this:
+
+ .. code:: python
+
+     payload = 'field_1,field_2,field_3,field_4,field_5'
+     predictor.predict(payload)
+
+
+ For more information about the different ``content-type`` and ``Accept`` formats, as well as the structure of the
+ ``schema`` that SageMaker SparkML Serving recognizes, please see `SageMaker SparkML Serving Container`_.
+
+ .. _SageMaker SparkML Serving Container: https://github.com/aws/sagemaker-sparkml-serving-container
+

AWS SageMaker Estimators
------------------------
Amazon SageMaker provides several built-in machine learning algorithms that you can use to solve a variety of problems.
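As a quick illustration of the first-party estimators, training and deploying the built-in k-means algorithm can look like the sketch below; the role name and the random data are placeholders:

.. code:: python

    import numpy as np
    from sagemaker import KMeans

    # Placeholder role and toy data; use your own role ARN/name and dataset.
    kmeans = KMeans(role='SageMakerRole',
                    train_instance_count=1,
                    train_instance_type='ml.c4.xlarge',
                    k=10)

    train_data = np.random.rand(1000, 50).astype('float32')

    # record_set() uploads the array in the protobuf recordIO format the built-in algorithm expects.
    kmeans.fit(kmeans.record_set(train_data))

    predictor = kmeans.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')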
@@ -420,6 +480,59 @@ For more information, see `AWS SageMaker Estimators and Models`_.

.. _AWS SageMaker Estimators and Models: src/sagemaker/amazon/README.rst

+ Using SageMaker AlgorithmEstimators
+ -----------------------------------
+
+ With the SageMaker Algorithm entities, you can create training jobs with just an ``algorithm_arn`` instead of
+ a training image. There is a dedicated ``AlgorithmEstimator`` class that accepts ``algorithm_arn`` as a
+ parameter; the rest of the arguments are similar to those of the other Estimator classes. This class also allows you to
+ consume algorithms that you have subscribed to in the AWS Marketplace. The ``AlgorithmEstimator`` performs
+ client-side validation on your inputs based on the algorithm's properties.
+
+ Here is an example:
+
+ .. code:: python
+
+     import sagemaker
+
+     algo = sagemaker.AlgorithmEstimator(
+         algorithm_arn='arn:aws:sagemaker:us-west-2:1234567:algorithm/some-algorithm',
+         role='SageMakerRole',
+         train_instance_count=1,
+         train_instance_type='ml.c4.xlarge')
+
+     train_input = algo.sagemaker_session.upload_data(path='/path/to/your/data')
+
+     algo.fit({'training': train_input})
+     algo.deploy(1, 'ml.m4.xlarge')
+
+     # When you are done using your endpoint
+     algo.delete_endpoint()
+
+
+ Consuming SageMaker Model Packages
+ ----------------------------------
+
+ SageMaker Model Packages are a way to specify and share information for how to create SageMaker Models.
+ With a SageMaker Model Package that you have created or subscribed to in the AWS Marketplace,
+ you can use the specified serving image and model data for Endpoints and Batch Transform jobs.
+
+ To work with a SageMaker Model Package, use the ``ModelPackage`` class.
+
+ Here is an example:
+
+ .. code:: python
+
+     import sagemaker
+
+     model = sagemaker.ModelPackage(
+         role='SageMakerRole',
+         model_package_arn='arn:aws:sagemaker:us-west-2:123456:model-package/my-model-package')
+     model.deploy(1, 'ml.m4.xlarge', endpoint_name='my-endpoint')
+
+     # When you are done using your endpoint
+     model.sagemaker_session.delete_endpoint('my-endpoint')
+

BYO Docker Containers with SageMaker Estimators
-----------------------------------------------
@@ -434,7 +547,7 @@ Please refer to the full example in the examples repo:

    git clone https://github.com/awslabs/amazon-sagemaker-examples.git


- The example notebook is is located here:
+ The example notebook is located here:
``advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb``

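To situate that notebook, here is a minimal sketch of pointing the generic ``Estimator`` at your own training image; the ECR image URI, role, and S3 path are placeholders:

.. code:: python

    from sagemaker.estimator import Estimator

    # Placeholder image URI and role; push your own training image to ECR first.
    estimator = Estimator(image_name='123456789012.dkr.ecr.us-west-2.amazonaws.com/my-training-image:latest',
                          role='SageMakerRole',
                          train_instance_count=1,
                          train_instance_type='ml.c4.xlarge')

    estimator.fit('s3://my-bucket/path/to/training/data')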
@@ -706,3 +819,52 @@ After that, invoke the ``deploy()`` method on the ``Model``:
This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. You can now get inferences just like with any other model deployed on Amazon SageMaker.

A full example is available in the `Amazon SageMaker examples repository <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/mxnet_mnist_byom>`__.
+
+
+ Inference Pipelines
+ -------------------
+ You can create a Pipeline for realtime or batch inference comprising one or more model containers. This lets you
+ deploy an ML pipeline behind a single endpoint, so that one API call performs pre-processing, model scoring,
+ and post-processing on your data before returning the result as the response.
+
+ To do this, create a ``PipelineModel``, which takes a list of ``Model`` objects. Calling ``deploy()`` on the
+ ``PipelineModel`` gives you an endpoint that you can invoke to run a data point through the ML pipeline.
+
+ .. code:: python
+
+     xgb_image = get_image_uri(sess.boto_region_name, 'xgboost', repo_version="latest")
+     xgb_model = Model(model_data='s3://path/to/model.tar.gz', image=xgb_image)
+     sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+
+     model_name = 'inference-pipeline-model'
+     endpoint_name = 'inference-pipeline-endpoint'
+     sm_model = PipelineModel(name=model_name, role=sagemaker_role, models=[sparkml_model, xgb_model])
+
+ This defines a ``PipelineModel`` consisting of a SparkML model and an XGBoost model stacked sequentially. For more
+ information about how to train an XGBoost model, please refer to the XGBoost notebook here_.
+
+ .. _here: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#xgboost-sample-notebooks
+
+ .. code:: python
+
+     sm_model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', endpoint_name=endpoint_name)
+
+ This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. Whenever you make an inference
+ request with this predictor, pass the data that the first container expects; the predictor returns the
+ output of the last container.
+
+ For comprehensive examples of how to use Inference Pipelines, please refer to the following notebooks:
+
+ - `inference_pipeline_sparkml_xgboost_abalone.ipynb <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/inference_pipeline_sparkml_xgboost_abalone/inference_pipeline_sparkml_xgboost_abalone.ipynb>`__
+ - `inference_pipeline_sparkml_blazingtext_dbpedia.ipynb <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/inference_pipeline_sparkml_blazingtext_dbpedia/inference_pipeline_sparkml_blazingtext_dbpedia.ipynb>`__
+
+
+ SageMaker Workflow
+ ------------------
+
+ You can use Apache Airflow to author, schedule, and monitor SageMaker workflows.
+
+ For more information, see `SageMaker Workflow in Apache Airflow`_.
+
+ .. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst
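A hedged sketch of what this integration can look like; it assumes Airflow 1.10+ with the SageMaker contrib operators installed, and ``estimator``, ``train_s3_uri``, and ``dag`` defined elsewhere in your DAG file:

.. code:: python

    from sagemaker.workflow.airflow import training_config
    from airflow.contrib.operators.sagemaker_training_operator import SageMakerTrainingOperator

    # Build a training job config from an existing SageMaker estimator and S3 input (illustrative names).
    train_config = training_config(estimator=estimator, inputs=train_s3_uri)

    # Schedule the training job as an Airflow task.
    train_op = SageMakerTrainingOperator(
        task_id='sagemaker_training',
        config=train_config,
        wait_for_completion=True,
        dag=dag)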