Commit a70bd89

Merge branch 'master' into tf242ioc
2 parents: 4b30f0a + 9002d6f

44 files changed: +1035 additions, -228 deletions

CHANGELOG.md

Lines changed: 19 additions & 0 deletions
@@ -1,5 +1,24 @@
 # Changelog
 
+## v2.81.0 (2022-03-26)
+
+### Features
+
+* Retrieve data configuration
+* enable EnableInterContainerTrafficEncryption for model monitoring
+* Hugging Face Transformers 4.17 for PT 1.10
+
+### Bug Fixes and Other Changes
+
+* remove `new` from serverless
+* temporarily skip tests impacted by data inconsistency
+* Implement override solution for pipeline variables
+
+### Documentation Changes
+
+* add documentation for image_uri serverless use case
+* minor fixes for smddp 1.4.0 doc
+
 ## v2.80.0 (2022-03-18)
 
 ### Features

VERSION

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-2.80.1.dev0
+2.81.1.dev0

doc/overview.rst

Lines changed: 32 additions & 2 deletions
@@ -1226,6 +1226,28 @@ to configure or manage the underlying infrastructure. After you trained a model,
 Serverless endpoint and then invoke the endpoint with the model to get inference results back. More information about
 SageMaker Serverless Inference can be found in the `AWS documentation <https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html>`__.
 
+For using SageMaker Serverless Inference, if you plan to use any of the SageMaker-provided container or Bring Your Own Container
+model, you will need to pass ``image_uri``. An example to use ``image_uri`` for creating MXNet model:
+
+.. code:: python
+
+    from sagemaker.mxnet import MXNetModel
+    import sagemaker
+
+    role = sagemaker.get_execution_role()
+
+    # create MXNet Model Class
+    mxnet_model = MXNetModel(
+        model_data="s3://my_bucket/pretrained_model/model.tar.gz",  # path to your trained sagemaker model
+        role=role,  # iam role with permissions to create an Endpoint
+        entry_point="inference.py",
+        image_uri="763104351884.dkr.ecr.us-west-2.amazonaws.com/mxnet-inference:1.4.1-cpu-py3"  # image wanted to use
+    )
+
+For more Amazon SageMaker provided algorithms and containers image paths, please check this page: `Amazon SageMaker provided
+algorithms and Deep Learning Containers <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+After creating model using ``image_uri``, you can then follow the steps below to create serverless endpoint.
+
 To deploy serverless endpoint, you will need to create a ``ServerlessInferenceConfig``.
 If you create ``ServerlessInferenceConfig`` without specifying its arguments, the default ``MemorySizeInMB`` will be **2048** and
 the default ``MaxConcurrency`` will be **5** :
@@ -1235,14 +1257,14 @@ the default ``MaxConcurrency`` will be **5** :
     from sagemaker.serverless import ServerlessInferenceConfig
 
     # Create an empty ServerlessInferenceConfig object to use default values
-    serverless_config = new ServerlessInferenceConfig()
+    serverless_config = ServerlessInferenceConfig()
 
 Or you can specify ``MemorySizeInMB`` and ``MaxConcurrency`` in ``ServerlessInferenceConfig`` (example shown below):
 
 .. code:: python
 
     # Specify MemorySizeInMB and MaxConcurrency in the serverless config object
-    serverless_config = new ServerlessInferenceConfig(
+    serverless_config = ServerlessInferenceConfig(
         memory_size_in_mb=4096,
         max_concurrency=10,
     )
@@ -1254,6 +1276,14 @@ Then use the ``ServerlessInferenceConfig`` in the estimator's ``deploy()`` method:
     # Deploys the model that was generated by fit() to a SageMaker serverless endpoint
     serverless_predictor = estimator.deploy(serverless_inference_config=serverless_config)
 
+Or directly using model's ``deploy()`` method to deploy a serverless endpoint:
+
+.. code:: python
+
+    # Deploys the model to a SageMaker serverless endpoint
+    serverless_predictor = model.deploy(serverless_inference_config=serverless_config)
+
+
 After deployment is complete, you can use predictor's ``predict()`` method to invoke the serverless endpoint just like
 real-time endpoints:

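A usage note on the doc/overview.rst addition above: once deployed, the serverless predictor behaves like a real-time one. A minimal sketch of the invocation step that the doc stops short of — the payload shape, serializer choices, and cleanup call are illustrative assumptions, not part of this commit:

    from sagemaker.serializers import JSONSerializer
    from sagemaker.deserializers import JSONDeserializer

    # Invoke the serverless endpoint exactly like a real-time endpoint
    serverless_predictor.serializer = JSONSerializer()
    serverless_predictor.deserializer = JSONDeserializer()
    result = serverless_predictor.predict({"inputs": [1.0, 2.0, 3.0]})  # payload depends on inference.py

    # Remove the endpoint and its configuration when no longer needed
    serverless_predictor.delete_endpoint()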
setup.py

Lines changed: 2 additions & 1 deletion
@@ -16,7 +16,7 @@
 import os
 from glob import glob
 
-from setuptools import setup, find_packages
+from setuptools import find_packages, setup
 
 
 def read(fname):
@@ -81,6 +81,7 @@ def read_version():
         "fabric==2.6.0",
         "requests==2.27.1",
         "sagemaker-experiments==0.1.35",
+        "Jinja2==3.0.3",
     ],
 )

src/sagemaker/chainer/model.py

Lines changed: 8 additions & 4 deletions
@@ -99,10 +99,14 @@ def __init__(
                 file which should be executed as the entry point to model
                 hosting. If ``source_dir`` is specified, then ``entry_point``
                 must point to a file located at the root of ``source_dir``.
-            image_uri (str): A Docker image URI (default: None). If not specified, a
-                default image for Chainer will be used. If ``framework_version``
-                or ``py_version`` are ``None``, then ``image_uri`` is required. If
-                also ``None``, then a ``ValueError`` will be raised.
+            image_uri (str): A Docker image URI (default: None). In serverless
+                inference, it is required. More image information can be found in
+                `Amazon SageMaker provided algorithms and Deep Learning Containers
+                <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+                In instance based inference, if not specified, a default image for
+                Chainer will be used. If ``framework_version`` or ``py_version``
+                are ``None``, then ``image_uri`` is required. If also ``None``,
+                then a ``ValueError`` will be raised.
             framework_version (str): Chainer version you want to use for
                 executing your model training code. Defaults to ``None``. Required
                 unless ``image_uri`` is provided.

src/sagemaker/estimator.py

Lines changed: 3 additions & 5 deletions
@@ -74,9 +74,7 @@
     get_config_value,
     name_from_base,
 )
-from sagemaker.workflow.entities import Expression
-from sagemaker.workflow.parameters import Parameter
-from sagemaker.workflow.properties import Properties
+from sagemaker.workflow.entities import PipelineVariable
 
 logger = logging.getLogger(__name__)
 
@@ -602,7 +600,7 @@ def _json_encode_hyperparameters(hyperparameters: Dict[str, Any]) -> Dict[str, Any]:
         current_hyperparameters = hyperparameters
         if current_hyperparameters is not None:
             hyperparameters = {
-                str(k): (v if isinstance(v, (Parameter, Expression, Properties)) else json.dumps(v))
+                str(k): (v.to_string() if isinstance(v, PipelineVariable) else json.dumps(v))
                 for (k, v) in current_hyperparameters.items()
             }
         return hyperparameters
@@ -1813,7 +1811,7 @@ def _get_train_args(cls, estimator, inputs, experiment_config):
         current_hyperparameters = estimator.hyperparameters()
         if current_hyperparameters is not None:
             hyperparameters = {
-                str(k): (v if isinstance(v, (Parameter, Expression, Properties)) else str(v))
+                str(k): (v.to_string() if isinstance(v, PipelineVariable) else str(v))
                 for (k, v) in current_hyperparameters.items()
             }

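The estimator.py hunks above collapse the old per-class checks (``Parameter``, ``Expression``, ``Properties``) into the shared ``PipelineVariable`` base class and render such values with ``to_string()``. A rough sketch of the resulting encoding behavior, using a hypothetical pipeline parameter name:

    import json

    from sagemaker.workflow.entities import PipelineVariable
    from sagemaker.workflow.parameters import ParameterString

    learning_rate = ParameterString(name="LearningRate", default_value="0.1")  # illustrative parameter
    hyperparameters = {"lr": learning_rate, "epochs": 10}

    # Mirrors the updated _json_encode_hyperparameters logic: pipeline variables are
    # serialized via to_string() (resolved at pipeline execution time), everything
    # else is JSON-encoded exactly as before.
    encoded = {
        str(k): (v.to_string() if isinstance(v, PipelineVariable) else json.dumps(v))
        for k, v in hyperparameters.items()
    }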
src/sagemaker/huggingface/model.py

Lines changed: 5 additions & 1 deletion
@@ -133,7 +133,11 @@ def __init__(
             py_version (str): Python version you want to use for executing your
                 model training code. Defaults to ``None``. Required unless
                 ``image_uri`` is provided.
-            image_uri (str): A Docker image URI. Defaults to None. If not specified, a
+            image_uri (str): A Docker image URI. Defaults to None. For serverless
+                inference, it is required. More image information can be found in
+                `Amazon SageMaker provided algorithms and Deep Learning Containers
+                <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+                For instance based inference, if not specified, a
                 default image for PyTorch will be used. If ``framework_version``
                 or ``py_version`` are ``None``, then ``image_uri`` is required. If
                 also ``None``, then a ``ValueError`` will be raised.

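These docstring updates (here and in the other framework models below) all point users to ``image_uri`` for serverless inference. A minimal sketch of the pattern they describe, with placeholder bucket, role, and image values rather than anything taken from this commit:

    from sagemaker.huggingface import HuggingFaceModel
    from sagemaker.serverless import ServerlessInferenceConfig

    hf_model = HuggingFaceModel(
        model_data="s3://my_bucket/hf_model/model.tar.gz",  # placeholder model artifact
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
        image_uri="<huggingface-inference-image-uri>",  # required when deploying serverless
    )

    # Deploy straight from the model object, as the overview.rst change above documents
    predictor = hf_model.deploy(
        serverless_inference_config=ServerlessInferenceConfig(
            memory_size_in_mb=4096,
            max_concurrency=5,
        )
    )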
src/sagemaker/model_monitor/clarify_model_monitoring.py

Lines changed: 0 additions & 1 deletion
@@ -397,7 +397,6 @@ def _build_create_job_definition_request(
 
         if network_config is not None:
             network_config_dict = network_config._to_request_dict()
-            self._validate_network_config(network_config_dict)
             request_dict["NetworkConfig"] = network_config_dict
         elif existing_network_config is not None:
             request_dict["NetworkConfig"] = existing_network_config

src/sagemaker/model_monitor/model_monitoring.py

Lines changed: 9 additions & 30 deletions
@@ -295,7 +295,6 @@ def create_monitoring_schedule(
         network_config_dict = None
         if self.network_config is not None:
             network_config_dict = self.network_config._to_request_dict()
-            self._validate_network_config(network_config_dict)
 
         self.sagemaker_session.create_monitoring_schedule(
             monitoring_schedule_name=self.monitoring_schedule_name,
@@ -448,7 +447,6 @@ def update_monitoring_schedule(
         network_config_dict = None
         if self.network_config is not None:
             network_config_dict = self.network_config._to_request_dict()
-            self._validate_network_config(network_config_dict)
 
         self.sagemaker_session.update_monitoring_schedule(
             monitoring_schedule_name=self.monitoring_schedule_name,
@@ -708,6 +706,9 @@ def attach(cls, monitor_schedule_name, sagemaker_session=None):
         if network_config_dict:
             network_config = NetworkConfig(
                 enable_network_isolation=network_config_dict["EnableNetworkIsolation"],
+                encrypt_inter_container_traffic=network_config_dict[
+                    "EnableInterContainerTrafficEncryption"
+                ],
                 security_group_ids=security_group_ids,
                 subnets=subnets,
             )
@@ -784,6 +785,9 @@ def _attach(clazz, sagemaker_session, schedule_desc, job_desc, tags):
         if network_config_dict:
             network_config = NetworkConfig(
                 enable_network_isolation=network_config_dict["EnableNetworkIsolation"],
+                encrypt_inter_container_traffic=network_config_dict[
+                    "EnableInterContainerTrafficEncryption"
+                ],
                 security_group_ids=security_group_ids,
                 subnets=subnets,
             )
@@ -1164,31 +1168,6 @@ def _wait_for_schedule_changes_to_apply(self):
             if schedule_desc["MonitoringScheduleStatus"] != "Pending":
                 break
 
-    def _validate_network_config(self, network_config_dict):
-        """Function to validate EnableInterContainerTrafficEncryption.
-
-        It validates EnableInterContainerTrafficEncryption is not set in the provided
-        NetworkConfig request dictionary.
-
-        Args:
-            network_config_dict (dict): NetworkConfig request dictionary.
-                Contains parameters from :class:`~sagemaker.network.NetworkConfig` object
-                that configures network isolation, encryption of
-                inter-container traffic, security group IDs, and subnets.
-
-        """
-        if "EnableInterContainerTrafficEncryption" in network_config_dict:
-            message = (
-                "EnableInterContainerTrafficEncryption is not supported in Model Monitor. "
-                "Please ensure that encrypt_inter_container_traffic=None "
-                "when creating your NetworkConfig object. "
-                "Current encrypt_inter_container_traffic value: {}".format(
-                    self.network_config.encrypt_inter_container_traffic
-                )
-            )
-            _LOGGER.info(message)
-            raise ValueError(message)
-
     @classmethod
     def monitoring_type(cls):
         """Type of the monitoring job."""
@@ -1781,7 +1760,6 @@ def update_monitoring_schedule(
         network_config_dict = None
         if self.network_config is not None:
             network_config_dict = self.network_config._to_request_dict()
-            super(DefaultModelMonitor, self)._validate_network_config(network_config_dict)
 
         if role is not None:
             self.role = role
@@ -2034,6 +2012,9 @@ def attach(cls, monitor_schedule_name, sagemaker_session=None):
             subnets = vpc_config.get("Subnets")
             network_config = NetworkConfig(
                 enable_network_isolation=network_config_dict["EnableNetworkIsolation"],
+                encrypt_inter_container_traffic=network_config_dict[
+                    "EnableInterContainerTrafficEncryption"
+                ],
                 security_group_ids=security_group_ids,
                 subnets=subnets,
             )
@@ -2304,7 +2285,6 @@ def _build_create_data_quality_job_definition_request(
 
         if network_config is not None:
             network_config_dict = network_config._to_request_dict()
-            self._validate_network_config(network_config_dict)
             request_dict["NetworkConfig"] = network_config_dict
         elif existing_network_config is not None:
             request_dict["NetworkConfig"] = existing_network_config
@@ -3007,7 +2987,6 @@ def _build_create_model_quality_job_definition_request(
 
         if network_config is not None:
            network_config_dict = network_config._to_request_dict()
-            self._validate_network_config(network_config_dict)
             request_dict["NetworkConfig"] = network_config_dict
         elif existing_network_config is not None:
             request_dict["NetworkConfig"] = existing_network_config

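With ``_validate_network_config`` deleted and the ``attach``/``_attach`` paths now reading ``EnableInterContainerTrafficEncryption`` back into ``NetworkConfig``, inter-container traffic encryption can be requested for monitoring jobs. A minimal sketch under assumed role and instance settings (none of these values come from the commit):

    from sagemaker.model_monitor import DefaultModelMonitor
    from sagemaker.network import NetworkConfig

    # Previously Model Monitor rejected any NetworkConfig that set this flag;
    # the validation raising ValueError is removed in this commit.
    network_config = NetworkConfig(
        enable_network_isolation=False,
        encrypt_inter_container_traffic=True,
    )

    monitor = DefaultModelMonitor(
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
        instance_count=1,
        instance_type="ml.m5.xlarge",
        volume_size_in_gb=20,
        max_runtime_in_seconds=3600,
        network_config=network_config,
    )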
src/sagemaker/mxnet/model.py

Lines changed: 6 additions & 3 deletions
@@ -107,9 +107,12 @@ def __init__(
             py_version (str): Python version you want to use for executing your
                 model training code. Defaults to ``None``. Required unless
                 ``image_uri`` is provided.
-            image_uri (str): A Docker image URI (default: None). If not specified, a
-                default image for MXNet will be used.
-
+            image_uri (str): A Docker image URI (default: None). For serverless
+                inference, it is required. More image information can be found in
+                `Amazon SageMaker provided algorithms and Deep Learning Containers
+                <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+                For instance based inference, if not specified, a default image for
+                MXNet will be used.
                 If ``framework_version`` or ``py_version`` are ``None``, then
                 ``image_uri`` is required. If also ``None``, then a ``ValueError``
                 will be raised.

src/sagemaker/parameter.py

Lines changed: 7 additions & 9 deletions
@@ -14,9 +14,8 @@
 from __future__ import absolute_import
 
 import json
-from sagemaker.workflow.parameters import Parameter as PipelineParameter
-from sagemaker.workflow.functions import JsonGet as PipelineJsonGet
-from sagemaker.workflow.functions import Join as PipelineJoin
+
+from sagemaker.workflow.entities import PipelineVariable
 
 
 class ParameterRange(object):
@@ -73,11 +72,11 @@ def as_tuning_range(self, name):
         return {
             "Name": name,
             "MinValue": str(self.min_value)
-            if not isinstance(self.min_value, (PipelineParameter, PipelineJsonGet, PipelineJoin))
-            else self.min_value,
+            if not isinstance(self.min_value, PipelineVariable)
+            else self.min_value.to_string(),
             "MaxValue": str(self.max_value)
-            if not isinstance(self.max_value, (PipelineParameter, PipelineJsonGet, PipelineJoin))
-            else self.max_value,
+            if not isinstance(self.max_value, PipelineVariable)
+            else self.max_value.to_string(),
             "ScalingType": self.scaling_type,
         }
 
@@ -112,8 +111,7 @@ def __init__(self, values):  # pylint: disable=super-init-not-called
         """
         values = values if isinstance(values, list) else [values]
         self.values = [
-            str(v) if not isinstance(v, (PipelineParameter, PipelineJsonGet, PipelineJoin)) else v
-            for v in values
+            str(v) if not isinstance(v, PipelineVariable) else v.to_string() for v in values
         ]
 
     def as_tuning_range(self, name):

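The parameter.py change means hyperparameter tuning ranges accept any ``PipelineVariable`` and serialize it with ``to_string()`` instead of matching individual workflow classes. A rough sketch of what that enables, with an illustrative pipeline parameter as the upper bound:

    from sagemaker.parameter import ContinuousParameter
    from sagemaker.workflow.parameters import ParameterFloat

    # A pipeline parameter can now stand in for a range bound; as_tuning_range()
    # renders it with to_string() rather than str().
    max_lr = ParameterFloat(name="MaxLearningRate", default_value=0.1)
    lr_range = ContinuousParameter(min_value=1e-5, max_value=max_lr)

    print(lr_range.as_tuning_range("learning_rate"))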
src/sagemaker/pytorch/model.py

Lines changed: 8 additions & 4 deletions
@@ -107,10 +107,14 @@ def __init__(
             py_version (str): Python version you want to use for executing your
                 model training code. Defaults to ``None``. Required unless
                 ``image_uri`` is provided.
-            image_uri (str): A Docker image URI (default: None). If not specified, a
-                default image for PyTorch will be used. If ``framework_version``
-                or ``py_version`` are ``None``, then ``image_uri`` is required. If
-                also ``None``, then a ``ValueError`` will be raised.
+            image_uri (str): A Docker image URI (default: None). For serverless
+                inference, it is required. More image information can be found in
+                `Amazon SageMaker provided algorithms and Deep Learning Containers
+                <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+                For instance based inference, if not specified, a default image for
+                PyTorch will be used. If ``framework_version`` or ``py_version`` are
+                ``None``, then ``image_uri`` is required. If also ``None``, then a
+                ``ValueError`` will be raised.
             predictor_cls (callable[str, sagemaker.session.Session]): A function
                 to call to create a predictor with an endpoint name and
                 SageMaker ``Session``. If specified, ``deploy()`` returns the

src/sagemaker/sklearn/model.py

Lines changed: 6 additions & 3 deletions
@@ -102,9 +102,12 @@ def __init__(
                 model training code (default: 'py3'). Currently, 'py3' is the only
                 supported version. If ``None`` is passed in, ``image_uri`` must be
                 provided.
-            image_uri (str): A Docker image URI (default: None). If not specified, a
-                default image for Scikit-learn will be used.
-
+            image_uri (str): A Docker image URI (default: None). For serverless
+                inference, it is required. More image information can be found in
+                `Amazon SageMaker provided algorithms and Deep Learning Containers
+                <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+                For instance based inference, if not specified, a default image for
+                Scikit-learn will be used.
                 If ``framework_version`` or ``py_version`` are ``None``, then
                 ``image_uri`` is required. If also ``None``, then a ``ValueError``
                 will be raised.

src/sagemaker/tensorflow/model.py

Lines changed: 8 additions & 4 deletions
@@ -145,10 +145,14 @@ def __init__(
                 file which should be executed as the entry point to model
                 hosting. If ``source_dir`` is specified, then ``entry_point``
                 must point to a file located at the root of ``source_dir``.
-            image_uri (str): A Docker image URI (default: None). If not specified, a
-                default image for TensorFlow Serving will be used. If
-                ``framework_version`` is ``None``, then ``image_uri`` is required.
-                If also ``None``, then a ``ValueError`` will be raised.
+            image_uri (str): A Docker image URI (default: None). For serverless
+                inference, it is required. More image information can be found in
+                `Amazon SageMaker provided algorithms and Deep Learning Containers
+                <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
+                For instance based inference, if not specified, a default image for
+                TensorFlow Serving will be used. If ``framework_version`` is ``None``,
+                then ``image_uri`` is required. If also ``None``, then a ``ValueError``
+                will be raised.
             framework_version (str): Optional. TensorFlow Serving version you
                 want to use. Defaults to ``None``. Required unless ``image_uri`` is
                 provided.
