diff --git a/doc/api/inference/deserializers.rst b/doc/api/inference/deserializers.rst index f19ed91e8d..78859d75c5 100644 --- a/doc/api/inference/deserializers.rst +++ b/doc/api/inference/deserializers.rst @@ -5,3 +5,4 @@ Deserializers :members: :undoc-members: :show-inheritance: + diff --git a/doc/api/inference/model.rst b/doc/api/inference/model.rst index 038f34b953..d52ca3500a 100644 --- a/doc/api/inference/model.rst +++ b/doc/api/inference/model.rst @@ -7,6 +7,12 @@ Model :show-inheritance: :inherited-members: +.. autoclass:: sagemaker.jumpstart.model.JumpStartModel + :members: + :undoc-members: + :show-inheritance: + :inherited-members: + .. autoclass:: sagemaker.model.FrameworkModel :members: :undoc-members: diff --git a/doc/api/inference/serializers.rst b/doc/api/inference/serializers.rst index 6bd22ca4dc..c4817c658b 100644 --- a/doc/api/inference/serializers.rst +++ b/doc/api/inference/serializers.rst @@ -5,3 +5,4 @@ Serializers :members: :undoc-members: :show-inheritance: + diff --git a/doc/api/training/estimators.rst b/doc/api/training/estimators.rst index 8f80fba67c..a455f43e69 100644 --- a/doc/api/training/estimators.rst +++ b/doc/api/training/estimators.rst @@ -17,6 +17,12 @@ A high level interface for SageMaker training :show-inheritance: :inherited-members: +.. autoclass:: sagemaker.jumpstart.estimator.JumpStartEstimator + :members: + :undoc-members: + :show-inheritance: + :inherited-members: + .. autoclass:: sagemaker.estimator.Framework :members: :undoc-members: diff --git a/doc/api/utility/accept_types.rst b/doc/api/utility/accept_types.rst new file mode 100644 index 0000000000..94f8a5278c --- /dev/null +++ b/doc/api/utility/accept_types.rst @@ -0,0 +1,8 @@ +Accept Types +------------ + +.. automodule:: sagemaker.accept_types + :members: + :undoc-members: + :show-inheritance: + :private-members: diff --git a/doc/api/utility/content_types.rst b/doc/api/utility/content_types.rst new file mode 100644 index 0000000000..86af3d8950 --- /dev/null +++ b/doc/api/utility/content_types.rst @@ -0,0 +1,8 @@ +Content Types +------------- + +.. automodule:: sagemaker.content_types + :members: + :undoc-members: + :show-inheritance: + :private-members: diff --git a/doc/api/utility/hyperparameters.rst b/doc/api/utility/hyperparameters.rst index 41b571c778..f6fa793bc8 100644 --- a/doc/api/utility/hyperparameters.rst +++ b/doc/api/utility/hyperparameters.rst @@ -5,3 +5,4 @@ Hyperparameters :members: :undoc-members: :show-inheritance: + diff --git a/doc/api/utility/instance_types.rst b/doc/api/utility/instance_types.rst new file mode 100644 index 0000000000..14810fe179 --- /dev/null +++ b/doc/api/utility/instance_types.rst @@ -0,0 +1,8 @@ +Instance Types +-------------- + +.. automodule:: sagemaker.instance_types + :members: + :undoc-members: + :show-inheritance: + :private-members: diff --git a/doc/api/utility/metric_definitions.rst b/doc/api/utility/metric_definitions.rst new file mode 100644 index 0000000000..8c56639c7e --- /dev/null +++ b/doc/api/utility/metric_definitions.rst @@ -0,0 +1,8 @@ +Metric Definitions +------------------ + +.. 
automodule:: sagemaker.metric_definitions + :members: + :undoc-members: + :show-inheritance: + :private-members: diff --git a/doc/overview.rst b/doc/overview.rst index 82411f4e71..e58253a79f 100644 --- a/doc/overview.rst +++ b/doc/overview.rst @@ -579,14 +579,14 @@ Here is an example: Use Built-in Algorithms with Pre-trained Models in SageMaker Python SDK *********************************************************************** -SageMaker Python SDK provides built-in algorithms with pre-trained models from popular open source model -hubs, such as TensorFlow Hub, Pytorch Hub, and HuggingFace. Customer can deploy these pre-trained models +The SageMaker Python SDK provides built-in algorithms with pre-trained models from popular open source model +hubs, such as TensorFlow Hub, Pytorch Hub, and HuggingFace. You can deploy these pre-trained models as-is or first fine-tune them on a custom dataset and then deploy to a SageMaker endpoint for inference. -SageMaker SDK built-in algorithms allow customers access pre-trained models using model ids and model -versions. The ‘pre-trained model’ table below provides list of models with information useful in -selecting the correct model id and corresponding parameters. These models are also available through +SageMaker SDK built-in algorithms allow customers to access pre-trained models using model IDs and model +versions. The ‘pre-trained model’ table below provides a list of models with useful information for +selecting the correct model ID and corresponding parameters. These models are also available through the `JumpStart UI in SageMaker Studio `__. @@ -598,6 +598,16 @@ the `JumpStart UI in SageMaker Studio `__. + +Example notebooks for task-based models +--------------------------------------- SageMaker built-in algorithms with pre-trained models support 15 different machine learning problem types. Below is a list of all the supported problem types with a link to a Jupyter notebook that provides example usage. @@ -629,108 +639,97 @@ Tabular - `Tabular Regression (TabTransformer) `__ -The following topic give you information about JumpStart components, -as well as how to use the SageMaker Python SDK for these workflows. - Prerequisites ============= .. container:: - - You must set up AWS credentials following the steps - in `Quick configuration with aws configure `__. + - You must set up AWS credentials. For more information, see `Configuring the AWS CLI `__. - Your IAM role must allow connection to Amazon SageMaker and Amazon S3. For more information about IAM role permissions, see `Policies and permissions in IAM `__. -Built-in Components -=================== - -The following sections give information about the main built-in -components and their function. -Pre-trained models ------------------- - -SageMaker maintains a model zoo of over 300 models from popular open source model hubs, such as -TensorFlow Hub, Pytorch Hub, and HuggingFace. You can use the SageMaker Python SDK to fine-tune -a model on your own dataset or deploy it directly to a SageMaker endpoint for inference. +Deploy a Pre-Trained Model Directly to a SageMaker Endpoint +============================================================ -Model artifacts are stored as tarballs in a S3 bucket. Each model is versioned and contains a -unique ID which can be used to retrieve the model URI. The following information describes the -``model_id`` and ``model_version`` needed to retrieve the URI. 
+You can deploy a built-in algorithm or pre-trained model to a SageMaker endpoint in just a few lines of code using the SageMaker Python SDK. -.. container:: +First, find the model ID for the model of your choice in the :doc:`Built-in Algorithms with pre-trained Model Table<./doc_utils/pretrainedmodels>`. - - ``model_id``: A unique identifier for the JumpStart model. - - ``model_version``: The version of the specifications for the - model. To use the latest version, enter ``"*"``. This is a - required parameter. +Low-code deployment with the JumpStartModel class +------------------------------------------------- -To retrieve a model, first select a ``model ID`` and ``version`` from -the :doc:`available models <./doc_utils/pretrainedmodels>`. +Using the model ID, define your model as a JumpStart model. Use the ``deploy`` method to automatically deploy your model for inference. +In this example, we use the FLAN-T5 XL model from HuggingFace. .. code:: python - model_id, model_version = "huggingface-spc-bert-base-cased", "1.0.0" - scope = "training" # or "inference" + from sagemaker.jumpstart.model import JumpStartModel -Then use those values to retrieve the model as follows. + model_id = "huggingface-text2text-flan-t5-xl" + my_model = JumpStartModel(model_id=model_id) + predictor = my_model.deploy() + +You can then run inference with the deployed model using the ``predict`` method. .. code:: python - from sagemaker import model_uris + question = "What is Southern California often abbreviated as?" + response = predictor.predict(question) + print(response) - model_uri = model_uris.retrieve( -     model_id=model_id, model_version=model_version, model_scope=scope - ) +.. note:: + This example uses the foundation model FLAN-T5 XL, which is suitable for a wide range of text generation use cases including question answering, + summarization, chatbot creation, and more. For more information about model use cases, see + `Choose a foundation model `__ in the *Amazon SageMaker Developer Guide*. -Model scripts -------------- +For more information about the ``JumpStartModel`` class and its parameters, +see `JumpStartModel `__. -To adapt pre-trained models for SageMaker, a custom script is needed to perform training -or inference. SageMaker maintains a suite of scripts used for each of the models in the -S3 bucket, which can be accessed using the SageMaker Python SDK Use the ``model_id`` and -``version`` of the corresponding model to retrieve the related script as follows. +Additional low-code deployment utilities +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +You can optionally include specific model versions or instance types when deploying a pretrained model +using the ``JumpStartModel`` class. All JumpStart models have a default instance type. +Retrieve the default deployment instance type using the following code: .. code:: python - from sagemaker import script_uris + from sagemaker import instance_types - script_uri = script_uris.retrieve( -     model_id=model_id, model_version=model_version, script_scope=scope - ) + instance_type = instance_types.retrieve_default( + model_id=model_id, + model_version=model_version, + scope="inference") + print(instance_type) -Model images -------------- +See all supported instance types for a given JumpStart model with the ``instance_types.retrieve()`` method. -A Docker image is required to perform training or inference on all -SageMaker models. 
SageMaker relies on Docker images from the -following repos https://github.com/aws/deep-learning-containers, -https://github.com/aws/sagemaker-xgboost-container, -and https://github.com/aws/sagemaker-scikit-learn-container. Use -the ``model_id`` and ``version`` of the corresponding model to -retrieve the related image as follows. +To check valid data input and output formats for inference, you can use the ``retrieve_options()`` method +from the ``serializers`` and ``deserializers`` modules. .. code:: python - from sagemaker import image_uris + import sagemaker + + print(sagemaker.serializers.retrieve_options(model_id=model_id, model_version=model_version)) + print(sagemaker.deserializers.retrieve_options(model_id=model_id, model_version=model_version)) - image_uri = image_uris.retrieve( -     region=None, -     framework=None, -     image_scope=scope, -     model_id=model_id, -     model_version=model_version, -     instance_type="ml.m5.xlarge", - ) +Similarly, you can use the ``retrieve_options()`` method +to check the supported content and accept types for a model. -Deploy a  Pre-Trained Model Directly to a SageMaker Endpoint -============================================================ +.. code:: python + + print(sagemaker.content_types.retrieve_options(model_id=model_id, model_version=model_version)) + print(sagemaker.accept_types.retrieve_options(model_id=model_id, model_version=model_version)) + +For more information about utilities, see `Utility APIs `__. + +Deploy a pre-trained model using the SageMaker Model class +---------------------------------------------------------- In this section, you learn how to take a pre-trained model and deploy -it directly to a SageMaker Endpoint. This is the fastest way to start -machine learning with a pre-trained model. The following +it directly to a SageMaker Endpoint, and understand what happens behind +the scenes when you deploy your model as a ``JumpStartModel``. The following assumes familiarity with `SageMaker models `__ and their deploy functions. @@ -808,8 +807,8 @@ the endpoint, endpoint config and model resources will be prefixed with ``sagemaker-jumpstart``. Refer to the model ``Tags`` to inspect the model artifacts involved in the model creation. -Perform Inference ------------------ +Perform inference +^^^^^^^^^^^^^^^^^ Finally, use the ``predictor`` instance to query your endpoint. For ``catboost-classification-model``, for example, the predictor accepts @@ -830,8 +829,90 @@ tune the model for your use case with your custom dataset. The following assumes familiarity with `SageMaker training jobs and their architecture `__. -Fine-tune a Pre-trained Model on a Custom Dataset ------------------------------------------------- +Low-code fine-tuning with the JumpStartEstimator class +------------------------------------------------------ + +You can fine-tune a built-in algorithm or pre-trained model in just a few lines of code using the SageMaker Python SDK. + +First, find the model ID for the model of your choice in the :doc:`Built-in Algorithms with pre-trained Model Table<./doc_utils/pretrainedmodels>`. + +Using the model ID, define your training job as a JumpStart estimator. Run ``estimator.fit()`` on your model, pointing to the training data to use for fine-tuning. +Then, use the ``deploy`` method to automatically deploy your model for inference. In this example, we use the GPT-J 6B model from HuggingFace. + +.. 
code:: python + + from sagemaker.jumpstart.estimator import JumpStartEstimator + + model_id = "huggingface-textgeneration1-gpt-j-6b" + estimator = JumpStartEstimator(model_id=model_id) + estimator.fit( + {"train": training_dataset_s3_path, "validation": validation_dataset_s3_path} + ) + predictor = estimator.deploy() + +You can then run inference with the deployed model using the ``predict`` method. + +.. code:: python + + question = "What is Southern California often abbreviated as?" + response = predictor.predict(question) + print(response) + +.. note:: + This example uses the foundation model GPT-J 6B, which is suitable for a wide range of text generation use cases including question answering, + named entity recognition, summarization, and more. For more information about model use cases, see + `Choose a foundation model `__ in the *Amazon SageMaker Developer Guide*. + +You can optionally specify model versions or instance types when creating your ``JumpStartEstimator``. For more information about the ``JumpStartEstimator`` class and its parameters, +see `JumpStartEstimator `__. + +Additional low-code training utilities +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +You can optionally include specific model versions or instance types when fine-tuning a pre-trained model +using the ``JumpStartEstimator`` class. All JumpStart models have a default instance type. +Retrieve the default training instance type using the following code: + +.. code:: python + + from sagemaker import instance_types + + instance_type = instance_types.retrieve_default( + model_id=model_id, + model_version=model_version, + scope="training") + print(instance_type) + +See all supported instance types for a given JumpStart model with the ``instance_types.retrieve()`` method. + +To check the default hyperparameters used for training, you can use the ``retrieve_default()`` method +from the ``hyperparameters`` module. + +.. code:: python + + from sagemaker import hyperparameters + + my_hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version) + print(my_hyperparameters) + + # Optionally override default hyperparameters for fine-tuning + my_hyperparameters["epoch"] = "3" + my_hyperparameters["per_device_train_batch_size"] = "4" + + # Optionally validate hyperparameters for the model + hyperparameters.validate(model_id=model_id, model_version=model_version, hyperparameters=my_hyperparameters) + +You can also check the default metric definitions: + +.. code:: python + + from sagemaker import metric_definitions + + print(metric_definitions.retrieve_default(model_id=model_id, model_version=model_version)) + +For more information about inference and utilities, see `Inference APIs `__ +and `Utility APIs `__. + +Fine-tune a pre-trained model on a custom dataset using the SageMaker Estimator class +------------------------------------------------------------------------------------- To begin, select a ``model_id`` and ``version`` from the pre-trained models table, as well as a model scope. In this case, you begin by @@ -921,9 +1002,8 @@ amount of time. The time that it takes varies depending on the hyperparameters, dataset, and model you use and can range from 15 minutes to 12 hours. 
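If your session is interrupted while the job runs, you do not need to start over. The following is a minimal sketch, not part of the original documentation: it re-attaches to an existing training job by name using the standard ``Estimator.attach`` and ``logs`` methods. The job name shown is a hypothetical placeholder.

.. code:: python

    from sagemaker.estimator import Estimator

    # Re-attach to an existing training job by name ("my-training-job-name"
    # is a hypothetical placeholder). If the job is still in progress,
    # attach() blocks until it completes.
    attached_estimator = Estimator.attach("my-training-job-name")

    # Display the CloudWatch logs emitted by the training job.
    attached_estimator.logs()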
-Deploy your Trained Model to a SageMaker Endpoint -------------------------------------------------- - +Deploy your trained model to a SageMaker Endpoint +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Now that you’ve created your training job, use your ``estimator`` instance to create a SageMaker Endpoint that you can @@ -968,8 +1048,8 @@ took your model to train.     enable_network_isolation=True, ) -Perform Inference on a SageMaker Endpoint ------------------------------------------ +Perform inference on a SageMaker Endpoint +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Finally, use the ``predictor`` instance to query your endpoint. For ``huggingface-spc-bert-base-cased``, the predictor accepts an array @@ -985,6 +1065,95 @@ the predictor.predict(json.dumps(data).encode("utf-8"), {"ContentType": "application/list-text"}) +Built-in Components +=================== + +The following section provides information about the main components of built-in algorithms, +including pre-trained models, model scripts, and model images. + +Pre-trained models +------------------ + +SageMaker maintains a model zoo of over 600 models from popular open source model hubs, such as +TensorFlow Hub, Pytorch Hub, and HuggingFace. You can use the SageMaker Python SDK to fine-tune +a model on your own dataset or deploy it directly to a SageMaker endpoint for inference. + +Model artifacts are stored as tarballs in an S3 bucket. Each model is versioned and contains a +unique ID which can be used to retrieve the model URI. The following information describes the +``model_id`` and ``model_version`` needed to retrieve the URI. + +.. container:: + + - ``model_id``: A unique identifier for the JumpStart model. + - ``model_version``: The version of the specifications for the + model. To use the latest version, enter ``"*"``. This is a + required parameter. + +To retrieve a model, first select a ``model ID`` and ``version`` from +the :doc:`available models <./doc_utils/pretrainedmodels>`. + +.. code:: python + + model_id, model_version = "huggingface-spc-bert-base-cased", "1.0.0" + scope = "training" # or "inference" + +Then use those values to retrieve the model as follows. + +.. code:: python + + from sagemaker import model_uris + + model_uri = model_uris.retrieve( +     model_id=model_id, model_version=model_version, model_scope=scope + ) + +Model scripts +------------- + +To adapt pre-trained models for SageMaker, a custom script is needed to perform training +or inference. SageMaker maintains a suite of scripts used for each of the models in the +S3 bucket, which can be accessed using the SageMaker Python SDK. Use the ``model_id`` and +``version`` of the corresponding model to retrieve the related script as follows. + +.. code:: python + + from sagemaker import script_uris + + script_uri = script_uris.retrieve( +     model_id=model_id, model_version=model_version, script_scope=scope + ) + +Model images +------------- + +A Docker image is required to perform training or inference on all +SageMaker models. SageMaker relies on Docker images from the +following repos: https://github.com/aws/deep-learning-containers, +https://github.com/aws/sagemaker-xgboost-container, +and https://github.com/aws/sagemaker-scikit-learn-container. Use +the ``model_id`` and ``version`` of the corresponding model to +retrieve the related image as follows. You can also use the ``instance_types`` +utility to retrieve and use the default instance type for the model. + +.. 
code:: python + + from sagemaker import image_uris, instance_types + + instance_type = instance_types.retrieve_default( + model_id=model_id, + model_version=model_version, + scope=scope + ) + + image_uri = image_uris.retrieve( + region=None, + framework=None, + image_scope=scope, + model_id=model_id, + model_version=model_version, + instance_type=instance_type, + ) + Appendix ======== diff --git a/src/sagemaker/jumpstart/estimator.py b/src/sagemaker/jumpstart/estimator.py index eba981cb48..71e78d02ad 100644 --- a/src/sagemaker/jumpstart/estimator.py +++ b/src/sagemaker/jumpstart/estimator.py @@ -340,6 +340,7 @@ def __init__( when training on Amazon SageMaker. If 'git_config' is provided, 'source_dir' should be a relative location to a directory in the Git repo. + (Default: None). .. admonition:: Example @@ -353,7 +354,6 @@ def __init__( if you need 'train.py' as the entry point and 'test.py' as the training source code, you can assign entry_point='train.py', source_dir='src'. - (Default: None). git_config (Optional[dict[str, str]]): Git configurations used for cloning files, including ``repo``, ``branch``, ``commit``, ``2FA_enabled``, ``username``, ``password`` and ``token``. The @@ -363,18 +363,6 @@ def __init__( 'master' is used. If you don't provide ``commit``, the latest commit in the specified branch is used. - .. admonition:: Example - - The following config: - - >>> git_config = {'repo': 'https://github.com/aws/sagemaker-python-sdk.git', - >>> 'branch': 'test-branch-git-config', - >>> 'commit': '329bfcf884482002c05ff7f44f62599ebc9f445a'} - - results in cloning the repo specified in 'repo', then - checking out the 'master' branch, and checking out the specified - commit. - ``2FA_enabled``, ``username``, ``password`` and ``token`` are used for authentication. For GitHub (or other Git) accounts, set ``2FA_enabled`` to 'True' if two-factor authentication is @@ -405,6 +393,17 @@ def __init__( the SageMaker Python SDK attempts to use either the CodeCommit credential helper or local credential storage for authentication. (Default: None). + + .. admonition:: Example + The following config: + + >>> git_config = {'repo': 'https://github.com/aws/sagemaker-python-sdk.git', + >>> 'branch': 'test-branch-git-config', + >>> 'commit': '329bfcf884482002c05ff7f44f62599ebc9f445a'} + + results in cloning the repo specified in 'repo', then + checking out the 'master' branch, and checking out the specified + commit. container_log_level (Optional[Union[int, PipelineVariable]]): The log level to use within the container. Valid values are defined in the Python logging module. (Default: None). @@ -420,8 +419,9 @@ def __init__( must point to a file located at the root of ``source_dir``. If 'git_config' is provided, 'entry_point' should be a relative location to the Python source file in the Git repo. + (Default: None). - Example: + .. admonition:: Example With the following GitHub repo directory structure: >>> |----- README.md @@ -430,18 +430,16 @@ def __init__( >>> |----- test.py You can assign entry_point='src/train.py'. - - (Default: None). dependencies (Optional[list[str]]): A list of absolute or relative paths to directories with any additional libraries that should be exported to the container. The library folders are copied to SageMaker in the same folder where the entrypoint is copied. If 'git_config' is provided, 'dependencies' should be a list of relative locations to directories with any additional - libraries needed in the Git repo. + libraries needed in the Git repo. 
This is not supported with "local code" + in Local Mode. (Default: None). .. admonition:: Example - The following Estimator call: >>> Estimator(entry_point='train.py', @@ -455,9 +453,6 @@ def __init__( >>> |------ train.py >>> |------ common >>> |------ virtual-env - - This is not supported with "local code" in Local Mode. - (Default: None). instance_groups (Optional[list[:class:`sagemaker.instance_group.InstanceGroup`]]): A list of ``InstanceGroup`` objects for launching a training job with a heterogeneous cluster. For example: @@ -475,8 +470,7 @@ def __init__( through the SageMaker generic and framework estimator classes, see `Train Using a Heterogeneous Cluster `_ - in the *Amazon SageMaker developer guide*. - (Default: None). + in the *Amazon SageMaker developer guide*. (Default: None). training_repository_access_mode (Optional[str]): Specifies how SageMaker accesses the Docker image that contains the training algorithm (Default: None). Set this to one of the following values: @@ -797,7 +791,7 @@ def deploy( when training on Amazon SageMaker. If 'git_config' is provided, 'source_dir' should be a relative location to a directory in the Git repo. If the directory points to S3, no code is uploaded and the S3 location - is used instead. + is used instead. (Default: None). .. admonition:: Example @@ -809,7 +803,6 @@ def deploy( >>> |----- test.py You can assign entry_point='inference.py', source_dir='src'. - (Default: None). code_location (Optional[str]): Name of the S3 bucket where custom code is uploaded (Default: None). If not specified, the default bucket created by ``sagemaker.session.Session`` is used. (Default: None). diff --git a/src/sagemaker/jumpstart/model.py b/src/sagemaker/jumpstart/model.py index 98e7cd4278..1cd9da330e 100644 --- a/src/sagemaker/jumpstart/model.py +++ b/src/sagemaker/jumpstart/model.py @@ -138,7 +138,7 @@ def __init__( when training on Amazon SageMaker. If 'git_config' is provided, 'source_dir' should be a relative location to a directory in the Git repo. If the directory points to S3, no code is uploaded and the S3 location - is used instead. + is used instead. (Default: None). .. admonition:: Example @@ -150,7 +150,6 @@ def __init__( >>> |----- test.py You can assign entry_point='inference.py', source_dir='src'. - (Default: None). code_location (Optional[str]): Name of the S3 bucket where custom code is uploaded (Default: None). If not specified, the default bucket created by ``sagemaker.session.Session`` is used. (Default: None). @@ -159,9 +158,9 @@ def __init__( model hosting. (Default: None). If ``source_dir`` is specified, then ``entry_point`` must point to a file located at the root of ``source_dir``. If 'git_config' is provided, 'entry_point' should be - a relative location to the Python source file in the Git repo. + a relative location to the Python source file in the Git repo. (Default: None). - Example: + .. admonition:: Example With the following GitHub repo directory structure: >>> |----- README.md @@ -170,8 +169,6 @@ def __init__( >>> |----- test.py You can assign entry_point='src/inference.py'. - - (Default: None). container_log_level (Optional[Union[int, PipelineVariable]]): Log level to use within the container. Valid values are defined in the Python logging module. (Default: None). @@ -183,7 +180,8 @@ def __init__( list of relative locations to directories with any additional libraries needed in the Git repo. If the ```source_dir``` points to S3, code will be uploaded and the S3 location will be used - instead. + instead. 
This is not supported with "local code" in Local Mode. + (Default: None). .. admonition:: Example @@ -200,9 +198,6 @@ def __init__( >>> |------ inference.py >>> |------ common >>> |------ virtual-env - - This is not supported with "local code" in Local Mode. - (Default: None). git_config (Optional[dict[str, str]]): Git configurations used for cloning files, including ``repo``, ``branch``, ``commit``, ``2FA_enabled``, ``username``, ``password`` and ``token``. The @@ -212,18 +207,6 @@ def __init__( 'master' is used. If you don't provide ``commit``, the latest commit in the specified branch is used. - .. admonition:: Example - - The following config: - - >>> git_config = {'repo': 'https://github.com/aws/sagemaker-python-sdk.git', - >>> 'branch': 'test-branch-git-config', - >>> 'commit': '329bfcf884482002c05ff7f44f62599ebc9f445a'} - - results in cloning the repo specified in 'repo', then - checking out the 'master' branch, and checking out the specified - commit. - ``2FA_enabled``, ``username``, ``password`` and ``token`` are used for authentication. For GitHub (or other Git) accounts, set ``2FA_enabled`` to 'True' if two-factor authentication is @@ -255,6 +238,16 @@ def __init__( credential helper or local credential storage for authentication. (Default: None). + .. admonition:: Example + + The following config results in cloning the repo specified in 'repo', then + checking out the 'master' branch, and checking out the specified + commit. + + >>> git_config = {'repo': 'https://github.com/aws/sagemaker-python-sdk.git', + >>> 'branch': 'test-branch-git-config', + >>> 'commit': '329bfcf884482002c05ff7f44f62599ebc9f445a'} + Raises: ValueError: If the model ID is not recognized by JumpStart. """