doc/frameworks/xgboost/using_xgboost.rst (+216 -13)
@@ -161,7 +161,7 @@ and a dictionary of the hyperparameters to pass to the training script.
         role=role,
         train_instance_count=1,
         train_instance_type="ml.m5.2xlarge",
-        framework_version="0.90-1",
+        framework_version="1.0-1",
     )
@@ -179,24 +179,227 @@ After you create an estimator, call the ``fit`` method to run the training job.
 Deploy Open Source XGBoost Models
 =================================
 
-After the training job finishes, call the ``deploy`` method of the estimator to create a predictor that you can use to get inferences from your trained model.
+After you fit an XGBoost Estimator, you can host the newly created model in SageMaker.
+
+After you call ``fit``, you can call ``deploy`` on an ``XGBoost`` estimator to create a SageMaker endpoint.
+The endpoint runs a SageMaker-provided XGBoost model server and hosts the model produced by your training script,
+which was run when you called ``fit``. This was the model you saved to ``model_dir``.
+
+``deploy`` returns a ``Predictor`` object, which you can use to do inference on the endpoint hosting your XGBoost model.
+Each ``Predictor`` provides a ``predict`` method, which can do inference with numpy arrays, Python lists, or strings.
+After inference arrays or lists are serialized and sent to the XGBoost model server, ``predict`` returns the result of
+inference against your model.
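To make that serialization step concrete, here is a minimal, hypothetical sketch of how a feature row (or a batch of rows) could be turned into the text/csv request body that the XGBoost model server accepts. The helper name ``to_csv_payload`` is an assumption for illustration only; the real SDK performs this serialization for you inside ``predict``.

```python
import csv
import io

def to_csv_payload(rows):
    """Serialize a single feature row, or a list of rows, into a CSV
    request body. Hypothetical helper: the SageMaker SDK does this
    internally when you call ``predict`` on a CSV-accepting endpoint."""
    if rows and not isinstance(rows[0], (list, tuple)):
        rows = [rows]  # promote a single row to a batch of one
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerows(rows)
    return buf.getvalue().strip()

# One row of three features becomes one CSV line.
payload = to_csv_payload([0.5, 1.2, 3.0])
# A batch becomes one CSV line per row.
batch_payload = to_csv_payload([[1, 2], [3, 4]])
```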
The Processing component enables you to submit processing jobs to Amazon SageMaker directly from a Kubeflow Pipelines workflow. For more information, see the `SageMaker Processing Kubeflow Pipeline component <https://github.com/kubeflow/pipelines/tree/master/components/aws/sagemaker/process>`__.

Orchestrate your SageMaker training and inference jobs with `AWS Step Functions <https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html>`__.

You can use the `AWS Step Functions Python SDK <https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/>`__ to create workflows that process and publish machine learning models using Amazon SageMaker and AWS Step Functions.
You can create multi-step machine learning workflows in Python that orchestrate AWS infrastructure at scale,
without having to provision and integrate the AWS services separately.

The AWS Step Functions Python SDK uses the SageMaker Python SDK as a dependency.
To get started with Step Functions, try the workshop or visit the SDK's website:

* `Workshop on using AWS Step Functions with SageMaker <https://www.sagemakerworkshop.com/step/>`__
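To illustrate what a multi-step workflow looks like under the hood, here is a minimal sketch of an Amazon States Language definition for a train-then-create-model sequence, written as a plain Python dict. The state names and job parameters are illustrative placeholders; the Step Functions Python SDK generates definitions like this for you, so you normally would not write one by hand.

```python
import json

# Minimal Amazon States Language sketch (assumption: state names and the
# job parameters below are placeholders, not a complete job configuration).
definition = {
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            # Step Functions service integration for SageMaker training jobs.
            "Resource": "arn:aws:states:::sagemaker:createTrainingJob.sync",
            "Parameters": {"TrainingJobName": "xgboost-training-job"},
            "Next": "SaveModel",
        },
        "SaveModel": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sagemaker:createModel",
            "Parameters": {"ModelName": "xgboost-model"},
            "End": True,
        },
    },
}

# The JSON form is what you would upload as the state machine definition.
workflow_json = json.dumps(definition, indent=2)
```

The Step Functions SDK's pipeline templates build the equivalent of this definition from a SageMaker estimator, which is why you do not have to provision or wire up the underlying services yourself.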