@@ -390,6 +390,56 @@ The function should return a byte array of data serialized to ``content_type``.
The default implementation expects ``prediction`` to be a NumPy array and can serialize the result to JSON, CSV, or NPY.
It accepts response content types of "application/json", "text/csv", and "application/x-npy".
+ Bring Your Own Model
+ --------------------
+
+ You can deploy an XGBoost model that you trained outside of SageMaker by using the Amazon SageMaker XGBoost container.
+ Typically, you save an XGBoost model by pickling the ``Booster`` object or calling ``booster.save_model``.
+ The XGBoost `built-in algorithm mode <https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#xgboost-modes>`_
+ supports both a pickled ``Booster`` object and a model produced by ``booster.save_model``.
+ You can also deploy an XGBoost model by using XGBoost as a framework.
+ By using XGBoost as a framework, you have more flexibility: for example, you can customize inference with your own script.
+ To deploy an XGBoost model by using XGBoost as a framework, you need to:
+
+ - Write an inference script.
+ - Create an ``XGBoostModel`` object.
+
+ Write an Inference Script
+ ^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ You must create an inference script that implements (at least) the ``model_fn`` function, which loads the saved model so that the model server can use it to get predictions.
+
+ Optionally, you can also implement ``input_fn`` and ``output_fn`` to process input and output,
+ and ``predict_fn`` to customize how the model server gets predictions from the loaded model.
+ For information about how to write an inference script, see `SageMaker XGBoost Model Server <#sagemaker-xgboost-model-server>`_.
+ Pass the filename of the inference script as the ``entry_point`` parameter when you create the ``XGBoostModel`` object.
+
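As a minimal sketch, a hypothetical ``inference.py`` might look like the following. It assumes the model was saved as a pickled ``Booster`` object named ``xgboost-model`` inside the model directory; both the filename and the pickle format are assumptions for illustration, so adjust them to match how you actually saved your model.

```python
import os
import pickle


def model_fn(model_dir):
    """Load and return the model that was saved in model_dir.

    Assumes (hypothetically) that the model archive contains a pickled
    Booster object in a file named "xgboost-model"; change the filename
    and the deserialization call to match how you saved your model.
    """
    with open(os.path.join(model_dir, "xgboost-model"), "rb") as f:
        return pickle.load(f)
```

If you instead saved the model with ``booster.save_model``, you would load it here with the corresponding XGBoost load call rather than ``pickle``.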
+ Create an XGBoostModel Object
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+ To create a model object, call the ``sagemaker.xgboost.model.XGBoostModel`` constructor,
+ and then call its ``deploy()`` method to deploy your model for inference.
+
+ .. code:: python
+
+     xgboost_model = XGBoostModel(
+         model_data="s3://my-bucket/my-path/model.tar.gz",
+         role="my-role",
+         entry_point="inference.py",
+         framework_version="1.0-1"
+     )
+
+     predictor = xgboost_model.deploy(
+         instance_type='ml.c4.xlarge',
+         initial_instance_count=1
+     )
+
+     # If payload is a string in LIBSVM format, we need to change serializer.
+     predictor.serializer = str
+     predictor.predict("<label> <index1>:<value1> <index2>:<value2>")
+
+ To get predictions from your deployed model, you can call the ``predict()`` method.
+
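The LIBSVM payload shown above is a single line consisting of a label followed by space-separated ``index:value`` pairs. As an illustration of that format, the following hypothetical helper (not part of the SageMaker SDK) builds such a payload from a sparse feature mapping:

```python
def to_libsvm(label, features):
    """Format a label and a sparse {index: value} feature mapping as one
    LIBSVM-format line, e.g. "1 1:0.5 3:2.0".

    Hypothetical helper for building predict() payloads; not part of the
    SageMaker SDK.
    """
    pairs = " ".join(f"{i}:{v}" for i, v in sorted(features.items()))
    return f"{label} {pairs}"
```

You could then pass the result directly to ``predictor.predict()`` after setting the serializer as shown in the example above.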
Host Multiple Models with Multi-Model Endpoints
-----------------------------------------------
@@ -401,7 +451,6 @@ in the AWS documentation.
For a sample notebook that uses Amazon SageMaker to deploy multiple XGBoost models to an endpoint, see the
`Multi-Model Endpoint XGBoost Sample Notebook <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/multi_model_xgboost_home_value/xgboost_multi_model_endpoint_home_value.ipynb>`_.
-
*************************
SageMaker XGBoost Classes
*************************