README.md: 3 additions & 3 deletions
@@ -18,7 +18,7 @@ A container provides an effectively isolated environment, ensuring a consistent
Containerizing your model and code enables fast and reliable deployment of your model.
The **SageMaker Inference Toolkit** implements a model serving stack and can be easily added to any Docker container, making it [deployable to SageMaker](https://aws.amazon.com/sagemaker/deploy/).
- This library's serving stack is built on [Multi Model Server](https://github.com/awslabs/mxnet-model-server), and it can serve your own models or those you trained on SageMaker using [machine learning frameworks with native SageMaker support](https://docs.aws.amazon.com/sagemaker/latest/dg/frameworks.html).
+ This library's serving stack is built on [Multi Model Server](https://github.com/awslabs/multi-model-server), and it can serve your own models or those you trained on SageMaker using [machine learning frameworks with native SageMaker support](https://docs.aws.amazon.com/sagemaker/latest/dg/frameworks.html).
If you use a [prebuilt SageMaker Docker image for inference](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html), this library may already be included.
For more information, see the Amazon SageMaker Developer Guide sections on [building your own container with Multi Model Server](https://docs.aws.amazon.com/sagemaker/latest/dg/build-multi-model-build-container.html) and [using your own models](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html).
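As context for how this serving stack is used (not part of this change): elsewhere in this README, the toolkit is started from a serving entrypoint via `model_server.start_model_server`. A minimal sketch, where `my_container.handler_service` is a hypothetical module path standing in for the handler service described below:

```python
# Minimal serving entrypoint sketch. "my_container.handler_service" is a
# hypothetical module path; point it at your own handler service module.
from sagemaker_inference import model_server

model_server.start_model_server(handler_service="my_container.handler_service")
```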
@@ -98,7 +98,7 @@ To use the SageMaker Inference Toolkit, you need to do the following:
2. Implement a handler service that is executed by the model server.
([Here is an example](https://github.com/aws/sagemaker-pytorch-serving-container/blob/master/src/sagemaker_pytorch_serving_container/handler_service.py) of a handler service.)
- For more information on how to define your `HANDLER_SERVICE` file, see [the MMS custom service documentation](https://github.com/awslabs/mxnet-model-server/blob/master/docs/custom_service.md).
+ For more information on how to define your `HANDLER_SERVICE` file, see [the MMS custom service documentation](https://github.com/awslabs/multi-model-server/blob/master/docs/custom_service.md).
``` python
from sagemaker_inference.default_handler_service import DefaultHandlerService
@@ -112,7 +112,7 @@ To use the SageMaker Inference Toolkit, you need to do the following:
This class extends ``DefaultHandlerService``, which defines the following:
- The ``handle`` method is invoked for all incoming inference requests to the model server.
- The ``initialize`` method is invoked at model server startup.
- Based on: https://github.com/awslabs/mxnet-model-server/blob/master/docs/custom_service.md
+ Based on: https://github.com/awslabs/multi-model-server/blob/master/docs/custom_service.md
```
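Because the diff shows only fragments of the README's handler service example, here is a self-contained sketch of how the pieces fit together, modeled on the linked PyTorch handler service. The `DefaultPytorchInferenceHandler` import path is an assumption based on that repository, not something this change defines:

```python
from sagemaker_inference.default_handler_service import DefaultHandlerService
from sagemaker_inference.transformer import Transformer

# Assumed import, following the linked sagemaker-pytorch-serving-container example.
from sagemaker_pytorch_serving_container.default_pytorch_inference_handler import (
    DefaultPytorchInferenceHandler,
)


class HandlerService(DefaultHandlerService):
    """Handler service that is executed by the model server.

    This class extends ``DefaultHandlerService``, which defines the following:
        - The ``handle`` method is invoked for all incoming inference requests to the model server.
        - The ``initialize`` method is invoked at model server startup.

    Based on: https://github.com/awslabs/multi-model-server/blob/master/docs/custom_service.md
    """

    def __init__(self):
        # Wire the framework-specific default inference handler into the
        # toolkit's Transformer, which implements handle() and initialize().
        transformer = Transformer(default_inference_handler=DefaultPytorchInferenceHandler())
        super(HandlerService, self).__init__(transformer=transformer)
```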