
change: update MME Pre/Post-Processing model and script paths #153

Merged
merged 13 commits into from
Jul 16, 2020

Conversation

chuyang-deng
Contributor

Issue #, if available:
In Single-Model mode, model data is downloaded to /opt/ml/model/ when the container starts (BEFORE the first invocation request arrives). Therefore, pre/post-processing scripts and modules can be imported/installed when the container starts.

However, in Multi-Model mode, model data is downloaded to /opt/ml/models/{model_name}/model/ when the first invocation request for that model arrives (AFTER the Python service is already running). Therefore, pre/post-processing scripts and modules can only be imported/installed when handling the POST /models request.

Description of changes:

  • Update the pre/post-processing module path to /opt/ml/models/{model_name}/model/ when Multi-Model mode is enabled
  • Install dependencies and import handlers when loading a model (a rough sketch of this flow follows below)
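
To make the load-time flow concrete, here is a minimal sketch of what happens when a model is loaded in Multi-Model mode. The function name and structure are illustrative, not code from this PR; only the paths and the pip/importlib mechanics come from the changes described above:

```python
import importlib.util
import os
import subprocess

MODEL_CODE_DIR = "/opt/ml/models/{}/model/code"

def load_model_handlers(model_name):
    """Install per-model requirements and import its inference.py (illustrative)."""
    code_dir = MODEL_CODE_DIR.format(model_name)

    # Per-model dependencies ship inside the model archive, so they can
    # only be installed once the archive has been downloaded on model load.
    requirements = os.path.join(code_dir, "requirements.txt")
    if os.path.exists(requirements):
        subprocess.check_call(["pip3", "install", "-r", requirements])

    # Import the model's inference.py so its pre/post-processing handlers
    # are available before the first invocation of this model.
    inference_script = os.path.join(code_dir, "inference.py")
    spec = importlib.util.spec_from_file_location("inference", inference_script)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```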

Tests:
Updated integration tests to reflect the real scenario of model data being downloaded through the hosting platform.

Also tested and verified from the logs that dependencies are installed and custom handlers are used:

```python
# Create Endpoint
# (sm_client, runtime_sm_client, endpoint_config_name, and payload are
# assumed to be defined earlier in the notebook)
from time import gmtime, strftime

endpoint_name = 'DEMO-MultiModelEndpoint-' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print('Endpoint name: ' + endpoint_name)

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name)
print('Endpoint Arn: ' + create_endpoint_response['EndpointArn'])

resp = sm_client.describe_endpoint(EndpointName=endpoint_name)
status = resp['EndpointStatus']
print("Endpoint Status: " + status)

print('Waiting for {} endpoint to be in service...'.format(endpoint_name))
waiter = sm_client.get_waiter('endpoint_in_service')
waiter.wait(EndpointName=endpoint_name)

response = runtime_sm_client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType='application/json',
    TargetModel='clean-save.tar.gz',  # clean-save.tar.gz contains inference.py and requirements.txt
    Body=payload)
```
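
When reproducing this test, the prediction can be read back from the response body with standard boto3 usage (not shown in the PR):

```python
result = response['Body'].read().decode('utf-8')
print(result)
```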

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@chuyang-deng chuyang-deng marked this pull request as ready for review July 14, 2020 19:34
@sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-tensorflow-serving-container-pr
  • Commit ID: c229871
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

```python
log.info("pip install dependencies from requirements.txt")
pip_install_cmd = "pip3 install -r {}".format(requirements_file_path)
try:
    subprocess.check_call(pip_install_cmd.split())
```
Contributor


it doesn't matter here, but for future reference: https://docs.python.org/3.7/library/shlex.html#shlex.split
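
For context on the linked suggestion: `str.split()` breaks on every space, while `shlex.split()` respects shell quoting, which matters if a path contains spaces. A quick illustration (the path here is made up):

```python
import shlex

cmd = 'pip3 install -r "/opt/ml/my model/requirements.txt"'
print(cmd.split())       # ['pip3', 'install', '-r', '"/opt/ml/my', 'model/requirements.txt"']
print(shlex.split(cmd))  # ['pip3', 'install', '-r', '/opt/ml/my model/requirements.txt']
```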

```python
def _import_handlers(self, model_name=None):
    inference_script = INFERENCE_SCRIPT_PATH
    if model_name:
        inference_script = "/opt/ml/models/{}/model/code/inference.py".format(model_name)
```
Contributor


it might make more sense to pass this string to the function so that we don't have it duplicated across the code?
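
One possible reading of this suggestion (a hypothetical refactor, not code from this PR): resolve the script path at the call site and pass it in, so the path template lives in one place:

```python
def _import_handlers(self, inference_script=INFERENCE_SCRIPT_PATH):
    # Caller resolves the per-model path once, e.g.
    # self._import_handlers("/opt/ml/models/{}/model/code/inference.py".format(model_name))
    spec = importlib.util.spec_from_file_location("inference", inference_script)
    inference = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(inference)  # runs inference.py; handlers become module attributes
```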

Contributor

@metrizable metrizable left a comment


I think this package may benefit from an auto-formatter like black. A few high-level suggestions:

  • favor use of " over '
  • multi-line where appropriate

```python
inference_script = INFERENCE_SCRIPT_PATH
if model_name:
    inference_script = "/opt/ml/models/{}/model/code/inference.py".format(model_name)
spec = importlib.util.spec_from_file_location('inference', inference_script)
```
Contributor


nit: use " instead of '

Contributor Author


Yeah, I have a separate PR to address the quotes issue: #149

@sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-tensorflow-serving-container-pr
  • Commit ID: 1ac1ed4
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)


@sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-tensorflow-serving-container-pr
  • Commit ID: 27411cb
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-tensorflow-serving-container-pr
  • Commit ID: 667ae52
  • Result: FAILED
  • Build Logs (available for 30 days)


@sagemaker-bot

AWS CodeBuild CI Report

  • CodeBuild project: sagemaker-tensorflow-serving-container-pr
  • Commit ID: 18931ac
  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

