
change: default model_fn and predict_fn in default handler #51


Merged: 6 commits merged into aws:master on Feb 28, 2020

Conversation

chuyang-deng (Contributor)

Issue #, if available:

Description of changes:
Implement default model_fn and predict_fn when the container is an EIA (Elastic Inference Accelerator) container.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
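
For context, here is a minimal sketch of what such defaults can look like. The constant names, the exact env-var name, and the error handling are assumptions based on the snippets quoted later in this thread, not the merged implementation:

```python
import os

import torch

# Assumed names; the env-var constant follows the check quoted in the diff below.
INFERENCE_ACCELERATOR_PRESENT_ENV = "SAGEMAKER_INFERENCE_ACCELERATOR_PRESENT"
DEFAULT_MODEL_FILENAME = "model.pth"


def default_model_fn(model_dir):
    """On an EIA container, load the TorchScript artifact saved as model.pth."""
    if os.getenv(INFERENCE_ACCELERATOR_PRESENT_ENV) == "true":
        model_path = os.path.join(model_dir, DEFAULT_MODEL_FILENAME)
        if not os.path.exists(model_path):
            raise FileNotFoundError(
                "Failed to load model: missing {}".format(DEFAULT_MODEL_FILENAME)
            )
        return torch.jit.load(model_path)
    # Outside EIA there is no safe default; users supply their own model_fn.
    raise NotImplementedError("Provide a custom model_fn for non-EIA containers.")


def default_predict_fn(input_data, model):
    with torch.no_grad():
        if os.getenv(INFERENCE_ACCELERATOR_PRESENT_ENV) == "true":
            # The two-argument optimized_execution form targeting 'eia:0' exists
            # only in the Elastic Inference-enabled PyTorch build.
            with torch.jit.optimized_execution(True, {"target_device": "eia:0"}):
                return model(input_data)
        return model(input_data)
```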

@chuyang-deng requested a review from dfan on February 27, 2020 23:47
@chuyang-deng changed the title from "Eia default" to "change: default model_fn and predict_fn in default handler" on Feb 27, 2020
```python
with torch.no_grad():
    output = model(input_data)
    if os.getenv(INFERENCE_ACCELERATOR_PRESENT_ENV) == 'true':
        logger.info(
```
Contributor

I noticed that in MXNet there isn't any logged message indicating that EI is used. But I think a logging statement like this is useful, since the customer can then know definitively that they're using EI. Wanted to point out the discrepancy, but I am on board with this change:
https://github.com/aws/sagemaker-mxnet-serving-container/blob/master/src/sagemaker_mxnet_serving_container/default_inference_handler.py#L62


Logging per inference is a lot of logging. It may be OK at DEBUG level, but I'd suggest removing the log.

Contributor Author

I agree that logging with each inference request is unnecessary, so I removed the logging statements.
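
For reference, the DEBUG-level alternative the reviewer floated would look roughly like this (a sketch; the message text and logger setup are illustrative, and INFERENCE_ACCELERATOR_PRESENT_ENV is the constant from the diff above):

```python
import logging
import os

logger = logging.getLogger(__name__)

# Demoted to DEBUG: silent at the default INFO level, but available
# when troubleshooting whether the accelerator is actually attached.
if os.getenv(INFERENCE_ACCELERATOR_PRESENT_ENV) == "true":
    logger.debug("Performing inference with an Elastic Inference accelerator.")
```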

```python
logger.info("Saving the model to {}.".format(model_dir))
path = os.path.join(model_dir, 'model.pth')
torch.jit.save(model, path)
```
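
One detail worth noting about this snippet: torch.jit.save accepts only a ScriptModule, so an eager nn.Module has to be scripted or traced first. A sketch, where MyModel and example_input are placeholders:

```python
import torch

scripted = torch.jit.script(MyModel())  # or: torch.jit.trace(MyModel(), example_input)
torch.jit.save(scripted, path)  # produces the model.pth that torch.jit.load can read back
```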
```python
# This file is intentionally left blank
```
Contributor

nit: Maybe clarify that this is to allow the default model_fn and predict_fn to be used

Contributor Author

Added more descriptive comments.
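
For illustration only (the wording in the merged commit may differ), the clarified placeholder could read:

```python
# This file is intentionally left blank. Providing no user entry-point script
# makes the container fall back to the default model_fn and predict_fn.
```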


@@ -31,10 +43,18 @@ def default_model_fn(self, model_dir):


Should the docstrings be updated?

Contributor Author

Docstrings updated.
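
A sketch of what the updated docstring might cover (wording assumed, not quoted from the merged commit):

```python
def default_model_fn(self, model_dir):
    """Load a model for inference.

    When the inference accelerator environment variable is set to 'true',
    load the TorchScript model saved as model.pth from model_dir via
    torch.jit.load; otherwise a user-provided model_fn is required.

    Args:
        model_dir (str): directory where model artifacts are saved.

    Returns:
        A model object usable by predict_fn.
    """
```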

@sagemaker-bot (Collaborator)

AWS CodeBuild CI Report

  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

@sagemaker-bot (Collaborator)

AWS CodeBuild CI Report

  • Result: SUCCEEDED
  • Build Logs (available for 30 days)

Powered by github-codebuild-logs, available on the AWS Serverless Application Repository

@chuyang-deng chuyang-deng merged commit 7b9c5d7 into aws:master Feb 28, 2020