This repository was archived by the owner on May 23, 2024. It is now read-only.

code/inference.py not utilised in multi model containers #107

Closed
Freakazo opened this issue Jan 14, 2020 · 3 comments

Comments


Freakazo commented Jan 14, 2020

Since no /opt/ml/model/code directory is created for multi-model containers, it's not possible to provide a custom inference.py script (like the one sketched below) without modifying this library.

It would be great to either:

- update the docs to clarify this,
- add the ability to find the inference.py file uploaded with the model, or
- provide a way to add a global inference.py file.
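
For context, this is the kind of handler module the toolkit would normally load from /opt/ml/model/code in single-model mode. A minimal sketch following the documented model_fn/input_fn/predict_fn/output_fn handler contract; the loading and prediction logic here are placeholders:

```python
# code/inference.py -- handler module the container would normally pick up
# from /opt/ml/model/code. Placeholder logic, framework-agnostic.
import json
import os

def model_fn(model_dir):
    # Deserialize the model artifacts unpacked into model_dir.
    # Placeholder: return a path so the sketch stays framework-agnostic.
    return os.path.join(model_dir, "model.bin")

def input_fn(request_body, content_type):
    # Parse the incoming request payload.
    if content_type == "application/json":
        return json.loads(request_body)
    raise ValueError("Unsupported content type: " + content_type)

def predict_fn(data, model):
    # Run inference with the loaded model. Echoes inputs for illustration.
    return {"model": model, "input": data}

def output_fn(prediction, accept):
    # Serialize the prediction for the response.
    return json.dumps(prediction)
```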

To reproduce: create a multi-model endpoint and add a model whose archive includes a code/inference.py; that inference.py never gets called.
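
As a concrete illustration, here is a minimal repro sketch assuming the SageMaker Python SDK's MultiDataModel class (v2-style interface); the bucket, role ARN, and image URI are placeholders, not real values:

```python
from sagemaker.multidatamodel import MultiDataModel

# Placeholders: bucket, role ARN, and serving image URI are hypothetical.
mme = MultiDataModel(
    name="my-multi-model",
    model_data_prefix="s3://my-bucket/models/",
    image_uri="<framework-serving-image-uri>",
    role="<execution-role-arn>",
)
predictor = mme.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

# model.tar.gz bundles the model artifacts together with code/inference.py,
# but in multi-model mode the container never extracts that code/ directory
# to /opt/ml/model/code, so the custom handlers are silently ignored.
mme.add_model(model_data_source="s3://my-bucket/staging/model.tar.gz")
predictor.predict(data='{"x": 1}', target_model="model.tar.gz")
```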

laurenyu (Contributor) commented Jan 14, 2020

Thanks for bringing this up. There is currently no support for this, though it might be introduced in the future. I've opened a PR to update the documentation: #108


lbustelo commented Jun 3, 2020

@laurenyu Any update on this support? We are deploying on Neo, and we miss being able to inject user code at the same level as we can with the TensorFlow Serving images.

Freakazo (Author) commented

This issue has been fixed in this PR: #153

Closing.
