Since no /opt/ml/model/code directory is created for multi-model containers, it's not possible to provide a custom inference.py script (the kind sketched below) without changing this library.
Updating the docs to clarify this would be awesome. Alternatively, adding the ability to find the inference.py file uploaded with the model, or providing a way to supply a global inference.py file, would be great.
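For context, here is a minimal sketch of the kind of custom inference.py that single-model containers load from /opt/ml/model/code, assuming the handler convention documented for the framework serving containers (model_fn / input_fn / predict_fn / output_fn). The PyTorch-specific details and file names are placeholders, not taken from this issue:

```python
# inference.py -- minimal sketch of the handler interface that single-model
# containers load from /opt/ml/model/code. Model format and file names are
# placeholders; only the function signatures follow the documented convention.
import json
import os

import torch


def model_fn(model_dir):
    # Load the model artifact unpacked from model.tar.gz (placeholder format).
    model = torch.jit.load(os.path.join(model_dir, "model.pt"))
    model.eval()
    return model


def input_fn(request_body, content_type):
    # Deserialize the request payload into a tensor.
    if content_type == "application/json":
        return torch.tensor(json.loads(request_body))
    raise ValueError(f"Unsupported content type: {content_type}")


def predict_fn(data, model):
    # Run inference with the loaded model.
    with torch.no_grad():
        return model(data)


def output_fn(prediction, accept):
    # Serialize the prediction for the response.
    return json.dumps(prediction.tolist())
```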
To reproduce: create a multi-model endpoint and add a model whose tarball includes a code/inference.py. That inference.py never gets called.
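A rough sketch of that reproduction using the SageMaker Python SDK's MultiDataModel is below; the image URI, IAM role, S3 paths, and endpoint/model names are placeholders, and each model.tar.gz is assumed to bundle a code/inference.py next to the model artifact:

```python
# Sketch: deploy a multi-model endpoint, then add a model whose model.tar.gz
# contains code/inference.py. All names, ARNs, and S3 URIs are placeholders.
import sagemaker
from sagemaker.multidatamodel import MultiDataModel
from sagemaker.predictor import Predictor

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

mme = MultiDataModel(
    name="mme-inference-py-repro",
    model_data_prefix="s3://my-bucket/multi-model/",  # placeholder prefix
    image_uri="<framework-serving-image-uri>",        # placeholder image
    role=role,
    sagemaker_session=session,
)
mme.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="mme-inference-py-repro",
)

# The tarball follows the single-model layout (model artifact plus
# code/inference.py), but on a multi-model endpoint that script is ignored.
mme.add_model(
    model_data_source="s3://my-bucket/staging/model.tar.gz",
    model_data_path="model-a.tar.gz",
)

predictor = Predictor(
    endpoint_name="mme-inference-py-repro", sagemaker_session=session
)
# The request is served by the default handlers; code/inference.py is never called.
predictor.predict(data='{"inputs": [1, 2, 3]}', target_model="model-a.tar.gz")
```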
Thanks for bringing this up. There is currently no support for this, though it might be introduced in the future. I've opened a PR to update the documentation: #108
@laurenyu Any more information on this support? We are deploying on Neo, and we miss being able to inject user code at the same level as we can in the TensorFlow Serving images.