Load endpoint? #36
If you have an existing endpoint, you just need to create a Predictor object and pass it your endpoint name. This gives you the same kind of object the Estimators create in the end-to-end examples; then simply call its predict() method.

You can either use the generic RealTimePredictor class, which does no serialization/deserialization of your input by default but can be configured to do so through constructor arguments, or you can use the TensorFlow / MXNet specific predictor classes, which come with default serialization/deserialization logic.

Example code using the TensorFlow predictor:
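The example code referenced above did not survive extraction. As a substitute, here is a minimal sketch assuming the v1-era SageMaker Python SDK (where `TensorFlowPredictor` and `RealTimePredictor` lived); the endpoint name `"my-endpoint"` and the `json_payload` helper are placeholders for illustration, not SDK API, and import paths may differ in later SDK versions.

```python
import json


def attach_to_endpoint(endpoint_name):
    """Attach a TensorFlow-specific predictor to an already-deployed
    endpoint. Sketch based on the v1-era SageMaker Python SDK; the
    import path may differ in later SDK versions."""
    from sagemaker.tensorflow import TensorFlowPredictor
    return TensorFlowPredictor(endpoint_name)


def json_payload(instances):
    """Illustrates the kind of serialization the generic
    RealTimePredictor can be configured to apply through its
    constructor arguments (e.g. a JSON serializer). This helper is
    an assumption for the example, not the SDK's own code."""
    return json.dumps({"instances": instances})


if __name__ == "__main__":
    # "my-endpoint" is a placeholder for your deployed endpoint's name.
    predictor = attach_to_endpoint("my-endpoint")
    result = predictor.predict(json_payload([[1.0, 2.0, 3.0]]))
```

The key point is that no Estimator or training job is needed: the predictor is constructed directly from the endpoint name, so any process with AWS credentials can reuse a previously deployed model.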
Added an answer to the FAQ in our README as well. Please reopen if you have further questions.
Hi,
How can you load an existing endpoint for predictions after you've deployed it?
The notebook examples I've seen show end-to-end training through deployment, but what if you want to reuse a previously trained model just to make predictions?