
api call error: input error for model using tensorflow custom estimators #67


Closed
ita9naiwa opened this issue Feb 1, 2018 · 3 comments

ita9naiwa commented Feb 1, 2018

Hi. I'm new to AWS SageMaker and built my custom TensorFlow estimator based on your TensorFlow iris sample code.

I created my own estimator, like this:

if mode == tf.estimator.ModeKeys.PREDICT:
    export_outputs = {
        "recommend": tf.estimator.export.PredictOutput(predictions),
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            tf.estimator.export.PredictOutput(predictions),
    }
    return tf.estimator.EstimatorSpec(mode, predictions=predictions,
                                      export_outputs=export_outputs)

(Without export_outputs, classifier.export_savedmodel cannot export a saved model.)
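
For context, here is a minimal model_fn skeleton showing where that PREDICT branch sits. The toy network and loss below are hypothetical stand-ins, not the actual model:

import tensorflow as tf

def model_fn(features, labels, mode, params):
    # Hypothetical toy network: score the 100 int64 item ids.
    net = tf.cast(features['items'], tf.float32)
    logits = tf.layers.dense(net, units=10)
    predictions = {'scores': logits}

    if mode == tf.estimator.ModeKeys.PREDICT:
        export_outputs = {
            "recommend": tf.estimator.export.PredictOutput(predictions),
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                tf.estimator.export.PredictOutput(predictions),
        }
        return tf.estimator.EstimatorSpec(mode, predictions=predictions,
                                          export_outputs=export_outputs)

    # Placeholder loss and optimizer for TRAIN/EVAL modes.
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)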

I exported the trained model using this:

INPUT_TENSOR_NAME = 'items'

def serving_input_fn():
    feature_spec = {INPUT_TENSOR_NAME: tf.FixedLenFeature(dtype=tf.int64, shape=[100])}
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()

exported_model = classifier.export_savedmodel(export_dir_base='export/Servo/',
                                              serving_input_receiver_fn=serving_input_fn)
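
One thing worth noting: build_parsing_serving_input_receiver_fn creates a receiver that expects serialized tf.train.Example protos rather than raw lists. If the endpoint is meant to accept a plain int64 tensor directly, a raw receiver might be closer to the intent. A minimal sketch under that assumption, reusing INPUT_TENSOR_NAME:

# Sketch (TF 1.x): accept a raw int64 tensor directly instead of
# serialized tf.train.Example protos.
def serving_input_fn():
    items = tf.placeholder(dtype=tf.int64, shape=[None, 100], name=INPUT_TENSOR_NAME)
    return tf.estimator.export.build_raw_serving_input_receiver_fn(
        {INPUT_TENSOR_NAME: items})()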

Then I saved my model, created a checkpoint, and sent a query to it:

sample = np.arange(100).astype(np.int64).tolist()
predictor.predict(sample)

I got the following error.

Error on Jupyter Notebook Console:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from model with message "". See https://us-west-2.console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/aws/sagemaker/Endpoints/sagemaker-tensorflow-py2-cpu-2018-02-01-17-06-45-306 in account 561830960602 for more information.

Error found on CloudWatch Management Console:

[2018-02-01 17:21:08,384] ERROR in serving: Unsupported request data format: [1].
Valid formats: tensor_pb2.TensorProto, dict<string, tensor_pb2.TensorProto> and predict_pb2.PredictRequest
Traceback (most recent call last):
  File "/opt/amazon/lib/python2.7/site-packages/container_support/serving.py", line 161, in _invoke
    self.transformer.transform(content, input_content_type, requested_output_content_type)
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/serve.py", line 255, in transform
    return self.transform_fn(data, content_type, accepts), accepts
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/serve.py", line 180, in f
    prediction = self.predict_fn(input)
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/serve.py", line 195, in predict_fn
    return self.proxy_client.request(data)
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/proxy_client.py", line 51, in request
    return request_fn(data)
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/proxy_client.py", line 77, in predict
    request = self._create_predict_request(data)
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/proxy_client.py", line 94, in _create_predict_request
    input_map = self._create_input_map(data)
  File "/opt/amazon/lib/python2.7/site-packages/tf_container/proxy_client.py", line 199, in _create_input_map
    raise ValueError(msg.format(data))
ValueError: Unsupported request data format: [1].
Valid formats: tensor_pb2.TensorProto, dict<string, tensor_pb2.TensorProto> and predict_pb2.PredictRequest
10.32.0.2 - - [01/Feb/2018:17:21:08 +0000] "POST /invocations HTTP/1.1" 500 0 "-" "AHC/2.0"

I also tried sending a predict_pb2 object to the model, but that failed too.
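
For reference, the traceback lists the accepted request formats. Here is a minimal sketch of the dict<string, TensorProto> option, built with tf.make_tensor_proto; whether the SDK's serializer forwards it unchanged depends on the container version, so this is an assumption rather than a confirmed fix:

import numpy as np
import tensorflow as tf

# Wrap the sample as dict<string, TensorProto>, one of the formats the
# container reports as valid.
sample = np.arange(100).astype(np.int64)
request_data = {INPUT_TENSOR_NAME: tf.make_tensor_proto(sample, shape=[1, 100])}
predictor.predict(request_data)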

laurenyu (Contributor) commented Feb 3, 2018

Hi, thanks for trying out Amazon SageMaker!

It looks like an issue with the prediction format. I'll see if I can reproduce the issue locally to get a better idea of the problem, and report back with what I figure out.

laurenyu (Contributor) commented Feb 5, 2018

I tried using the serving_input_fn() you've shown, but wasn't able to reproduce the error. Would you mind posting the rest of your model_fn()?

andremoeller (Contributor) commented

Hi @ita9naiwa,

I'm closing this issue, but if you can post your model_fn, we'd appreciate that. Feel free to reopen this.

Thanks!
