I believe you're right, that data isn't formatted as the TensorFlow container expects. Passing in just the list [6.4, 3.2, 4.5, 1.5] worked for me on an endpoint hosting the iris model:
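The comment's code snippet is not preserved here; the following is a minimal sketch of that fix, assuming the `irispredict` endpoint name from the question. The key change is serializing the bare feature list instead of a dict with a string value (`build_payload` is a hypothetical helper, not part of any AWS API):

```python
import json

def build_payload(features):
    # json.dumps([6.4, 3.2, 4.5, 1.5]) yields "[6.4, 3.2, 4.5, 1.5]",
    # which the TensorFlow serving container can map onto the model's
    # 4-float 'inputs' tensor. Wrapping it in a dict with a string value
    # (as in the question) cannot be parsed that way.
    return json.dumps(features)

def lambda_handler(event, context):
    import boto3  # available by default in the AWS Lambda Python runtime
    runtime = boto3.client('runtime.sagemaker')
    result = runtime.invoke_endpoint(EndpointName='irispredict',
                                     Body=build_payload([6.4, 3.2, 4.5, 1.5]))
    return json.loads(result['Body'].read().decode('utf-8'))
```

`invoke_endpoint` returns the prediction in `result['Body']` as a streaming object, hence the `read()` and `json.loads` on the way out.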
Thank you @andremoeller . That worked! I now have trouble reading the response['Body'] into JSON :) but I have opened a separate thread for that! Thank you once again!
Hi,
I am new to Amazon SageMaker. I am trying to build and deploy a model, then invoke it from AWS Lambda.
System Information
Describe the problem
I followed the steps in the iris_dnn_classifier sample code that comes with the Jupyter notebook instance in SageMaker. Here is the code for reference.
I have an iris_dnn_classifier.py file:
import numpy as np
import os
import tensorflow as tf

INPUT_TENSOR_NAME = 'inputs'  # feature name expected by the serving graph

def estimator(model_path, hyperparameters):
    feature_columns = [tf.feature_column.numeric_column(INPUT_TENSOR_NAME, shape=[4])]
    return tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                      hidden_units=[10, 20, 10],
                                      n_classes=3,
                                      model_dir=model_path)

def train_input_fn(training_dir, hyperparameters):
    training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
        filename=os.path.join(training_dir, 'iris_training.csv'),
        target_dtype=np.int,
        features_dtype=np.float32)
    return tf.estimator.inputs.numpy_input_fn(
        x={INPUT_TENSOR_NAME: np.array(training_set.data)},
        y=np.array(training_set.target),
        num_epochs=None,
        shuffle=True)()

def serving_input_fn(hyperparameters):
    feature_spec = {INPUT_TENSOR_NAME: tf.FixedLenFeature(dtype=tf.float32, shape=[4])}
    return tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)()
Then I created the estimator and deployed the model as follows:
from sagemaker.tensorflow import TensorFlow

iris_estimator = TensorFlow(entry_point='iris_dnn_classifier.py',
                            role=role,
                            output_path=model_artifacts_location,
                            code_location=custom_code_upload_location,
                            train_instance_count=1,
                            train_instance_type='ml.c4.xlarge',
                            training_steps=1000,
                            evaluation_steps=100)
import boto3

region = boto3.Session().region_name
train_data_location = 's3://sagemaker-sample-data-{}/tensorflow/iris'.format(region)

iris_estimator.fit(train_data_location)

irispredictor = iris_estimator.deploy(initial_instance_count=1,
                                      instance_type='ml.m4.xlarge')
All of this is done in a SageMaker console Jupyter instance. I also checked whether the model works by running:
irispredictor.predict([6.4, 3.2, 4.5, 1.5])
and it works fine.
Then I separately created an endpoint called "irispredict" to use in AWS Lambda. Now I am trying to call it from the AWS Lambda console by doing:
import boto3
import json

sagemaker = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    data = {'key': '[6.4, 3.2, 4.5, 1.5]'}
    result = sagemaker.invoke_endpoint(EndpointName='irispredict',
                                       Body=json.dumps(data))
    print(result)
I get the error:
"errorMessage": "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from model
Minimal repro / logs
When I looked at the CloudWatch logs, I see the following:
[2018-05-17 10:56:10,063] ERROR in serving: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Name: , Feature: inputs (data type: float) is required but could not be found.
#11 [[Node: ParseExample/ParseExample = ParseExample[Ndense=1, Nsparse=0, Tdense=[DT_FLOAT], dense_shapes=[[4]], sparse_types=[], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_input_example_tensor_0_0, ParseExample/ParseExample/names, ParseExample/ParseExample/dense_keys_0, ParseExample/Const)]]")
So I am guessing the problem is in the input "data" variable?
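Deserializing the payload locally (plain `json`, no AWS calls) shows what the serving side actually receives from the snippet above; the dict's only value is a single string, not a list of four floats:

```python
import json

# The payload sent from Lambda in the snippet above
sent = json.dumps({'key': '[6.4, 3.2, 4.5, 1.5]'})

# What the serving side gets back when it deserializes the body:
decoded = json.loads(sent)
# decoded['key'] is the string "[6.4, 3.2, 4.5, 1.5]", not a list of
# four floats, so no float feature named 'inputs' can be found.
```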
Any suggestions/pointers are greatly appreciated.
Thank you,
Sandy