Getting an FileNotFoundError: [Errno 2] No such file or directory: 'inference.py' #4007
Comments
Facing the same issue! |
Has anyone solved this problem? I have the same issue |
I conjecture that you are building the SageMaker model from a local environment. The root cause of the issue is the absence of the inference script from the model artifacts: when executing the deploy step, SageMaker pulls a prebuilt inference image, but this image lacks "inference.py", resulting in a FileNotFoundError. Solution: first, create inference.py and include it in the model artifacts.
Next, deploy the model using the inference approach. The steps below demonstrate how to locally deploy the model using TensorFlow:
|
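Building on the comment above, here is a minimal, hypothetical sketch of packaging the artifacts so that inference.py ends up where the SageMaker PyTorch container expects it. The file names `model.pth` and `code/inference.py` are assumptions based on the conventional layout, not taken from this thread; only the standard library is used:

```python
# Hypothetical sketch: build model.tar.gz with inference.py under code/
# at the archive ROOT, so the PyTorch serving container can find it.
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
os.makedirs(os.path.join(workdir, "code"))

# Stand-in files; in practice these are your trained weights and your
# real inference handler.
open(os.path.join(workdir, "model.pth"), "w").close()
with open(os.path.join(workdir, "code", "inference.py"), "w") as f:
    f.write("def model_fn(model_dir):\n    pass\n")

archive = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    # Add entries relative to the archive root -- equivalent to running
    # `tar czvf model.tar.gz *` from inside the artifact directory.
    tar.add(os.path.join(workdir, "model.pth"), arcname="model.pth")
    tar.add(os.path.join(workdir, "code"), arcname="code")

with tarfile.open(archive, "r:gz") as tar:
    names = tar.getnames()
print(names)
```

The key point is that the entries are added with root-relative arcnames, so nothing is wrapped in an extra top-level directory.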
whats the solution? |
I think I've figured it out: you MUST create the app.py file WITHIN the src folder, i.e. in the file browser pane, browse to "./lab1/packages/{account_id}-lab1_code-1.0/src/". Some tutorials say: "In the file browser pane, browse to ./lab1/packages/{account_id}-lab1_code-1.0/. Find descriptor.json." BUT the right tutorial (https://catalog.workshops.aws/panorama-immersion-day/en-US/20-lab1-object-detection/21-lab1) correctly states to create and save the app.py file in "./lab1/packages/{account_id}-lab1_code-1.0/src/", i.e. the src folder found in Lab 1. Wrong version: https://explore.skillbuilder.aws/learn/course/17780/play/93251/aws-panorama-building-edge-computer-vision-cv-applications |
Also facing this issue. I think the docs on https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#bring-your-own-model may be faulty as well? |
I also encountered the same problem.

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel
from sagemaker.serverless import ServerlessInferenceConfig

sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()

model = PyTorchModel(
    entry_point='inference.py',
    role=role,
    model_data='s3://***/model.tar.gz',
    framework_version='2.1',
    py_version='py310',
)

serverless_config = ServerlessInferenceConfig(
    max_concurrency=1,
    memory_size_in_mb=3072,
)

deploy_params = {
    'instance_type': 'ml.t3.medium',
    'initial_instance_count': 1,
    'serverless_inference_config': serverless_config,
}

predictor = model.deploy(**deploy_params)
```
|
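One thing worth checking in a setup like the snippet above: when no `source_dir` is passed, the SDK resolves `entry_point` relative to the current working directory, so the error can simply mean the script is not where the notebook is running from. A small pre-flight helper (the name `check_entry_point` is my own, not part of the SageMaker SDK) fails fast before any AWS call:

```python
import os

def check_entry_point(source_dir: str, entry_point: str) -> str:
    """Fail fast if the handler script is missing, before any AWS call.

    Hypothetical helper: raises the same FileNotFoundError you would
    otherwise hit later, but at a point where it is easy to diagnose.
    """
    script_path = os.path.join(source_dir, entry_point)
    if not os.path.isfile(script_path):
        raise FileNotFoundError(
            f"[Errno 2] No such file or directory: {script_path!r}"
        )
    return script_path

# Usage (illustrative): verify the script exists, then pass BOTH
# arguments so the SDK uploads the whole source_dir with the model:
#   check_entry_point("code", "inference.py")
#   model = PyTorchModel(entry_point="inference.py", source_dir="code", ...)
```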
I solved this problem by changing how I create the tar.gz file. This works:

```
$ tar czvf ../model.tar.gz *
code/
code/requirements.txt
code/inference.py
model.pth
```

I failed when I ran:

```
$ tar czvf model.tar.gz model
model/
model/model.pth
model/code/
model/code/requirements.txt
model/code/inference.py
```

As you can see, the resulting layouts are different from each other: the second one wraps everything in a `model/` directory. |
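The two tar layouts above can be told apart programmatically before uploading. This hypothetical helper (my own sketch, not SDK code) returns True only when `code/inference.py` sits at the archive root rather than under a wrapping directory:

```python
import tarfile

def inference_script_at_root(archive_path: str) -> bool:
    """True if code/inference.py is at the archive root.

    A wrapping top-level directory (e.g. model/code/inference.py)
    hides the script from the serving container's expected location.
    """
    with tarfile.open(archive_path, "r:gz") as tar:
        return "code/inference.py" in tar.getnames()
```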
Describe the bug
Probably this is not a bug, but when I try to deploy the SageMaker PyTorch model, I am getting a
FileNotFoundError: [Errno 2] No such file or directory: 'inference.py'
error.
To reproduce
My folder structure is as follows:
predictor = pytorch_model.deploy(instance_type='ml.c4.xlarge', initial_instance_count=1)
Expected behavior
successful deploy
Screenshots or logs