How to train on SageMaker and deploy on Nvidia Jetson boards? #1178
Comments
You can deploy the model just as if you had trained it locally. At the end of training, you get a trained model artifact saved to your S3 bucket.
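As a minimal sketch of that first step, the snippet below unpacks the `model.tar.gz` artifact that SageMaker writes at the end of training. The bucket name and key in the commented-out download are placeholders, not real paths; fetching from S3 requires `boto3` and AWS credentials on the target machine.

```python
import os
import tarfile


def extract_model(tarball_path, dest="model"):
    """Unpack a SageMaker model.tar.gz and return the extracted file names."""
    os.makedirs(dest, exist_ok=True)
    with tarfile.open(tarball_path, "r:gz") as tar:
        tar.extractall(path=dest)
    return sorted(os.listdir(dest))


# Fetching the artifact from S3 first requires boto3 and AWS credentials.
# Bucket name and key below are placeholders for your own training output:
#
#   import boto3
#   boto3.client("s3").download_file(
#       "my-training-bucket",
#       "my-job/output/model.tar.gz",
#       "model.tar.gz",
#   )
#   files = extract_model("model.tar.gz")
```

From there, loading the extracted weights is a framework concern (PyTorch, MXNet, TensorFlow, etc.), independent of SageMaker.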
Thanks for your reply. OK, I got that part, but I'm wondering about the steps to run this trained model on the local board. Also, do I need to set up or install other dependencies? If yes, please give me an example. Thanks for your support!
It really depends on your model and/or framework, as well as on whether you need to process prediction requests at all or just run evaluation on some data. I would direct you to your framework's documentation on how to host a model locally. You can also check out our open-sourced serving containers to see how we set up hosting and which libraries we use to process prediction requests:
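If you do need to serve prediction requests on the board, a tiny stdlib HTTP endpoint is one way to sketch the hosting side. Everything here is a hypothetical stand-in: `predict()` is a placeholder for your framework's actual inference call against the restored model, and the port is arbitrary.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    """Placeholder for a real framework inference call (e.g. a restored
    PyTorch or MXNet network); here it just sums the input features."""
    return {"sum": sum(features)}


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run it through the model.
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        result = predict(body["features"])

        payload = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep the demo quiet


# To serve locally:
#   HTTPServer(("0.0.0.0", 8080), InferenceHandler).serve_forever()
```

The SageMaker serving containers linked above do the same job in a production-grade way (model loading, batching, health checks), so they are a better reference for anything beyond a quick local test.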
OK, those links sound very useful. Thanks so much for your help!
Recently, we decided to use AWS SageMaker to train our models, but after studying its documentation, I couldn't find how to deploy the trained model on a local machine. Our local machines are mainly Nvidia Jetson boards.
Is it possible? If yes, could you explain to us how we can do it?
Your help will be much appreciated!
Mohammad.