Request Inferences from a Deployed Service

If you have followed the instructions in Deploy a Model, you should have a SageMaker endpoint set up and running. Regardless of how you deployed your Neo-compiled model, there are three ways you can submit inference requests:
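Whichever option you choose, the request ultimately goes through the InvokeEndpoint API. As a point of reference, here is a minimal sketch that calls the endpoint with the boto3 sagemaker-runtime client; the endpoint name, region, and payload below are placeholders and will differ for your deployment.

```python
# Minimal sketch: invoking a deployed SageMaker endpoint with boto3.
# EndpointName, region_name, and the payload are placeholders for illustration.
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

# Example input; the expected shape and content type depend on your model
# and the inference code packaged with it.
payload = json.dumps({"instances": [[1.0, 2.0, 3.0, 4.0]]})

response = runtime.invoke_endpoint(
    EndpointName="my-neo-endpoint",   # placeholder endpoint name
    ContentType="application/json",   # must match what your inference code expects
    Body=payload,
)

# The response body is a streaming object; read and decode it to get the result.
result = response["Body"].read().decode("utf-8")
print(result)
```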