Creates a batch inference job to invoke a model on multiple prompts. Format your data as described in Format your inference data and upload it to an Amazon S3 bucket. For more information, see Process multiple prompts with batch inference.
The response returns a jobArn that you can use to stop or get details about the job.
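As a hedged illustration (not taken verbatim from this reference), the sketch below shows one way to start a job with this operation. The job name, model ID, role ARN, and S3 URIs are placeholders, and the nested input/output configuration type names reflect my reading of the request shape; verify them against the SDK before use.

```csharp
using System;
using Amazon.Bedrock;
using Amazon.Bedrock.Model;

var client = new AmazonBedrockClient();

// Placeholder values: substitute your own model, role, and S3 locations.
var request = new CreateModelInvocationJobRequest
{
    JobName = "my-batch-job",
    ModelId = "anthropic.claude-3-haiku-20240307-v1:0",
    RoleArn = "arn:aws:iam::111122223333:role/MyBatchInferenceRole",
    InputDataConfig = new ModelInvocationJobInputDataConfig
    {
        S3InputDataConfig = new ModelInvocationJobS3InputDataConfig
        {
            S3Uri = "s3://amzn-s3-demo-bucket/input/records.jsonl"
        }
    },
    OutputDataConfig = new ModelInvocationJobOutputDataConfig
    {
        S3OutputDataConfig = new ModelInvocationJobS3OutputDataConfig
        {
            S3Uri = "s3://amzn-s3-demo-bucket/output/"
        }
    }
};

// The response carries the job ARN used for later GetModelInvocationJob
// or StopModelInvocationJob calls.
CreateModelInvocationJobResponse response = client.CreateModelInvocationJob(request);
Console.WriteLine($"Started batch inference job: {response.JobArn}");
```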
For .NET Core, this operation is only available in asynchronous form. Please refer to CreateModelInvocationJobAsync.
Namespace: Amazon.Bedrock
Assembly: AWSSDK.Bedrock.dll
Version: 3.x.y.z
public virtual CreateModelInvocationJobResponse CreateModelInvocationJob( CreateModelInvocationJobRequest request )
request
Type: Amazon.Bedrock.Model.CreateModelInvocationJobRequest
Container for the necessary parameters to execute the CreateModelInvocationJob service method.
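Because the synchronous form above is not available on .NET Core, a minimal sketch of the asynchronous variant mentioned earlier (CreateModelInvocationJobAsync) might look like the following; the request is assumed to be populated as in the earlier example.

```csharp
using System.Threading.Tasks;
using Amazon.Bedrock;
using Amazon.Bedrock.Model;

public static class BatchInferenceExample
{
    // On .NET Core, only the asynchronous form of this operation is available.
    // 'request' is assumed to be populated as in the earlier sketch.
    public static async Task<string> StartJobAsync(
        IAmazonBedrock client, CreateModelInvocationJobRequest request)
    {
        CreateModelInvocationJobResponse response =
            await client.CreateModelInvocationJobAsync(request);
        return response.JobArn;
    }
}
```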
Exception | Condition |
---|---|
AccessDeniedException | The request is denied because of missing access permissions. |
ConflictException | Error occurred because of a conflict while performing an operation. |
InternalServerException | An internal server error occurred. Retry your request. |
ResourceNotFoundException | The specified resource Amazon Resource Name (ARN) was not found. Check the Amazon Resource Name (ARN) and try your request again. |
ServiceQuotaExceededException | The number of requests exceeds the service quota. Resubmit your request later. |
ThrottlingException | The number of requests exceeds the limit. Resubmit your request later. |
ValidationException | Input validation failed. Check your request parameters and retry the request. |
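The exceptions above are thrown as types in Amazon.Bedrock.Model. As a hedged sketch, the retryable cases (ThrottlingException and ServiceQuotaExceededException) can be resubmitted after a backoff, while the remaining exceptions are left to propagate so the request or permissions can be corrected; the client and request are assumed to come from the earlier examples.

```csharp
using System;
using System.Threading;
using Amazon.Bedrock;
using Amazon.Bedrock.Model;

public static class BatchInferenceRetryExample
{
    // Sketch only: retries the retryable cases from the table above with a
    // simple exponential backoff, up to five attempts.
    public static string StartJobWithRetry(
        AmazonBedrockClient client, CreateModelInvocationJobRequest request)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return client.CreateModelInvocationJob(request).JobArn;
            }
            catch (ThrottlingException) when (attempt < 5)
            {
                // The number of requests exceeds the limit; resubmit later.
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
            catch (ServiceQuotaExceededException) when (attempt < 5)
            {
                // The number of requests exceeds the service quota; resubmit later.
                Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
            }
        }
    }
}
```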
.NET Framework:
Supported in: 4.5 and newer, 3.5