GetModelInvocationJob - Amazon Bedrock

GetModelInvocationJob

Gets details about a batch inference job. For more information, see Monitor batch inference jobs.

Request Syntax

GET /model-invocation-job/jobIdentifier HTTP/1.1

URI Request Parameters

The request uses the following URI parameters.

jobIdentifier

The Amazon Resource Name (ARN) of the batch inference job.

Length Constraints: Minimum length of 0. Maximum length of 1011.

Pattern: ^((arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:[0-9]{12}:model-invocation-job/)?[a-z0-9]{12})$

Required: Yes
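The pattern above accepts either a bare 12-character job ID or the full model-invocation-job ARN. As a sketch of client-side validation using Python's `re` module (the pattern is copied verbatim from the constraint above; the sample identifiers are illustrative):

```python
import re

# Pattern copied from the jobIdentifier constraint: a 12-character
# lowercase alphanumeric job ID, optionally prefixed by the full
# model-invocation-job ARN.
JOB_IDENTIFIER_PATTERN = re.compile(
    r"^((arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:[0-9]{12}:"
    r"model-invocation-job/)?[a-z0-9]{12})$"
)

def is_valid_job_identifier(identifier: str) -> bool:
    """Return True if the identifier matches the documented pattern."""
    return JOB_IDENTIFIER_PATTERN.match(identifier) is not None

print(is_valid_job_identifier("abc123def456"))  # bare 12-character job ID
print(is_valid_job_identifier(
    "arn:aws:bedrock:us-east-1:123456789012:"
    "model-invocation-job/abc123def456"))       # full ARN form
print(is_valid_job_identifier("not-a-valid-id"))
```

Validating locally avoids a round trip that would otherwise end in a ValidationException (HTTP 400).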

Request Body

The request does not have a request body.

Response Syntax

HTTP/1.1 200
Content-type: application/json

{
   "clientRequestToken": "string",
   "endTime": "string",
   "inputDataConfig": { ... },
   "jobArn": "string",
   "jobExpirationTime": "string",
   "jobName": "string",
   "lastModifiedTime": "string",
   "message": "string",
   "modelId": "string",
   "outputDataConfig": { ... },
   "roleArn": "string",
   "status": "string",
   "submitTime": "string",
   "timeoutDurationInHours": number,
   "vpcConfig": {
      "securityGroupIds": [ "string" ],
      "subnetIds": [ "string" ]
   }
}
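As a sketch of consuming this response, the body can be parsed with any JSON library. The field names below follow the response syntax above; the values are made up for illustration:

```python
import json

# Illustrative response body (values are invented; field names follow
# the GetModelInvocationJob response syntax).
body = """{
  "jobArn": "arn:aws:bedrock:us-east-1:123456789012:model-invocation-job/abc123def456",
  "jobName": "my-batch-job",
  "status": "Failed",
  "message": "Input file exceeds the maximum record count.",
  "timeoutDurationInHours": 24
}"""

job = json.loads(body)

# For failed jobs, the optional "message" field explains the failure.
if job["status"] == "Failed":
    print(f"{job['jobName']} failed: {job.get('message', 'no details')}")
```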

Response Elements

If the action is successful, the service sends back an HTTP 200 response.

The following data is returned in JSON format by the service.

clientRequestToken

A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 256.

Pattern: ^[a-zA-Z0-9]{1,256}(-*[a-zA-Z0-9]){0,256}$

endTime

The time at which the batch inference job ended.

Type: Timestamp

inputDataConfig

Details about the location of the input to the batch inference job.

Type: ModelInvocationJobInputDataConfig object

Note: This object is a Union. Only one member of this object can be specified or returned.

jobArn

The Amazon Resource Name (ARN) of the batch inference job.

Type: String

Length Constraints: Minimum length of 0. Maximum length of 1011.

Pattern: ^(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:[0-9]{12}:model-invocation-job/[a-z0-9]{12})$

jobExpirationTime

The time at which the batch inference job will time out, or the time at which it timed out.

Type: Timestamp

jobName

The name of the batch inference job.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 63.

Pattern: ^[a-zA-Z0-9]{1,63}(-*[a-zA-Z0-9\+\-\.]){0,63}$

lastModifiedTime

The time at which the batch inference job was last modified.

Type: Timestamp

message

If the batch inference job failed, this field contains a message describing why the job failed.

Type: String

Length Constraints: Minimum length of 0. Maximum length of 2048.

modelId

The unique identifier of the foundation model used for model inference.

Type: String

Length Constraints: Minimum length of 1. Maximum length of 2048.

Pattern: ^(arn:aws(-[^:]+)?:bedrock:[a-z0-9-]{1,20}:(([0-9]{12}:custom-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-:]{1,63}/[a-z0-9]{12}$)|(:foundation-model/[a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}$)))|([a-z0-9-]{1,63}[.]{1}[a-z0-9-]{1,63}([.]?[a-z0-9-]{1,63})([:][a-z0-9-]{1,63}){0,2})|(([0-9a-zA-Z][_-]?)+)$

outputDataConfig

Details about the location of the output of the batch inference job.

Type: ModelInvocationJobOutputDataConfig object

Note: This object is a Union. Only one member of this object can be specified or returned.

roleArn

The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.

Type: String

Length Constraints: Minimum length of 0. Maximum length of 2048.

Pattern: ^arn:aws(-[^:]+)?:iam::([0-9]{12})?:role/.+$

status

The status of the batch inference job.

The following statuses are possible:

  • Submitted – This job has been submitted to a queue for validation.

  • Validating – This job is being validated for the requirements described in Format and upload your batch inference data. The criteria include the following:

    • Your IAM service role has access to the Amazon S3 buckets containing your files.

    • Your files are .jsonl files and each individual record is a JSON object in the correct format. Note that validation doesn't check if the modelInput value matches the request body for the model.

    • Your files fulfill the requirements for file size and number of records. For more information, see Quotas for Amazon Bedrock.

  • Scheduled – This job has been validated and is now in a queue. The job will automatically start when it reaches its turn.

  • Expired – This job timed out because it was scheduled but didn't begin before the set timeout duration. Submit a new job request.

  • InProgress – This job has begun. You can start viewing the results in the output S3 location.

  • Completed – This job has successfully completed. View the output files in the output S3 location.

  • PartiallyCompleted – This job has partially completed. Not all of your records could be processed in time. View the output files in the output S3 location.

  • Failed – This job has failed. Check the failure message for any further details. For further assistance, reach out to the Support Center.

  • Stopped – This job was stopped by a user.

  • Stopping – This job is being stopped by a user.

Type: String

Valid Values: Submitted | InProgress | Completed | Failed | Stopping | Stopped | PartiallyCompleted | Expired | Validating | Scheduled
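Several of these statuses are transient, so callers typically poll GetModelInvocationJob until the job reaches a terminal state. A minimal polling sketch; the `fetch_status` callable is a hypothetical stand-in for code that performs the real request and returns the `status` field:

```python
import time

# Terminal statuses, per the list above: once reached, the status
# no longer changes.
TERMINAL_STATUSES = {"Completed", "PartiallyCompleted", "Failed",
                     "Stopped", "Expired"}

def wait_for_job(fetch_status, poll_seconds=30.0, max_polls=1000):
    """Poll fetch_status() until it returns a terminal status.

    fetch_status is a caller-supplied callable that performs the real
    GetModelInvocationJob request and returns the "status" field.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("job did not reach a terminal status")

# Example with a stub that walks the documented lifecycle:
lifecycle = iter(["Submitted", "Validating", "Scheduled",
                  "InProgress", "Completed"])
print(wait_for_job(lambda: next(lifecycle), poll_seconds=0))
```

Note that Stopping is not terminal: a stop request eventually settles into Stopped.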

submitTime

The time at which the batch inference job was submitted.

Type: Timestamp

timeoutDurationInHours

The number of hours after which the batch inference job was set to time out.

Type: Integer

Valid Range: Minimum value of 24. Maximum value of 168.

vpcConfig

The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.

Type: VpcConfig object

Errors

For information about the errors that are common to all actions, see Common Errors.

AccessDeniedException

The request is denied because of missing access permissions.

HTTP Status Code: 403

InternalServerException

An internal server error occurred. Retry your request.

HTTP Status Code: 500

ResourceNotFoundException

The specified resource Amazon Resource Name (ARN) was not found. Check the Amazon Resource Name (ARN) and try your request again.

HTTP Status Code: 404

ThrottlingException

The number of requests exceeds the limit. Resubmit your request later.

HTTP Status Code: 429

ValidationException

Input validation failed. Check your request parameters and retry the request.

HTTP Status Code: 400

Examples

Get a batch inference job

This example illustrates one usage of GetModelInvocationJob.

GET /model-invocation-job/BATCHJOB1234 HTTP/1.1
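Because the jobIdentifier may be a full ARN containing `:` and `/` characters, it should be percent-encoded when placed in the request path. A sketch using Python's standard library (building the path only; a real request would also need SigV4 signing and the regional Bedrock endpoint):

```python
from urllib.parse import quote

def build_request_path(job_identifier: str) -> str:
    """Build the GetModelInvocationJob request path, percent-encoding
    the identifier so ARN characters like ':' and '/' stay in one
    path segment."""
    return "/model-invocation-job/" + quote(job_identifier, safe="")

print(build_request_path("BATCHJOB1234"))
print(build_request_path(
    "arn:aws:bedrock:us-east-1:123456789012:"
    "model-invocation-job/abc123def456"))
```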

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the SDK documentation for this operation.

© 2025, Amazon Web Services, Inc. or its affiliates. All rights reserved.