`@Generated(value="com.amazonaws:aws-java-sdk-code-generator")`
`public class InvokeModelRequest extends AmazonWebServiceRequest implements Serializable, Cloneable`
| Constructor and Description |
| --- |
| `InvokeModelRequest()` |
| Modifier and Type | Method and Description |
| --- | --- |
| `InvokeModelRequest` | `clone()` Creates a shallow clone of this object for all fields except the handler context. |
| `boolean` | `equals(Object obj)` |
| `String` | `getAccept()` The desired MIME type of the inference body in the response. |
| `ByteBuffer` | `getBody()` The prompt and inference parameters in the format specified in the `contentType` in the header. |
| `String` | `getContentType()` The MIME type of the input data in the request. |
| `String` | `getGuardrailIdentifier()` The unique identifier of the guardrail that you want to use. |
| `String` | `getGuardrailVersion()` The version number for the guardrail. |
| `String` | `getModelId()` The unique identifier of the model to invoke to run inference. |
| `String` | `getTrace()` Specifies whether to enable or disable the Bedrock trace. |
| `int` | `hashCode()` |
| `void` | `setAccept(String accept)` The desired MIME type of the inference body in the response. |
| `void` | `setBody(ByteBuffer body)` The prompt and inference parameters in the format specified in the `contentType` in the header. |
| `void` | `setContentType(String contentType)` The MIME type of the input data in the request. |
| `void` | `setGuardrailIdentifier(String guardrailIdentifier)` The unique identifier of the guardrail that you want to use. |
| `void` | `setGuardrailVersion(String guardrailVersion)` The version number for the guardrail. |
| `void` | `setModelId(String modelId)` The unique identifier of the model to invoke to run inference. |
| `void` | `setTrace(String trace)` Specifies whether to enable or disable the Bedrock trace. |
| `String` | `toString()` Returns a string representation of this object. |
| `InvokeModelRequest` | `withAccept(String accept)` The desired MIME type of the inference body in the response. |
| `InvokeModelRequest` | `withBody(ByteBuffer body)` The prompt and inference parameters in the format specified in the `contentType` in the header. |
| `InvokeModelRequest` | `withContentType(String contentType)` The MIME type of the input data in the request. |
| `InvokeModelRequest` | `withGuardrailIdentifier(String guardrailIdentifier)` The unique identifier of the guardrail that you want to use. |
| `InvokeModelRequest` | `withGuardrailVersion(String guardrailVersion)` The version number for the guardrail. |
| `InvokeModelRequest` | `withModelId(String modelId)` The unique identifier of the model to invoke to run inference. |
| `InvokeModelRequest` | `withTrace(String trace)` Specifies whether to enable or disable the Bedrock trace. |
| `InvokeModelRequest` | `withTrace(Trace trace)` Specifies whether to enable or disable the Bedrock trace. |
Methods inherited from class `com.amazonaws.AmazonWebServiceRequest`:

`addHandlerContext, getCloneRoot, getCloneSource, getCustomQueryParameters, getCustomRequestHeaders, getGeneralProgressListener, getHandlerContext, getReadLimit, getRequestClientOptions, getRequestCredentials, getRequestCredentialsProvider, getRequestMetricCollector, getSdkClientExecutionTimeout, getSdkRequestTimeout, putCustomQueryParameter, putCustomRequestHeader, setGeneralProgressListener, setRequestCredentials, setRequestCredentialsProvider, setRequestMetricCollector, setSdkClientExecutionTimeout, setSdkRequestTimeout, withGeneralProgressListener, withRequestCredentialsProvider, withRequestMetricCollector, withSdkClientExecutionTimeout, withSdkRequestTimeout`
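The request body is carried as a raw `java.nio.ByteBuffer`. As the `setBody` and `getBody` details below note, the SDK Base64-encodes the field on the wire (so callers pass unencoded JSON bytes), and the buffers are mutable and stateful. A minimal JDK-only sketch of preparing a body and reading it back through a read-only view; the JSON payload is illustrative, since real keys depend on the model being invoked:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BodyDemo {

    // Wrap raw JSON bytes as the request body. No Base64 step here:
    // the SDK performs Base64 encoding itself before sending the request.
    static ByteBuffer jsonBody(String json) {
        return ByteBuffer.wrap(json.getBytes(StandardCharsets.UTF_8));
    }

    // Read a body through a read-only view so the shared buffer's
    // position is left untouched for any other holder of the reference.
    static String readSafely(ByteBuffer body) {
        ByteBuffer view = body.asReadOnlyBuffer(); // independent position
        byte[] bytes = new byte[view.remaining()];
        view.get(bytes); // advances the view's position, not body's
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer body = jsonBody("{\"prompt\":\"Hello\",\"max_tokens\":50}");
        System.out.println(readSafely(body));
        System.out.println(body.position()); // still 0: the original was not consumed
    }
}
```

The same `asReadOnlyBuffer()` pattern applies when consuming the buffer returned by `getBody()`, per the warning in the method details.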
public void setBody(ByteBuffer body)

The prompt and inference parameters in the format specified in the `contentType` in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.

The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.

Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call `ByteBuffer.duplicate()` or `ByteBuffer.asReadOnlyBuffer()` before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.

Parameters:

`body` - The prompt and inference parameters in the format specified in the `contentType` in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.

public ByteBuffer getBody()
The prompt and inference parameters in the format specified in the `contentType` in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.

`ByteBuffer`s are stateful. Calling their `get` methods changes their `position`. We recommend using `ByteBuffer.asReadOnlyBuffer()` to create a read-only view of the buffer with an independent `position`, and calling `get` methods on this rather than directly on the returned `ByteBuffer`. Doing so will ensure that anyone else using the `ByteBuffer` will not be affected by changes to the `position`.

Returns:

The prompt and inference parameters in the format specified in the `contentType` in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.

public InvokeModelRequest withBody(ByteBuffer body)
The prompt and inference parameters in the format specified in the `contentType` in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.

The AWS SDK for Java performs a Base64 encoding on this field before sending this request to the AWS service. Users of the SDK should not perform Base64 encoding on this field.

Warning: ByteBuffers returned by the SDK are mutable. Changes to the content or position of the byte buffer will be seen by all objects that have a reference to this object. It is recommended to call `ByteBuffer.duplicate()` or `ByteBuffer.asReadOnlyBuffer()` before using or reading from the buffer. This behavior will be changed in a future major version of the SDK.

Parameters:

`body` - The prompt and inference parameters in the format specified in the `contentType` in the header. You must provide the body in JSON format. To see the format and content of the request and response bodies for different models, refer to Inference parameters. For more information, see Run inference in the Bedrock User Guide.

public void setContentType(String contentType)
The MIME type of the input data in the request. You must specify `application/json`.

Parameters:

`contentType` - The MIME type of the input data in the request. You must specify `application/json`.

public String getContentType()

The MIME type of the input data in the request. You must specify `application/json`.

Returns:

The MIME type of the input data in the request. You must specify `application/json`.

public InvokeModelRequest withContentType(String contentType)

The MIME type of the input data in the request. You must specify `application/json`.

Parameters:

`contentType` - The MIME type of the input data in the request. You must specify `application/json`.

public void setAccept(String accept)
The desired MIME type of the inference body in the response. The default value is `application/json`.

Parameters:

`accept` - The desired MIME type of the inference body in the response. The default value is `application/json`.

public String getAccept()

The desired MIME type of the inference body in the response. The default value is `application/json`.

Returns:

The desired MIME type of the inference body in the response. The default value is `application/json`.

public InvokeModelRequest withAccept(String accept)

The desired MIME type of the inference body in the response. The default value is `application/json`.

Parameters:

`accept` - The desired MIME type of the inference body in the response. The default value is `application/json`.

public void setModelId(String modelId)
The unique identifier of the model to invoke to run inference.

The `modelId` to provide depends on the type of model that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Parameters:

`modelId` - The unique identifier of the model to invoke to run inference. The `modelId` to provide depends on the type of model that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
public String getModelId()
The unique identifier of the model to invoke to run inference.

The `modelId` to provide depends on the type of model that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Returns:

The unique identifier of the model to invoke to run inference. The `modelId` to provide depends on the type of model that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
public InvokeModelRequest withModelId(String modelId)
The unique identifier of the model to invoke to run inference.

The `modelId` to provide depends on the type of model that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.

Parameters:

`modelId` - The unique identifier of the model to invoke to run inference. The `modelId` to provide depends on the type of model that you use:

- If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
- If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
- If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
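The three identifier forms above differ syntactically, so they can be told apart mechanically. A hedged JDK-only sketch that classifies a `modelId` string; the example model ID and ARN are illustrative placeholders, and the `:provisioned-model/` resource segment is an assumption about the ARN shape that you should verify against your account's actual ARNs:

```java
public class ModelIdForms {

    // Classify a modelId by shape: a base model ID is a bare identifier,
    // everything else is an ARN. The ":provisioned-model/" segment is an
    // assumed marker for Provisioned Throughput ARNs.
    static String kind(String modelId) {
        if (modelId.startsWith("arn:")) {
            return modelId.contains(":provisioned-model/")
                    ? "Provisioned Throughput ARN (provisioned or custom model)"
                    : "model ARN";
        }
        return "base model ID";
    }

    public static void main(String[] args) {
        System.out.println(kind("anthropic.claude-v2")); // base model ID
        System.out.println(kind("arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123"));
    }
}
```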
public void setTrace(String trace)
Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

Parameters:

`trace` - Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

See Also:
`Trace`

public String getTrace()

Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

See Also:
`Trace`

public InvokeModelRequest withTrace(String trace)

Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

Parameters:

`trace` - Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

See Also:
`Trace`

public InvokeModelRequest withTrace(Trace trace)

Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

Parameters:

`trace` - Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.

See Also:
`Trace`
public void setGuardrailIdentifier(String guardrailIdentifier)
The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.

An error will be thrown in the following situations:

- You don't provide a guardrail identifier but you specify the `amazon-bedrock-guardrailConfig` field in the request body.
- You enable the guardrail but the `contentType` isn't `application/json`.
- You provide a guardrail identifier, but `guardrailVersion` isn't specified.

Parameters:

`guardrailIdentifier` - The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations:

- You don't provide a guardrail identifier but you specify the `amazon-bedrock-guardrailConfig` field in the request body.
- You enable the guardrail but the `contentType` isn't `application/json`.
- You provide a guardrail identifier, but `guardrailVersion` isn't specified.
public String getGuardrailIdentifier()
The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.

An error will be thrown in the following situations:

- You don't provide a guardrail identifier but you specify the `amazon-bedrock-guardrailConfig` field in the request body.
- You enable the guardrail but the `contentType` isn't `application/json`.
- You provide a guardrail identifier, but `guardrailVersion` isn't specified.

Returns:

The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations:

- You don't provide a guardrail identifier but you specify the `amazon-bedrock-guardrailConfig` field in the request body.
- You enable the guardrail but the `contentType` isn't `application/json`.
- You provide a guardrail identifier, but `guardrailVersion` isn't specified.
public InvokeModelRequest withGuardrailIdentifier(String guardrailIdentifier)
The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.

An error will be thrown in the following situations:

- You don't provide a guardrail identifier but you specify the `amazon-bedrock-guardrailConfig` field in the request body.
- You enable the guardrail but the `contentType` isn't `application/json`.
- You provide a guardrail identifier, but `guardrailVersion` isn't specified.

Parameters:

`guardrailIdentifier` - The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation. An error will be thrown in the following situations:

- You don't provide a guardrail identifier but you specify the `amazon-bedrock-guardrailConfig` field in the request body.
- You enable the guardrail but the `contentType` isn't `application/json`.
- You provide a guardrail identifier, but `guardrailVersion` isn't specified.
public void setGuardrailVersion(String guardrailVersion)
The version number for the guardrail. The value can also be `DRAFT`.

Parameters:

`guardrailVersion` - The version number for the guardrail. The value can also be `DRAFT`.

public String getGuardrailVersion()

The version number for the guardrail. The value can also be `DRAFT`.

Returns:

The version number for the guardrail. The value can also be `DRAFT`.

public InvokeModelRequest withGuardrailVersion(String guardrailVersion)

The version number for the guardrail. The value can also be `DRAFT`.

Parameters:

`guardrailVersion` - The version number for the guardrail. The value can also be `DRAFT`.

public String toString()
Returns a string representation of this object.

Overrides:
`toString` in class `Object`

See Also:
`Object.toString()`
public InvokeModelRequest clone()
Description copied from class: `AmazonWebServiceRequest`

Creates a shallow clone of this object for all fields except the handler context.

Overrides:
`clone` in class `AmazonWebServiceRequest`

See Also:
`Object.clone()`