@Generated(value="com.amazonaws:aws-java-sdk-code-generator") public class PromptModelInferenceConfiguration extends Object implements Serializable, Cloneable, StructuredPojo
Contains inference configurations related to model inference for a prompt. For more information, see Inference parameters.
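The with* methods below return the configuration object itself, so the settings can be composed in a single fluent chain. A minimal usage sketch; the import path (com.amazonaws.services.bedrockagent.model) and the parameter values are assumptions for illustration, not taken from this reference:

```java
import java.util.Arrays;
import com.amazonaws.services.bedrockagent.model.PromptModelInferenceConfiguration;

public class PromptModelInferenceConfigurationExample {
    public static void main(String[] args) {
        // Build the inference configuration in one fluent chain; each with* call
        // returns the configuration, so the calls compose left to right.
        PromptModelInferenceConfiguration config = new PromptModelInferenceConfiguration()
                .withMaxTokens(512)        // maximum number of tokens to return in the response
                .withTemperature(0.7f)     // randomness of the response
                .withTopP(0.9f)            // percentage of most-likely candidates considered
                .withTopK(250)             // number of most-likely candidates considered
                .withStopSequences(Arrays.asList("\n\nHuman:")); // stop generating after this sequence

        System.out.println(config); // toString() gives a readable summary of the settings
    }
}
```

The same settings can also be applied with the plain set* methods after calling the no-argument constructor.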
| Constructor and Description |
|---|
| PromptModelInferenceConfiguration() |
| Modifier and Type | Method and Description |
|---|---|
| PromptModelInferenceConfiguration | clone() |
| boolean | equals(Object obj) |
| Integer | getMaxTokens() The maximum number of tokens to return in the response. |
| List<String> | getStopSequences() A list of strings that define sequences after which the model will stop generating. |
| Float | getTemperature() Controls the randomness of the response. |
| Integer | getTopK() The number of most-likely candidates that the model considers for the next token during generation. |
| Float | getTopP() The percentage of most-likely candidates that the model considers for the next token. |
| int | hashCode() |
| void | marshall(ProtocolMarshaller protocolMarshaller) Marshalls this structured data using the given ProtocolMarshaller. |
| void | setMaxTokens(Integer maxTokens) The maximum number of tokens to return in the response. |
| void | setStopSequences(Collection<String> stopSequences) A list of strings that define sequences after which the model will stop generating. |
| void | setTemperature(Float temperature) Controls the randomness of the response. |
| void | setTopK(Integer topK) The number of most-likely candidates that the model considers for the next token during generation. |
| void | setTopP(Float topP) The percentage of most-likely candidates that the model considers for the next token. |
| String | toString() Returns a string representation of this object. |
| PromptModelInferenceConfiguration | withMaxTokens(Integer maxTokens) The maximum number of tokens to return in the response. |
| PromptModelInferenceConfiguration | withStopSequences(Collection<String> stopSequences) A list of strings that define sequences after which the model will stop generating. |
| PromptModelInferenceConfiguration | withStopSequences(String... stopSequences) A list of strings that define sequences after which the model will stop generating. |
| PromptModelInferenceConfiguration | withTemperature(Float temperature) Controls the randomness of the response. |
| PromptModelInferenceConfiguration | withTopK(Integer topK) The number of most-likely candidates that the model considers for the next token during generation. |
| PromptModelInferenceConfiguration | withTopP(Float topP) The percentage of most-likely candidates that the model considers for the next token. |
public void setMaxTokens(Integer maxTokens)
The maximum number of tokens to return in the response.
Parameters:
maxTokens - The maximum number of tokens to return in the response.

public Integer getMaxTokens()
The maximum number of tokens to return in the response.

public PromptModelInferenceConfiguration withMaxTokens(Integer maxTokens)
The maximum number of tokens to return in the response.
Parameters:
maxTokens - The maximum number of tokens to return in the response.
public List<String> getStopSequences()
A list of strings that define sequences after which the model will stop generating.

public void setStopSequences(Collection<String> stopSequences)
A list of strings that define sequences after which the model will stop generating.
Parameters:
stopSequences - A list of strings that define sequences after which the model will stop generating.

public PromptModelInferenceConfiguration withStopSequences(String... stopSequences)
A list of strings that define sequences after which the model will stop generating.
NOTE: This method appends the values to the existing list (if any). Use setStopSequences(java.util.Collection) or withStopSequences(java.util.Collection) if you want to override the existing values.
Parameters:
stopSequences - A list of strings that define sequences after which the model will stop generating.

public PromptModelInferenceConfiguration withStopSequences(Collection<String> stopSequences)
A list of strings that define sequences after which the model will stop generating.
Parameters:
stopSequences - A list of strings that define sequences after which the model will stop generating.
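The append-versus-override behavior of the two withStopSequences overloads matters when a configuration is built up in several steps. A small sketch under the same assumptions as above (import path and stop-sequence values are illustrative, not taken from this page):

```java
import java.util.Arrays;
import com.amazonaws.services.bedrockagent.model.PromptModelInferenceConfiguration;

public class StopSequencesExample {
    public static void main(String[] args) {
        PromptModelInferenceConfiguration config = new PromptModelInferenceConfiguration();

        // The varargs overload appends to whatever is already in the list.
        config.withStopSequences("Human:");
        config.withStopSequences("Assistant:");
        System.out.println(config.getStopSequences()); // [Human:, Assistant:]

        // The Collection overload, like setStopSequences, replaces the existing values.
        config.setStopSequences(Arrays.asList("END"));
        System.out.println(config.getStopSequences()); // [END]
    }
}
```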
public void setTemperature(Float temperature)
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
Parameters:
temperature - Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

public Float getTemperature()
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

public PromptModelInferenceConfiguration withTemperature(Float temperature)
Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.
Parameters:
temperature - Controls the randomness of the response. Choose a lower value for more predictable outputs and a higher value for more surprising outputs.

public void setTopK(Integer topK)
The number of most-likely candidates that the model considers for the next token during generation.
Parameters:
topK - The number of most-likely candidates that the model considers for the next token during generation.

public Integer getTopK()
The number of most-likely candidates that the model considers for the next token during generation.

public PromptModelInferenceConfiguration withTopK(Integer topK)
The number of most-likely candidates that the model considers for the next token during generation.
Parameters:
topK - The number of most-likely candidates that the model considers for the next token during generation.
public void setTopP(Float topP)
The percentage of most-likely candidates that the model considers for the next token.
Parameters:
topP - The percentage of most-likely candidates that the model considers for the next token.

public Float getTopP()
The percentage of most-likely candidates that the model considers for the next token.

public PromptModelInferenceConfiguration withTopP(Float topP)
The percentage of most-likely candidates that the model considers for the next token.
Parameters:
topP - The percentage of most-likely candidates that the model considers for the next token.
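Because temperature, topK, topP, and maxTokens are independent properties, a baseline configuration can be copied with clone() and nudged toward more or less randomness, then compared with equals(). A sketch for illustration only; the values and import path are assumptions, and valid parameter ranges depend on the model:

```java
import com.amazonaws.services.bedrockagent.model.PromptModelInferenceConfiguration;

public class TuningExample {
    public static void main(String[] args) {
        // A conservative starting point: low temperature and a narrow candidate pool
        // favor predictable output.
        PromptModelInferenceConfiguration base = new PromptModelInferenceConfiguration();
        base.setTemperature(0.2f);
        base.setTopP(0.5f);
        base.setTopK(50);
        base.setMaxTokens(256);

        // clone() yields a copy that can be tuned toward more varied output
        // without touching the original.
        PromptModelInferenceConfiguration exploratory = base.clone();
        exploratory.setTemperature(0.9f);
        exploratory.setTopP(0.95f);

        System.out.println(base.equals(exploratory)); // false - the settings have diverged
    }
}
```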
public String toString()
Returns a string representation of this object.
Overrides:
toString in class Object
See Also:
Object.toString()
public PromptModelInferenceConfiguration clone()
public void marshall(ProtocolMarshaller protocolMarshaller)
Description copied from interface: StructuredPojo
Marshalls this structured data using the given ProtocolMarshaller.
Specified by:
marshall in interface StructuredPojo
Parameters:
protocolMarshaller - Implementation of ProtocolMarshaller used to marshall this object's data.