AWS::Bedrock::Agent InferenceConfiguration
Base inference parameters to pass to a model in a call to Converse or ConverseStream. For more information, see Inference parameters for foundation models.
If you need to pass additional parameters that the model supports, use the additionalModelRequestFields request field in the call to Converse or ConverseStream. For more information, see Model parameters.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
JSON
{
  "MaximumLength" : Number,
  "StopSequences" : [ String, ... ],
  "Temperature" : Number,
  "TopK" : Number,
  "TopP" : Number
}
YAML
MaximumLength: Number
StopSequences:
  - String
Temperature: Number
TopK: Number
TopP: Number
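For context, the following sketch shows where this property might sit inside an AWS::Bedrock::Agent resource. The surrounding property names (PromptOverrideConfiguration, PromptConfigurations, PromptType) follow the AWS::Bedrock::Agent resource reference; the values are illustrative, not model defaults.

```yaml
# Illustrative placement of InferenceConfiguration in an agent resource.
# Values are examples only; actual defaults depend on the model you use.
Resources:
  MyAgent:
    Type: AWS::Bedrock::Agent
    Properties:
      AgentName: example-agent
      PromptOverrideConfiguration:
        PromptConfigurations:
          - PromptType: ORCHESTRATION
            InferenceConfiguration:
              MaximumLength: 2048
              StopSequences:
                - "Human:"
              Temperature: 0.2
              TopK: 50
              TopP: 0.9
```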
Properties
MaximumLength
The maximum number of tokens allowed in the generated response.
Required: No
Type: Number
Minimum: 0
Maximum: 4096
Update requires: No interruption
StopSequences
A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
Required: No
Type: Array of String
Minimum: 0
Maximum: 4
Update requires: No interruption
Temperature
The likelihood of the model selecting higher-probability options while generating a response. A lower value makes the model more likely to choose higher-probability options, while a higher value makes the model more likely to choose lower-probability options.
The default value is the default value for the model that you are using. For more information, see Inference parameters for foundation models.
Required: No
Type: Number
Minimum: 0
Maximum: 1
Update requires: No interruption
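The effect described above can be sketched with temperature-scaled softmax over token scores. This is an illustrative model of the behavior, not the service's actual implementation:

```python
import math

def temperature_scale(logits, temperature):
    """Convert raw token scores to probabilities via temperature-scaled
    softmax. Lower temperature sharpens the distribution toward the
    highest-probability token; higher temperature flattens it.
    Illustrative sketch only."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cool = temperature_scale(logits, 0.2)  # low temperature: top token dominates
warm = temperature_scale(logits, 1.0)  # higher temperature: flatter spread
```

At temperature 0.2 the top token takes nearly all the probability mass; at 1.0 the distribution is much flatter.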
TopK
While generating a response, the model determines the probability of the next token at each point of generation. The value that you set for topK is the number of most-likely candidates from which the model chooses the next token in the sequence. For example, if you set topK to 50, the model selects the next token from among the top 50 most likely choices.
Required: No
Type: Number
Minimum: 0
Maximum: 500
Update requires: No interruption
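The candidate-pool effect of topK can be sketched as follows. This is an illustrative model of the filtering step, not the service's implementation:

```python
def top_k_filter(probs, k):
    """Keep only the k most-likely tokens and renormalize their
    probabilities. probs maps token -> probability.
    Illustrative sketch of the topK candidate pool only."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "that": 0.05}
pool = top_k_filter(probs, 2)  # only "the" and "a" remain as candidates
```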
TopP
The percentage of most-likely candidates that the model considers for the next token. For example, if you choose a value of 0.8 for topP, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence.
The default value is the default value for the model that you are using. For more information, see Inference parameters for foundation models.
Required: No
Type: Number
Minimum: 0
Maximum: 1
Update requires: No interruption
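The cumulative-probability cutoff that topP describes (often called nucleus sampling) can be sketched like this. Again, this illustrates the behavior rather than the service's implementation:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of most-likely tokens whose cumulative
    probability reaches p, then renormalize (nucleus sampling).
    Illustrative sketch of the topP cutoff only."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    pool, cum = [], 0.0
    for tok, pr in ranked:
        pool.append((tok, pr))
        cum += pr
        if cum >= p:
            break
    total = sum(pr for _, pr in pool)
    return {tok: pr / total for tok, pr in pool}

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "that": 0.05}
pool = top_p_filter(probs, 0.75)  # "the" + "a" already cover >= 0.75
```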