InferenceConfiguration
Contains inference parameters to use when the agent invokes a foundation model in the part of the agent sequence defined by the promptType. For more information, see Inference parameters for foundation models.
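As a hedged sketch, the following shows one place this object can appear: the promptOverrideConfiguration of a Boto3 create_agent call. Only the inferenceConfiguration field names come from this page; the client name, operation, and all surrounding values (agent name, role ARN, model ID, prompt settings) are illustrative assumptions.

```python
import boto3

# Assumed client/operation names; all values below are placeholders.
client = boto3.client("bedrock-agent")

inference_configuration = {
    "maximumLength": 2048,            # integer, 0-4096
    "stopSequences": ["\n\nHuman:"],  # up to 4 strings
    "temperature": 0.7,               # float, 0-1
    "topK": 250,                      # integer, 0-500
    "topP": 0.9,                      # float, 0-1
}

response = client.create_agent(
    agentName="example-agent",
    agentResourceRoleArn="arn:aws:iam::111122223333:role/ExampleAgentRole",
    foundationModel="anthropic.claude-v2",
    promptOverrideConfiguration={
        "promptConfigurations": [
            {
                "promptType": "ORCHESTRATION",
                "promptCreationMode": "DEFAULT",
                "promptState": "ENABLED",
                "inferenceConfiguration": inference_configuration,
            }
        ]
    },
)
```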
Contents
 maximumLength

The maximum number of tokens to allow in the generated response.
Type: Integer
Valid Range: Minimum value of 0. Maximum value of 4096.
Required: No
 stopSequences

A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
Type: Array of strings
Array Members: Minimum number of 0 items. Maximum number of 4 items.
Required: No
 temperature

The likelihood of the model selecting higher-probability options while generating a response. A lower value makes the model more likely to choose higher-probability options, while a higher value makes the model more likely to choose lower-probability options.
Type: Float
Valid Range: Minimum value of 0. Maximum value of 1.
Required: No
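As a conceptual sketch (not the service's implementation), temperature can be viewed as a divisor applied to the model's raw token scores before they are turned into a probability distribution:

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float) -> int:
    """Pick a token index; lower temperature favors the highest-scoring token."""
    if temperature == 0:
        # Degenerate case: always take the single most likely token.
        return int(np.argmax(logits))
    scaled = logits / temperature          # small T sharpens, large T flattens
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))
```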
 topK

While generating a response, the model determines the probability of the following token at each point of generation. The value that you set for topK is the number of most-likely candidates from which the model chooses the next token in the sequence. For example, if you set topK to 50, the model selects the next token from among the top 50 most likely choices.
Type: Integer
Valid Range: Minimum value of 0. Maximum value of 500.
Required: No
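A minimal sketch of the top-K idea, assuming a vector of next-token probabilities (conceptual only; this is not how the service computes it):

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    # Keep only the k most likely tokens, then renormalize (assumes k >= 1).
    keep = np.argsort(probs)[-k:]  # indices of the k largest probabilities
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```

With k set to 50, sampling then happens only among those 50 renormalized candidates.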
 topP

While generating a response, the model determines the probability of the following token at each point of generation. The value that you set for topP determines the number of most-likely candidates from which the model chooses the next token in the sequence. For example, if you set topP to 0.8, the model only selects the next token from the top 80% of the probability distribution of next tokens.
Type: Float
Valid Range: Minimum value of 0. Maximum value of 1.
Required: No
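A comparable sketch for topP (nucleus) filtering, again purely illustrative: it keeps the smallest set of tokens whose cumulative probability reaches p.

```python
import numpy as np

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    # Keep the smallest set of tokens whose cumulative probability >= p.
    order = np.argsort(probs)[::-1]              # most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # how many tokens reach p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```

For example, with p set to 0.8 the model samples only from the tokens that together account for the top 80% of probability mass.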
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: