OrchestrationConfiguration

Settings for how the model processes the prompt prior to retrieval and generation.

Contents

additionalModelRequestFields

Additional model parameters and their values that are not covered by the textInferenceConfig structure for a knowledge base. Use this field to pass custom parameters that are specific to the language model being used.

Type: String to JSON value map

Key Length Constraints: Minimum length of 1. Maximum length of 100.

Required: No
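
For illustration, such a map might look like the following Python dictionary. The key name top_k is an assumption used only as an example; valid keys depend on the model provider and are passed through to the model as-is.

additional_model_request_fields = {
    # Hypothetical provider-specific parameter; supported keys vary by model.
    "top_k": 250,
}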

inferenceConfig

Configuration settings for inference when using RetrieveAndGenerate to generate responses with a knowledge base as a source.

Type: InferenceConfig object

Required: No
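
A minimal sketch of an InferenceConfig as a Python dictionary, assuming a textInferenceConfig structure with maxTokens, temperature, topP, and stopSequences; the values shown are arbitrary examples, not recommendations.

inference_config = {
    "textInferenceConfig": {
        "maxTokens": 512,                   # upper bound on generated tokens
        "temperature": 0.2,                 # lower values give more deterministic output
        "topP": 0.9,                        # nucleus-sampling cutoff
        "stopSequences": ["Observation:"],  # stop generation when any of these appear
    }
}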

promptTemplate

Contains the template for the prompt that's sent to the model. Orchestration prompts must include the $conversation_history$ and $output_format_instructions$ variables. For more information, see Use placeholder variables in the user guide.

Type: PromptTemplate object

Required: No
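
A sketch of a PromptTemplate as a Python dictionary, assuming a textPromptTemplate field; the wording of the template is illustrative, but note that both required placeholder variables appear in the template text.

prompt_template = {
    "textPromptTemplate": (
        "Rewrite the user's latest message into search queries, using the "
        "conversation for context.\n\n"
        "Conversation history:\n"
        "$conversation_history$\n\n"
        "$output_format_instructions$"
    )
}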

queryTransformationConfiguration

To split the prompt into multiple subqueries and retrieve sources for each of them, set the transformation type to QUERY_DECOMPOSITION.

Type: QueryTransformationConfiguration object

Required: No
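
Putting the pieces together, the sketch below sends a RetrieveAndGenerate request using the boto3 bedrock-agent-runtime client and the dictionaries defined in the earlier snippets. The knowledge base ID, model ARN, and query text are placeholders.

import boto3

client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "How do I rotate my access keys and update my CLI profile?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB12345678",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            "orchestrationConfiguration": {
                "additionalModelRequestFields": additional_model_request_fields,
                "inferenceConfig": inference_config,
                "promptTemplate": prompt_template,
                # Split the prompt into subqueries and retrieve sources for each.
                "queryTransformationConfiguration": {"type": "QUERY_DECOMPOSITION"},
            },
        },
    },
)

print(response["output"]["text"])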

See Also

For more information about using this API in one of the language-specific AWS SDKs, see the following: