Configure advanced prompts - Amazon Bedrock


You can configure advanced prompts in either the AWS Management Console or through the API.

Console

In the console, you configure advanced prompts after you create the agent, while editing it.

To view or edit advanced prompts for your agent
  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. In the left navigation pane, choose Agents. Then choose an agent in the Agents section.

  3. On the agent details page, in the Working draft section, select Working draft.

  4. On the Working draft page, in the Advanced prompts section, choose Edit.

  5. On the Edit advanced prompts page, choose the tab corresponding to the step of the agent sequence that you want to edit.

  6. To enable editing of the template, turn on Override template defaults. In the Override template defaults dialog box, choose Confirm.

    Warning

    If you turn off Override template defaults or change the model, the default Amazon Bedrock template is restored and your custom template is immediately deleted. To confirm, enter confirm in the text box of the dialog that appears.

  7. To allow the agent to use the template when generating responses, turn on Activate template. If this configuration is turned off, the agent doesn't use the template.

  8. To modify the example prompt template, use the Prompt template editor.

  9. In Configurations, you can modify inference parameters for the prompt. For definitions of parameters and more information about parameters for different models, see Inference request parameters and response fields for foundation models.

  10. (Optional) To use a Lambda function that you have defined to parse the raw foundation model output, perform the following actions:

    Note

    One Lambda function is used for all the prompt templates.

    1. In the Configurations section, select Use Lambda function for parsing. If you clear this setting, your agent will use the default parser for the prompt.

    2. For the Parser Lambda function, select a Lambda function from the dropdown menu.

      Note

      You must attach permissions for your agent so that it can access the Lambda function. For more information, see Resource-based policy to allow Amazon Bedrock to invoke an action group Lambda function.

  11. To save your settings, choose one of the following options:

    1. To remain in the same window so that you can dynamically update the prompt settings while testing your updated agent, choose Save.

    2. To save your settings and return to the Working draft page, choose Save and exit.

  12. To test the updated settings, choose Prepare in the Test window.
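
The note in step 10 states that your agent needs permission to invoke the parser Lambda function. As a sketch, the resource-based policy statement attached to that function might look like the following (the Region, account ID, function name, and agent ID are placeholder assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBedrockAgentInvoke",
      "Effect": "Allow",
      "Principal": { "Service": "bedrock.amazonaws.com" },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:parser-function",
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:bedrock:us-east-1:123456789012:agent/AGENT_ID"
        }
      }
    }
  ]
}
```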

Image: Setting up advanced prompts in the console.
API

To configure advanced prompts by using the API operations, send an UpdateAgent request and modify the following promptOverrideConfiguration object.

"promptOverrideConfiguration": { "overrideLambda": "string", "promptConfigurations": [ { "basePromptTemplate": "string", "inferenceConfiguration": { "maximumLength": int, "stopSequences": [ "string" ], "temperature": float, "topK": float, "topP": float }, "parserMode": "DEFAULT | OVERRIDDEN", "promptCreationMode": "DEFAULT | OVERRIDDEN", "promptState": "ENABLED | DISABLED", "promptType": "PRE_PROCESSING | ORCHESTRATION | KNOWLEDGE_BASE_RESPONSE_GENERATION | POST_PROCESSING" } ] }
  1. In the promptConfigurations list, include a promptConfiguration object for each prompt template that you want to edit.

  2. Specify the prompt to modify in the promptType field.

  3. Modify the prompt template through the following steps:

    1. Specify your prompt template in the basePromptTemplate field.

    2. Include inference parameters in the inferenceConfiguration object. For more information about inference configurations, see Inference request parameters and response fields for foundation models.

  4. To enable the prompt template, set the promptCreationMode to OVERRIDDEN.

  5. To allow or prevent the agent from performing the step in the promptType field, modify the promptState value. This setting can be useful for troubleshooting the agent's behavior.

    • If you set promptState to DISABLED for the PRE_PROCESSING, KNOWLEDGE_BASE_RESPONSE_GENERATION, or POST_PROCESSING steps, the agent skips that step.

    • If you set promptState to DISABLED for the ORCHESTRATION step, the agent sends only the user input to the foundation model in orchestration. In addition, the agent returns the response as is without orchestrating calls between API operations and knowledge bases.

    • By default, the POST_PROCESSING step is DISABLED. By default, the PRE_PROCESSING, ORCHESTRATION, and KNOWLEDGE_BASE_RESPONSE_GENERATION steps are ENABLED.

  6. To use a Lambda function that you have defined to parse the raw foundation model output, perform the following steps:

    1. For each prompt template that you want to enable the Lambda function for, set parserMode to OVERRIDDEN.

    2. Specify the Amazon Resource Name (ARN) of the Lambda function in the overrideLambda field in the promptOverrideConfiguration object.
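
Putting the API steps above together, the following is a minimal Python sketch of building a promptOverrideConfiguration that overrides the orchestration prompt template. The template text, inference values, and ARNs are placeholder assumptions, not values from this guide:

```python
def build_prompt_override(template, parser_lambda_arn=None):
    """Build a promptOverrideConfiguration dict that overrides the
    ORCHESTRATION prompt template and its inference parameters."""
    config = {
        "promptConfigurations": [
            {
                "promptType": "ORCHESTRATION",
                "promptCreationMode": "OVERRIDDEN",  # use this template, not the default
                "promptState": "ENABLED",            # agent performs this step
                "basePromptTemplate": template,
                "inferenceConfiguration": {          # illustrative values
                    "maximumLength": 2048,
                    "stopSequences": ["\n\nHuman:"],
                    "temperature": 0.0,
                    "topK": 250,
                    "topP": 1.0,
                },
                # Use the parser Lambda only if one is supplied
                "parserMode": "OVERRIDDEN" if parser_lambda_arn else "DEFAULT",
            }
        ]
    }
    if parser_lambda_arn:
        # One parser Lambda is shared by all overridden prompt templates
        config["overrideLambda"] = parser_lambda_arn
    return config

# Usage sketch (requires AWS credentials; agent values are placeholders):
# import boto3
# client = boto3.client("bedrock-agent")
# client.update_agent(
#     agentId="AGENT_ID",
#     agentName="my-agent",
#     agentResourceRoleArn="arn:aws:iam::123456789012:role/AgentRole",
#     foundationModel="anthropic.claude-v2",
#     promptOverrideConfiguration=build_prompt_override("$instruction$ ..."),
# )
```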