Create a prompt using Prompt management

When you create a prompt, you have the following options:

  • Write the prompt message that serves as input for an FM to generate an output.

  • Include variables in double curly braces (as in {{variable}}) in the prompt message. Their values are filled in when you call the prompt; see the example after this list.

  • Choose a model with which to invoke the prompt or, if you plan to use the prompt with an agent, leave it unspecified. If you choose a model, you can also modify the inference configurations to use. To see inference parameters for different models, see Inference request parameters and response fields for foundation models.
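
For example, a hypothetical prompt message with two variables (the names tone and review are illustrative) might read:

    Summarize the following customer review in a {{tone}} tone:

    {{review}}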

If the model that you choose for the prompt supports the Converse API (for more information, see Carry out a conversation with the Converse API operations), you can include the following when constructing the prompt (illustrated in the sketch after this list):

  • A system prompt to provide instructions or context to the model.

  • Previous prompts (user messages) and model responses (assistant messages) as conversational history for the model to consider when generating a response for the final user message.

  • (If supported by the model) Tools for the model to use when generating the response.
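
For illustration only, here is a rough Python sketch of how those pieces look in the Converse API's message format; the system text, conversation turns, and tool are hypothetical:

    # A sketch (hypothetical content) of the pieces a Converse-compatible
    # prompt can carry: a system prompt, prior user/assistant turns, and a tool.
    system = [{"text": "You are a concise technical assistant."}]

    messages = [
        {"role": "user", "content": [{"text": "What does Prompt management do?"}]},
        {"role": "assistant", "content": [{"text": "It stores reusable prompts."}]},
        # The final user message is the one the model answers.
        {"role": "user", "content": [{"text": "Summarize that in one sentence."}]},
    ]

    tool_config = {
        "tools": [{
            "toolSpec": {
                "name": "lookup_docs",  # hypothetical tool
                "description": "Looks up documentation by topic.",
                "inputSchema": {"json": {
                    "type": "object",
                    "properties": {"topic": {"type": "string"}},
                    "required": ["topic"],
                }},
            }
        }]
    }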

To learn how to create a prompt using Prompt management, choose the tab for your preferred method, and then follow the steps:

Console
To create a prompt
  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. Select Prompt management from the left navigation pane. Then, choose Create prompt.

  3. Provide a name for the prompt and an optional description.

  4. To encrypt your prompt with a customer managed key, select Customize encryption settings (advanced) in the KMS key selection section. If you omit this field, your prompt will be encrypted with an AWS managed key. For more information, see AWS KMS keys.

  5. Choose Create prompt. Your prompt is created and the Prompt builder opens, where you can configure your prompt.

  6. You can continue to the following procedure to configure your prompt or return to the prompt builder later.

To configure your prompt
  1. If you're not already in the prompt builder, do the following:

    1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

    2. Select Prompt management from the left navigation pane. Then, choose a prompt in the Prompts section.

    3. In the Prompt draft section, choose Edit in prompt builder.

  2. Use the Prompt pane to construct the prompt. Enter the prompt in the last User message box. If the model supports the Converse API or the Anthropic Claude Messages API, you can also include a System prompt and previous User messages and Assistant messages for context.

    When you write a prompt, you can include variables in double curly braces (as in {{variable}}). Each variable that you include appears in the Test variables section.

  3. (Optional) You can modify your prompt in the following ways:

    • In the Configurations pane, do the following:

      1. Choose a Generative AI resource for running inference.

        Note

        If you choose an agent, you can only test the prompt in the console. To learn how to test a prompt with an agent in the API, see Test a prompt using Prompt management.

      2. Set the Inference parameters.

      3. If the model that you choose supports tools, choose Configure tools to use tools with the prompt.

    • To compare different variants of your prompt, choose Actions and select Compare prompt variants. You can do the following on the comparison page:

      • To add a variant, choose the plus sign. You can add up to three variants.

      • After you specify the details of a variant, you can specify any Test variables and choose Run to test the output of the variant.

      • To delete a variant, choose the three dots and select Remove from compare.

      • To replace the working draft and leave the comparison mode, choose Save as draft. All the other variants will be deleted.

      • To leave the comparison mode, choose Exit compare mode.

  4. When you're finished configuring the prompt, choose Save draft to save your changes. You can also create a version of the prompt to take a snapshot of it.

API

To create a prompt, send a CreatePrompt request with an Agents for Amazon Bedrock build-time endpoint.

The following fields are required:

Field            Brief description
name             A name for the prompt.
variants         A list of different configurations for the prompt (see below).
defaultVariant   The name of the default variant.

Each variant in the variants list is a PromptVariant object of the following general structure:

{ "name": "string", # modelId or genAiResource (see below) "templateType": "TEXT", "templateConfiguration": # see below, "inferenceConfiguration": { "text": { "maxTokens": int, "stopSequences": ["string", ...], "temperature": float, "topP": float } }, "additionalModelRequestFields": { "key": "value", ... }, "metadata": [ { "key": "string", "value": "string" }, ... ] }

Fill in the fields as follows:

  • name – Enter a name for the variant.

  • Include one of these fields, depending on the model invocation resource to use:

    • modelId – To specify a foundation model or inference profile to use with the prompt, enter its ARN or ID.

    • genAiResource – To specify an agent, enter its ID or ARN. The value of the genAiResource is a JSON object of the following format:

      { "genAiResource": { "agent": { "agentIdentifier": "string" } }
      Note

      If you include the genAiResource field, you can only test the prompt in the console. To test a prompt with an agent in the API, you must enter the text of the prompt directly into the inputText field of the InvokeAgent request.

  • templateType – Enter TEXT or CHAT. CHAT is only compatible with models that support the Converse API.

  • templateConfiguration – The value depends on the template type that you specified. For TEXT, provide a text template configuration containing the prompt text and its input variables. For CHAT, provide a chat template configuration containing the messages, system prompts, tool configuration, and input variables for the conversation.

  • inferenceConfiguration – The text field maps to a PromptModelInferenceConfiguration. This field contains inference parameters that are common to all models. To learn more about inference parameters, see Influence response generation with inference parameters.

  • additionalModelRequestFields – Use this field to specify inference parameters that are specific to the model that you're running inference with. To learn more about model-specific inference parameters, see Inference request parameters and response fields for foundation models.

  • metadata – An array of key-value pairs to associate with the prompt variant.
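
As the note for genAiResource above explains, a prompt whose variant points at an agent can't be run through the prompt APIs. The following is a minimal sketch of the InvokeAgent workaround, assuming boto3 and placeholder agent identifiers:

    import uuid

    import boto3

    # Runtime client for invoking agents (separate from the build-time client).
    runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    response = runtime.invoke_agent(
        agentId="AGENT_ID",             # placeholder
        agentAliasId="AGENT_ALIAS_ID",  # placeholder
        sessionId=str(uuid.uuid4()),
        # Pass the (rendered) prompt text directly as the agent input.
        inputText="Summarize the following customer review in a friendly tone: ...",
    )

    # The completion arrives as an event stream of text chunks.
    for event in response["completion"]:
        if "chunk" in event:
            print(event["chunk"]["bytes"].decode("utf-8"), end="")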

The following fields are optional:

Field         Use case
description   To provide a description for the prompt.
clientToken   To ensure the API request completes only once. For more information, see Ensuring idempotency.
tags          To associate tags with the prompt. For more information, see Tagging Amazon Bedrock resources.

The request creates a DRAFT version of the prompt. The response returns an ID and ARN that you can use as a prompt identifier in other prompt-related API requests.
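
Putting the pieces together, the following is a minimal boto3 sketch of a CreatePrompt request; the prompt name, variable names, inference values, and model ID are placeholders, so substitute values that fit your account and Region:

    import boto3

    # Prompt management is part of the Agents for Amazon Bedrock build-time
    # API, which boto3 exposes as the "bedrock-agent" client.
    client = boto3.client("bedrock-agent", region_name="us-east-1")

    response = client.create_prompt(
        name="review-summarizer",  # placeholder name
        description="Summarizes a customer review in a given tone.",
        defaultVariant="variantOne",
        variants=[
            {
                "name": "variantOne",
                "templateType": "TEXT",
                "templateConfiguration": {
                    "text": {
                        "text": "Summarize the following customer review "
                                "in a {{tone}} tone:\n\n{{review}}",
                        "inputVariables": [{"name": "tone"}, {"name": "review"}],
                    }
                },
                "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
                "inferenceConfiguration": {
                    "text": {"temperature": 0.2, "topP": 0.9, "maxTokens": 512}
                },
            }
        ],
    )

    # The DRAFT version's identifiers for use in other prompt-related requests.
    print(response["id"], response["arn"], response["version"])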