Create a prompt using Prompt management

Note

Prompt management is in preview and is subject to change.

When you create a prompt, you have the following options:

  • Write the prompt message that serves as input for an FM to generate an output.

  • Include variables in the prompt message that can be filled in at runtime.

  • Choose a model to run the prompt or let it be filled in at runtime. If you choose a model, you can also modify the inference configurations to use. To see inference parameters for different models, see Inference parameters for foundation models.

  • Create variants of your prompt that use different messages, models, or configurations so that you can compare their outputs to decide the best variant for your use case.

To learn how to create a prompt using Prompt management, select the tab corresponding to your method of choice and follow the steps.

Console
To create a prompt
  1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  2. Select Prompt management from the left navigation pane. Then, choose Create prompt.

  3. (Optional) Change the default Name for the prompt and its Description.

  4. Choose Create prompt. Your prompt is created and you're taken to the prompt builder, where you can configure your prompt.

  5. You can continue to the following procedure to configure your prompt or return to the prompt builder later.

To configure your prompt
  1. If you're not already in the prompt builder, do the following:

    1. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

    2. Select Prompt management from the left navigation pane. Then, choose a prompt in the Prompts section.

    3. In the Prompt draft section, choose Edit in prompt builder.

  2. In the Prompt pane, enter a prompt in the Message box. You can use double curly braces to include variables (as in {{variable}}). Note the following about prompt variables:

    • Each variable that you include appears in the Test variables section.

    • You can replace these variables with actual values when testing the prompt or when configuring the prompt in a prompt flow.
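
    For example, the following message defines two variables, language and text (the names are illustrative):

        Translate the following text into {{language}}:

        {{text}}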

  3. (Optional) You can modify your prompt in the following ways:

    • In the Configurations pane, choose a Model for running inference and set the Inference parameters.

    • To compare different variants of your prompt, choose Actions and select Compare prompt variants. You can do the following on the comparison page:

      • To add a variant, choose the plus sign. You can add up to three variants.

      • After you specify the details of a variant, you can fill in any Test variables and choose Run to test the variant's output.

      • To delete a variant, choose the three dots and select Remove from compare.

      • To replace the working draft with a variant and leave compare mode, choose Save as draft. All other variants are deleted.

      • To leave the comparison mode, choose Exit compare mode.

  4. When you're finished configuring the prompt, save your changes as a draft.

API

To create a prompt, send a CreatePrompt request to an Agents for Amazon Bedrock build-time endpoint. See the CreatePrompt API reference for the request and response formats and field details.

The following fields are required:

Field            Brief description
name             A name for the prompt.
variants         A list of different configurations for the prompt (see below).
defaultVariant   The name of the default variant.

Each variant in the variants list is a PromptVariant object of the following general structure:

{ "name": "string", "modelId": "string", "templateType": "TEXT", "templateConfiguration": { "text": { "text": "string", "inputVariables": [ { "name": "string" }, ... ] } }, "inferenceConfiguration": { "text": { "maxTokens": int, "stopSequences": ["string", ...], "temperature": float, "topK": int, "topP": float } } }

Fill in the fields as follows:

  • name – Enter a name for the variant.

  • modelId – Enter the model ID to run inference with.

  • templateType – Enter TEXT (currently, only text prompts are supported).

  • templateConfiguration – The text field maps to a TextPromptTemplateConfiguration. Fill out the following fields in it:

    • text – The message for the prompt. Enclose variables in double curly braces: {{variable}}.

    • inputVariables – Add an object to the list for each variable in your prompt message, and enter the variable's name (without the curly braces) in the name field.

  • inferenceConfiguration – The text field maps to a PromptModelInferenceConfiguration. To learn more about inference parameters, see Inference parameters.
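
For example, a complete variant for a translation prompt might look like the following. This is a minimal sketch in Python; the variant name, model ID, and inference parameter values are illustrative placeholders:

# A sample PromptVariant for the message shown earlier. The message
# defines two variables, so inputVariables lists an object for each.
variant = {
    "name": "variantOne",                                  # illustrative name
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",   # placeholder model ID
    "templateType": "TEXT",
    "templateConfiguration": {
        "text": {
            "text": "Translate the following text into {{language}}:\n\n{{text}}",
            "inputVariables": [
                {"name": "language"},
                {"name": "text"},
            ],
        }
    },
    "inferenceConfiguration": {
        "text": {
            "maxTokens": 512,        # illustrative values
            "temperature": 0.5,
            "topP": 0.9,
        }
    },
}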

The following fields are optional:

Field        Use case
description  To provide a description for the prompt.
clientToken  To prevent duplication of the request.

The request creates a DRAFT version of the prompt. The response returns an ID and ARN that you can use as a prompt identifier in other prompt-related API requests.
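
To put the pieces together, the following is a minimal sketch that sends the request with the AWS SDK for Python (Boto3), assuming the bedrock-agent client and reusing the variant dictionary from the earlier example; the prompt name, description, and Region are placeholders:

import boto3

# The bedrock-agent client covers Agents for Amazon Bedrock
# build-time operations; the Region is a placeholder.
client = boto3.client("bedrock-agent", region_name="us-east-1")

response = client.create_prompt(
    name="TranslationPrompt",                               # required
    description="Translates text into a target language.",  # optional
    defaultVariant="variantOne",                            # required: name of the default variant
    variants=[variant],                                     # required: the variant dict from the earlier example
)

# A DRAFT version is created; keep the ID and ARN to identify
# the prompt in other prompt-related API requests.
prompt_id = response["id"]
prompt_arn = response["arn"]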