Generate responses in the console using playgrounds

The Amazon Bedrock playgrounds are tools in the AWS Management Console that provide a visual interface for experimenting with running inference on different models and with different configurations. You can use the playgrounds to test different models and values before you integrate them into your application.

Running a prompt in a playground is equivalent to making an InvokeModel, InvokeModelWithResponseStream, Converse, or ConverseStream request in the API.
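For example, a single turn in the chat/text playground corresponds to a Converse call like the following minimal sketch, written with the AWS SDK for Python (Boto3). The Region and model ID are placeholders; substitute a model that you have access to.

```python
import boto3

# Create a Bedrock Runtime client. The Region is an example; use your own.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# One playground turn corresponds to one Converse request.
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "What is Amazon Bedrock?"}]}],
)

print(response["output"]["message"]["content"][0]["text"])
```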

Amazon Bedrock offers the following playgrounds for you to experiment with:

  • Chat/text – Submit text prompts and generate responses. You can select one of the following modes:

    • Chat – Submit a text prompt and include any images or documents to supplement the prompt. Subsequent prompts that you submit will include your previous prompts as context, such that the sequence of prompts and responses resembles a conversation.

    • Single prompt – Submit a single text prompt and generate a response to it.

  • Image – Submit a text prompt to generate an image. You can also submit an image prompt and specify whether to edit it or to generate variations of it.
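Under the hood, an image playground run is an InvokeModel request. The following is a minimal sketch, assuming the Amazon Titan Image Generator request format; the prompt, size values, and output file name are illustrative.

```python
import base64
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# The request body follows the Amazon Titan Image Generator schema; the
# prompt text and image dimensions are illustrative.
body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "A lighthouse on a cliff at dawn, watercolor"},
    "imageGenerationConfig": {"numberOfImages": 1, "height": 512, "width": 512},
})

response = client.invoke_model(
    modelId="amazon.titan-image-generator-v1",  # example image model ID
    body=body,
)

# The model returns images as base64-encoded strings.
payload = json.loads(response["body"].read())
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```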

The following procedure describes how to submit a prompt in the playground, the options that you can adjust, and the actions that you can take after the model generates a response.

To use a playground
  1. If you haven't already, request access to the models that you want to use. For more information, see Access Amazon Bedrock foundation models.

  2. Sign in to the AWS Management Console using an IAM role with Amazon Bedrock permissions, and open the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/.

  3. From the navigation pane, under Playgrounds, choose Chat/text or Image.

  4. If you're in the Chat/text playground, select a Mode.

  5. Choose Select model and select a provider, model, and throughput to use. For more information about throughput types, see Increase throughput for resiliency and processing power.

  6. Submit the following information to generate a response:

    • Prompt – One or more sentences of text that set up a scenario, question, or task for a model. For information about creating prompts, see Prompt engineering concepts.

      If you're using the chat mode of the chat/text playground, you can select Choose files or drag a file onto the prompt text field to attach files that complement your prompt. You can refer to a file in the prompt text, such as Summarize this document for me or Tell me what's in this image. The first sketch after this step shows how an attached document travels through the API. You can include the following types of files:

      • Documents – Add documents to complement the prompt. For a list of supported file types, see the format field in DocumentBlock.

        Warning

        Document names are vulnerable to prompt injections, because the model might inadvertently interpret them as instructions. Therefore, we recommend that you specify a neutral name.

      • Images – Add images to complement the prompt, if the model supports multimodal prompts. For a list of supported file types, see the format field in ImageBlock.

      Note

      The following restrictions apply when you add files to a prompt:

      • You can include up to 20 images. Each image must be no larger than 3.75 MB and no more than 8,000 px in height or width.

      • You can include up to five documents. Each document's size must be no more than 4.5 MB.

    • Configurations – Settings that you adjust to modify the model response, such as randomness and diversity controls (for example, temperature and top P) and limits on the response length. The available configurations depend on the model that you selected. The second sketch after this step shows how common configurations map to the API.
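The first sketch below shows how an attached document travels through the API: it becomes a document content block alongside the prompt text in a Converse request. The file name and model ID are examples; note the neutral document name, per the warning above.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

with open("quarterly-report.pdf", "rb") as f:  # hypothetical local file
    doc_bytes = f.read()

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [
            {"text": "Summarize this document for me."},
            {
                "document": {
                    "name": "Document1",  # neutral name (see the warning above)
                    "format": "pdf",
                    "source": {"bytes": doc_bytes},
                },
            },
        ],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```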
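The second sketch shows how the configuration panel maps onto the Converse API: common settings go in the inferenceConfig field (model-specific settings go in additionalModelRequestFields, not shown). The values are illustrative.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Write a haiku about rivers."}]}],
    inferenceConfig={
        "temperature": 0.7,       # randomness: higher values vary the wording more
        "topP": 0.9,              # diversity: sample from the top 90% of probability mass
        "maxTokens": 512,         # upper bound on the length of the response
        "stopSequences": ["END"], # hypothetical stop sequence; generation halts here
    },
)

print(response["output"]["message"]["content"][0]["text"])
```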

  7. (Optional) If a model supports streaming, the default behavior in the chat/text playground is to stream the responses. You can turn off streaming by choosing the options icon (⋮) and modifying the Streaming preference option.
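Streaming in the playground corresponds to ConverseStream in the API. The following is a minimal sketch that prints text chunks as they arrive; the model ID is an example.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# ConverseStream returns the response incrementally, as the playground does
# when streaming is on.
stream = client.converse_stream(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Explain tokens briefly."}]}],
)

for event in stream["stream"]:
    # Each contentBlockDelta event carries the next chunk of generated text.
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"].get("text", ""), end="", flush=True)
print()
```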

  8. (Optional) In the chat mode of the chat/text playground, you can compare responses from different models by doing the following:

    1. Turn on Compare mode.

    2. Choose Select model and select a provider, model, and throughput to use.

    3. Choose the configurations icon (sliders) to modify the configurations to use.

    4. To add more models to compare, choose the + icon on the right, select a model, and modify the configurations as necessary.
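Compare mode has no single API equivalent; a rough approximation is to send the identical prompt to each model and inspect the responses side by side, as in this sketch (the model IDs are examples).

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = [{"role": "user", "content": [{"text": "Name three uses for graphite."}]}]

# Send the identical prompt to each model and print the responses together.
# Use model IDs that you have access to.
for model_id in [
    "anthropic.claude-3-haiku-20240307-v1:0",
    "meta.llama3-8b-instruct-v1:0",
]:
    result = client.converse(modelId=model_id, messages=prompt)
    print(f"--- {model_id} ---")
    print(result["output"]["message"]["content"][0]["text"])
```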

  9. To run the prompt, choose Run. Amazon Bedrock doesn't store any text, images, or documents that you provide. The data is only used to generate the response.

    Note

    If the response violates the content moderation policy, Amazon Bedrock doesn't display it. If you have turned on streaming and the model generates content that violates the policy, Amazon Bedrock clears the entire response. For more details, navigate to the Amazon Bedrock console, select Providers, and read the text under the Content limitations section.

  10. The model returns the response. If you're using the chat mode of the chat/text playground, you can submit a prompt to reply to the response and generate another response.

  11. After generating a response, you have the following options:

    • To export the response as a JSON file, choose the options icon (⋮) and select Export as JSON.

    • To view the API request that you made, choose the options icon (⋮) and select View API request.

    • In the chat mode of the chat/text playground, you can view metrics in the Model metrics section. The following model metrics are available:

      • Latency – The time it takes for the model to generate each token (word) in a sequence.

      • Input token count – The number of tokens that are fed into the model as input during inference.

      • Output token count – The number of tokens generated in response to a prompt. Longer, more conversational responses require more tokens.

      • Cost – The cost of processing the input tokens and generating the output tokens.

      To set criteria that you want the response to meet, choose Define metric criteria and define the conditions. After you apply the criteria, the Model metrics section shows how many of the criteria the response met, and which ones.

      If the criteria aren't met, you can choose a different model, rewrite the prompt, or modify the configurations, and then rerun the prompt. A sketch of retrieving comparable metrics through the API follows this procedure.
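Outside the console, roughly comparable numbers are returned in the Converse response itself; cost is not returned by the API and must be derived from the token counts and the per-token pricing of the model you chose. A minimal sketch, with an example model ID:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": "Define inference."}]}],
)

# Token counts and latency are returned alongside the generated message.
usage = response["usage"]
print("Input tokens: ", usage["inputTokens"])
print("Output tokens:", usage["outputTokens"])
print("Latency (ms): ", response["metrics"]["latencyMs"])
```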