Playgrounds - Amazon Bedrock

Playgrounds

Important

Before you can use any of the foundation models, you must request access to that model through the Amazon Bedrock console. You can manage model access only through the console. If you try to use the model (with the API or within the console) before you have requested access to it, you'll receive an error message. For more information, see Manage access to Amazon Bedrock foundation models.

The Amazon Bedrock playgrounds provide a console environment where you can experiment with running inference on different models and with different configurations, before deciding to use them in an application. In the console, access the playgrounds by choosing Playgrounds in the left navigation pane. You can also navigate directly to a playground by choosing a model from a model details page or the examples page.

There are playgrounds for text, chat, and image models.

Within each playground you can enter prompts and experiment with inference parameters. Prompts are usually one or more sentences of text that set up a scenario, question, or task for a model. For information about creating prompts, see Prompt engineering guidelines.

Inference parameters influence the response generated by a model, such as the randomness of generated text. When you load a model into a playground, the playground configures the model with its default inference settings. You can change and reset the settings as you experiment with the model. Each model has its own set of inference parameters. For more information, see Inference parameters for foundation models.
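Outside the console, the same inference parameters can be passed programmatically. The sketch below uses the Converse API from the AWS SDK for Python (boto3); the model ID and parameter values are placeholders, and each model's supported ranges should be checked against its inference-parameter reference.

```python
def build_inference_config(temperature=0.5, top_p=0.9, max_tokens=512):
    """Build a Converse API inferenceConfig that overrides the model's
    default inference settings (values here are illustrative)."""
    return {
        "temperature": temperature,
        "topP": top_p,
        "maxTokens": max_tokens,
    }

def run_prompt(model_id, prompt, **params):
    """Send a single-turn prompt with custom inference parameters.
    Requires AWS credentials and model access (see the Important note above)."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig=build_inference_config(**params),
    )
    return response["output"]["message"]["content"][0]["text"]
```

As in the playground, omitting a parameter leaves the model's default in effect.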

If supported by a model, such as Anthropic Claude 3 Sonnet, you can specify a system prompt. A system prompt is a type of prompt that provides instructions or context to the model about the task it should perform, or the persona it should adopt during the conversation. For example, you can specify a system prompt that tells the model to generate code in its responses, or to adopt the persona of a school teacher when generating its responses.
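A system prompt maps to the `system` field of a Converse API request. The helper below is a sketch; the model ID is a placeholder, and whether a given model honors system prompts varies by model.

```python
def build_converse_request(model_id, prompt, system_prompt=None):
    """Assemble the arguments for a Converse API call, optionally
    attaching a system prompt that sets the task or persona."""
    request = {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }
    if system_prompt:
        # The system prompt applies to the whole conversation, not one turn.
        request["system"] = [{"text": system_prompt}]
    return request

def run(request):
    """Execute the request. Requires AWS credentials and model access."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime")
    return client.converse(**request)
```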

When you submit a prompt, the model responds with its generated output.

If a chat or text model supports streaming, the default is to stream the responses from a model. You can turn off streaming, if desired.
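Programmatically, streaming corresponds to the ConverseStream API, which yields the response as a sequence of events. The event shapes below follow the ConverseStream response format as a sketch; the delta-collection helper is pure and illustrates how the streamed text is reassembled.

```python
def collect_stream_text(events):
    """Concatenate the text deltas from a ConverseStream event sequence.
    Non-text events (messageStart, messageStop, metadata) are skipped."""
    parts = []
    for event in events:
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            parts.append(delta["text"])
    return "".join(parts)

def stream_prompt(model_id, prompt):
    """Stream a response token by token. Requires AWS credentials and
    model access; not all models support streaming."""
    import boto3  # AWS SDK for Python
    client = boto3.client("bedrock-runtime")
    response = client.converse_stream(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    # response["stream"] yields events as the model generates tokens.
    return collect_stream_text(response["stream"])
```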

Chat playground

The chat playground lets you experiment with the chat models that Amazon Bedrock provides. When you submit a prompt to a model, you have the following options:

  • Modify Configurations to influence the response.

  • Include an image (if the model supports multimodal prompts) or document and submit a prompt to the model related to the document.

The response is returned alongside model metrics.

Configuration changes

The configuration changes you can make vary between models, but typically include inference parameter changes such as Temperature and Top K. For more information, see Inference parameters. To see the inference parameters for a specific model, see Inference parameters for foundation models.

You can set one or more stop sequences that, if generated by the model, signal that the model must stop generating more output.
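Stop sequences are applied by the service during generation, but their effect can be illustrated locally. The function below is a sketch that mimics the behavior: output is truncated at the first occurrence of any stop sequence.

```python
def apply_stop_sequences(text, stop_sequences):
    """Illustrative only: truncate generated text at the earliest
    occurrence of any stop sequence, mimicking what the model service
    does server-side when a stop sequence is generated."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

In the Converse API, the same setting is the `stopSequences` list inside `inferenceConfig`.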

Model metrics

The chat playground generates the following metrics for the prompts that it processes.

  • Latency — The time that the model takes to generate each token (a word or part of a word) in the response.

  • Input token count — The number of tokens that are fed into the model as input during inference.

  • Output token count — The number of tokens generated in response to a prompt. Longer, more conversational, responses require more tokens.

  • Cost — The cost of processing the input and generating output tokens.
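When you call a model through the Converse API, the response carries the raw material for these metrics in its `usage` and `metrics` fields. The helper below is a sketch; the per-1,000-token prices are hypothetical placeholders, and actual rates should be taken from the Amazon Bedrock pricing page.

```python
def summarize_metrics(response, input_price_per_1k, output_price_per_1k):
    """Extract token counts and latency from a Converse API response
    and estimate cost. Prices are caller-supplied placeholders; check
    the Amazon Bedrock pricing page for real per-model rates."""
    usage = response["usage"]
    cost = (usage["inputTokens"] * input_price_per_1k
            + usage["outputTokens"] * output_price_per_1k) / 1000
    return {
        "latency_ms": response["metrics"]["latencyMs"],
        "input_tokens": usage["inputTokens"],
        "output_tokens": usage["outputTokens"],
        "estimated_cost_usd": round(cost, 6),
    }
```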

You can also define criteria that you want the model response to match.

By turning on Compare mode, you can compare the responses to a single prompt from up to three models. This helps you understand the comparative performance of each model without having to switch between models. For more information, see Use a playground.

Text playground

The text playground lets you experiment with the text models that Amazon Bedrock provides. You can submit text to a model and the text playground shows the text that the model generates from the prompt.

Image playground

The image playground lets you experiment with the image models that Amazon Bedrock provides. You can submit a text prompt to a model and the image playground shows the image that the model generates for the prompt.

Along with setting inference parameters, you can make the following additional configuration changes (the available options differ by model):

Stable Diffusion XL
  • Action – Choose the action to perform: Generate image, Generate variations of the image, or Edit the image.

    If you edit a reference image, the model needs a segmentation mask that covers the area of the image that you want the model to edit. Create the segmentation mask by using the image playground to draw a rectangle on the reference image.

  • Negative prompt – Describe what not to include in the image. For example, cartoon or violence.

  • Reference image – The image on which to generate the response or that you want the model to edit.

  • Response image – Output settings for the generated image, such as quality, orientation, size, and the number of images to generate.

  • Advanced configurations

    • Prompt strength – Determines how closely the final image follows the prompt.

    • Generation step – Determines how many times the image is sampled. More steps can produce a more accurate result.

    • Seed – Set a fixed seed to generate similar results across runs. Refer to the documentation links below for details about other inference parameters.

Titan Image Generator G1
  • Action – Choose the action to perform: Generate image, Generate variations of the image, Remove object, or Replace background of the image.

  • Negative prompt – Items or concepts that you don't want the model to generate, such as cartoon or violence.

  • Reference image – The image on which to generate the response or that you want the model to edit.

  • Response image – Output settings for the generated image, such as quality, orientation, size, and the number of images to generate.

  • Mask tools – Choose from either the selector or the prompt tool to define your mask.

  • Advanced configurations

    • Prompt strength – Determines how closely the final image follows the prompt.

    • Seed – Set a fixed seed to generate similar results across runs. Refer to the documentation links below for details about other inference parameters.
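The same settings appear in the request body when you call Titan Image Generator G1 through the InvokeModel API. The builder below is a sketch of a text-to-image request; field names should be verified against the model's inference-parameter reference, and cfgScale is the API-side analogue of the console's Prompt strength.

```python
import json

def build_titan_image_request(prompt, negative_prompt=None, num_images=1,
                              height=1024, width=1024, cfg_scale=8.0, seed=0):
    """Build an InvokeModel request body for a Titan Image Generator G1
    text-to-image task (field names per the model's parameter reference;
    verify against the current documentation)."""
    params = {"text": prompt}
    if negative_prompt:
        # Mirrors the Negative prompt field in the image playground.
        params["negativeText"] = negative_prompt
    return json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": params,
        "imageGenerationConfig": {
            "numberOfImages": num_images,
            "height": height,
            "width": width,
            "cfgScale": cfg_scale,  # analogous to Prompt strength
            "seed": seed,           # a fixed seed makes results reproducible
        },
    })
```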

Use a playground

The following procedure shows how to submit a prompt to a playground and view the response. In each playground, you can configure the inference parameters for the model. In the chat playground, you can view metrics, and optionally compare the output of up to three models. In the image playground you can make advanced configuration changes, which also vary by model.

To use a playground
  1. If you haven't already, request access to the models that you want to use. For more information, see Manage access to Amazon Bedrock foundation models.

  2. Open the Amazon Bedrock console.

  3. From the navigation pane, under Playgrounds, choose Chat, Text, or Image.

  4. Choose Select model to open the Select model dialog box.

    1. In Category, select from the available providers or custom models.

    2. In Model, select a model.

    3. In Throughput, select the throughput (on-demand or Provisioned Throughput) that you want the model to use. If you are using a custom model, you must have set up Provisioned Throughput for the model beforehand. For more information, see Provisioned Throughput for Amazon Bedrock.

    4. Choose Apply.

  5. Optionally, do the following to influence the model response:

    1. In Configurations choose the inference parameters that you want to use. For more information, see Inference parameters for foundation models. For information about configuration changes you can make in the image playground, see Image playground.

    2. If the model supports system prompts, you can enter a system prompt in the System prompt text box.

    3. If you're using the chat playground, you can select Choose files or drag a file onto the prompt text field to include the following types of files to complement your prompt:

      • Documents – Add documents to complement the prompt. For a list of supported file types, see the format field in DocumentBlock.

        Warning

        Document names are vulnerable to prompt injections, because the model might inadvertently interpret them as instructions. Therefore, we recommend that you specify a neutral name.

      • Images – Add images to complement the prompt, if the model supports multimodal prompts. For a list of supported file types, see the format field in the ImageBlock.

      Note

      The following restrictions apply when you add files to the chat playground:

      • You can include up to 20 images. Each image's size, height, and width must be no more than 3.75 MB, 8,000 px, and 8,000 px, respectively.

      • You can include up to five documents. Each document's size must be no more than 4.5 MB.
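In the Converse API, an attached document travels as a DocumentBlock inside the user message. The builder below is a sketch; note that the document name defaults to a neutral value, following the prompt-injection warning above.

```python
def build_document_message(prompt, doc_bytes, doc_format="pdf",
                           doc_name="document-1"):
    """Build a Converse API user message that pairs a prompt with a
    document (DocumentBlock). A neutral doc_name is used by default,
    because the model might interpret a filename as instructions."""
    return {
        "role": "user",
        "content": [
            {"document": {
                "format": doc_format,       # must be a supported format
                "name": doc_name,           # keep this neutral
                "source": {"bytes": doc_bytes},
            }},
            {"text": prompt},               # e.g. "Summarize this document"
        ],
    }
```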

  6. Enter your prompt into the text field. A prompt is a natural language phrase or command, such as Tell me about the best restaurants to visit in Seattle. If you include an image or document, you can refer to it in the prompt, such as Summarize this document for me or Tell me what's in this image. For more information, see Prompt engineering guidelines.

    Note

    Amazon Bedrock doesn't store any text, images, or documents that you provide. The data is only used to generate the response.

  7. To run the prompt, choose Run.

    Note

    If the response violates the content moderation policy, Amazon Bedrock doesn't display it. If you have turned on streaming, Amazon Bedrock clears the entire response if it generates content that violates the policy. For more details, navigate to the Amazon Bedrock console, select Providers, and read the text under the Content limitations section.

    For information about prompt engineering, see Prompt engineering guidelines.

  8. If you're using the chat playground, view the model metrics and compare models by doing the following.

    1. In the Model metrics section, view the metrics for each model.

    2. (Optional) Define criteria that you want to match by doing the following:

      1. Choose Define metric criteria.

      2. For the metrics you want to use, choose the condition and value. You can set the following conditions:

        • less than – The metric value is less than the specified value.

        • greater than – The metric value is more than the specified value.

      3. Choose Apply to apply your criteria.

      4. View which criteria are met. If all criteria are met, the Overall summary is Meets all criteria. If one or more criteria are not met, the Overall summary is n criteria unmet, and the unmet criteria are highlighted in red.

    3. (Optional) Add models to compare by doing the following:

      1. Turn on Compare mode.

      2. Choose Select model to select a model.

      3. In the dialog box, choose a provider, model, and throughput.

      4. Choose Apply.

      5. (Optional) Choose the menu icon next to each model to configure inference parameters for that model. For more information, see Inference parameters for foundation models.

      6. Choose the + icon on the right of the Chat playground section to add a second or third model to compare.

      7. Repeat steps a-c to choose the models that you want to compare.

      8. Enter your prompt into the text field and choose Run.
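The metric-criteria check from step 8 can be sketched as a simple pass/fail evaluation. The criterion tuples and metric names below are illustrative, not part of any Bedrock API; only the two conditions the playground offers (less than, greater than) are modeled.

```python
def evaluate_criteria(metrics, criteria):
    """Illustrative version of the playground's metric-criteria check.
    metrics:  dict of metric name -> observed value
    criteria: list of (metric_name, condition, value) where condition
              is "less than" or "greater than".
    Returns the Overall summary string the playground would show."""
    unmet = []
    for name, condition, value in criteria:
        actual = metrics[name]
        met = actual < value if condition == "less than" else actual > value
        if not met:
            unmet.append((name, condition, value))
    if not unmet:
        return "Meets all criteria"
    return f"{len(unmet)} criteria unmet"
```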