Using foundation models - Amazon Bedrock

You must request access to a foundation model (FM) before you can use it. After you're granted access, you can use FMs in the following ways.

  • Run inference by sending prompts to a model and generating responses. The playgrounds offer a user-friendly interface in the AWS Management Console for generating text, images, or chat responses. See the Output modality column to determine which models you can use in each playground.

    Note

    The console playgrounds don't support running inference on embeddings models. Use the API to run inference on embeddings models.

  • Evaluate models to compare outputs and determine the best model for your use case.

  • Set up a knowledge base with the help of an embeddings model. Then use a text model to generate responses to queries.

  • Create an agent that uses a model to run inference on prompts and carry out orchestration.

  • Customize a model by providing training and validation data to adjust its parameters for your use case. To use a customized model, you must purchase Provisioned Throughput for it.

  • Purchase Provisioned Throughput for a model to increase the throughput available to it.
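The note above points out that embeddings models can only be invoked through the API, not the console playgrounds. As a minimal sketch of what that looks like with the AWS SDK for Python (boto3): the region and model ID below are placeholders, so substitute ones enabled in your account.

```python
import json


def build_embedding_request(text):
    """Build the JSON request body for an Amazon Titan text embeddings model."""
    return json.dumps({"inputText": text})


def parse_embedding_response(raw_body):
    """Extract the embedding vector from a Titan embeddings response body."""
    return json.loads(raw_body)["embedding"]


def embed(client, model_id, text):
    """Run inference on an embeddings model through the Bedrock Runtime API."""
    response = client.invoke_model(
        modelId=model_id,
        body=build_embedding_request(text),
        contentType="application/json",
        accept="application/json",
    )
    # The response body is a streaming object; read it before parsing.
    return parse_embedding_response(response["body"].read())


if __name__ == "__main__":
    import boto3  # requires AWS credentials and model access

    # Placeholder region and model ID; check the base model IDs chart.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    vector = embed(client, "amazon.titan-embed-text-v2:0", "What is a foundation model?")
    print(len(vector))
```

The request and response shapes shown here follow the Titan embeddings format; other embeddings models use different body schemas, so check the model's inference parameters documentation.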

To use an FM through the API, you first need to determine the appropriate model ID.

  • Use a base model: Look up the ID in the base model IDs chart.

  • Purchase Provisioned Throughput for a base model: Look up the ID in the model IDs for Provisioned Throughput chart and use it as the modelId in the CreateProvisionedModelThroughput request.

  • Purchase Provisioned Throughput for a custom model: Use the name of the custom model or its ARN as the modelId in the CreateProvisionedModelThroughput request.

  • Use a provisioned model: When you create a Provisioned Throughput, the response returns a provisionedModelArn. Use this ARN as the model ID.

  • Use a custom model: Purchase Provisioned Throughput for the custom model and use the returned provisionedModelArn as the model ID.
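The cases above reduce to one rule: pass a base model ID for on-demand use, or pass the provisionedModelArn returned by CreateProvisionedModelThroughput for provisioned (including custom) models. A hedged boto3 sketch, where the model names, units, and region are placeholder values:

```python
def model_id_for_inference(base_model_id=None, provisioned_model_arn=None):
    """Choose the modelId for an inference call: a provisioned model's ARN
    stands in for the base model ID."""
    if provisioned_model_arn is not None:
        return provisioned_model_arn
    if base_model_id is not None:
        return base_model_id
    raise ValueError("need a base model ID or a provisioned model ARN")


if __name__ == "__main__":
    import boto3  # requires AWS credentials and model access

    bedrock = boto3.client("bedrock", region_name="us-east-1")
    # For a custom model, its name or ARN goes in modelId (placeholders here).
    pt = bedrock.create_provisioned_model_throughput(
        modelUnits=1,
        provisionedModelName="my-provisioned-model",
        modelId="my-custom-model",
    )
    # The returned ARN is the model ID for inference calls.
    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
    runtime.converse(
        modelId=model_id_for_inference(provisioned_model_arn=pt["provisionedModelArn"]),
        messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    )
```

Note that purchasing Provisioned Throughput incurs charges, so treat the `__main__` section as illustrative rather than something to run as-is.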