Anthropic Claude Messages API - Amazon Bedrock

This section provides inference parameters and code examples for using the Anthropic Claude Messages API.

Anthropic Claude Messages API overview

You can use the Messages API to create chatbots or virtual assistant applications. The API manages the conversational exchanges between a user and an Anthropic Claude model (assistant).

Anthropic trains Claude models to operate on alternating user and assistant conversational turns. When creating a new message, you specify the prior conversational turns with the messages parameter. The model then generates the next Message in the conversation.

Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages. The first message must always use the user role.

If you are using the technique of prefilling the response from Claude (filling in the beginning of Claude's response by using a final assistant role Message), Claude will respond by picking up from where you left off. With this technique, Claude will still return a response with the assistant role.

If the final message uses the assistant role, the response content will continue immediately from the content in that message. You can use this to constrain part of the model's response.

Example with a single user message:

[{"role": "user", "content": "Hello, Claude"}]

Example with multiple conversational turns:

[
  {"role": "user", "content": "Hello there."},
  {"role": "assistant", "content": "Hi, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in plain English?"}
]

Example with a partially-filled response from Claude:

[
  {"role": "user", "content": "Please describe yourself using only JSON"},
  {"role": "assistant", "content": "Here is my JSON description:\n{"}
]

Each input message content may be either a single string or an array of content blocks, where each block has a specific type. Using a string is shorthand for an array of one content block of type "text". The following input messages are equivalent:

{"role": "user", "content": "Hello, Claude"}
{"role": "user", "content": [{"type": "text", "text": "Hello, Claude"}]}

For information about creating prompts for Anthropic Claude models, see Intro to prompting in the Anthropic Claude documentation. If you have existing Text Completion prompts that you want to migrate to the messages API, see Migrating from Text Completions.

System prompts

You can also include a system prompt in the request. A system prompt lets you provide context and instructions to Anthropic Claude, such as specifying a particular goal or role. Specify a system prompt in the system field, as shown in the following example.

"system": "You are Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest. Your goal is to provide informative and substantive responses to queries while avoiding potential harms."

For more information, see System prompts in the Anthropic documentation.
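As a minimal sketch, the following shows how a system prompt fits into a full request body. The `build_request` helper and the prompt text are illustrative, not part of the API; the returned JSON string is what you would pass as the `body` of an InvokeModel call.

```python
import json

# Hypothetical helper: builds a Messages API request body that includes
# a system prompt. Pass the returned string as the "body" parameter of
# InvokeModel on the bedrock-runtime client.
def build_request(system_prompt, user_text, max_tokens=512):
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "system": system_prompt,
        "messages": [{"role": "user", "content": user_text}],
    })

body = build_request(
    "You are a concise assistant that answers in one sentence.",
    "What is Amazon Bedrock?",
)
print(body)
```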

Multimodal prompts

A multimodal prompt combines multiple modalities (images and text) in a single prompt. You specify the modalities in the content input field. The following example shows how you could ask Anthropic Claude to describe the content of a supplied image. For example code, see Multimodal code examples.

{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "image",
          "source": {
            "type": "base64",
            "media_type": "image/jpeg",
            "data": "iVBORw..."
          }
        },
        {
          "type": "text",
          "text": "What's in these images?"
        }
      ]
    }
  ]
}

You can supply up to 20 images to the model. You can't put images in the assistant role.

Each image you include in a request counts towards your token usage. For more information, see Image costs in the Anthropic documentation.

Supported models

You can use the Messages API with the following Anthropic Claude models.

  • Anthropic Claude Instant v1.2

  • Anthropic Claude 2 v2

  • Anthropic Claude 2 v2.1

  • Anthropic Claude 3 Sonnet

  • Anthropic Claude 3 Haiku

Request and Response

The request body is passed in the body field of a request to InvokeModel or InvokeModelWithResponseStream. The maximum size of the payload you can send in a request is 20MB.

For more information, see https://docs.anthropic.com/claude/reference/messages_post.

Request

Anthropic Claude has the following inference parameters for a messages inference call.

{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": int,
  "system": string,
  "messages": [
    {
      "role": string,
      "content": [
        {
          "type": "image",
          "source": {
            "type": "base64",
            "media_type": "image/jpeg",
            "data": "content image bytes"
          }
        },
        {
          "type": "text",
          "text": "content text"
        }
      ]
    }
  ],
  "temperature": float,
  "top_p": float,
  "top_k": int,
  "stop_sequences": [string]
}

The following are required parameters.

  • anthropic_version – (Required) The anthropic version. The value must be bedrock-2023-05-31.

  • max_tokens – (Required) The maximum number of tokens to generate before stopping.

    Note that Anthropic Claude models might stop generating tokens before reaching the value of max_tokens. Different Anthropic Claude models have different maximum values for this parameter. For more information, see Model comparison.

  • messages – (Required) The input messages.

    • role – The role of the conversation turn. Valid values are user and assistant.

    • content – (required) The content of the conversation turn.

      • type – (required) The type of the content. Valid values are image and text.

        If you specify image, you must also specify the image source in the following format.

        source – (required) The source of the image.

        • type – (required) The encoding type for the image. You can specify base64.

        • media_type – (required) The type of the image. You can specify the following image formats.

          • image/jpeg

          • image/png

          • image/webp

          • image/gif

        • data – (required) The base64 encoded image bytes for the image. The maximum image size is 3.75MB. The maximum height and width of an image is 8000 pixels.

        If you specify text, you must also specify the prompt in text.

The following are optional parameters.

  • system – (Optional) The system prompt for the request.

    A system prompt is a way of providing context and instructions to Anthropic Claude, such as specifying a particular goal or role. For more information, see How to use system prompts in the Anthropic documentation.

    Note

    You can use system prompts with Anthropic Claude version 2.1 or higher.

  • stop_sequences – (Optional) Custom text sequences that cause the model to stop generating. Anthropic Claude models normally stop when they have naturally completed their turn; in this case, the value of the stop_reason response field is end_turn. If you want the model to stop generating when it encounters custom strings of text, use the stop_sequences parameter. If the model encounters one of the custom text strings, the value of the stop_reason response field is stop_sequence and the value of stop_sequence contains the matched stop sequence.

    The maximum number of entries is 8191.

  • temperature – (Optional) The amount of randomness injected into the response.

    Default  Minimum  Maximum
    1        0        1

  • top_p – (Optional) Use nucleus sampling.

    In nucleus sampling, Anthropic Claude computes the cumulative distribution over all the options for each subsequent token in decreasing probability order and cuts it off once it reaches a particular probability specified by top_p. You should alter either temperature or top_p, but not both.

    Default  Minimum  Maximum
    0.999    0        1


  • top_k – (Optional) Only sample from the top K options for each subsequent token.

    Use top_k to remove long tail low probability responses.

    Default              Minimum  Maximum
    Disabled by default  0        500
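To illustrate, the following sketch builds a request body that sets some of the optional parameters above. The values are arbitrary examples (note that you should alter either temperature or top_p, not both), and the resulting string would be passed as the `body` of an InvokeModel call.

```python
import json

# Illustrative request body exercising the optional inference parameters.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "List three colors."}],
    "temperature": 0.5,                 # range 0-1, default 1
    "top_k": 250,                       # range 0-500, disabled by default
    "stop_sequences": ["\n\nHuman:"],   # custom stop strings
})
print(body)
```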

Response

The Anthropic Claude model returns the following fields for a messages inference call.

{
  "id": string,
  "model": string,
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": string
    }
  ],
  "stop_reason": string,
  "stop_sequence": string,
  "usage": {
    "input_tokens": integer,
    "output_tokens": integer
  }
}

  • id – The unique identifier for the response. The format and length of the ID might change over time.

  • model – The ID for the Anthropic Claude model that made the request.

  • stop_reason – The reason why Anthropic Claude stopped generating the response.

    • end_turn – The model reached a natural stopping point

    • max_tokens – The generated text exceeded the value of the max_tokens input field or exceeded the maximum number of tokens that the model supports.

    • stop_sequence – The model generated one of the stop sequences that you specified in the stop_sequences input field.

  • type – The type of response. The value is always message.

  • role – The conversational role of the generated message. The value is always assistant.

  • content – The content generated by the model. Returned as an array.

    • type – The type of the content. Currently the only supported value is text.

    • text – The text of the content.

  • usage – Container for the number of tokens that you supplied in the request and the number of tokens that the model generated in the response.

    • input_tokens – The number of input tokens in the request.

    • output_tokens – The number of tokens that the model generated in the response.

  • stop_sequence – The stop sequence, from the stop_sequences input field, that ended the generation. If the model didn't stop because of a stop sequence, the value is null.
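As a minimal sketch of handling these fields, the following works with a parsed response dict (the result of json.loads on the InvokeModel response body). The sample response below is illustrative, not actual model output.

```python
# Illustrative parsed Messages API response.
response = {
    "id": "msg_example",
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "Hello!"}],
    "stop_reason": "end_turn",
    "stop_sequence": None,
    "usage": {"input_tokens": 10, "output_tokens": 3},
}

# Concatenate the text content blocks to get the generated text.
text = "".join(block["text"] for block in response["content"]
               if block["type"] == "text")
print(text)

# Check whether the response was truncated by the token limit.
if response["stop_reason"] == "max_tokens":
    print("Response was truncated; consider raising max_tokens.")

usage = response["usage"]
print(f"Tokens: {usage['input_tokens']} in, {usage['output_tokens']} out")
```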

Code examples

The following code examples show how to use the Messages API.

Messages code example

This example shows how to send a single turn user message and a user turn with a prefilled assistant message to an Anthropic Claude 3 Sonnet model.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate a message with Anthropic Claude (on demand).
"""

import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_message(bedrock_runtime, model_id, system_prompt, messages, max_tokens):
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "system": system_prompt,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude message example.
    """
    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        system_prompt = "Please respond only with emoji."
        max_tokens = 1000

        # Prompt with user turn only.
        user_message = {"role": "user", "content": "Hello World"}
        messages = [user_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn only.")
        print(json.dumps(response, indent=4))

        # Prompt with both a user turn and a prefilled assistant response.
        # Anthropic Claude continues by using the prefilled assistant text.
        assistant_message = {"role": "assistant", "content": "<emoji>"}
        messages = [user_message, assistant_message]

        response = generate_message(
            bedrock_runtime, model_id, system_prompt, messages, max_tokens)
        print("User turn and prefilled assistant response.")
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()

Multimodal code examples

The following examples show how to pass an image and prompt text in a multimodal message to an Anthropic Claude 3 Sonnet model.

Multimodal prompt with InvokeModel

The following example shows how to send a multimodal prompt to Anthropic Claude 3 Sonnet with InvokeModel.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to run a multimodal prompt with Anthropic Claude (on demand) and
InvokeModel.
"""

import base64
import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def run_multi_modal_prompt(bedrock_runtime, model_id, messages, max_tokens):
    """
    Invokes a model with a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        messages (JSON): The messages to send to the model.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        The response from the model.
    """
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages
        }
    )

    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())

    return response_body


def main():
    """
    Entrypoint for Anthropic Claude multimodal prompt example.
    """
    try:
        bedrock_runtime = boto3.client(service_name='bedrock-runtime')

        model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
        max_tokens = 1000
        input_image = "/path/to/image"
        input_text = "What's in this image?"

        # Read the reference image from a file and encode it as a base64 string.
        with open(input_image, "rb") as image_file:
            content_image = base64.b64encode(image_file.read()).decode('utf8')

        message = {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64",
                 "media_type": "image/jpeg", "data": content_image}},
                {"type": "text", "text": input_text}
            ]
        }

        messages = [message]

        response = run_multi_modal_prompt(
            bedrock_runtime, model_id, messages, max_tokens)
        print(json.dumps(response, indent=4))

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()

Streaming multimodal prompt with InvokeModelWithResponseStream

The following example shows how to stream the response from a multimodal prompt sent to Anthropic Claude 3 Sonnet with InvokeModelWithResponseStream.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to stream the response from Anthropic Claude Sonnet (on demand) for
a multimodal request.
"""

import base64
import json
import logging

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def stream_multi_modal_prompt(bedrock_runtime, model_id, input_text, image, max_tokens):
    """
    Streams the response from a multimodal prompt.
    Args:
        bedrock_runtime: The Amazon Bedrock boto3 client.
        model_id (str): The model ID to use.
        input_text (str): The prompt text.
        image (str): The path to an image that you want in the prompt.
        max_tokens (int): The maximum number of tokens to generate.
    Returns:
        None.
    """
    with open(image, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())

    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": input_text},
                    {"type": "image", "source": {"type": "base64",
                     "media_type": "image/jpeg",
                     "data": encoded_string.decode('utf-8')}}
                ]
            }
        ]
    })

    response = bedrock_runtime.invoke_model_with_response_stream(
        body=body, modelId=model_id)

    for event in response.get("body"):
        chunk = json.loads(event["chunk"]["bytes"])

        if chunk['type'] == 'message_delta':
            print(f"\nStop reason: {chunk['delta']['stop_reason']}")
            print(f"Stop sequence: {chunk['delta']['stop_sequence']}")
            print(f"Output tokens: {chunk['usage']['output_tokens']}")

        if chunk['type'] == 'content_block_delta':
            if chunk['delta']['type'] == 'text_delta':
                print(chunk['delta']['text'], end="")


def main():
    """
    Entrypoint for Anthropic Claude Sonnet multimodal prompt example.
    """
    model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
    input_text = "What can you tell me about this image?"
    image = "/path/to/image"
    max_tokens = 100

    try:
        bedrock_runtime = boto3.client('bedrock-runtime')

        stream_multi_modal_prompt(
            bedrock_runtime, model_id, input_text, image, max_tokens)

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))


if __name__ == "__main__":
    main()