

Amazon Titan Multimodal Embeddings G1

This section provides request and response body formats and code examples for using Amazon Titan Multimodal Embeddings G1.

Request and response

The request body is passed in the body field of an InvokeModel request.

Request

The request body for Amazon Titan Multimodal Embeddings G1 includes the following fields.

{ "inputText": string, "inputImage": base64-encoded string, "embeddingConfig": { "outputEmbeddingLength": 256 | 384 | 1024 } }

At least one of the following fields is required. Include both to generate an embedding vector that averages the resulting text and image embedding vectors. A sketch of a request that combines both fields follows the field descriptions.

  • inputText — The text to convert to embeddings.

  • inputImage — The image to convert to embeddings, as a base64-encoded string.

The following field is optional.

  • embeddingConfig — Contains an outputEmbeddingLength field in which you specify one of the following lengths for the output embedding vector.

    • 256

    • 384

    • 1024 (default)
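
As a minimal sketch of a combined request, assuming a local image file at a placeholder path, a body in the format above could be built as follows. The text, the placeholder path, and the 384-dimensional output length are arbitrary example choices, not requirements.

# Build a request body that combines text and image input (illustrative sketch).
import base64
import json

# Placeholder path; substitute the path to your own image file.
with open("/path/to/image", "rb") as image_file:
    input_image = base64.b64encode(image_file.read()).decode("utf8")

body = json.dumps({
    "inputText": "A family eating dinner",        # any descriptive text
    "inputImage": input_image,                    # base64-encoded image bytes
    "embeddingConfig": {
        "outputEmbeddingLength": 384              # 256, 384, or 1024 (default)
    }
})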

Response

The response body contains the following fields.

{ "embedding": [float, float, ...], "inputTextTokenCount": int, "message": string }

The fields are described below.

  • embedding — An array that represents the embedding vector of the input you provided.

  • inputTextTokenCount — The number of tokens in the text input.

  • message — Specifies any errors that occur during generation.
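
A hypothetical helper like the following (the function name and the use of RuntimeError are illustrative, not part of the AWS SDK) shows one way to consume these fields from an InvokeModel response; the full examples further down follow the same pattern.

# Parse the response body returned by InvokeModel and check the fields above.
import json

def parse_embeddings_response(invoke_model_response):
    """Return the embedding vector and text token count from an InvokeModel response."""
    response_body = json.loads(invoke_model_response["body"].read())
    # "message" is only populated when an error occurred during generation.
    if response_body.get("message") is not None:
        raise RuntimeError(f"Embeddings generation error: {response_body['message']}")
    # "embedding" holds 256, 384, or 1024 floats, depending on outputEmbeddingLength.
    return response_body["embedding"], response_body.get("inputTextTokenCount")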

Example code

The following examples show how to call the Amazon Titan Multimodal Embeddings G1 model with on-demand throughput using the Python SDK. Select a tab to see an example for each use case.

Text embeddings

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate text embeddings.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from text with the
Amazon Titan Multimodal Embeddings G1 model (on demand).
"""
import json
import logging
import boto3
from botocore.exceptions import ClientError


class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a text input using
    Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information,
        and the reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "What are the different services that you offer?"
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated text embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count: {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating text embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()

Image embeddings

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate image embeddings.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image with the
Amazon Titan Multimodal Embeddings G1 model (on demand).
"""
import base64
import json
import logging
import boto3
from botocore.exceptions import ClientError


class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for an image input using
    Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information,
        and the reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    model_id = 'amazon.titan-embed-image-v1'
    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated image embeddings of length {output_embedding_length}: {response['embedding']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating image embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()

Text and image embeddings

This example shows how to call the Amazon Titan Multimodal Embeddings G1 model to generate embeddings from combined text and image input. The resulting vector is the average of the generated text embedding vector and the image embedding vector.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
"""
Shows how to generate embeddings from an image and accompanying text with the
Amazon Titan Multimodal Embeddings G1 model (on demand).
"""
import base64
import json
import logging
import boto3
from botocore.exceptions import ClientError


class EmbedError(Exception):
    "Custom exception for errors returned by Amazon Titan Multimodal Embeddings G1"

    def __init__(self, message):
        self.message = message


logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)


def generate_embeddings(model_id, body):
    """
    Generate a vector of embeddings for a combined text and image input using
    Amazon Titan Multimodal Embeddings G1 on demand.
    Args:
        model_id (str): The model ID to use.
        body (str) : The request body to use.
    Returns:
        response (JSON): The embeddings that the model generated, token information,
        and the reason the model stopped generating embeddings.
    """

    logger.info("Generating embeddings with Amazon Titan Multimodal Embeddings G1 model %s", model_id)

    bedrock = boto3.client(service_name='bedrock-runtime')

    accept = "application/json"
    content_type = "application/json"

    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )

    response_body = json.loads(response.get('body').read())

    finish_reason = response_body.get("message")

    if finish_reason is not None:
        raise EmbedError(f"Embeddings generation error: {finish_reason}")

    return response_body


def main():
    """
    Entrypoint for Amazon Titan Multimodal Embeddings G1 example.
    """

    logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")

    model_id = "amazon.titan-embed-image-v1"
    input_text = "A family eating dinner"

    # Read image from file and encode it as base64 string.
    with open("/path/to/image", "rb") as image_file:
        input_image = base64.b64encode(image_file.read()).decode('utf8')

    output_embedding_length = 256

    # Create request body.
    body = json.dumps({
        "inputText": input_text,
        "inputImage": input_image,
        "embeddingConfig": {
            "outputEmbeddingLength": output_embedding_length
        }
    })

    try:
        response = generate_embeddings(model_id, body)

        print(f"Generated embeddings of length {output_embedding_length}: {response['embedding']}")
        print(f"Input text token count: {response['inputTextTokenCount']}")

    except ClientError as err:
        message = err.response["Error"]["Message"]
        logger.error("A client error occurred: %s", message)
        print("A client error occurred: " + format(message))

    except EmbedError as err:
        logger.error(err.message)
        print(err.message)

    else:
        print(f"Finished generating embeddings with Amazon Titan Multimodal Embeddings G1 model {model_id}.")


if __name__ == "__main__":
    main()