
Amazon Titan Text Embeddings models

The Amazon Titan Text Embeddings models include Amazon Titan Text Embeddings V2 and the Amazon Titan Embeddings G1 - Text model.

Text embeddings are meaningful vector representations of unstructured text such as documents, paragraphs, and sentences. You input a body of text and the output is a (1 x n) vector. You can use embedding vectors for a wide variety of applications.

The Amazon Titan Text Embeddings V2 model (amazon.titan-embed-text-v2:0) accepts up to 8,192 input tokens and outputs a vector of 1,024 dimensions. The model also works in 100+ languages. It is optimized for text retrieval tasks, but can also perform additional tasks, such as semantic similarity and clustering. Amazon Titan Text Embeddings V2 supports long documents; however, for retrieval tasks it is recommended to segment documents into logical segments, such as paragraphs or sections.
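For the semantic similarity use case, for example, a common approach is to compare two output vectors with cosine similarity. The following is a minimal sketch using NumPy; the comparison itself is not part of the Bedrock API, and v1 and v2 stand in for vectors returned by two separate model invocations:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# v1 and v2 are hypothetical 1,024-dimensional vectors from two model calls.
# score = cosine_similarity(v1, v2)  # values closer to 1.0 mean more similar text
```

When the vectors are normalized to unit length, the dot product alone gives the same result.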

Amazon Titan Embeddings models generate meaningful semantic representations of documents, paragraphs, and sentences. Amazon Titan Text Embeddings takes a body of text as input and generates an n-dimensional vector. Amazon Titan Text Embeddings is offered through latency-optimized endpoint invocation for faster search (recommended during the retrieval step), as well as through throughput-optimized batch jobs for faster indexing.
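For bulk indexing, the throughput-optimized path is a batch inference job. The following is a minimal sketch using the AWS SDK for Python (Boto3); the Region, bucket names, and role ARN are placeholders, and the input file is assumed to contain one InvokeModel-style request per line (JSON Lines):

```python
import boto3

# Region, bucket names, and role ARN below are placeholders.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Submit a throughput-optimized batch job that embeds every record in the
# S3 input location and writes the vectors to the S3 output location.
job = bedrock.create_model_invocation_job(
    jobName="titan-embeddings-indexing",
    modelId="amazon.titan-embed-text-v2:0",
    roleArn="arn:aws:iam::111122223333:role/BedrockBatchRole",
    inputDataConfig={"s3InputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/input/"}},
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://amzn-s3-demo-bucket/output/"}},
)
print(job["jobArn"])  # track the job with this ARN
```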

The Amazon Titan Text Embeddings V2 model supports the following languages: English, German, French, Spanish, Japanese, Chinese, Hindi, Arabic, Italian, Portuguese, Swedish, Korean, Hebrew, Czech, Turkish, Tagalog, Russian, Dutch, Polish, Tamil, Marathi, Malayalam, Telugu, Kannada, Vietnamese, Indonesian, Persian, Hungarian, Modern Greek, Romanian, Danish, Thai, Finnish, Slovak, Ukrainian, Norwegian, Bulgarian, Catalan, Serbian, Croatian, Lithuanian, Slovenian, Estonian, Latin, Bengali, Latvian, Malay, Bosnian, Albanian, Azerbaijani, Galician, Icelandic, Georgian, Macedonian, Basque, Armenian, Nepali, Urdu, Kazakh, Mongolian, Belarusian, Uzbek, Khmer, Norwegian Nynorsk, Gujarati, Burmese, Welsh, Esperanto, Sinhala, Tatar, Swahili, Afrikaans, Irish, Panjabi, Kurdish, Kirghiz, Tajik, Oriya, Lao, Faroese, Maltese, Somali, Luxembourgish, Amharic, Occitan, Javanese, Hausa, Pushto, Sanskrit, Western Frisian, Malagasy, Assamese, Bashkir, Breton, Waray (Philippines), Turkmen, Corsican, Dhivehi, Cebuano, Kinyarwanda, Haitian, Yiddish, Sindhi, Zulu, Scottish Gaelic, Tibetan, Uighur, Maori, Romansh, Xhosa, Sundanese, Yoruba.

Note

The Amazon Titan Text Embeddings V2 and Amazon Titan Text Embeddings V1 models do not support inference parameters such as maxTokenCount or topP.

Amazon Titan Text Embeddings V2 model

  • Model ID – amazon.titan-embed-text-v2:0

  • Max input text tokens – 8,192

  • Languages – English (100+ languages in preview)

  • Output vector size – 1,024 (default), 512, 256 (selectable per request, as shown in the sketch after this list)

  • Inference types – On-Demand, Provisioned Throughput

  • Supported use cases – RAG, document search, reranking, classification, etc.
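The output vector size is selectable per request. The following request body is a minimal sketch; the inputText value is a placeholder, and the optional dimensions and normalize fields control the vector size and unit-length normalization:

```python
import json

# Sketch of a request body for amazon.titan-embed-text-v2:0.
# "dimensions" and "normalize" are optional; dimensions must be one of
# the supported sizes listed above (1,024 is the default).
body = json.dumps({
    "inputText": "Your text to embed",  # placeholder
    "dimensions": 256,
    "normalize": True,
})
```

Smaller output vectors reduce storage and speed up similarity search, at some cost in retrieval accuracy.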

Note

Titan Text Embeddings V2 takes as input a non-empty string of up to 8,192 tokens. The character-to-token ratio in English is 4.7 characters per token. Although Titan Text Embeddings V1 and Titan Text Embeddings V2 can accommodate up to 8,192 tokens, it is recommended to segment documents into logical segments (such as paragraphs or sections).
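As a rough illustration of that guidance, the following sketch splits a document on paragraph boundaries and uses the 4.7 characters-per-token ratio to keep each segment under the token limit. The splitting strategy and character budget are illustrative assumptions, not part of the API:

```python
MAX_TOKENS = 8192
CHARS_PER_TOKEN = 4.7  # approximate ratio for English, per the note above
MAX_CHARS = int(MAX_TOKENS * CHARS_PER_TOKEN)

def segment(document: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Greedily pack paragraphs into segments that fit the token budget."""
    segments, current = [], ""
    for paragraph in document.split("\n\n"):
        candidate = current + "\n\n" + paragraph if current else paragraph
        if len(candidate) <= max_chars:
            current = candidate
        else:
            if current:
                segments.append(current)
            current = paragraph  # assumes no single paragraph exceeds the budget
    if current:
        segments.append(current)
    return segments
```

In practice, smaller segments (individual paragraphs or sections) tend to retrieve better than segments sized to the full token limit.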

To use the text or image embeddings models, use the InvokeModel API operation with amazon.titan-embed-text-v1 or amazon.titan-embed-image-v1 as the model ID and retrieve the embedding object in the response.
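For example, the following is a minimal sketch using the AWS SDK for Python (Boto3). The Region is a placeholder, and for the Titan text embeddings models the response body contains the embedding vector along with an input token count:

```python
import json
import boto3

# Region is a placeholder; use one where the model is available to you.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke the Titan text embeddings model with a single input string.
response = client.invoke_model(
    modelId="amazon.titan-embed-text-v1",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Hello, world."}),
)

payload = json.loads(response["body"].read())
embedding = payload["embedding"]              # the (1 x n) vector, as a list of floats
token_count = payload["inputTextTokenCount"]  # tokens consumed by the input
```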

To see Jupyter notebook examples:

  1. Sign in to the Amazon Bedrock console at https://console.aws.amazon.com/bedrock/home.

  2. From the left-side menu, choose Base models.

  3. Scroll down and select the Amazon Titan Embeddings G1 - Text model.

  4. In the tab for the model you chose (for example, Amazon Titan Embeddings G1 - Text), select View example notebook to see example notebooks for embeddings.

For more information on preparing your dataset for multimodal training, see Preparing your dataset.