Common tasks supported by LLMs on Amazon Bedrock include text classification, summarization, and question answering (with and without context). For these tasks, you can use the following templates and examples to help you create prompts for Amazon Bedrock text models.
Text classification
For text classification, the prompt includes a question with several possible answer choices, and the model must respond with the correct choice. LLMs on Amazon Bedrock also produce more accurate responses if you include the answer choices in your prompt.
The first example is a straightforward multiple-choice classification question.
[Example prompt and output not shown.]
(Source of prompt: Wikipedia on San Francisco)
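For reference, here is a minimal sketch of how such a classification prompt might be sent to a Titan text model with the AWS SDK for Python (Boto3). The prompt text, model ID, and Region below are illustrative assumptions, not the original example.

import json
import boto3

# Illustrative sketch; the model ID, Region, and prompt are assumptions.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

prompt = (
    "San Francisco is known for its steep hills, fog, and the "
    "Golden Gate Bridge.\n\n"
    "What is the paragraph above about?\n"
    "Choices: (A) A city, (B) A recipe, (C) A sports team\n"
    "Answer:"
)

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": 50, "temperature": 0},
    }),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])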
Sentiment analysis is a form of classification in which the model chooses, from a list of choices, the sentiment expressed in the text.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
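As a sketch, a sentiment-classification prompt of this kind could be built as follows; the review text and the answer choices are made up for illustration.

review = '"I loved the food, but the service was painfully slow."'
sentiment_prompt = (
    f"The following is a restaurant review:\n\n{review}\n\n"
    "What is the sentiment of the review?\n"
    "Choices: positive, negative, mixed\n"
    "Answer:"
)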
The following example uses Anthropic Claude models to classify text. As suggested in the Anthropic Claude Guides, use XML tags such as <text></text> to mark the important parts of the prompt.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Anthropic Claude)
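The following is a minimal sketch of that pattern using the Anthropic Messages API on Amazon Bedrock; the model ID and the passage to classify are assumptions.

import json
import boto3

client = boto3.client("bedrock-runtime")

# The <text></text> tags mark the passage to classify, as the Claude
# guides suggest. Model ID and input text are illustrative assumptions.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 50,
    "messages": [{
        "role": "user",
        "content": (
            "Classify the sentiment of the passage in the tags as "
            "positive or negative. Reply with a single word.\n"
            "<text>The concert last night was unforgettable.</text>"
        ),
    }],
}
response = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    body=json.dumps(body),
)
print(json.loads(response["body"].read())["content"][0]["text"])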
Question-answer, without context
In a question-answer prompt without context, the model must answer the question using its internal knowledge, without any context or document.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
Model encouragement can also help in question-answer tasks.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)
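As an illustration, model encouragement is just a short preamble prepended to the question; the wording below is a made-up example, not the original prompt.

encouragement = (
    "You are excellent at answering questions, and it makes me happy "
    "when you answer correctly.\n\n"
)
question = (
    "Question: Which planet in the solar system has the most moons?\n"
    "Answer:"
)
encouraged_prompt = encouragement + question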
Question-answer, with context
In a question-answer prompt with context, the user provides an input text followed by a question, and the model must answer the question based on the information in the input text. Putting the question at the end, after the text, can help LLMs on Amazon Bedrock better answer the question. Model encouragement works for this use case as well.
[Example prompt and output not shown.]
(Source of prompt: https://en.wikipedia.org/wiki/Red_panda, model used: Amazon Titan Text)
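A minimal sketch of this structure as a reusable template; the helper name qa_prompt is hypothetical.

def qa_prompt(passage: str, question: str) -> str:
    # Place the question after the passage, which tends to help the model.
    return (
        f"{passage}\n\n"
        "Based only on the text above, answer the following question.\n\n"
        f"Question: {question}\n"
        "Answer:"
    )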
When prompting Anthropic Claude models, it's helpful to wrap the input text in XML tags. In the following example, the input text is enclosed in <text></text>.
[Example prompt and output not shown.]
(Source of prompt: Wikipedia on the Super Bowl LV halftime show, model used: Anthropic Claude)
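Here is a sketch of the same idea using the Bedrock Converse API; the model ID is an assumption, and the one-line passage stands in for the Wikipedia excerpt.

import boto3

client = boto3.client("bedrock-runtime")

# The passage is wrapped in <text></text> tags, and the question follows it.
passage = "The Super Bowl LV halftime show was headlined by The Weeknd."
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumption
    messages=[{
        "role": "user",
        "content": [{
            "text": f"<text>{passage}</text>\n\n"
                    "Based only on the text above, who headlined the show?"
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])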
Summarization
For a summarization task, the prompt is a passage of text, and the model must respond with a shorter passage that captures the main points of the input. Specifying the desired output length (number of sentences or paragraphs) is helpful for this use case.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
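As a sketch, the length constraint is simply stated in the instruction after the passage; the template below is illustrative.

article_text = "..."  # the passage to summarize (placeholder)
summarize_prompt = (
    f"{article_text}\n\n"
    "Summarize the text above in two sentences."
)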
In the following example, Anthropic Claude summarizes the given text in one sentence. To include input text in your prompts, format the text with XML markup: <text>{{text content}}</text>. Using XML within prompts is a common practice when prompting Anthropic Claude models.
[Example prompt and output not shown.]
(Source of prompt: Wikipedia on Nash equilibrium, model used: Anthropic Claude)
Text generation
Given a prompt, LLMs on Amazon Bedrock can respond with a passage of original text that matches the description. Here is one example:
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
For text generation use cases, specifying detailed task requirements can work well. In the following example, we ask the model to generate a response with exclamation points.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
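For illustration, such requirements can be spelled out directly in the instruction; the product and wording below are made up.

generation_prompt = (
    "Write a short, upbeat announcement for a new coffee maker. "
    "Keep it under 50 words and end every sentence with an "
    "exclamation point!"
)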
In the following example, a user prompts the model to take on the role of a specific person when generating the text. Notice how the signature reflects the role the model is taking on in the response.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)
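A sketch of role prompting; the persona, company, and scenario below are invented for illustration.

role_prompt = (
    "You are Jane Doe, customer support lead at Example Corp.\n"
    "Write a short apology email to a customer whose order arrived "
    "late, and sign it with your name and title."
)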
Code generation
The prompt describes the task or function and the programming language for the code that the user expects the model to generate.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Anthropic Claude)
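As an illustration, a code-generation prompt names both the task and the target language; the function described below is a made-up example.

codegen_prompt = (
    "Write a Python function named is_palindrome(s) that returns True "
    "if the string s reads the same forwards and backwards, "
    "ignoring case."
)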
Mathematics
The input describes a problem that requires mathematical reasoning at some level, which may be numerical, logical, geometric, or otherwise. For such problems, it's helpful to ask the model to work through the problem in a piecemeal manner by adding phrases such as "Let's think step by step" or "Think step by step to come up with the right answer" to the instructions.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
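For illustration, the step-by-step phrase is appended to the problem statement; the word problem below is invented.

math_prompt = (
    "A store sells pencils in packs of 12. A teacher needs 150 "
    "pencils. How many packs must she buy?\n\n"
    "Think step by step to come up with the right answer."
)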
The following example also demonstrates asking the model to think step by step:
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: AI21 Labs Jurassic-2 Ultra v1)
Reasoning/logical thinking
For complex reasoning tasks or problems that require logical thinking, we can ask the model to make logical deductions and explain its answers.
[Example prompt and output not shown.]
(Source of prompt: AWS, model used: Amazon Titan Text)
Here is another example using the Anthropic Claude model:
[Example prompt and output not shown.]
(Source of prompt: https://en.wikipedia.org/wiki/Barber_paradox, model used: Anthropic Claude)
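A sketch of such a prompt, asking for the deductions before the final answer; the syllogism is a stock example, not from the original.

reasoning_prompt = (
    "All roses are flowers. Some flowers fade quickly. "
    "Can we conclude that some roses fade quickly?\n\n"
    "Make your logical deductions explicit, explain them, and then "
    "answer yes or no."
)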
Entity extraction
In entity extraction, the model extracts entities from a provided input question and places them in XML tags for further processing.
(Source of prompt: AWS, model used: Amazon Titan Text G1 - Premier)
Example:
User: You are an expert entity extractor who extracts entities from a provided input question.
You are responsible for extracting following entities: name, location
Please follow below instructions while extracting the Name, and reply in <name></name>
XML Tags:
- These entities include a specific name of a person, animal or a thing
- Please extract only specific name entities mentioned in the input query
- DO NOT extract the general mention of name by terms of "name", "boy", "girl",
"animal name", etc.
Please follow below instructions while extracting the location, and reply
in <location></location> XML Tags:
- These entities include a specific location of a place, city, country or a town
- Please extract only specific location entities mentioned in the input query
- DO NOT extract the general mention of location by terms of "location", "city", "country",
"town", etc.
If no name or location is found, please return the same input string as is.
Below are some examples:
input: How was Sarah's birthday party in Seattle, WA?
output: How was <name>Sarah's</name> birthday party
in <location>Seattle, WA</location>?
input: Why did Joe's father go to the city?
output: Why did <name>Joe's</name> father go to the city?
input: What is the zipcode of Manhattan, New york city?
output: What is the zipcode of <location>Manhattan, New york city</location>?
input: Who is the mayor of San Francisco?
Bot:
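Because the entities come back wrapped in XML tags, they are easy to post-process. Here is a minimal sketch, assuming the model follows the tagging instructions above; the helper name extract_entities is hypothetical.

import re

def extract_entities(model_output: str) -> dict:
    # Pull out whatever the model wrapped in <name> and <location> tags.
    return {
        "names": re.findall(r"<name>(.*?)</name>", model_output),
        "locations": re.findall(r"<location>(.*?)</location>", model_output),
    }

print(extract_entities(
    "Who is the mayor of <location>San Francisco</location>?"
))
# {'names': [], 'locations': ['San Francisco']}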
Chain-of-thought reasoning
In chain-of-thought prompting, you ask the model to provide a step-by-step analysis of how it derives the answer, so that you can fact-check and validate how the model produced it.
(Source of prompt: AWS, model used: Amazon Titan Text G1 - Premier)
Example:
User: Jeff had 100 dollars. He gave $20 to Sarah and bought
lottery tickets with another $20. With the lottery tickets
he won 35 dollars. Jeff then bought his lunch for 40 dollars.
Lastly, he made a $20 donation to charity. Stephen met with
Jeff and wanted to borrow some money from him for his taxi.
What is the maximum amount Jeff can give to Stephen, given
that Jeff needs to keep $10 for his own ride home? Please do
not answer immediately; think step by step and show me your
thinking.
Bot:
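For reference, the arithmetic the model is expected to walk through can be checked directly:

# Jeff's balance after each event, then the most he can lend
# while keeping $10 for his ride home.
balance = 100 - 20 - 20 + 35 - 40 - 20   # = 35 dollars remaining
max_loan = balance - 10                  # = 25 dollars
print(max_loan)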