Amazon Redshift ML integration with Amazon Bedrock
This section describes how to use Amazon Redshift ML integration with Amazon Bedrock. With this feature, you can invoke an Amazon Bedrock model using SQL and use data from your Amazon Redshift data warehouse to build generative AI applications such as text generation, sentiment analysis, and translation.
Topics
- Creating or updating an IAM role for Amazon Redshift ML integration with Amazon Bedrock
- Creating an external model for Amazon Redshift ML integration with Amazon Bedrock
- Using an external model for Amazon Redshift ML integration with Amazon Bedrock
- Prompt engineering for Amazon Redshift ML integration with Amazon Bedrock
Creating or updating an IAM role for Amazon Redshift ML integration with Amazon Bedrock
This section demonstrates how to create an IAM role to use with Amazon Redshift ML integration with Amazon Bedrock.
Add the following policy to the IAM role you use with Amazon Redshift ML integration with Amazon Bedrock:
AmazonBedrockFullAccess
To allow Amazon Redshift to assume a role to interact with other services, add the following trust policy to the IAM role:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "redshift.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }
If the cluster or namespace is in a VPC, follow the instructions in Cluster and configure setup for Amazon Redshift ML administration.
If you need a more restrictive policy, you can create one that includes only the Amazon Bedrock permissions that model invocation requires.
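For illustration, a minimal scoped-down policy might look like the following sketch. The action list and resource scope are assumptions for a model-invocation-only use case; verify them against the Amazon Bedrock permissions reference before use:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:Converse"
            ],
            "Resource": "arn:aws:bedrock:*::foundation-model/*"
        }
    ]
}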
For information about creating an IAM role, see IAM Role Creation in the AWS Identity and Access Management User Guide.
Creating an external model for Amazon Redshift ML integration with Amazon Bedrock
This section shows how to create an external model to use as an interface for Amazon Bedrock within your Amazon Redshift data warehouse.
To invoke an Amazon Bedrock model from Amazon Redshift, you must first run the CREATE EXTERNAL MODEL command. This command creates an external model object in the database and an associated user function that you use to generate text content with Amazon Bedrock.
The following code example shows a basic CREATE EXTERNAL MODEL command:
CREATE EXTERNAL MODEL llm_claude
FUNCTION llm_claude_func
IAM_ROLE '<IAM role arn>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'anthropic.claude-v2:1',
    PROMPT 'Summarize the following text:');
The CREATE EXTERNAL MODEL command provides a unified and consistent interface to Amazon Bedrock for all foundation models (FMs) that support messages. This is the default option when using the CREATE EXTERNAL MODEL command, or when you explicitly specify the request type as UNIFIED. For more information, see the Converse API in the Amazon Bedrock API documentation.
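For illustration, the following sketch restates the earlier example with the request type spelled out explicitly. The REQUEST_TYPE keyword reflects the request_type setting described in this section; consult the CREATE EXTERNAL MODEL reference for the exact setting syntax:
CREATE EXTERNAL MODEL llm_claude
FUNCTION llm_claude_func
IAM_ROLE '<IAM role arn>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'anthropic.claude-v2:1',
    REQUEST_TYPE UNIFIED,
    PROMPT 'Summarize the following text:');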
If an FM doesn't support messages, then you must set the request_type setting to RAW. When you set request_type to RAW, you must construct the request sent to Amazon Bedrock when using the inference function, based on the selected FM.
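As a sketch of this case, the following example creates a RAW external model for an Amazon Titan text model. The REQUEST_TYPE keyword and the Titan model ID are assumptions here; the function name llm_titan_func matches the RAW inference example later in this section:
CREATE EXTERNAL MODEL llm_titan
FUNCTION llm_titan_func
IAM_ROLE '<IAM role arn>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'amazon.titan-text-express-v1',
    REQUEST_TYPE RAW);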
The PROMPT parameter for the CREATE EXTERNAL MODEL command is a static prompt. If you need a dynamic prompt for your application, you must specify it when using the inference function. For more details, see Prompt engineering for Amazon Redshift ML integration with Amazon Bedrock following.
For more information about the CREATE EXTERNAL MODEL statement and its parameters and settings, see CREATE EXTERNAL MODEL.
Using an external model for Amazon Redshift ML integration with Amazon Bedrock
This section shows how to invoke an external model to generate text in response to provided prompts. To invoke an external model, use the inference function that you create with CREATE EXTERNAL MODEL.
Topics
- Inference with UNIFIED request type models
- Inference with RAW request type models
- Inference functions as leader-only functions
- Inference function usage notes
Inference with UNIFIED request type models
The inference function for models with request type UNIFIED has the following three parameters, which are passed to the function in order:
- Input text (required): This parameter specifies the input text that Amazon Redshift passes to Amazon Bedrock.
- Inference configuration and Additional model request fields (optional): Amazon Redshift passes these parameters to the corresponding parameters of the Converse model API.
The following code example shows how to use a UNIFIED type inference function. Note that the inference configuration key maxTokens uses the exact casing that the Converse API expects:
SELECT llm_claude_func(input_text, object('temperature', 0.7, 'maxTokens', 500)) FROM some_data;
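The following sketch passes all three parameters, using top_k as an example of an additional model request field. The top_k field is a model-specific parameter for Anthropic Claude models; this assumes the selected model accepts it through the Converse API's additional model request fields:
SELECT llm_claude_func(
    input_text,
    object('temperature', 0.7, 'maxTokens', 500),
    object('top_k', 250))
FROM some_data;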
Inference with RAW request type models
The inference function for models with request type RAW has only one parameter, of data type SUPER. The syntax of this parameter depends on the Amazon Bedrock model used.
The following code example shows how to use a RAW type inference function:
SELECT llm_titan_func(
    object(
        'inputText', 'Summarize the following text: ' || input_text,
        'textGenerationConfig', object('temperature', 0.5, 'maxTokenCount', 500)
    )
) FROM some_data;
Inference functions as leader-only functions
Inference functions for Amazon Bedrock models can run as leader node-only functions when the query that uses them doesn't reference any tables. This can be helpful if you want to quickly ask an LLM a question.
The following code example shows how to use a leader-only inference function:
SELECT general_titan_llm_func('Summarize the benefits of LLM on data analytics in 100 words');
Inference function usage notes
Note the following when using inference functions with Amazon Redshift ML integration with Amazon Bedrock:
The names of the parameters for all Amazon Bedrock models are case sensitive. If your parameters don't match the ones the model requires, Amazon Bedrock might silently ignore them; see the sketch after this list.
The throughput of inference queries is limited by the runtime quotas of the different models offered by Amazon Bedrock in different regions. For more information, see Quotas for Amazon Bedrock in the Amazon Bedrock User Guide.
If you need guaranteed and consistent throughput, consider getting provisioned throughput for the model you need from Amazon Bedrock. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock in the Amazon Bedrock User Guide.
Inference queries with large amounts of data might get throttling exceptions because of the limited runtime quotas for Amazon Bedrock. Amazon Redshift retries requests multiple times, but queries can still get throttled because throughput for non-provisioned models might be variable.
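As a minimal sketch of the case-sensitivity note earlier in this list, the following query spells the Converse API inference configuration field maxTokens with its exact casing; a lowercased key such as 'maxtokens' would not match the expected field and might be silently ignored. The function llm_claude_func is the example function created earlier in this topic:
-- The key 'maxTokens' must match the Converse API field name exactly.
SELECT llm_claude_func(input_text, object('maxTokens', 500)) FROM some_data;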
Prompt engineering for Amazon Redshift ML integration with Amazon Bedrock
This section shows how to use static prompts with an external model.
To use static prefix and suffix prompts for your external model, provide them using the PROMPT and SUFFIX parameters of the CREATE EXTERNAL MODEL statement. These prompts are added to every query that uses the external model.
The following example shows how to add prefix and suffix prompts to an external model:
CREATE EXTERNAL MODEL llm_claude
FUNCTION llm_claude_func
IAM_ROLE '<IAM role arn>'
MODEL_TYPE BEDROCK
SETTINGS (
    MODEL_ID 'anthropic.claude-v2:1',
    PROMPT 'Summarize the following text:',
    SUFFIX 'Respond in an analytic tone');
To use dynamic prompts, provide them when using the inference function by concatenating them into the function input. The following example shows how to use dynamic prompts with an inference function:
SELECT llm_claude_func('Summarize the following review: ' || input_text || ' The review should have a formal tone.') FROM some_data;