Connect to Salesforce for your Amazon Bedrock knowledge base
Salesforce is a customer relationship management (CRM) tool for managing
support, sales, and marketing teams.
You can connect to your Salesforce instance for
your Amazon Bedrock knowledge base by using either the AWS Management Console for Amazon Bedrock or the CreateDataSource API (see the Amazon Bedrock API reference).
Note
The Salesforce data source connector is in preview release and is subject to change.
Currently, only Amazon OpenSearch Serverless vector store is available to use with this data source.
There are limits to the number of files and the size per file (in MB) that can be crawled. See Quotas for knowledge bases.
Supported features
- Auto detection of main document fields
- Inclusion/exclusion content filters
- Incremental content syncs for added, updated, and deleted content
- OAuth 2.0 authentication
Prerequisites
In Salesforce, make sure you:
- Take note of your Salesforce instance URL. For example, https://company.salesforce.com/. The instance must be running a Salesforce Connected App.
- Create a Salesforce Connected App and configure client credentials. Then, for your selected app, copy the consumer key (client ID) and consumer secret (client secret) from the OAuth settings. For more information, see the Salesforce documentation on Create a Connected App and Configure a Connected App for the OAuth 2.0 Client Credentials Flow.
Note
For Salesforce Connected Apps, under Client Credentials Flow, make sure you search for and select the user's name or alias for your client credentials in the "Run As" field.
In your AWS account, make sure you:
- Store your authentication credentials in an AWS Secrets Manager secret and note the Amazon Resource Name (ARN) of the secret. Follow the Connection configuration instructions on this page to include the key-value pairs that must be included in your secret.
- Include the necessary permissions to connect to your data source in your AWS Identity and Access Management (IAM) role/permissions policy for your knowledge base. For information on the required permissions for this data source to add to your knowledge base IAM role, see Permissions to access data sources.
Note
If you use the console, you can go to AWS Secrets Manager to add your secret or use an existing secret as part of the data source configuration step. The IAM role with all the required permissions can be created for you as part of the console steps for creating a knowledge base. After you have completed your data source and other configuration steps, the IAM role with all the required permissions is applied to your specific knowledge base.
We recommend that you regularly refresh or rotate your credentials and secret. For your own security, provide only the necessary level of access. We do not recommend reusing credentials and secrets across data sources.
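As a rough sketch of the Secrets Manager portion of the permissions described above, the statement below grants the knowledge base role read access to the credentials secret. The ARN is a placeholder, and your role also needs the other permissions listed under Permissions to access data sources; treat this as an illustration, not the complete policy.

```python
import json

# Hypothetical secret ARN for illustration; replace with your own.
secret_arn = (
    "arn:aws:secretsmanager:us-east-1:111122223333:"
    "secret:salesforce-credentials-example"
)

# Example statement allowing the knowledge base service role to read the
# Salesforce credentials secret. This is only one piece of the full policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadSalesforceSecret",
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": [secret_arn],
        }
    ],
}

print(json.dumps(policy, indent=2))
```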
Connection configuration
To connect to your Salesforce instance, you must provide the necessary configuration information so that Amazon Bedrock can access and crawl your data. You must also follow the Prerequisites.
An example of a configuration for this data source is included in this section.
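As one possible sketch of such a configuration, the dictionary below follows the general shape of the CreateDataSource API's Salesforce configuration. The field names and enum values are based on the Amazon Bedrock API at the time of writing and the URL, ARN, and filter values are placeholders; verify everything against the current API reference before use.

```python
# Sketch of a Salesforce data source configuration for a knowledge base.
# All concrete values below are hypothetical.
salesforce_data_source = {
    "type": "SALESFORCE",
    "salesforceConfiguration": {
        "sourceConfiguration": {
            "hostUrl": "https://company.salesforce.com/",
            "authType": "OAUTH2_CLIENT_CREDENTIALS",
            "credentialsSecretArn": (
                "arn:aws:secretsmanager:us-east-1:111122223333:"
                "secret:salesforce-credentials-example"
            ),
        },
        "crawlerConfiguration": {
            "filterConfiguration": {
                "type": "PATTERN",
                "patternObjectFilter": {
                    "filters": [
                        {
                            "objectType": "Campaign",
                            # Skip campaigns whose name contains "private".
                            "exclusionFilters": [".*private.*"],
                        }
                    ]
                },
            }
        },
    },
}

# With boto3, a dict like this would be passed as dataSourceConfiguration to
# the bedrock-agent client's create_data_source call, together with the
# knowledgeBaseId and a data source name.
```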
For more information about auto detection of document fields, inclusion/exclusion filters, incremental syncing, secret authentication credentials, and how these work, select the following:
The data source connector automatically detects and crawls all of the main metadata fields of your documents or content. For example, the data source connector can crawl the document body equivalent of your documents, the document title, the document creation or modification date, or other core fields that might apply to your documents.
Important
If your content includes sensitive information, then Amazon Bedrock could include that sensitive information in its responses.
You can apply filtering operators to metadata fields to help you further improve the relevancy of responses. For example, a document's "epoch_modification_time", or the number of seconds that have passed since January 1, 1970, indicates when the document was last updated. You can filter on the most recent data, where "epoch_modification_time" is greater than a certain number. For more information on the filtering operators you can apply to your metadata fields, see Metadata and filtering.
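A filter like the one described could be sketched as below, following the shape of the Retrieve API's retrievalConfiguration. The field names are based on the Amazon Bedrock API at the time of writing; verify them against the current API reference.

```python
import time

# Only retrieve from documents updated within the last 30 days, using the
# auto-detected "epoch_modification_time" metadata field.
thirty_days_ago = int(time.time()) - 30 * 24 * 60 * 60

retrieval_configuration = {
    "vectorSearchConfiguration": {
        "numberOfResults": 5,
        "filter": {
            "greaterThan": {
                "key": "epoch_modification_time",
                "value": thirty_days_ago,
            }
        },
    }
}

# With boto3, a dict like this would be passed as retrievalConfiguration to
# the bedrock-agent-runtime client's retrieve call.
```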
You can include or exclude certain content from crawling. For example, you can specify an exclusion prefix/regular expression pattern to skip crawling any file whose name contains "private". You can also specify an inclusion prefix/regular expression pattern to include certain content entities or content types. If you specify both an inclusion and an exclusion filter and both match a document, the exclusion filter takes precedence and the document isn't crawled.
An example of a regular expression pattern to exclude or filter out campaigns that contain "private" in the campaign name: ".*private.*"
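The precedence rule described above (exclusion wins when both filters match) can be illustrated with a small sketch. The function name and the campaign names are invented for this example; only the pattern ".*private.*" comes from the text.

```python
import re

def should_crawl(name, inclusion_patterns=None, exclusion_patterns=None):
    """Sketch of inclusion/exclusion filter semantics: exclusion takes
    precedence when both an inclusion and an exclusion pattern match."""
    if exclusion_patterns and any(re.fullmatch(p, name) for p in exclusion_patterns):
        return False  # exclusion always wins
    if inclusion_patterns:
        return any(re.fullmatch(p, name) for p in inclusion_patterns)
    return True  # no inclusion filters: everything not excluded is crawled

# The exclusion pattern from the example above:
print(should_crawl("2024-private-launch", exclusion_patterns=[".*private.*"]))  # False
print(should_crawl("2024-public-launch", exclusion_patterns=[".*private.*"]))   # True
```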
You can apply inclusion/exclusion filters on the following content types:
- Account: Account number/identifier
- Attachment: Attachment file name with its extension
- Campaign: Campaign name and associated identifiers
- ContentVersion: Document version and associated identifiers
- Partner: Partner information fields including associated identifiers
- Pricebook2: Product/price list name
- Case: Customer inquiry/issue number and other information fields including associated identifiers (please note: can contain personal information, which you can choose to exclude or filter out)
- Contact: Customer information fields (please note: can contain personal information, which you can choose to exclude or filter out)
- Contract: Contract name and associated identifiers
- Document: File name with its extension
- Idea: Idea information fields and associated identifiers
- Lead: Potential new customer information fields (please note: can contain personal information, which you can choose to exclude or filter out)
- Opportunity: Pending sale/deal information fields and associated identifiers
- Product2: Product information fields and associated identifiers
- Solution: Solution name for a customer inquiry/issue and associated identifiers
- Task: Task information fields and associated identifiers
- FeedItem: Identifier of the chatter feed post
- FeedComment: Identifier of the chatter feed post that the comments belong to
- Knowledge__kav: Knowledge article version and associated identifiers
- User: User alias within your organization
- CollaborationGroup: Chatter group name (unique)
The data source connector crawls new, modified, and deleted content each time your data source syncs with your knowledge base. Amazon Bedrock can use your data source’s mechanism for tracking content changes and crawl content that changed since the last sync. When you sync your data source with your knowledge base for the first time, all content is crawled by default.
To sync your data source with your knowledge base, use the StartIngestionJob API or select your knowledge base in the console and select Sync within the data source overview section.
Important
All data that you sync from your data source becomes available to anyone with bedrock:Retrieve permissions to retrieve the data. This can also include any data with controlled data source permissions. For more information, see Knowledge base permissions.
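A sync started through the API could be sketched as below. The identifiers are placeholders, and the boto3 call itself is commented out because it requires AWS credentials and an existing knowledge base; StartIngestionJob is the API name, and start_ingestion_job is the corresponding boto3 (bedrock-agent) method.

```python
# Hypothetical identifiers for illustration; replace with your own values.
params = {
    "knowledgeBaseId": "KB12345678",  # your knowledge base ID
    "dataSourceId": "DS12345678",     # the Salesforce data source ID
}

# Requires boto3 and AWS credentials with bedrock:StartIngestionJob permission:
# import boto3
# client = boto3.client("bedrock-agent")
# job = client.start_ingestion_job(**params)["ingestionJob"]
# print(job["status"])
```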
(For OAuth 2.0 authentication) Your secret authentication credentials in AWS Secrets Manager should include these key-value pairs:
- consumerKey: app client ID
- consumerSecret: app client secret
- authenticationUrl: Salesforce instance URL or the URL to request the authentication token from
Note
Your secret in AWS Secrets Manager must be in the same AWS Region as your knowledge base.
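The secret's value could be assembled as below. All values are placeholders, and the token-endpoint path is an assumption based on Salesforce's standard OAuth 2.0 endpoint; confirm the correct URL for your instance.

```python
import json

# The three key-value pairs the connector expects, with hypothetical values.
secret_value = {
    "consumerKey": "your-connected-app-client-id",
    "consumerSecret": "your-connected-app-client-secret",
    # Assumed standard Salesforce token endpoint; verify for your instance.
    "authenticationUrl": "https://company.salesforce.com/services/oauth2/token",
}

secret_string = json.dumps(secret_value)

# With boto3, the secret could be stored in the same Region as the
# knowledge base:
# import boto3
# boto3.client("secretsmanager", region_name="us-east-1").create_secret(
#     Name="salesforce-credentials-example", SecretString=secret_string)
```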