Use AI Guardrails for Amazon Q in Connect
Important
Amazon Q in Connect guardrails support English only. Evaluating text content in other languages can produce unreliable results.
AI guardrails let you implement safeguards based on your use cases and responsible AI policies. You can configure company-specific guardrails for Amazon Q in Connect to filter harmful and inappropriate responses, redact sensitive personal information, and limit incorrect information in the responses due to potential large language model (LLM) hallucination.
AI guardrails are Amazon Connect resources that you create and then associate with your Amazon Q in Connect AI agents. For more information about how to associate an AI agent, see Customize Amazon Q in Connect. AI guardrails come with a default message that informs the user when Amazon Q in Connect has blocked a response according to a policy. You can override this default message with any message of your choosing by customizing Amazon Q in Connect's AI prompts.
The following image shows an example of the default guardrail message that is displayed to a customer. The default message is "Blocked input text by guardrail."
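As a sketch of overriding the default message, the following assembles a request payload in the shape used by the Amazon Q in Connect CreateAIGuardrail API (the boto3 client call is shown commented out; the assistant ID, guardrail name, and message text are placeholder assumptions):

```python
import json

def build_guardrail_request(assistant_id, name, blocked_input, blocked_output):
    """Assemble a CreateAIGuardrail-style request with custom blocked-message
    text in place of the default 'Blocked input text by guardrail'."""
    return {
        "assistantId": assistant_id,
        "name": name,
        "visibilityStatus": "PUBLISHED",
        # Custom messages returned to the user when input or output is blocked.
        "blockedInputMessaging": blocked_input,
        "blockedOutputsMessaging": blocked_output,
    }

request = build_guardrail_request(
    "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",  # placeholder assistant ID
    "my-custom-guardrail",
    "Sorry, I can't help with that topic.",
    "Sorry, I can't share that response.",
)
print(json.dumps(request, indent=2))

# To actually create the guardrail (requires AWS credentials and permissions):
# import boto3
# client = boto3.client("qconnect")
# response = client.create_ai_guardrail(**request)
```

Add any of the policy configurations shown in the following sections to the same request payload.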
Guardrail policy configurations
The following are examples of the five policy configurations that you can add to an AI guardrail.
Topics
This policy configuration allows you to block undesirable topics.
{
  "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
  "name": "test-ai-guardrail-2",
  "description": "This is a test ai-guardrail",
  "blockedInputMessaging": "Blocked input text by guardrail",
  "blockedOutputsMessaging": "Blocked output text by guardrail",
  "visibilityStatus": "PUBLISHED",
  "topicPolicyConfig": {
    "topicsConfig": [
      {
        "name": "Financial Advice",
        "definition": "Investment advice refers to financial inquiries, guidance, or recommendations with the goal of generating returns or achieving specific financial objectives.",
        "examples": [
          "Is investment in stocks better than index funds?",
          "Which stocks should I invest into?",
          "Can you manage my personal finance?"
        ],
        "type": "DENY"
      }
    ]
  }
}
Content
This policy configuration allows you to filter harmful and inappropriate content.
{
  "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
  "name": "test-ai-guardrail-2",
  "description": "This is a test ai-guardrail",
  "blockedInputMessaging": "Blocked input text by guardrail",
  "blockedOutputsMessaging": "Blocked output text by guardrail",
  "visibilityStatus": "PUBLISHED",
  "contentPolicyConfig": {
    "filtersConfig": [
      {
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "type": "INSULTS"
      }
    ]
  }
}
Words
This policy configuration allows you to filter harmful and inappropriate words.
{
  "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
  "name": "test-ai-guardrail-2",
  "description": "This is a test ai-guardrail",
  "blockedInputMessaging": "Blocked input text by guardrail",
  "blockedOutputsMessaging": "Blocked output text by guardrail",
  "visibilityStatus": "PUBLISHED",
  "wordPolicyConfig": {
    "wordsConfig": [
      {
        "text": "Nvidia"
      }
    ]
  }
}
Contextual Grounding
This policy configuration allows Amazon Q in Connect to detect and filter hallucinations in the model response.
{
  "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
  "name": "test-ai-guardrail-2",
  "description": "This is a test ai-guardrail",
  "blockedInputMessaging": "Blocked input text by guardrail",
  "blockedOutputsMessaging": "Blocked output text by guardrail",
  "visibilityStatus": "PUBLISHED",
  "contextualGroundingPolicyConfig": {
    "filtersConfig": [
      {
        "type": "RELEVANCE",
        "threshold": 0.50
      }
    ]
  }
}
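The threshold controls how strict the check is: responses whose relevance score falls below the configured value are blocked. The following is an illustrative sketch of that decision, assuming scores in the range 0 to 1 (higher means more relevant to the query); it is not the actual guardrail implementation:

```python
def passes_grounding_check(relevance_score: float, threshold: float = 0.50) -> bool:
    """Illustrative only: keep a response when its relevance score meets or
    exceeds the configured threshold; otherwise the guardrail blocks it."""
    return relevance_score >= threshold

print(passes_grounding_check(0.72))  # True: relevant enough, response is kept
print(passes_grounding_check(0.31))  # False: below threshold, response is blocked
```

Raising the threshold blocks more responses; lowering it lets more through.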
Sensitive Information
This policy configuration redacts or blocks sensitive information, such as personally identifiable information (PII).
{
  "assistantId": "a0a81ecf-6df1-4f91-9513-3bdcb9497e32",
  "name": "test-ai-guardrail-2",
  "description": "This is a test ai-guardrail",
  "blockedInputMessaging": "Blocked input text by guardrail",
  "blockedOutputsMessaging": "Blocked output text by guardrail",
  "visibilityStatus": "PUBLISHED",
  "sensitiveInformationPolicyConfig": {
    "piiEntitiesConfig": [
      {
        "type": "CREDIT_DEBIT_CARD_NUMBER",
        "action": "BLOCK"
      }
    ]
  }
}
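As an illustrative sketch of the difference between the two PII actions, the following models BLOCK (reject the whole message) versus an assumed ANONYMIZE action (mask only the matched entity); the card regex and the placeholder token are simplifications, not the guardrail's actual detection logic:

```python
import re

# Simplified card-number pattern for illustration only: 13-16 digits,
# optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def apply_pii_action(text: str, action: str) -> str:
    """Sketch of PII handling: BLOCK replaces the whole message with the
    blocked-output text; ANONYMIZE masks only the detected entity."""
    if not CARD_PATTERN.search(text):
        return text
    if action == "BLOCK":
        return "Blocked output text by guardrail"
    if action == "ANONYMIZE":
        return CARD_PATTERN.sub("{CREDIT_DEBIT_CARD_NUMBER}", text)
    return text

print(apply_pii_action("Card on file: 4111 1111 1111 1111", "ANONYMIZE"))
print(apply_pii_action("Card on file: 4111 1111 1111 1111", "BLOCK"))
```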