Evaluate performance using generative AI (preview)

This is prerelease documentation for a service in preview release. It is subject to change.

Powered by Amazon Bedrock: AWS implements automated abuse detection. Because Amazon Q in Connect is built on Amazon Bedrock, users can take full advantage of the controls implemented in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intelligence (AI).

Managers can perform evaluations faster and more accurately with generative AI-powered recommendations for answers to questions in agent evaluation forms. Managers can receive extra insights into agent behavior, along with context and justification for recommended answers. These insights are derived from reference points in the transcript that were used to provide the answers.

Generative AI-powered performance evaluations work by analyzing the conversation transcript to answer each evaluation form question, using the criteria specified in the instructions to evaluators associated with that question.

Region and language availability

This feature is available for Amazon Connect instances created in the US East (N. Virginia) and US West (Oregon) AWS Regions, in English locales.

Get generative AI-powered evaluation recommendations

  1. Log in to Amazon Connect with a user account that has permissions to perform evaluations and to use the Ask AI assistant.

  2. Choose the Ask AI button below a question to receive a generative AI-powered recommendation for the answer, along with context and justification (reference points from the transcript that were used to provide the answer).

    1. The answer is automatically selected based on the generative AI recommendation, but you can change it.

    2. You can get generative AI-powered recommendations by choosing Ask AI for up to 5 questions per contact.

  3. You can choose the time associated with a transcript reference to be directed to that point in the conversation.

    Generative AI-powered recommendations while evaluating agent performance.

Provide criteria for answering evaluation form questions using generative AI

While configuring an evaluation form, you can provide criteria for answering questions within the instructions to evaluators associated with each evaluation form question. Apart from driving consistency in evaluations by evaluators, these instructions are also used to provide generative AI-powered evaluations.
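If you manage evaluation forms programmatically, the same per-question criteria can be supplied through the Instructions field of each question when creating the form with the AWS SDK. The following is a minimal sketch using the boto3 create_evaluation_form operation; the titles, RefIds, option scores, and the nesting of the question inside a section reflect assumptions about the form structure, and the instance ID is a placeholder:

```python
# Hypothetical form payload: one section containing one yes/no question.
# The "Instructions" text supplies the criteria used both by human
# evaluators and by generative AI-powered recommendations.
form_items = [
    {
        "Section": {
            "Title": "Identity validation",
            "RefId": "sec-identity",
            "Items": [
                {
                    "Question": {
                        "Title": "Did the agent attempt to validate the customer's identity?",
                        "Instructions": (
                            "The agent is required to always ask a customer their "
                            "membership ID and postal code before addressing the "
                            "customer's questions."
                        ),
                        "RefId": "q-id-validation",
                        "QuestionType": "SINGLESELECT",
                        "QuestionTypeProperties": {
                            "SingleSelect": {
                                "Options": [
                                    {"RefId": "opt-yes", "Text": "Yes", "Score": 5},
                                    {"RefId": "opt-no", "Text": "No", "Score": 0},
                                ]
                            }
                        },
                    }
                }
            ],
        }
    }
]

def create_form(instance_id: str):
    """Create the evaluation form in the given Amazon Connect instance."""
    import boto3  # imported here so the payload above can be built without boto3 installed

    client = boto3.client("connect")
    return client.create_evaluation_form(
        InstanceId=instance_id,
        Title="New account opening scorecard",
        Items=form_items,
    )
```

The payload could then be created in a test instance first, and the Instructions wording refined using the guidelines later in this topic before promoting the form to production.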

New account opening scorecard.

Guidelines to improve generative AI accuracy

Selecting questions for getting generative AI recommendations
  1. Utilize generative AI to respond to questions that can be answered using information from the conversation transcript, without the need to validate information through third-party applications such as CRM systems.

  2. Avoid using generative AI to answer questions requiring numeric responses, such as "How long did the agent interact with the customer?". Instead, consider setting up automation for such evaluation form questions using Contact Lens or contact metrics.

  3. Avoid using generative AI to answer highly subjective questions, for example, "Was the agent attentive during the call?".
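For a numeric question like how long the agent interacted with the customer, a value computed from contact records is more reliable than a generative AI answer. The sketch below assumes the Contact structure returned by the Amazon Connect DescribeContact API (field names taken from the boto3 response shape; verify them against your SDK version) and computes the duration from its timestamps:

```python
from datetime import datetime, timezone

def agent_interaction_seconds(contact: dict) -> float:
    """Seconds from when the agent connected until the contact disconnected.

    `contact` is the 'Contact' structure returned by DescribeContact;
    boto3 deserializes the timestamps as timezone-aware datetime objects.
    """
    connected = contact["AgentInfo"]["ConnectedToAgentTimestamp"]
    disconnected = contact["DisconnectTimestamp"]
    return (disconnected - connected).total_seconds()

# Example with placeholder timestamps rather than a live API call:
sample = {
    "AgentInfo": {
        "ConnectedToAgentTimestamp": datetime(2024, 1, 1, 10, 0, 0, tzinfo=timezone.utc)
    },
    "DisconnectTimestamp": datetime(2024, 1, 1, 10, 7, 30, tzinfo=timezone.utc),
}
print(agent_interaction_seconds(sample))  # 450.0
```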

Improving phrasing of questions and associated instructions
  1. Use complete sentences to word questions. For example, replacing "ID validation" with "Did the agent attempt to validate the customer’s identity?" enables the generative AI to better understand the question.

  2. It is recommended that you provide detailed criteria for answering the question within the instructions to evaluators, especially if it's not possible to answer the question based on the question text alone. For example, for the question "Did the agent try to validate the customer identity?", you may provide additional instructions such as: The agent is required to always ask a customer their membership ID and postal code before addressing the customer’s questions.

  3. If answering a question requires knowledge of business-specific terms, then specify those terms in the instructions. For example, if the agent needs to state the name of the department in the greeting, then list the required department name(s) as part of the instructions to evaluators associated with the question.

  4. If possible, use the term 'agent' instead of terms like 'colleague', 'employee', 'representative', 'advocate', or 'associate'. Similarly use the term 'customer', instead of terms like 'member', 'caller', 'guest', or 'subscriber'.

  5. Only use double quotes in your instructions if you want to check for the exact words spoken by the agent or the customer. For example, if the instruction is to check for the agent saying "Have a nice day", then the generative AI will not detect "Have a nice afternoon". Instead, the instruction should say: The agent wished the customer a nice day.
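The phrasing guidelines above can be applied as a simple review step before publishing a form. The helper below is purely illustrative (the function name, rule set, and warning wording are this sketch's own, not an Amazon Connect feature): it flags questions that are not complete sentences, discouraged role terms, and double quotes that would force an exact-words match.

```python
# Map of discouraged terms to the preferred 'agent'/'customer' wording
# from the guidelines above.
DISCOURAGED_TERMS = {
    "colleague": "agent", "employee": "agent", "representative": "agent",
    "advocate": "agent", "associate": "agent",
    "member": "customer", "caller": "customer", "guest": "customer",
    "subscriber": "customer",
}

def lint_question(question: str, instructions: str = "") -> list[str]:
    """Return warnings for wording that may reduce generative AI accuracy."""
    warnings = []
    if not question.rstrip().endswith("?"):
        warnings.append("Word the question as a complete sentence ending in '?'.")
    tokens = f"{question} {instructions}".lower().split()
    for term, preferred in DISCOURAGED_TERMS.items():
        if term in tokens:
            warnings.append(f"Prefer '{preferred}' over '{term}'.")
    if '"' in instructions:
        warnings.append(
            "Double quotes check for the exact words spoken; "
            "rephrase unless an exact match is intended."
        )
    return warnings

print(lint_question("ID validation"))
print(lint_question("Did the agent attempt to validate the customer's identity?"))  # []
```

Running the helper on the two example phrasings from guideline 1 shows the shorthand version flagged and the complete-sentence version passing cleanly.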