Creating call analytics configurations - Amazon Chime SDK

Creating call analytics configurations

To use call analytics, you start by creating a configuration, a static structure that holds the information needed to create a call analytics pipeline. You can use the Amazon Chime SDK console to create a configuration, or call the CreateMediaInsightsPipelineConfiguration API.

A call analytics configuration includes details about audio processors, such as recording, voice analytics, or Amazon Transcribe. It also includes insight destinations and alert event configuration. Optionally, you can save your call data to an Amazon S3 bucket for further analysis.

However, configurations do not include specific audio sources. That allows you to reuse a configuration across multiple call analytics workflows. For example, you can use the same call analytics configuration with different Voice Connectors or across different Amazon Kinesis Video Streams (KVS) sources.

You use the configurations to create pipelines when SIP calls occur through a Voice Connector, or when new media is sent to an Amazon Kinesis Video Stream (KVS). The pipelines, in turn, process the media according to the specifications in the configuration.
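The console steps below assemble a configuration from processor and sink elements. The following sketch shows how that structure maps to the CreateMediaInsightsPipelineConfiguration API; the configuration name, IAM role ARN, and Kinesis stream ARN are placeholders for illustration.

```python
# Sketch of a request body for the CreateMediaInsightsPipelineConfiguration
# API. The configuration name, role ARN, and stream ARN are placeholders;
# replace them with values from your account.
configuration_request = {
    "MediaInsightsPipelineConfigurationName": "MyCallAnalyticsConfig",
    "ResourceAccessRoleArn": "arn:aws:iam::111122223333:role/MyResourceAccessRole",
    "Elements": [
        # Processor: transcribe the call audio in real time.
        {
            "Type": "AmazonTranscribeProcessor",
            "AmazonTranscribeProcessorConfiguration": {"LanguageCode": "en-US"},
        },
        # Sink: deliver the generated insights to a Kinesis data stream.
        {
            "Type": "KinesisDataStreamSink",
            "KinesisDataStreamSinkConfiguration": {
                "InsightsTarget": "arn:aws:kinesis:us-east-1:111122223333:stream/MyInsightsStream"
            },
        },
    ],
}

# With boto3 installed and credentials configured, the call would look like:
# import boto3
# client = boto3.client("chime-sdk-media-pipelines")
# response = client.create_media_insights_pipeline_configuration(**configuration_request)
```

Because the request names no audio source, the same payload can back pipelines for any Voice Connector or KVS stream.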

You can stop a pipeline programmatically at any time. Pipelines also stop processing media when a Voice Connector call ends. In addition, you can pause a pipeline. Doing so suspends calls to the underlying machine learning services until you resume the pipeline. However, call recording continues while a pipeline is paused.
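Pausing and resuming map to a single status-update call on the media pipelines service. A minimal sketch, assuming the boto3 chime-sdk-media-pipelines client and a placeholder pipeline identifier:

```python
def build_status_update(pipeline_identifier: str, pause: bool) -> dict:
    """Build the request body for the UpdateMediaInsightsPipelineStatus API."""
    return {
        "Identifier": pipeline_identifier,
        # "Pause" suspends calls to the underlying machine learning services;
        # "Resume" re-enables them. Call recording continues while paused.
        "UpdateStatus": "Pause" if pause else "Resume",
    }

# With boto3 installed and credentials configured:
# import boto3
# client = boto3.client("chime-sdk-media-pipelines")
# client.update_media_insights_pipeline_status(
#     **build_status_update("87654321-3333-4444-5555-666677778888", pause=True)
# )
```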


To use call analytics with Amazon Transcribe, Amazon Transcribe Call Analytics, or Amazon Chime SDK voice analytics, you must have the following items:

Creating a call analytics configuration

After you create the configuration, you enable call analytics by associating a Voice Connector with the configuration. Once you do that, call analytics starts automatically when a call comes in to that Voice Connector. For more information, refer to Configuring Voice Connectors to use call analytics, earlier in this guide.
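Programmatically, the association is made through the Voice Connector's streaming configuration. A sketch assuming the boto3 chime-sdk-voice client; the Voice Connector ID and configuration ARN are placeholders:

```python
# Attach a call analytics configuration to a Voice Connector by setting
# MediaInsightsConfiguration in its streaming configuration.
streaming_configuration = {
    "DataRetentionInHours": 24,
    "Disabled": False,
    "MediaInsightsConfiguration": {
        "Disabled": False,
        # ARN of the configuration created with
        # CreateMediaInsightsPipelineConfiguration (placeholder).
        "ConfigurationArn": (
            "arn:aws:chime:us-east-1:111122223333:"
            "media-insights-pipeline-configuration/MyCallAnalyticsConfig"
        ),
    },
}

# With boto3 installed and credentials configured:
# import boto3
# client = boto3.client("chime-sdk-voice")
# client.put_voice_connector_streaming_configuration(
#     VoiceConnectorId="abcdef1ghij2klmno3pqr4",
#     StreamingConfiguration=streaming_configuration,
# )
```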

The following sections explain how to complete each step of the process. Expand them in the order listed.

To specify configuration details
  1. Open the Amazon Chime SDK console at

  2. In the navigation pane, under Call Analytics, choose Configurations, then choose Create configuration.

  3. Under Basic information, do the following:

    1. Enter a name for the configuration. The name should reflect your use case.

    2. (Optional) Under Tags, choose Add new tag, then enter your tag keys and optional values. You define the keys and values. Tags can help you query the configuration.

    3. Choose Next.

To configure recording
  • On the Configure recording page, do the following:

    1. Choose the Activate call recording checkbox. This enables recording for Voice Connector calls or KVS streams and sends the data to your Amazon S3 bucket.

    2. Under File format, choose WAV with PCM for the best audio quality, or choose OGG with OPUS to compress the audio and optimize storage.

    3. (Optional) As needed, choose the Create an Amazon S3 bucket link and follow those steps to create an Amazon S3 bucket.

    4. Enter the URI of your Amazon S3 bucket, or choose Browse to locate a bucket.

    5. (Optional) Choose Activate voice enhancement to help improve the audio quality of your recordings.

    6. Choose Next.

For more information about voice enhancement, expand the next section.

Voice enhancement helps improve the audio quality of recorded phone calls in your Amazon S3 bucket. Phone calls are narrowband-filtered and sampled at an 8 kHz rate. Voice enhancement boosts the sampling rate from 8 kHz to 16 kHz and uses a machine learning model to expand the frequency content from narrowband to wideband, making the speech sound more natural. Voice enhancement also uses a noise reduction model called Amazon Voice Focus to help reduce background noise in the enhanced audio.

When voice enhancement is enabled, processing is performed after the call recording completes. The enhanced audio file is written to the same Amazon S3 bucket as the original recording, with the suffix _enhanced added to the base file name of the original recording. Voice enhancement can process calls up to 30 minutes long. Enhanced recordings are not generated for calls longer than 30 minutes.
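In the API, the recording and voice enhancement choices above map to two sink elements in the configuration. A sketch under the chime-sdk-media-pipelines element shapes; the bucket destination is a placeholder:

```python
recording_elements = [
    # Write call recordings (WAV/PCM or OGG/Opus, per the console choice)
    # to the Amazon S3 destination.
    {
        "Type": "S3RecordingSink",
        "S3RecordingSinkConfiguration": {
            "Destination": "arn:aws:s3:::amzn-s3-demo-bucket/call-recordings"
        },
    },
    # Post-process completed recordings of 30 minutes or less and write a
    # second file with the _enhanced suffix to the same bucket.
    {
        "Type": "VoiceEnhancementSink",
        "VoiceEnhancementSinkConfiguration": {"Disabled": False},
    },
]
```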

For information about using voice enhancement programmatically, refer to Using APIs to create call analytics configurations, in the Amazon Chime SDK Developer Guide.

For more information about voice enhancement, refer to Understanding voice enhancement.

Amazon Transcribe provides text transcriptions of calls. You can then use the transcripts to augment other machine learning services such as Amazon Comprehend or your own machine learning models.


Amazon Transcribe also provides automatic language identification. However, you can't use that feature with custom language models or content redaction. Also, if you use language identification with other features, you can only use the languages that those features support. For more information, refer to Language identification with streaming transcriptions, in the Amazon Transcribe Developer Guide.
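In a configuration, these Amazon Transcribe options appear on the AmazonTranscribeProcessor element. A sketch with automatic language identification enabled; the language choices are examples only:

```python
transcribe_element = {
    "Type": "AmazonTranscribeProcessor",
    "AmazonTranscribeProcessorConfiguration": {
        # Automatic language identification requires at least two candidate
        # languages and can't be combined with custom language models or
        # content redaction.
        "IdentifyLanguage": True,
        "LanguageOptions": "en-US,es-US",
        # Used as a tiebreaker when candidate languages have matching
        # confidence scores.
        "PreferredLanguage": "en-US",
    },
}
```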

Amazon Transcribe Call Analytics is a machine-learning powered API that provides call transcripts, sentiment, and real-time conversation insights. The service eliminates the need for note-taking, and it can enable immediate action on detected issues. The service also provides post-call analytics, such as caller sentiment, call drivers, non-talk time, interruptions, talk speed, and conversation characteristics.


By default, post-call analytics streams call recordings to your Amazon S3 bucket. To avoid creating duplicate recordings, do not enable call recording and post-call analytics at the same time.

Finally, Transcribe Call Analytics can automatically tag conversations based on specific phrases and help redact sensitive information from audio and text. For more information on the call analytics media processors, insights generated by these processors, and output destinations, refer to Call analytics processor and output destinations, in the Amazon Chime SDK Developer Guide.
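The equivalent configuration element is AmazonTranscribeCallAnalyticsProcessor. A sketch with post-call analytics enabled; the bucket, role ARN, and language code are placeholders:

```python
call_analytics_element = {
    "Type": "AmazonTranscribeCallAnalyticsProcessor",
    "AmazonTranscribeCallAnalyticsProcessorConfiguration": {
        "LanguageCode": "en-US",
        "PostCallAnalyticsSettings": {
            # Post-call analytics streams call recordings here, so avoid
            # also enabling call recording in the same configuration.
            "OutputLocation": "s3://amzn-s3-demo-bucket/post-call-analytics/",
            "DataAccessRoleArn": "arn:aws:iam::111122223333:role/MyResourceAccessRole",
            # "redacted" or "redacted_and_unredacted"
            "ContentRedactionOutput": "redacted",
        },
    },
}
```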

To configure analytics services
  1. On the Configure analytics services page, select the checkboxes next to Voice analytics or Transcription services. You can select both items.

    Select the Voice analytics checkbox to enable any combination of Speaker search and Voice tone analysis.

    Select the Transcription services checkbox to enable Amazon Transcribe or Amazon Transcribe Call Analytics.

    1. To enable Speaker search

      • Select the Yes, I agree to the Consent Acknowledgement for Amazon Chime SDK voice analytics checkbox, then choose Accept.

    2. To enable Voice tone analysis

      • Select the Voice tone analysis checkbox.

    3. To enable Amazon Transcribe

      1. Choose the Amazon Transcribe button.

      2. Under Language settings, do either of the following:

        1. If your callers speak a single language, choose Specific language, then open the Language list and select the language.

        2. If your callers speak multiple languages, choose Automatic language detection to identify them automatically.

        3. If you chose Automatic language detection, open the Language options for automatic language identification list and select at least two languages.

        4. (Optional) Open the Preferred language list and specify a preferred language. When the languages that you selected in the previous step have matching confidence scores, the service transcribes in the preferred language.

        5. (Optional) Expand Content removal settings, select one or more options, then choose one or more of the additional options that appear. Helper text explains each option.

        6. (Optional) Expand Additional settings, select one or more options, then choose one or more of the additional options that appear. Helper text explains each option.

    4. To enable Amazon Transcribe Call Analytics

      1. Choose the Amazon Transcribe Call Analytics button.

      2. Open the Language list and select a language.

      3. (Optional) Expand Content removal settings, select one or more options, then choose one or more of the additional options that appear. Helper text explains each option.

      4. (Optional) Expand Additional settings, select one or more options, then choose one or more of the additional options that appear. Helper text explains each option.

      5. (Optional) Expand Post-call analytics settings and do the following:

        1. Choose the Post-call analytics checkbox.

        2. Enter the URI of your Amazon S3 bucket.

        3. Select a content redaction type.

  2. When you finish making your selections, choose Next.

After you finish the media processing steps, you select a destination for the analytics output. Call analytics provides live insights via Amazon Kinesis Data Streams, and optionally through a data warehouse in an Amazon S3 bucket of your choice. To create the data warehouse, you use an AWS CloudFormation template. The template helps you create the infrastructure that delivers the call metadata and insights to your Amazon S3 bucket. For more information about creating the data warehouse, refer to Creating an Amazon Chime data lake and Call analytics data model, in the Amazon Chime SDK Developer Guide.

If you enable voice analytics when you create a configuration, you can also add voice analytics notification destinations, such as AWS Lambda, Amazon Simple Queue Service, or Amazon Simple Notification Service. The following steps explain how.
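Each destination corresponds to a sink element in the configuration. A sketch of the Kinesis data stream sink plus the optional notification sinks; all ARNs are placeholders:

```python
destination_elements = [
    # Live insights go to a Kinesis data stream.
    {
        "Type": "KinesisDataStreamSink",
        "KinesisDataStreamSinkConfiguration": {
            "InsightsTarget": "arn:aws:kinesis:us-east-1:111122223333:stream/MyInsightsStream"
        },
    },
    # Optional voice analytics notification destinations.
    {
        "Type": "LambdaFunctionSink",
        "LambdaFunctionSinkConfiguration": {
            "InsightsTarget": "arn:aws:lambda:us-east-1:111122223333:function:MyNotifyFunction"
        },
    },
    {
        "Type": "SnsTopicSink",
        "SnsTopicSinkConfiguration": {
            "InsightsTarget": "arn:aws:sns:us-east-1:111122223333:MyNotifyTopic"
        },
    },
    {
        "Type": "SqsQueueSink",
        "SqsQueueSinkConfiguration": {
            "InsightsTarget": "arn:aws:sqs:us-east-1:111122223333:MyNotifyQueue"
        },
    },
]
```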

To configure output details
  1. Open the Kinesis data stream list and select your data stream.


    If you want to visualize your data, you must select the Kinesis data stream that delivers data to your Amazon S3 bucket through Amazon Kinesis Data Firehose.

  2. (Optional) Expand Additional voice analytics notification destinations and select any combination of AWS Lambda, Amazon SNS, and Amazon SQS destinations.

  3. (Optional) Under Analyze and visualize insights, select the Perform historical analysis with data lake checkbox.

  4. When finished, choose Next.

To enable call analytics, the machine learning services and other resources must have permissions to access your media and deliver insights. For more information, refer to Using the call analytics resource access role, in the Amazon Chime SDK Developer Guide.

To configure access permissions
  1. On the Configure access permissions page, do one of the following:

    To create and use a new service role:

    1. Select Create and use a new service role.

    2. In the Service role name suffix box, enter a descriptive suffix for the role.

    To use an existing service role:

    1. Select Use an existing service role.

    2. Open the Service role list and select a role.

  2. Choose Next.


To use real-time alerts, you must first enable Amazon Transcribe or Amazon Transcribe Call Analytics.

You can create a set of rules that send real-time alerts to Amazon EventBridge. When an insight generated by Amazon Transcribe or Amazon Transcribe Call Analytics matches your specified rule during an analytics session, an alert is sent. Alerts have the detail type Media Insights Rules Matched. EventBridge supports integration with downstream services such as AWS Lambda, Amazon SQS, and Amazon SNS to trigger notifications for the end user or initiate other custom business logic. For more information, refer to Automating the Amazon Chime SDK with EventBridge, later in this section.
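In the API, these rules live in the configuration's RealTimeAlertConfiguration. A sketch with a single keyword-match rule; the rule name and keywords are examples only:

```python
real_time_alert_configuration = {
    "Disabled": False,
    "Rules": [
        {
            "Type": "KeywordMatch",
            "KeywordMatchConfiguration": {
                "RuleName": "CancellationMentions",
                "Keywords": ["cancel", "cancellation"],
                # False alerts when the keywords ARE mentioned ("mentioned"
                # logic); True inverts it ("not mentioned").
                "Negate": False,
            },
        }
    ],
}
```

When a rule matches during a session, the resulting EventBridge event carries the detail type Media Insights Rules Matched.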

To configure alerts
  1. Under Real-time alerts, choose Active real-time alerts.

  2. Under Rules, select Create rule.

  3. In the Rule name box, enter a name for the rule.

  4. Open the Rule type list and select the type of rule you want to use.

  5. Use the controls that appear to add keywords to the rule and apply logic, such as mentioned or not mentioned.

  6. Choose Next.

To create the configuration
  1. Review the settings in each section. As needed, choose Edit to change a setting.

  2. Choose Create configuration.

Your configuration appears on the Configurations page of the Amazon Chime SDK console.