Limitations

Consider the following limitations before using AI Powered Speech Analytics for Amazon Connect in a production environment.

  • When a new AWS Lambda container is used for a call, it is considered a cold start and adds approximately 10 seconds of latency. When using this solution in a production environment where this latency is unacceptable, we recommend increasing the memory of the Lambda function so that new containers are created less often. Alternatively, you can use a different AWS service such as AWS Fargate. Because a Lambda function can run for a maximum of 15 minutes, AWS Lambda stops transcribing calls longer than 15 minutes. Using the AWS Fargate alternative overcomes this limitation.
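The memory increase described above can be applied programmatically through Lambda's UpdateFunctionConfiguration API. The following is a minimal boto3 sketch; the helper names and the target function name are illustrative assumptions, not part of the solution:

```python
def build_memory_update(function_name: str, memory_mb: int) -> dict:
    """Parameters for Lambda's UpdateFunctionConfiguration API. More memory
    also allocates proportionally more CPU, so cold starts finish faster."""
    if not 128 <= memory_mb <= 10240:  # Lambda's allowed memory range in MB
        raise ValueError("Lambda memory must be between 128 and 10,240 MB")
    return {"FunctionName": function_name, "MemorySize": memory_mb}


def increase_lambda_memory(function_name: str, memory_mb: int) -> None:
    import boto3  # imported here so build_memory_update stays usable without the AWS SDK

    boto3.client("lambda").update_function_configuration(
        **build_memory_update(function_name, memory_mb)
    )
```

For example, `increase_lambda_memory("transcribe-call-audio", 1024)` would raise a (hypothetical) transcription function to 1 GB of memory.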

  • For security purposes, this solution uses role chaining to assume a role with temporary security credentials obtained through the GetSessionToken API. Chained credentials are valid for a maximum of one hour, so translation and sentiment analysis cannot run for more than an hour. To overcome this limitation, you can customize this solution to implement a refresh mechanism for the temporary token by calling the GetSessionToken API again, or use the long-term AWS security credentials of the AWS account root user or an IAM user. For additional information, refer to GetSessionToken in the AWS Security Token Service API Reference.
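A refresh mechanism like the one suggested above can be sketched as follows. The `needs_refresh` and `refresh_session_token` helpers are illustrative assumptions; only the STS `get_session_token` call and its `Credentials`/`Expiration` response fields come from the AWS API:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def needs_refresh(expiration: datetime,
                  now: Optional[datetime] = None,
                  margin_minutes: int = 5) -> bool:
    """True when the temporary credentials are within margin_minutes of expiring."""
    now = now or datetime.now(timezone.utc)
    return now >= expiration - timedelta(minutes=margin_minutes)


def refresh_session_token(duration_seconds: int = 3600) -> dict:
    """Fetch fresh temporary credentials; call this whenever needs_refresh is True."""
    import boto3  # imported here so needs_refresh stays testable without the AWS SDK

    response = boto3.client("sts").get_session_token(DurationSeconds=duration_seconds)
    return response["Credentials"]  # includes AccessKeyId, SecretAccessKey, Expiration
```

A long-running worker would check `needs_refresh(credentials["Expiration"])` before each translation or sentiment-analysis batch and call `refresh_session_token()` as needed.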

  • The first call using the Amazon Kinesis Video Streams (KVS) streaming block may fail. If this happens, wait 10 seconds and try again. Also, the number of concurrent calls cannot exceed the number of KVS streams, so you must manually increase the number of streams based on your estimated traffic.
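The wait-and-retry guidance above can be automated with a small retry wrapper. This is a generic sketch (the `call_with_retry` helper is an assumption, not part of the solution); the 10-second default matches the recommendation above:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


def call_with_retry(fn: Callable[[], T],
                    attempts: int = 2,
                    delay_seconds: float = 10.0) -> T:
    """Invoke fn; if it raises, wait delay_seconds and retry.
    Re-raises the last exception once attempts are exhausted."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay_seconds)
```

For instance, the first KVS streaming request could be wrapped as `call_with_retry(start_streaming)`, where `start_streaming` is whatever function initiates the stream in your deployment.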

  • The first few seconds of a call may not be transcribed. If this happens and affects your transcriptions, we recommend changing the Lambda function's START_SELECTOR_TYPE value to FRAGMENT_NUMBER.
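For context on what that setting controls: the selector type determines where in the Kinesis Video stream the GetMedia API begins reading, and FRAGMENT_NUMBER starts from a recorded fragment rather than the live position, so the opening audio is not skipped. A minimal sketch of how such a StartSelector is shaped (the `build_start_selector` helper is an assumption; the `StartSelectorType` and `AfterFragmentNumber` field names come from the KVS GetMedia API):

```python
from typing import Optional


def build_start_selector(selector_type: str,
                         fragment_number: Optional[str] = None) -> dict:
    """StartSelector argument for the Kinesis Video Streams GetMedia call."""
    if selector_type == "FRAGMENT_NUMBER":
        if fragment_number is None:
            raise ValueError("FRAGMENT_NUMBER requires a fragment number")
        return {"StartSelectorType": "FRAGMENT_NUMBER",
                "AfterFragmentNumber": fragment_number}
    return {"StartSelectorType": selector_type}  # e.g. NOW or EARLIEST
```

The resulting dictionary would be passed as the `StartSelector` parameter of a `kinesis-video-media` `get_media` call, with the fragment number taken from the contact's stream metadata.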