We announced the upcoming end-of-support for AWS SDK for JavaScript v2.
We recommend that you migrate to AWS SDK for JavaScript v3. For dates, additional details, and information on how to migrate, please refer to the linked announcement.

Class: AWS.ChimeSDKMediaPipelines

Inherits:
AWS.Service
Identifier:
chimesdkmediapipelines
API Version:
2021-07-15

Overview

Constructs a service interface object. Each API operation is exposed as a function on service.

Service Description

The Amazon Chime SDK media pipeline APIs in this section allow software developers to create Amazon Chime SDK media pipelines that capture, concatenate, or stream your Amazon Chime SDK meetings. For more information about media pipelines, see Amazon Chime SDK media pipelines.

Sending a Request Using ChimeSDKMediaPipelines

var chimesdkmediapipelines = new AWS.ChimeSDKMediaPipelines();
chimesdkmediapipelines.createMediaCapturePipeline(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Locking the API Version

To ensure that the ChimeSDKMediaPipelines object uses this specific API version, you can construct the object by passing the apiVersion option to the constructor:

var chimesdkmediapipelines = new AWS.ChimeSDKMediaPipelines({apiVersion: '2021-07-15'});

You can also set the API version globally in AWS.config.apiVersions using the chimesdkmediapipelines service identifier:

AWS.config.apiVersions = {
  chimesdkmediapipelines: '2021-07-15',
  // other service API versions
};

var chimesdkmediapipelines = new AWS.ChimeSDKMediaPipelines();

Version:

  • 2021-07-15

Constructor Summary

Property Summary

Properties inherited from AWS.Service

apiVersions

Method Summary

Methods inherited from AWS.Service

makeRequest, makeUnauthenticatedRequest, waitFor, setupRequestListeners, defineService

Constructor Details

new AWS.ChimeSDKMediaPipelines(options = {}) ⇒ Object

Constructs a service object. This object has one method for each API operation.

Examples:

Constructing a ChimeSDKMediaPipelines object

var chimesdkmediapipelines = new AWS.ChimeSDKMediaPipelines({apiVersion: '2021-07-15'});

Options Hash (options):

  • params (map)

    An optional map of parameters to bind to every request sent by this service object. For more information on bound parameters, see "Working with Services" in the Getting Started Guide.

  • endpoint (String|AWS.Endpoint)

    The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object.

  • accessKeyId (String)

    your AWS access key ID.

  • secretAccessKey (String)

    your AWS secret access key.

  • sessionToken (AWS.Credentials)

    the optional AWS session token to sign requests with.

  • credentials (AWS.Credentials)

    the AWS credentials to sign requests with. You can either specify this object, or specify the accessKeyId and secretAccessKey options directly.

  • credentialProvider (AWS.CredentialProviderChain)

    the provider chain used to resolve credentials if no static credentials property is set.

  • region (String)

    the region to send service requests to. See AWS.ChimeSDKMediaPipelines.region for more information.

  • maxRetries (Integer)

    the maximum number of retries to attempt with a request. See AWS.ChimeSDKMediaPipelines.maxRetries for more information.

  • maxRedirects (Integer)

    the maximum number of redirects to follow with a request. See AWS.ChimeSDKMediaPipelines.maxRedirects for more information.

  • sslEnabled (Boolean)

    whether to enable SSL for requests.

  • paramValidation (Boolean|map)

    whether input parameters should be validated against the operation description before sending the request. Defaults to true. Pass a map to enable any of the following specific validation features:

    • min [Boolean] — Validates that a value meets the min constraint. This is enabled by default when paramValidation is set to true.
    • max [Boolean] — Validates that a value meets the max constraint.
    • pattern [Boolean] — Validates that a string value matches a regular expression.
    • enum [Boolean] — Validates that a string value matches one of the allowable enum values.
  • computeChecksums (Boolean)

    whether to compute checksums for payload bodies when the service accepts it (currently supported in S3 only)

  • convertResponseTypes (Boolean)

    whether types are converted when parsing response data. Currently only supported for JSON based services. Turning this off may improve performance on large response payloads. Defaults to true.

  • correctClockSkew (Boolean)

    whether to apply a clock skew correction and retry requests that fail because of a skewed client clock. Defaults to false.

  • s3ForcePathStyle (Boolean)

    whether to force path style URLs for S3 objects.

  • s3BucketEndpoint (Boolean)

    whether the provided endpoint addresses an individual bucket (false if it addresses the root API endpoint). Note that setting this configuration option requires an endpoint to be provided explicitly to the service constructor.

  • s3DisableBodySigning (Boolean)

    whether S3 body signing should be disabled when using signature version v4. Body signing can only be disabled when using https. Defaults to true.

  • s3UsEast1RegionalEndpoint ('legacy'|'regional')

    when the region is set to 'us-east-1', whether to send S3 requests to the global endpoint or the 'us-east-1' regional endpoint. This config only applies to the S3 client. Defaults to 'legacy'.

  • s3UseArnRegion (Boolean)

    whether to override the request region with the region inferred from the requested resource's ARN. Only available for S3 buckets. Defaults to true.

  • retryDelayOptions (map)

    A set of options to configure the retry delay on retryable errors. Currently supported options are:

    • base [Integer] — The base number of milliseconds to use in the exponential backoff for operation retries. Defaults to 100 ms for all services except DynamoDB, where it defaults to 50ms.
    • customBackoff [function] — A custom function that accepts a retry count and error and returns the amount of time to delay in milliseconds. If the result is a negative value, no further retry attempts will be made. The base option will be ignored if this option is supplied. The function is only called for retryable errors.
  • httpOptions (map)

    A set of options to pass to the low-level HTTP request. Currently supported options are:

    • proxy [String] — the URL to proxy requests through
    • agent [http.Agent, https.Agent] — the Agent object to perform HTTP requests with. Used for connection pooling. Defaults to the global agent (http.globalAgent) for non-SSL connections. Note that for SSL connections, a special Agent object is used in order to enable peer certificate verification. This feature is only available in the Node.js environment.
    • connectTimeout [Integer] — Sets the socket to timeout after failing to establish a connection with the server after connectTimeout milliseconds. This timeout has no effect once a socket connection has been established.
    • timeout [Integer] — Sets the socket to timeout after timeout milliseconds of inactivity on the socket. Defaults to two minutes (120000).
    • xhrAsync [Boolean] — Whether the SDK will send asynchronous HTTP requests. Used in the browser environment only. Set to false to send requests synchronously. Defaults to true (async on).
    • xhrWithCredentials [Boolean] — Sets the "withCredentials" property of an XMLHttpRequest object. Used in the browser environment only. Defaults to false.
  • apiVersion (String, Date)

    a String in YYYY-MM-DD format (or a date) that represents the latest possible API version that can be used in all services (unless overridden by apiVersions). Specify 'latest' to use the latest possible version.

  • apiVersions (map<String, String|Date>)

    a map of service identifiers (the lowercase service class name) to the API version to use when instantiating a service. Specify 'latest' for any service to use the latest available API version.

  • logger (#write, #log)

    an object that responds to .write() (like a stream) or .log() (like the console object) in order to log information about requests

  • systemClockOffset (Number)

    an offset value in milliseconds to apply to all signing times. Use this to compensate for clock skew when your system may be out of sync with the service time. Note that this configuration option can only be applied to the global AWS.config object and cannot be overridden in service-specific configuration. Defaults to 0 milliseconds.

  • signatureVersion (String)

    the signature version to sign requests with (overriding the API configuration). Possible values are: 'v2', 'v3', 'v4'.

  • signatureCache (Boolean)

    whether the signature to sign requests with (overriding the API configuration) is cached. Only applies to the signature version 'v4'. Defaults to true.

  • dynamoDbCrc32 (Boolean)

    whether to validate the CRC32 checksum of HTTP response bodies returned by DynamoDB. Default: true.

  • useAccelerateEndpoint (Boolean)

    Whether to use the S3 Transfer Acceleration endpoint with the S3 service. Default: false.

  • clientSideMonitoring (Boolean)

    whether to collect and publish this client's performance metrics for all its API requests.

  • endpointDiscoveryEnabled (Boolean|undefined)

    whether to call operations with endpoints given by the service dynamically. Setting this to true enables endpoint discovery for all applicable operations; setting it to false explicitly disables it. Leaving it undefined means the SDK performs endpoint discovery only for operations that require it.

  • endpointCacheSize (Number)

    the size of the global cache storing endpoints from endpoint discovery operations. Once the endpoint cache is created, updating this setting cannot change the existing cache size. Defaults to 1000.

  • hostPrefixEnabled (Boolean)

    whether to marshal request parameters to the prefix of hostname. Defaults to true.

  • stsRegionalEndpoints ('legacy'|'regional')

    whether to send STS requests to the global endpoint or regional endpoints. Defaults to 'legacy'.

  • useFipsEndpoint (Boolean)

    Enables FIPS compatible endpoints. Defaults to false.

  • useDualstackEndpoint (Boolean)

    Enables IPv6 dualstack endpoint. Defaults to false.
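
Several of the options above can be combined in a single configuration object. The customBackoff function below is plain JavaScript and independent of the SDK; the values chosen (a 100 ms base, a 5-second cap, and the timeouts) are illustrative, not SDK defaults.

```javascript
// A sketch of a service configuration combining retry, timeout, and
// API-version options from the list above. Values are illustrative.
function customBackoff(retryCount, err) {
  // Exponential backoff: 100 ms, 200 ms, 400 ms, ... capped at 5 seconds.
  return Math.min(100 * Math.pow(2, retryCount), 5000);
}

var serviceOptions = {
  apiVersion: '2021-07-15',
  maxRetries: 5,
  retryDelayOptions: { customBackoff: customBackoff },
  httpOptions: { connectTimeout: 5000, timeout: 120000 }
};

// var chimesdkmediapipelines = new AWS.ChimeSDKMediaPipelines(serviceOptions);
```

Because customBackoff is supplied, the base option would be ignored, and the function is consulted only for retryable errors.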

Property Details

endpoint ⇒ AWS.Endpoint (readwrite)

Returns an Endpoint object representing the endpoint URL for service requests.

Returns:

  • (AWS.Endpoint)

    an Endpoint object representing the endpoint URL for service requests.

Method Details

createMediaCapturePipeline(params = {}, callback) ⇒ AWS.Request

Creates a media pipeline.

Examples:

Calling the createMediaCapturePipeline operation

var params = {
  SinkArn: 'STRING_VALUE', /* required */
  SinkType: S3Bucket, /* required */
  SourceArn: 'STRING_VALUE', /* required */
  SourceType: ChimeSdkMeeting, /* required */
  ChimeSdkMeetingConfiguration: {
    ArtifactsConfiguration: {
      Audio: { /* required */
        MuxType: AudioOnly | AudioWithActiveSpeakerVideo | AudioWithCompositedVideo /* required */
      },
      Content: { /* required */
        State: Enabled | Disabled, /* required */
        MuxType: ContentOnly
      },
      Video: { /* required */
        State: Enabled | Disabled, /* required */
        MuxType: VideoOnly
      },
      CompositedVideo: {
        GridViewConfiguration: { /* required */
          ContentShareLayout: PresenterOnly | Horizontal | Vertical | ActiveSpeakerOnly, /* required */
          ActiveSpeakerOnlyConfiguration: {
            ActiveSpeakerPosition: TopLeft | TopRight | BottomLeft | BottomRight
          },
          CanvasOrientation: Landscape | Portrait,
          HorizontalLayoutConfiguration: {
            TileAspectRatio: 'STRING_VALUE',
            TileCount: 'NUMBER_VALUE',
            TileOrder: JoinSequence | SpeakerSequence,
            TilePosition: Top | Bottom
          },
          PresenterOnlyConfiguration: {
            PresenterPosition: TopLeft | TopRight | BottomLeft | BottomRight
          },
          VerticalLayoutConfiguration: {
            TileAspectRatio: 'STRING_VALUE',
            TileCount: 'NUMBER_VALUE',
            TileOrder: JoinSequence | SpeakerSequence,
            TilePosition: Left | Right
          },
          VideoAttribute: {
            BorderColor: Black | Blue | Red | Green | White | Yellow,
            BorderThickness: 'NUMBER_VALUE',
            CornerRadius: 'NUMBER_VALUE',
            HighlightColor: Black | Blue | Red | Green | White | Yellow
          }
        },
        Layout: GridView,
        Resolution: HD | FHD
      }
    },
    SourceConfiguration: {
      SelectedVideoStreams: {
        AttendeeIds: [
          'STRING_VALUE',
          /* more items */
        ],
        ExternalUserIds: [
          'STRING_VALUE',
          /* more items */
        ]
      }
    }
  },
  ClientRequestToken: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaCapturePipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • SourceType — (String)

      Source type from which the media artifacts are captured. A Chime SDK Meeting is the only supported source.

      Possible values include:
      • "ChimeSdkMeeting"
    • SourceArn — (String)

      ARN of the source from which the media artifacts are captured.

    • SinkType — (String)

      Destination type to which the media artifacts are saved. You must use an S3 bucket.

      Possible values include:
      • "S3Bucket"
    • SinkArn — (String)

      The ARN of the sink type.

    • ClientRequestToken — (String)

      The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media pipeline request.

      If a token is not provided, the SDK will use a version 4 UUID.
    • ChimeSdkMeetingConfiguration — (map)

      The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting.

      • SourceConfiguration — (map)

        The source configuration for a specified media pipeline.

        • SelectedVideoStreams — (map)

          The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

          • AttendeeIds — (Array<String>)

            The attendee IDs of the streams selected for a media pipeline.

          • ExternalUserIds — (Array<String>)

            The external user IDs of the streams selected for a media pipeline.

      • ArtifactsConfiguration — (map)

        The configuration for the artifacts in an Amazon Chime SDK meeting.

        • Audio — required — (map)

          The configuration for the audio artifacts.

          • MuxType — required — (String)

            The MUX type of the audio artifact configuration object.

            Possible values include:
            • "AudioOnly"
            • "AudioWithActiveSpeakerVideo"
            • "AudioWithCompositedVideo"
        • Video — required — (map)

          The configuration for the video artifacts.

          • State — required — (String)

            Indicates whether the video artifact is enabled or disabled.

            Possible values include:
            • "Enabled"
            • "Disabled"
          • MuxType — (String)

            The MUX type of the video artifact configuration object.

            Possible values include:
            • "VideoOnly"
        • Content — required — (map)

          The configuration for the content artifacts.

          • State — required — (String)

            Indicates whether the content artifact is enabled or disabled.

            Possible values include:
            • "Enabled"
            • "Disabled"
          • MuxType — (String)

            The MUX type of the artifact configuration.

            Possible values include:
            • "ContentOnly"
        • CompositedVideo — (map)

          Enables video compositing.

          • Layout — (String)

            The layout setting, such as GridView in the configuration object.

            Possible values include:
            • "GridView"
          • Resolution — (String)

            The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

            Possible values include:
            • "HD"
            • "FHD"
          • GridViewConfiguration — required — (map)

            The GridView configuration setting.

            • ContentShareLayout — required — (String)

              Defines the layout of the video tiles when content sharing is enabled.

              Possible values include:
              • "PresenterOnly"
              • "Horizontal"
              • "Vertical"
              • "ActiveSpeakerOnly"
            • PresenterOnlyConfiguration — (map)

              Defines the configuration options for a presenter only video tile.

              • PresenterPosition — (String)

                Defines the position of the presenter video tile. Default: TopRight.

                Possible values include:
                • "TopLeft"
                • "TopRight"
                • "BottomLeft"
                • "BottomRight"
            • ActiveSpeakerOnlyConfiguration — (map)

              The configuration settings for an ActiveSpeakerOnly video tile.

              • ActiveSpeakerPosition — (String)

                The position of the ActiveSpeakerOnly video tile.

                Possible values include:
                • "TopLeft"
                • "TopRight"
                • "BottomLeft"
                • "BottomRight"
            • HorizontalLayoutConfiguration — (map)

              The configuration settings for a horizontal layout.

              • TileOrder — (String)

                Sets the automatic ordering of the video tiles.

                Possible values include:
                • "JoinSequence"
                • "SpeakerSequence"
              • TilePosition — (String)

                Sets the position of horizontal tiles.

                Possible values include:
                • "Top"
                • "Bottom"
              • TileCount — (Integer)

                The maximum number of video tiles to display.

              • TileAspectRatio — (String)

                Specifies the aspect ratio of all video tiles.

            • VerticalLayoutConfiguration — (map)

              The configuration settings for a vertical layout.

              • TileOrder — (String)

                Sets the automatic ordering of the video tiles.

                Possible values include:
                • "JoinSequence"
                • "SpeakerSequence"
              • TilePosition — (String)

                Sets the position of vertical tiles.

                Possible values include:
                • "Left"
                • "Right"
              • TileCount — (Integer)

                The maximum number of tiles to display.

              • TileAspectRatio — (String)

                Sets the aspect ratio of the video tiles, such as 16:9.

            • VideoAttribute — (map)

              The attribute settings for the video tiles.

              • CornerRadius — (Integer)

                Sets the corner radius of all video tiles.

              • BorderColor — (String)

                Defines the border color of all video tiles.

                Possible values include:
                • "Black"
                • "Blue"
                • "Red"
                • "Green"
                • "White"
                • "Yellow"
              • HighlightColor — (String)

                Defines the highlight color for the active video tile.

                Possible values include:
                • "Black"
                • "Blue"
                • "Red"
                • "Green"
                • "White"
                • "Yellow"
              • BorderThickness — (Integer)

                Defines the border thickness for all video tiles.

            • CanvasOrientation — (String)

              The orientation setting, horizontal or vertical.

              Possible values include:
              • "Landscape"
              • "Portrait"
    • Tags — (Array<map>)

      The tag key-value pairs.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaCapturePipeline — (map)

        A media pipeline object, including the ID, source type, source ARN, sink type, and sink ARN of the media pipeline.

        • MediaPipelineId — (String)

          The ID of a media pipeline.

        • MediaPipelineArn — (String)

          The ARN of the media capture pipeline.

        • SourceType — (String)

          Source type from which media artifacts are saved. You must use ChimeSdkMeeting.

          Possible values include:
          • "ChimeSdkMeeting"
        • SourceArn — (String)

          ARN of the source from which the media artifacts are saved.

        • Status — (String)

          The status of the media pipeline.

          Possible values include:
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
          • "Paused"
          • "NotStarted"
        • SinkType — (String)

          Destination type to which the media artifacts are saved. You must use an S3 Bucket.

          Possible values include:
          • "S3Bucket"
        • SinkArn — (String)

          ARN of the destination to which the media artifacts are saved.

        • CreatedTimestamp — (Date)

          The time at which the pipeline was created, in ISO 8601 format.

        • UpdatedTimestamp — (Date)

          The time at which the pipeline was updated, in ISO 8601 format.

        • ChimeSdkMeetingConfiguration — (map)

          The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting.

          • SourceConfiguration — (map)

            The source configuration for a specified media pipeline.

            • SelectedVideoStreams — (map)

              The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

              • AttendeeIds — (Array<String>)

                The attendee IDs of the streams selected for a media pipeline.

              • ExternalUserIds — (Array<String>)

                The external user IDs of the streams selected for a media pipeline.

          • ArtifactsConfiguration — (map)

            The configuration for the artifacts in an Amazon Chime SDK meeting.

            • Audio — required — (map)

              The configuration for the audio artifacts.

              • MuxType — required — (String)

                The MUX type of the audio artifact configuration object.

                Possible values include:
                • "AudioOnly"
                • "AudioWithActiveSpeakerVideo"
                • "AudioWithCompositedVideo"
            • Video — required — (map)

              The configuration for the video artifacts.

              • State — required — (String)

                Indicates whether the video artifact is enabled or disabled.

                Possible values include:
                • "Enabled"
                • "Disabled"
              • MuxType — (String)

                The MUX type of the video artifact configuration object.

                Possible values include:
                • "VideoOnly"
            • Content — required — (map)

              The configuration for the content artifacts.

              • State — required — (String)

                Indicates whether the content artifact is enabled or disabled.

                Possible values include:
                • "Enabled"
                • "Disabled"
              • MuxType — (String)

                The MUX type of the artifact configuration.

                Possible values include:
                • "ContentOnly"
            • CompositedVideo — (map)

              Enables video compositing.

              • Layout — (String)

                The layout setting, such as GridView in the configuration object.

                Possible values include:
                • "GridView"
              • Resolution — (String)

                The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

                Possible values include:
                • "HD"
                • "FHD"
              • GridViewConfiguration — required — (map)

                The GridView configuration setting.

                • ContentShareLayout — required — (String)

                  Defines the layout of the video tiles when content sharing is enabled.

                  Possible values include:
                  • "PresenterOnly"
                  • "Horizontal"
                  • "Vertical"
                  • "ActiveSpeakerOnly"
                • PresenterOnlyConfiguration — (map)

                  Defines the configuration options for a presenter only video tile.

                  • PresenterPosition — (String)

                    Defines the position of the presenter video tile. Default: TopRight.

                    Possible values include:
                    • "TopLeft"
                    • "TopRight"
                    • "BottomLeft"
                    • "BottomRight"
                • ActiveSpeakerOnlyConfiguration — (map)

                  The configuration settings for an ActiveSpeakerOnly video tile.

                  • ActiveSpeakerPosition — (String)

                    The position of the ActiveSpeakerOnly video tile.

                    Possible values include:
                    • "TopLeft"
                    • "TopRight"
                    • "BottomLeft"
                    • "BottomRight"
                • HorizontalLayoutConfiguration — (map)

                  The configuration settings for a horizontal layout.

                  • TileOrder — (String)

                    Sets the automatic ordering of the video tiles.

                    Possible values include:
                    • "JoinSequence"
                    • "SpeakerSequence"
                  • TilePosition — (String)

                    Sets the position of horizontal tiles.

                    Possible values include:
                    • "Top"
                    • "Bottom"
                  • TileCount — (Integer)

                    The maximum number of video tiles to display.

                  • TileAspectRatio — (String)

                    Specifies the aspect ratio of all video tiles.

                • VerticalLayoutConfiguration — (map)

                  The configuration settings for a vertical layout.

                  • TileOrder — (String)

                    Sets the automatic ordering of the video tiles.

                    Possible values include:
                    • "JoinSequence"
                    • "SpeakerSequence"
                  • TilePosition — (String)

                    Sets the position of vertical tiles.

                    Possible values include:
                    • "Left"
                    • "Right"
                  • TileCount — (Integer)

                    The maximum number of tiles to display.

                  • TileAspectRatio — (String)

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VideoAttribute — (map)

                  The attribute settings for the video tiles.

                  • CornerRadius — (Integer)

                    Sets the corner radius of all video tiles.

                  • BorderColor — (String)

                    Defines the border color of all video tiles.

                    Possible values include:
                    • "Black"
                    • "Blue"
                    • "Red"
                    • "Green"
                    • "White"
                    • "Yellow"
                  • HighlightColor — (String)

                    Defines the highlight color for the active video tile.

                    Possible values include:
                    • "Black"
                    • "Blue"
                    • "Red"
                    • "Green"
                    • "White"
                    • "Yellow"
                  • BorderThickness — (Integer)

                    Defines the border thickness for all video tiles.

                • CanvasOrientation — (String)

                  The orientation setting, horizontal or vertical.

                  Possible values include:
                  • "Landscape"
                  • "Portrait"

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

createMediaConcatenationPipeline(params = {}, callback) ⇒ AWS.Request

Creates a media concatenation pipeline.

Examples:

Calling the createMediaConcatenationPipeline operation

var params = {
  Sinks: [ /* required */
    {
      S3BucketSinkConfiguration: { /* required */
        Destination: 'STRING_VALUE' /* required */
      },
      Type: S3Bucket /* required */
    },
    /* more items */
  ],
  Sources: [ /* required */
    {
      MediaCapturePipelineSourceConfiguration: { /* required */
        ChimeSdkMeetingConfiguration: { /* required */
          ArtifactsConfiguration: { /* required */
            Audio: { /* required */
              State: Enabled /* required */
            },
            CompositedVideo: { /* required */
              State: Enabled | Disabled /* required */
            },
            Content: { /* required */
              State: Enabled | Disabled /* required */
            },
            DataChannel: { /* required */
              State: Enabled | Disabled /* required */
            },
            MeetingEvents: { /* required */
              State: Enabled | Disabled /* required */
            },
            TranscriptionMessages: { /* required */
              State: Enabled | Disabled /* required */
            },
            Video: { /* required */
              State: Enabled | Disabled /* required */
            }
          }
        },
        MediaPipelineArn: 'STRING_VALUE' /* required */
      },
      Type: MediaCapturePipeline /* required */
    },
    /* more items */
  ],
  ClientRequestToken: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaConcatenationPipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});
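
The Sources and Sinks arrays above can likewise be assembled programmatically. The helper below is a hypothetical sketch: the media pipeline ARN and S3 destination are placeholders, and the artifacts configuration simply enables every concatenation artifact type shown in the example.

```javascript
// Hypothetical helper that builds a concatenation source from a media
// capture pipeline ARN, enabling all artifact types. ARNs are placeholders.
function buildConcatenationSource(capturePipelineArn) {
  return {
    Type: 'MediaCapturePipeline',
    MediaCapturePipelineSourceConfiguration: {
      MediaPipelineArn: capturePipelineArn,
      ChimeSdkMeetingConfiguration: {
        ArtifactsConfiguration: {
          Audio: { State: 'Enabled' },   // audio only supports 'Enabled'
          Video: { State: 'Enabled' },
          Content: { State: 'Enabled' },
          DataChannel: { State: 'Enabled' },
          MeetingEvents: { State: 'Enabled' },
          TranscriptionMessages: { State: 'Enabled' },
          CompositedVideo: { State: 'Enabled' }
        }
      }
    }
  };
}

var params = {
  Sources: [buildConcatenationSource(
    'arn:aws:chime:us-east-1:111122223333:media-pipeline/example-id')],
  Sinks: [{
    Type: 'S3Bucket',
    S3BucketSinkConfiguration: { Destination: 'arn:aws:s3:::example-bucket' }
  }]
};
// chimesdkmediapipelines.createMediaConcatenationPipeline(params, callback);
```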

Parameters:

  • params (Object) (defaults to: {})
    • Sources — (Array<map>)

      An object that specifies the sources for the media concatenation pipeline.

      • Type — required — (String)

        The type of concatenation source in a configuration object.

        Possible values include:
        • "MediaCapturePipeline"
      • MediaCapturePipelineSourceConfigurationrequired — (map)

        The concatenation settings for the media pipeline in a configuration object.

        • MediaPipelineArnrequired — (String)

          The media pipeline ARN in the configuration object of a media capture pipeline.

        • ChimeSdkMeetingConfigurationrequired — (map)

          The meeting configuration settings in a media capture pipeline configuration object.

          • ArtifactsConfigurationrequired — (map)

            The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.

            • Audiorequired — (map)

              The configuration for the audio artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
            • Videorequired — (map)

              The configuration for the video artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
                • "Disabled"
            • Contentrequired — (map)

              The configuration for the content artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
                • "Disabled"
            • DataChannelrequired — (map)

              The configuration for the data channel artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
                • "Disabled"
            • TranscriptionMessagesrequired — (map)

              The configuration for the transcription messages artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
                • "Disabled"
            • MeetingEventsrequired — (map)

              The configuration for the meeting events artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
                • "Disabled"
            • CompositedVideorequired — (map)

              The configuration for the composited video artifacts concatenation.

              • Staterequired — (String)

                Enables or disables the configuration object.

                Possible values include:
                • "Enabled"
                • "Disabled"
    • Sinks — (Array<map>)

      An object that specifies the data sinks for the media concatenation pipeline.

      • Typerequired — (String)

        The type of data sink in the configuration object.

        Possible values include:
        • "S3Bucket"
      • S3BucketSinkConfigurationrequired — (map)

        The configuration settings for an Amazon S3 bucket sink.

        • Destinationrequired — (String)

          The destination URL of the S3 bucket.

    • ClientRequestToken — (String)

      The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media concatenation pipeline request.

      If a token is not provided, the SDK will use a version 4 UUID.
    • Tags — (Array<map>)

      The tags associated with the media concatenation pipeline.

      • Keyrequired — (String)

        The key half of a tag.

      • Valuerequired — (String)

        The value half of a tag.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaConcatenationPipeline — (map)

        The media concatenation pipeline object, including its ID, source type, MediaPipelineARN, and sink.

        • MediaPipelineId — (String)

          The ID of the media pipeline being concatenated.

        • MediaPipelineArn — (String)

          The ARN of the media pipeline that you specify in the SourceConfiguration object.

        • Sources — (Array<map>)

          The data sources being concatenated.

          • Type — required — (String)

            The type of concatenation source in a configuration object.

            Possible values include:
            • "MediaCapturePipeline"
          • MediaCapturePipelineSourceConfiguration — required — (map)

            The concatenation settings for the media pipeline in a configuration object.

            • MediaPipelineArn — required — (String)

              The media pipeline ARN in the configuration object of a media capture pipeline.

            • ChimeSdkMeetingConfiguration — required — (map)

              The meeting configuration settings in a media capture pipeline configuration object.

              • ArtifactsConfiguration — required — (map)

                The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.

                • Audio — required — (map)

                  The configuration for the audio artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                • Video — required — (map)

                  The configuration for the video artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                    • "Disabled"
                • Content — required — (map)

                  The configuration for the content artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                    • "Disabled"
                • DataChannel — required — (map)

                  The configuration for the data channel artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                    • "Disabled"
                • TranscriptionMessages — required — (map)

                  The configuration for the transcription messages artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                    • "Disabled"
                • MeetingEvents — required — (map)

                  The configuration for the meeting events artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                    • "Disabled"
                • CompositedVideo — required — (map)

                  The configuration for the composited video artifacts concatenation.

                  • State — required — (String)

                    Enables or disables the configuration object.

                    Possible values include:
                    • "Enabled"
                    • "Disabled"
        • Sinks — (Array<map>)

          The data sinks of the concatenation pipeline.

          • Type — required — (String)

            The type of data sink in the configuration object.

            Possible values include:
            • "S3Bucket"
          • S3BucketSinkConfiguration — required — (map)

            The configuration settings for an Amazon S3 bucket sink.

            • Destination — required — (String)

              The destination URL of the S3 bucket.

        • Status — (String)

          The status of the concatenation pipeline.

          Possible values include:
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
          • "Paused"
          • "NotStarted"
        • CreatedTimestamp — (Date)

          The time at which the concatenation pipeline was created.

        • UpdatedTimestamp — (Date)

          The time at which the concatenation pipeline was last updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
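
The required parameters above can be sketched as a small helper that builds the request object before it is passed to createMediaConcatenationPipeline. The ARN, bucket, and token values below are hypothetical placeholders, and which artifact types you enable depends on what the source capture pipeline produced.

```javascript
// Sketch: assemble a minimal createMediaConcatenationPipeline params object.
function buildConcatenationParams(sourcePipelineArn, s3Destination, token) {
  // Every artifact type appears in ArtifactsConfiguration; Audio only
  // supports 'Enabled', the others accept 'Enabled' or 'Disabled'.
  var artifacts = {
    Audio: { State: 'Enabled' },
    Video: { State: 'Enabled' },
    Content: { State: 'Enabled' },
    DataChannel: { State: 'Enabled' },
    TranscriptionMessages: { State: 'Disabled' },
    MeetingEvents: { State: 'Enabled' },
    CompositedVideo: { State: 'Disabled' }
  };
  return {
    Sources: [{
      Type: 'MediaCapturePipeline',
      MediaCapturePipelineSourceConfiguration: {
        MediaPipelineArn: sourcePipelineArn,
        ChimeSdkMeetingConfiguration: { ArtifactsConfiguration: artifacts }
      }
    }],
    Sinks: [{
      Type: 'S3Bucket',
      S3BucketSinkConfiguration: { Destination: s3Destination }
    }],
    // Reusing the same token on a retry keeps the request idempotent.
    ClientRequestToken: token
  };
}

var params = buildConcatenationParams(
  'arn:aws:chime:us-east-1:111122223333:media-pipeline/example-id', // hypothetical
  'arn:aws:s3:::example-concatenation-bucket',                      // hypothetical
  'concat-request-001'
);
// chimesdkmediapipelines.createMediaConcatenationPipeline(params, function(err, data) { ... });
```

If you omit the callback, remember to call send() (or promise()) on the returned AWS.Request to actually issue the request.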

createMediaInsightsPipeline(params = {}, callback) ⇒ AWS.Request

Creates a media insights pipeline.

Examples:

Calling the createMediaInsightsPipeline operation

var params = {
  MediaInsightsPipelineConfigurationArn: 'STRING_VALUE', /* required */
  ClientRequestToken: 'STRING_VALUE',
  KinesisVideoStreamRecordingSourceRuntimeConfiguration: {
    FragmentSelector: { /* required */
      FragmentSelectorType: ProducerTimestamp | ServerTimestamp, /* required */
      TimestampRange: { /* required */
        EndTimestamp: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789, /* required */
        StartTimestamp: new Date || 'Wed Dec 31 1969 16:00:00 GMT-0800 (PST)' || 123456789 /* required */
      }
    },
    Streams: [ /* required */
      {
        StreamArn: 'STRING_VALUE'
      },
      /* more items */
    ]
  },
  KinesisVideoStreamSourceRuntimeConfiguration: {
    MediaEncoding: pcm, /* required */
    MediaSampleRate: 'NUMBER_VALUE', /* required */
    Streams: [ /* required */
      {
        StreamArn: 'STRING_VALUE', /* required */
        StreamChannelDefinition: { /* required */
          NumberOfChannels: 'NUMBER_VALUE', /* required */
          ChannelDefinitions: [
            {
              ChannelId: 'NUMBER_VALUE', /* required */
              ParticipantRole: AGENT | CUSTOMER
            },
            /* more items */
          ]
        },
        FragmentNumber: 'STRING_VALUE'
      },
      /* more items */
    ]
  },
  MediaInsightsRuntimeMetadata: {
    '<NonEmptyString>': 'STRING_VALUE',
    /* '<NonEmptyString>': ... */
  },
  S3RecordingSinkRuntimeConfiguration: {
    Destination: 'STRING_VALUE', /* required */
    RecordingFileFormat: Wav | Opus /* required */
  },
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaInsightsPipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • MediaInsightsPipelineConfigurationArn — (String)

      The ARN of the pipeline's configuration.

    • KinesisVideoStreamSourceRuntimeConfiguration — (map)

      The runtime configuration for the Kinesis video stream source of the media insights pipeline.

      • Streams — required — (Array<map>)

        The streams in the source runtime configuration of a Kinesis video stream.

        • StreamArn — required — (String)

          The ARN of the stream.

        • FragmentNumber — (String)

          The unique identifier of the fragment to begin processing.

        • StreamChannelDefinition — required — (map)

          The streaming channel definition in the stream configuration.

          • NumberOfChannels — required — (Integer)

            The number of channels in a streaming channel.

          • ChannelDefinitions — (Array<map>)

            The definitions of the channels in a streaming channel.

            • ChannelId — required — (Integer)

              The channel ID.

            • ParticipantRole — (String)

              Specifies whether the audio in a channel belongs to the AGENT or CUSTOMER.

              Possible values include:
              • "AGENT"
              • "CUSTOMER"
      • MediaEncoding — required — (String)

        Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

        For more information, see Media formats in the Amazon Transcribe Developer Guide.

        Possible values include:
        • "pcm"
      • MediaSampleRate — required — (Integer)

        The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

        Valid Range: Minimum value of 8000. Maximum value of 48000.

    • MediaInsightsRuntimeMetadata — (map<String>)

      The runtime metadata for the media insights pipeline. Consists of a key-value map of strings.

    • KinesisVideoStreamRecordingSourceRuntimeConfiguration — (map)

      The runtime configuration for the Kinesis video recording stream source.

      • Streams — required — (Array<map>)

        The stream or streams to be recorded.

        • StreamArn — (String)

          The ARN of the recording stream.

      • FragmentSelector — required — (map)

        Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.

        • FragmentSelectorType — required — (String)

          The origin of the timestamps to use, Server or Producer. For more information, see StartSelectorType in the Amazon Kinesis Video Streams Developer Guide.

          Possible values include:
          • "ProducerTimestamp"
          • "ServerTimestamp"
        • TimestampRange — required — (map)

          The range of timestamps to return.

          • StartTimestamp — required — (Date)

            The starting timestamp for the specified range.

          • EndTimestamp — required — (Date)

            The ending timestamp for the specified range.

    • S3RecordingSinkRuntimeConfiguration — (map)

      The runtime configuration for the S3 recording sink. If specified, the settings in this structure override any settings in S3RecordingSinkConfiguration.

      • Destination — required — (String)

        The URI of the S3 bucket used as the sink.

      • RecordingFileFormat — required — (String)

        The file format for the media files sent to the Amazon S3 bucket.

        Possible values include:
        • "Wav"
        • "Opus"
    • Tags — (Array<map>)

      The tags assigned to the media insights pipeline.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

    • ClientRequestToken — (String)

      The unique identifier for the media insights pipeline request.

      If a token is not provided, the SDK will use a version 4 UUID.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaInsightsPipeline — (map)

        The media insights pipeline object.

        • MediaPipelineId — (String)

          The ID of a media insights pipeline.

        • MediaPipelineArn — (String)

          The ARN of a media insights pipeline.

        • MediaInsightsPipelineConfigurationArn — (String)

          The ARN of a media insights pipeline's configuration settings.

        • Status — (String)

          The status of a media insights pipeline.

          Possible values include:
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
          • "Paused"
          • "NotStarted"
        • KinesisVideoStreamSourceRuntimeConfiguration — (map)

          The configuration settings for a Kinesis runtime video stream in a media insights pipeline.

          • Streams — required — (Array<map>)

            The streams in the source runtime configuration of a Kinesis video stream.

            • StreamArn — required — (String)

              The ARN of the stream.

            • FragmentNumber — (String)

              The unique identifier of the fragment to begin processing.

            • StreamChannelDefinition — required — (map)

              The streaming channel definition in the stream configuration.

              • NumberOfChannels — required — (Integer)

                The number of channels in a streaming channel.

              • ChannelDefinitions — (Array<map>)

                The definitions of the channels in a streaming channel.

                • ChannelId — required — (Integer)

                  The channel ID.

                • ParticipantRole — (String)

                  Specifies whether the audio in a channel belongs to the AGENT or CUSTOMER.

                  Possible values include:
                  • "AGENT"
                  • "CUSTOMER"
          • MediaEncoding — required — (String)

            Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

            For more information, see Media formats in the Amazon Transcribe Developer Guide.

            Possible values include:
            • "pcm"
          • MediaSampleRate — required — (Integer)

            The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

            Valid Range: Minimum value of 8000. Maximum value of 48000.

        • MediaInsightsRuntimeMetadata — (map<String>)

          The runtime metadata of a media insights pipeline.

        • KinesisVideoStreamRecordingSourceRuntimeConfiguration — (map)

          The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline.

          • Streams — required — (Array<map>)

            The stream or streams to be recorded.

            • StreamArn — (String)

              The ARN of the recording stream.

          • FragmentSelector — required — (map)

            Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.

            • FragmentSelectorType — required — (String)

              The origin of the timestamps to use, Server or Producer. For more information, see StartSelectorType in the Amazon Kinesis Video Streams Developer Guide.

              Possible values include:
              • "ProducerTimestamp"
              • "ServerTimestamp"
            • TimestampRange — required — (map)

              The range of timestamps to return.

              • StartTimestamp — required — (Date)

                The starting timestamp for the specified range.

              • EndTimestamp — required — (Date)

                The ending timestamp for the specified range.

        • S3RecordingSinkRuntimeConfiguration — (map)

          The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline.

          • Destination — required — (String)

            The URI of the S3 bucket used as the sink.

          • RecordingFileFormat — required — (String)

            The file format for the media files sent to the Amazon S3 bucket.

            Possible values include:
            • "Wav"
            • "Opus"
        • CreatedTimestamp — (Date)

          The time at which the media insights pipeline was created.

        • ElementStatuses — (Array<map>)

          The statuses that the elements in a media insights pipeline can have during data processing.

          • Type — (String)

            The type of status.

            Possible values include:
            • "AmazonTranscribeCallAnalyticsProcessor"
            • "VoiceAnalyticsProcessor"
            • "AmazonTranscribeProcessor"
            • "KinesisDataStreamSink"
            • "LambdaFunctionSink"
            • "SqsQueueSink"
            • "SnsTopicSink"
            • "S3RecordingSink"
            • "VoiceEnhancementSink"
          • Status — (String)

            The element's status.

            Possible values include:
            • "NotStarted"
            • "NotSupported"
            • "Initializing"
            • "InProgress"
            • "Failed"
            • "Stopping"
            • "Stopped"
            • "Paused"

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
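
As a sketch of the FragmentSelector and TimestampRange parameters above, a small helper can build the KinesisVideoStreamRecordingSourceRuntimeConfiguration map for a fixed recording window. The stream ARN and the five-minute window are hypothetical.

```javascript
// Sketch: build the recording source runtime configuration covering a
// window of `windowMs` milliseconds ending at `endTime`.
function buildRecordingSourceConfig(streamArn, endTime, windowMs) {
  return {
    Streams: [{ StreamArn: streamArn }],
    FragmentSelector: {
      // Select fragments by server-side arrival time; use
      // 'ProducerTimestamp' to select by producer-written timestamps.
      FragmentSelectorType: 'ServerTimestamp',
      TimestampRange: {
        StartTimestamp: new Date(endTime.getTime() - windowMs),
        EndTimestamp: endTime
      }
    }
  };
}

var recordingConfig = buildRecordingSourceConfig(
  'arn:aws:kinesisvideo:us-east-1:111122223333:stream/example-stream', // hypothetical
  new Date('2024-01-01T00:05:00Z'),
  5 * 60 * 1000 // five minutes
);
// recordingConfig would be passed as the
// KinesisVideoStreamRecordingSourceRuntimeConfiguration field of the
// createMediaInsightsPipeline params.
```

Because StartTimestamp and EndTimestamp are Date values, deriving one from the other this way keeps the range consistent without hand-formatting timestamps.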

createMediaInsightsPipelineConfiguration(params = {}, callback) ⇒ AWS.Request

Creates a media insights pipeline configuration, the structure that contains the static configurations for a media insights pipeline.

Examples:

Calling the createMediaInsightsPipelineConfiguration operation

var params = {
  Elements: [ /* required */
    {
      Type: AmazonTranscribeCallAnalyticsProcessor | VoiceAnalyticsProcessor | AmazonTranscribeProcessor | KinesisDataStreamSink | LambdaFunctionSink | SqsQueueSink | SnsTopicSink | S3RecordingSink | VoiceEnhancementSink, /* required */
      AmazonTranscribeCallAnalyticsProcessorConfiguration: {
        LanguageCode: en-US | en-GB | es-US | fr-CA | fr-FR | en-AU | it-IT | de-DE | pt-BR, /* required */
        CallAnalyticsStreamCategories: [
          'STRING_VALUE',
          /* more items */
        ],
        ContentIdentificationType: PII,
        ContentRedactionType: PII,
        EnablePartialResultsStabilization: true || false,
        FilterPartialResults: true || false,
        LanguageModelName: 'STRING_VALUE',
        PartialResultsStability: high | medium | low,
        PiiEntityTypes: 'STRING_VALUE',
        PostCallAnalyticsSettings: {
          DataAccessRoleArn: 'STRING_VALUE', /* required */
          OutputLocation: 'STRING_VALUE', /* required */
          ContentRedactionOutput: redacted | redacted_and_unredacted,
          OutputEncryptionKMSKeyId: 'STRING_VALUE'
        },
        VocabularyFilterMethod: remove | mask | tag,
        VocabularyFilterName: 'STRING_VALUE',
        VocabularyName: 'STRING_VALUE'
      },
      AmazonTranscribeProcessorConfiguration: {
        ContentIdentificationType: PII,
        ContentRedactionType: PII,
        EnablePartialResultsStabilization: true || false,
        FilterPartialResults: true || false,
        IdentifyLanguage: true || false,
        LanguageCode: en-US | en-GB | es-US | fr-CA | fr-FR | en-AU | it-IT | de-DE | pt-BR,
        LanguageModelName: 'STRING_VALUE',
        LanguageOptions: 'STRING_VALUE',
        PartialResultsStability: high | medium | low,
        PiiEntityTypes: 'STRING_VALUE',
        PreferredLanguage: en-US | en-GB | es-US | fr-CA | fr-FR | en-AU | it-IT | de-DE | pt-BR,
        ShowSpeakerLabel: true || false,
        VocabularyFilterMethod: remove | mask | tag,
        VocabularyFilterName: 'STRING_VALUE',
        VocabularyFilterNames: 'STRING_VALUE',
        VocabularyName: 'STRING_VALUE',
        VocabularyNames: 'STRING_VALUE'
      },
      KinesisDataStreamSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      LambdaFunctionSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      S3RecordingSinkConfiguration: {
        Destination: 'STRING_VALUE',
        RecordingFileFormat: Wav | Opus
      },
      SnsTopicSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      SqsQueueSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      VoiceAnalyticsProcessorConfiguration: {
        SpeakerSearchStatus: Enabled | Disabled,
        VoiceToneAnalysisStatus: Enabled | Disabled
      },
      VoiceEnhancementSinkConfiguration: {
        Disabled: true || false
      }
    },
    /* more items */
  ],
  MediaInsightsPipelineConfigurationName: 'STRING_VALUE', /* required */
  ResourceAccessRoleArn: 'STRING_VALUE', /* required */
  ClientRequestToken: 'STRING_VALUE',
  RealTimeAlertConfiguration: {
    Disabled: true || false,
    Rules: [
      {
        Type: KeywordMatch | Sentiment | IssueDetection, /* required */
        IssueDetectionConfiguration: {
          RuleName: 'STRING_VALUE' /* required */
        },
        KeywordMatchConfiguration: {
          Keywords: [ /* required */
            'STRING_VALUE',
            /* more items */
          ],
          RuleName: 'STRING_VALUE', /* required */
          Negate: true || false
        },
        SentimentConfiguration: {
          RuleName: 'STRING_VALUE', /* required */
          SentimentType: NEGATIVE, /* required */
          TimePeriod: 'NUMBER_VALUE' /* required */
        }
      },
      /* more items */
    ]
  },
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaInsightsPipelineConfiguration(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • MediaInsightsPipelineConfigurationName — (String)

      The name of the media insights pipeline configuration.

    • ResourceAccessRoleArn — (String)

      The ARN of the role used by the service to access Amazon Web Services resources, including Transcribe and Transcribe Call Analytics, on the caller’s behalf.

    • RealTimeAlertConfiguration — (map)

      The configuration settings for the real-time alerts in a media insights pipeline configuration.

      • Disabled — (Boolean)

        Turns off real-time alerts.

      • Rules — (Array<map>)

        The rules in the alert. Rules specify the words or phrases that you want to be notified about.

        • Type — required — (String)

          The type of alert rule.

          Possible values include:
          • "KeywordMatch"
          • "Sentiment"
          • "IssueDetection"
        • KeywordMatchConfiguration — (map)

          Specifies the settings for matching the keywords in a real-time alert rule.

          • RuleName — required — (String)

            The name of the keyword match rule.

          • Keywords — required — (Array<String>)

            The keywords or phrases that you want to match.

          • Negate — (Boolean)

            Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

        • SentimentConfiguration — (map)

          Specifies the settings for predicting sentiment in a real-time alert rule.

          • RuleName — required — (String)

            The name of the rule in the sentiment configuration.

          • SentimentType — required — (String)

            The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL.

            Possible values include:
            • "NEGATIVE"
          • TimePeriod — required — (Integer)

            Specifies the analysis interval.

        • IssueDetectionConfiguration — (map)

          Specifies the issue detection settings for a real-time alert rule.

          • RuleName — required — (String)

            The name of the issue detection rule.

    • Elements — (Array<map>)

      The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream.

      • Type — required — (String)

        The element type.

        Possible values include:
        • "AmazonTranscribeCallAnalyticsProcessor"
        • "VoiceAnalyticsProcessor"
        • "AmazonTranscribeProcessor"
        • "KinesisDataStreamSink"
        • "LambdaFunctionSink"
        • "SqsQueueSink"
        • "SnsTopicSink"
        • "S3RecordingSink"
        • "VoiceEnhancementSink"
      • AmazonTranscribeCallAnalyticsProcessorConfiguration — (map)

        The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

        • LanguageCode — required — (String)

          The language code in the configuration.

          Possible values include:
          • "en-US"
          • "en-GB"
          • "es-US"
          • "fr-CA"
          • "fr-FR"
          • "en-AU"
          • "it-IT"
          • "de-DE"
          • "pt-BR"
        • VocabularyName — (String)

          Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

          If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

          For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterName — (String)

          Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

          If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

          For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterMethod — (String)

          Specifies how to apply a vocabulary filter to a transcript.

          To replace words with ***, choose mask.

          To delete words, choose remove.

          To flag words without changing them, choose tag.

          Possible values include:
          • "remove"
          • "mask"
          • "tag"
        • LanguageModelName — (String)

          Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

          The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

          For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        • EnablePartialResultsStabilization — (Boolean)

          Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • PartialResultsStability — (String)

          Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

          Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "high"
          • "medium"
          • "low"
        • ContentIdentificationType — (String)

          Labels all personally identifiable information (PII) identified in your transcript.

          Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

          You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • ContentRedactionType — (String)

          Redacts all personally identifiable information (PII) identified in your transcript.

          Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

          You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • PiiEntityTypes — (String)

          Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

          To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

          Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

          Length Constraints: Minimum length of 1. Maximum length of 300.

        • FilterPartialResults — (Boolean)

          If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

        • PostCallAnalyticsSettings — (map)

          The settings for a post-call analysis task in an analytics configuration.

          • OutputLocation — required — (String)

            The URL of the Amazon S3 bucket that contains the post-call data.

          • DataAccessRoleArn — required — (String)

            The ARN of the role used by Amazon Web Services Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

          • ContentRedactionOutput — (String)

            The content redaction output settings for a post-call analysis task.

            Possible values include:
            • "redacted"
            • "redacted_and_unredacted"
          • OutputEncryptionKMSKeyId — (String)

            The ID of the KMS (Key Management Service) key used to encrypt the output.

        • CallAnalyticsStreamCategories — (Array<String>)

          By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

      • AmazonTranscribeProcessorConfiguration — (map)

        The transcription processor configuration settings in a media insights pipeline configuration element.

        • LanguageCode — (String)

          The language code that represents the language spoken in your audio.

          If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

          For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "en-US"
          • "en-GB"
          • "es-US"
          • "fr-CA"
          • "fr-FR"
          • "en-AU"
          • "it-IT"
          • "de-DE"
          • "pt-BR"
        • VocabularyName — (String)

          The name of the custom vocabulary that you specified in your Call Analytics request.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterName — (String)

          The name of the custom vocabulary filter that you specified in your Call Analytics request.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterMethod — (String)

          The vocabulary filtering method used in your Call Analytics transcription.

          Possible values include:
          • "remove"
          • "mask"
          • "tag"
        • ShowSpeakerLabel — (Boolean)

          Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

          For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

        • EnablePartialResultsStabilization — (Boolean)

          Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • PartialResultsStability — (String)

          The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

          Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "high"
          • "medium"
          • "low"
        • ContentIdentificationType — (String)

          Labels all personally identifiable information (PII) identified in your transcript.

          Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

          You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • ContentRedactionType — (String)

          Redacts all personally identifiable information (PII) identified in your transcript.

          Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

          You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • PiiEntityTypes — (String)

          The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

          To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

          Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

          If you leave this parameter empty, the default behavior is equivalent to ALL.

        • LanguageModelName — (String)

          The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

          The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

          For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        • FilterPartialResults — (Boolean)

          If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

        • IdentifyLanguage — (Boolean)

          Turns language identification on or off.

        • LanguageOptions — (String)

          The language options for the transcription, such as automatic language detection.

        • PreferredLanguage — (String)

          The preferred language for the transcription.

          Possible values include:
          • "en-US"
          • "en-GB"
          • "es-US"
          • "fr-CA"
          • "fr-FR"
          • "en-AU"
          • "it-IT"
          • "de-DE"
          • "pt-BR"
        • VocabularyNames — (String)

          The names of the custom vocabulary or vocabularies used during transcription.

        • VocabularyFilterNames — (String)

          The names of the custom vocabulary filter or filters used during transcription.

      • KinesisDataStreamSinkConfiguration — (map)

        The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the sink.

      • S3RecordingSinkConfiguration — (map)

        The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

        • Destination — (String)

          The default URI of the Amazon S3 bucket used as the recording sink.

        • RecordingFileFormat — (String)

          The default file format for the media files sent to the Amazon S3 bucket.

          Possible values include:
          • "Wav"
          • "Opus"
      • VoiceAnalyticsProcessorConfiguration — (map)

        The voice analytics configuration settings in a media insights pipeline configuration element.

        • SpeakerSearchStatus — (String)

          The status of the speaker search task.

          Possible values include:
          • "Enabled"
          • "Disabled"
        • VoiceToneAnalysisStatus — (String)

          The status of the voice tone analysis task.

          Possible values include:
          • "Enabled"
          • "Disabled"
      • LambdaFunctionSinkConfiguration — (map)

        The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the sink.

      • SqsQueueSinkConfiguration — (map)

        The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the SQS sink.

      • SnsTopicSinkConfiguration — (map)

        The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the SNS sink.

      • VoiceEnhancementSinkConfiguration — (map)

        The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

        • Disabled — (Boolean)

          Disables the VoiceEnhancementSinkConfiguration element.

    • Tags — (Array<map>)

      The tags assigned to the media insights pipeline configuration.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

    • ClientRequestToken — (String)

      The unique identifier for the media insights pipeline configuration request.

      If a token is not provided, the SDK will use a version 4 UUID.
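
The request parameters above can be sketched as a minimal params object that pairs an Amazon Transcribe processor element with a Kinesis Data Stream sink. All names and ARNs below are illustrative placeholders, not real resources:

```javascript
// Minimal params sketch for a media insights pipeline configuration.
// ARNs and names are placeholders for illustration only.
var params = {
  MediaInsightsPipelineConfigurationName: 'ExampleConfiguration',
  ResourceAccessRoleArn: 'arn:aws:iam::111122223333:role/ExampleRole',
  Elements: [
    {
      Type: 'AmazonTranscribeProcessor',
      AmazonTranscribeProcessorConfiguration: {
        LanguageCode: 'en-US',
        FilterPartialResults: true // drop TranscriptEvents with IsPartial: true
      }
    },
    {
      Type: 'KinesisDataStreamSink',
      KinesisDataStreamSinkConfiguration: {
        InsightsTarget: 'arn:aws:kinesis:us-east-1:111122223333:stream/ExampleStream'
      }
    }
  ],
  Tags: [{ Key: 'project', Value: 'example' }]
  // ClientRequestToken omitted: the SDK supplies a version 4 UUID by default
};
```

Each element in Elements names its Type and carries only the matching configuration map; sinks reference their target by ARN in InsightsTarget.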

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaInsightsPipelineConfiguration — (map)

        The configuration settings for the media insights pipeline.

        • MediaInsightsPipelineConfigurationName — (String)

          The name of the configuration.

        • MediaInsightsPipelineConfigurationArn — (String)

          The ARN of the configuration.

        • ResourceAccessRoleArn — (String)

          The ARN of the role used by the service to access Amazon Web Services resources.

        • RealTimeAlertConfiguration — (map)

          Lists the rules that trigger a real-time alert.

          • Disabled — (Boolean)

            Turns off real-time alerts.

          • Rules — (Array<map>)

            The rules in the alert. Rules specify the words or phrases that you want to be notified about.

            • Type — required — (String)

              The type of alert rule.

              Possible values include:
              • "KeywordMatch"
              • "Sentiment"
              • "IssueDetection"
            • KeywordMatchConfiguration — (map)

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName — required — (String)

                The name of the keyword match rule.

              • Keywords — required — (Array<String>)

                The keywords or phrases that you want to match.

              • Negate — (Boolean)

                Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

            • SentimentConfiguration — (map)

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName — required — (String)

                The name of the rule in the sentiment configuration.

              • SentimentType — required — (String)

                The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL.

                Possible values include:
                • "NEGATIVE"
              • TimePeriod — required — (Integer)

                Specifies the analysis interval.

            • IssueDetectionConfiguration — (map)

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName — required — (String)

                The name of the issue detection rule.

        • Elements — (Array<map>)

          The elements in the configuration.

          • Type — required — (String)

            The element type.

            Possible values include:
            • "AmazonTranscribeCallAnalyticsProcessor"
            • "VoiceAnalyticsProcessor"
            • "AmazonTranscribeProcessor"
            • "KinesisDataStreamSink"
            • "LambdaFunctionSink"
            • "SqsQueueSink"
            • "SnsTopicSink"
            • "S3RecordingSink"
            • "VoiceEnhancementSink"
          • AmazonTranscribeCallAnalyticsProcessorConfiguration — (map)

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode — required — (String)

              The language code in the configuration.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyName — (String)

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName — (String)

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod — (String)

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with ***, choose mask.

              To delete words, choose remove.

              To flag words without changing them, choose tag.

              Possible values include:
              • "remove"
              • "mask"
              • "tag"
            • LanguageModelName — (String)

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization — (Boolean)

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability — (String)

              Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "high"
              • "medium"
              • "low"
            • ContentIdentificationType — (String)

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • ContentRedactionType — (String)

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • PiiEntityTypes — (String)

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults — (Boolean)

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings — (map)

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation — required — (String)

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn — required — (String)

                The ARN of the role used by Amazon Web Services Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

              • ContentRedactionOutput — (String)

                The content redaction output settings for a post-call analysis task.

                Possible values include:
                • "redacted"
                • "redacted_and_unredacted"
              • OutputEncryptionKMSKeyId — (String)

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories — (Array<String>)

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

          • AmazonTranscribeProcessorConfiguration — (map)

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode — (String)

              The language code that represents the language spoken in your audio.

              If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyName — (String)

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName — (String)

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod — (String)

              The vocabulary filtering method used in your Call Analytics transcription.

              Possible values include:
              • "remove"
              • "mask"
              • "tag"
            • ShowSpeakerLabel — (Boolean)

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization — (Boolean)

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability — (String)

              The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "high"
              • "medium"
              • "low"
            • ContentIdentificationType — (String)

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • ContentRedactionType — (String)

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • PiiEntityTypes — (String)

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              If you leave this parameter empty, the default behavior is equivalent to ALL.

            • LanguageModelName — (String)

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • FilterPartialResults — (Boolean)

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage — (Boolean)

              Turns language identification on or off.

            • LanguageOptions — (String)

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage — (String)

              The preferred language for the transcription.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyNames — (String)

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames — (String)

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration — (map)

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the sink.

          • S3RecordingSinkConfiguration — (map)

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination — (String)

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat — (String)

              The default file format for the media files sent to the Amazon S3 bucket.

              Possible values include:
              • "Wav"
              • "Opus"
          • VoiceAnalyticsProcessorConfiguration — (map)

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus — (String)

              The status of the speaker search task.

              Possible values include:
              • "Enabled"
              • "Disabled"
            • VoiceToneAnalysisStatus — (String)

              The status of the voice tone analysis task.

              Possible values include:
              • "Enabled"
              • "Disabled"
          • LambdaFunctionSinkConfiguration — (map)

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the sink.

          • SqsQueueSinkConfiguration — (map)

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration — (map)

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration — (map)

            The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

            • Disabled — (Boolean)

              Disables the VoiceEnhancementSinkConfiguration element.

        • MediaInsightsPipelineConfigurationId — (String)

          The ID of the configuration.

        • CreatedTimestamp — (Date)

          The time at which the configuration was created.

        • UpdatedTimestamp — (Date)

          The time at which the configuration was last updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
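
The err-first callback contract described above can be sketched with a stub standing in for a live request; fakeCreateConfiguration and the response shape below are illustrative only, not part of the SDK:

```javascript
// Stub that mimics the service's err-first callback contract.
// On success: err is null and data carries the created configuration;
// on failure: err is an Error and data is null.
function fakeCreateConfiguration(params, callback) {
  if (!params || !params.MediaInsightsPipelineConfigurationName) {
    callback(new Error('BadRequestException'), null);
    return;
  }
  callback(null, {
    MediaInsightsPipelineConfiguration: {
      MediaInsightsPipelineConfigurationName: params.MediaInsightsPipelineConfigurationName,
      CreatedTimestamp: new Date()
    }
  });
}

var result;
fakeCreateConfiguration(
  { MediaInsightsPipelineConfigurationName: 'ExampleConfiguration' },
  function (err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else     result = data;               // successful response
  }
);
```

With the real client, the same callback is passed to the operation directly; if the callback is omitted, call send() on the returned AWS.Request to initiate the request.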

createMediaLiveConnectorPipeline(params = {}, callback) ⇒ AWS.Request

Creates a media live connector pipeline in an Amazon Chime SDK meeting.

Examples:

Calling the createMediaLiveConnectorPipeline operation

var params = {
  Sinks: [ /* required */
    {
      RTMPConfiguration: { /* required */
        Url: 'STRING_VALUE', /* required */
        AudioChannels: Stereo | Mono,
        AudioSampleRate: 'STRING_VALUE'
      },
      SinkType: RTMP /* required */
    },
    /* more items */
  ],
  Sources: [ /* required */
    {
      ChimeSdkMeetingLiveConnectorConfiguration: { /* required */
        Arn: 'STRING_VALUE', /* required */
        MuxType: AudioWithCompositedVideo | AudioWithActiveSpeakerVideo, /* required */
        CompositedVideo: {
          GridViewConfiguration: { /* required */
            ContentShareLayout: PresenterOnly | Horizontal | Vertical | ActiveSpeakerOnly, /* required */
            ActiveSpeakerOnlyConfiguration: {
              ActiveSpeakerPosition: TopLeft | TopRight | BottomLeft | BottomRight
            },
            CanvasOrientation: Landscape | Portrait,
            HorizontalLayoutConfiguration: {
              TileAspectRatio: 'STRING_VALUE',
              TileCount: 'NUMBER_VALUE',
              TileOrder: JoinSequence | SpeakerSequence,
              TilePosition: Top | Bottom
            },
            PresenterOnlyConfiguration: {
              PresenterPosition: TopLeft | TopRight | BottomLeft | BottomRight
            },
            VerticalLayoutConfiguration: {
              TileAspectRatio: 'STRING_VALUE',
              TileCount: 'NUMBER_VALUE',
              TileOrder: JoinSequence | SpeakerSequence,
              TilePosition: Left | Right
            },
            VideoAttribute: {
              BorderColor: Black | Blue | Red | Green | White | Yellow,
              BorderThickness: 'NUMBER_VALUE',
              CornerRadius: 'NUMBER_VALUE',
              HighlightColor: Black | Blue | Red | Green | White | Yellow
            }
          },
          Layout: GridView,
          Resolution: HD | FHD
        },
        SourceConfiguration: {
          SelectedVideoStreams: {
            AttendeeIds: [
              'STRING_VALUE',
              /* more items */
            ],
            ExternalUserIds: [
              'STRING_VALUE',
              /* more items */
            ]
          }
        }
      },
      SourceType: ChimeSdkMeeting /* required */
    },
    /* more items */
  ],
  ClientRequestToken: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaLiveConnectorPipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Sources — (Array<map>)

      The media live connector pipeline's data sources.

      • SourceType — required — (String)

        The source configuration's media source type.

        Possible values include:
        • "ChimeSdkMeeting"
      • ChimeSdkMeetingLiveConnectorConfiguration — required — (map)

        The configuration settings of the connector pipeline.

        • Arn — required — (String)

          The configuration object's Chime SDK meeting ARN.

        • MuxType — required — (String)

          The configuration object's multiplex type.

          Possible values include:
          • "AudioWithCompositedVideo"
          • "AudioWithActiveSpeakerVideo"
        • CompositedVideo — (map)

          The media pipeline's composited video.

          • Layout — (String)

            The layout setting, such as GridView in the configuration object.

            Possible values include:
            • "GridView"
          • Resolution — (String)

            The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

            Possible values include:
            • "HD"
            • "FHD"
          • GridViewConfiguration — required — (map)

            The GridView configuration setting.

            • ContentShareLayout — required — (String)

              Defines the layout of the video tiles when content sharing is enabled.

              Possible values include:
              • "PresenterOnly"
              • "Horizontal"
              • "Vertical"
              • "ActiveSpeakerOnly"
            • PresenterOnlyConfiguration — (map)

              Defines the configuration options for a presenter only video tile.

              • PresenterPosition — (String)

                Defines the position of the presenter video tile. Default: TopRight.

                Possible values include:
                • "TopLeft"
                • "TopRight"
                • "BottomLeft"
                • "BottomRight"
            • ActiveSpeakerOnlyConfiguration — (map)

              The configuration settings for an ActiveSpeakerOnly video tile.

              • ActiveSpeakerPosition — (String)

                The position of the ActiveSpeakerOnly video tile.

                Possible values include:
                • "TopLeft"
                • "TopRight"
                • "BottomLeft"
                • "BottomRight"
            • HorizontalLayoutConfiguration — (map)

              The configuration settings for a horizontal layout.

              • TileOrder — (String)

                Sets the automatic ordering of the video tiles.

                Possible values include:
                • "JoinSequence"
                • "SpeakerSequence"
              • TilePosition — (String)

                Sets the position of horizontal tiles.

                Possible values include:
                • "Top"
                • "Bottom"
              • TileCount — (Integer)

                The maximum number of video tiles to display.

              • TileAspectRatio — (String)

                Specifies the aspect ratio of all video tiles.

            • VerticalLayoutConfiguration — (map)

              The configuration settings for a vertical layout.

              • TileOrder — (String)

                Sets the automatic ordering of the video tiles.

                Possible values include:
                • "JoinSequence"
                • "SpeakerSequence"
              • TilePosition — (String)

                Sets the position of vertical tiles.

                Possible values include:
                • "Left"
                • "Right"
              • TileCount — (Integer)

                The maximum number of tiles to display.

              • TileAspectRatio — (String)

                Sets the aspect ratio of the video tiles, such as 16:9.

            • VideoAttribute — (map)

              The attribute settings for the video tiles.

              • CornerRadius — (Integer)

                Sets the corner radius of all video tiles.

              • BorderColor — (String)

                Defines the border color of all video tiles.

                Possible values include:
                • "Black"
                • "Blue"
                • "Red"
                • "Green"
                • "White"
                • "Yellow"
              • HighlightColor — (String)

                Defines the highlight color for the active video tile.

                Possible values include:
                • "Black"
                • "Blue"
                • "Red"
                • "Green"
                • "White"
                • "Yellow"
              • BorderThickness — (Integer)

                Defines the border thickness for all video tiles.

            • CanvasOrientation — (String)

              The canvas orientation, Landscape (horizontal) or Portrait (vertical).

              Possible values include:
              • "Landscape"
              • "Portrait"
        • SourceConfiguration — (map)

          The source configuration settings of the media pipeline's configuration object.

          • SelectedVideoStreams — (map)

            The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

            • AttendeeIds — (Array<String>)

              The attendee IDs of the streams selected for a media pipeline.

            • ExternalUserIds — (Array<String>)

              The external user IDs of the streams selected for a media pipeline.

    • Sinks — (Array<map>)

      The media live connector pipeline's data sinks.

      • SinkType — required — (String)

        The sink configuration's sink type.

        Possible values include:
        • "RTMP"
      • RTMPConfiguration — required — (map)

        The sink configuration's RTMP configuration settings.

        • Url — required — (String)

          The URL of the RTMP configuration.

        • AudioChannels — (String)

          The audio channels set for the RTMP configuration.

          Possible values include:
          • "Stereo"
          • "Mono"
        • AudioSampleRate — (String)

          The audio sample rate set for the RTMP configuration. Default: 48000.

    • ClientRequestToken — (String)

      The token assigned to the client making the request.

      If a token is not provided, the SDK will use a version 4 UUID.
    • Tags — (Array<map>)

      The tags associated with the media live connector pipeline.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaLiveConnectorPipeline — (map)

        The new media live connector pipeline.

        • Sources — (Array<map>)

          The connector pipeline's data sources.

          • SourceType — required — (String)

            The source configuration's media source type.

            Possible values include:
            • "ChimeSdkMeeting"
          • ChimeSdkMeetingLiveConnectorConfiguration — required — (map)

            The configuration settings of the connector pipeline.

            • Arn — required — (String)

              The configuration object's Chime SDK meeting ARN.

            • MuxType — required — (String)

              The configuration object's multiplex type.

              Possible values include:
              • "AudioWithCompositedVideo"
              • "AudioWithActiveSpeakerVideo"
            • CompositedVideo — (map)

              The media pipeline's composited video.

              • Layout — (String)

                The layout setting, such as GridView in the configuration object.

                Possible values include:
                • "GridView"
              • Resolution — (String)

                The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

                Possible values include:
                • "HD"
                • "FHD"
              • GridViewConfiguration — required — (map)

                The GridView configuration setting.

                • ContentShareLayout — required — (String)

                  Defines the layout of the video tiles when content sharing is enabled.

                  Possible values include:
                  • "PresenterOnly"
                  • "Horizontal"
                  • "Vertical"
                  • "ActiveSpeakerOnly"
                • PresenterOnlyConfiguration — (map)

                  Defines the configuration options for a presenter only video tile.

                  • PresenterPosition — (String)

                    Defines the position of the presenter video tile. Default: TopRight.

                    Possible values include:
                    • "TopLeft"
                    • "TopRight"
                    • "BottomLeft"
                    • "BottomRight"
                • ActiveSpeakerOnlyConfiguration — (map)

                  The configuration settings for an ActiveSpeakerOnly video tile.

                  • ActiveSpeakerPosition — (String)

                    The position of the ActiveSpeakerOnly video tile.

                    Possible values include:
                    • "TopLeft"
                    • "TopRight"
                    • "BottomLeft"
                    • "BottomRight"
                • HorizontalLayoutConfiguration — (map)

                  The configuration settings for a horizontal layout.

                  • TileOrder — (String)

                    Sets the automatic ordering of the video tiles.

                    Possible values include:
                    • "JoinSequence"
                    • "SpeakerSequence"
                  • TilePosition — (String)

                    Sets the position of horizontal tiles.

                    Possible values include:
                    • "Top"
                    • "Bottom"
                  • TileCount — (Integer)

                    The maximum number of video tiles to display.

                  • TileAspectRatio — (String)

                    Specifies the aspect ratio of all video tiles.

                • VerticalLayoutConfiguration — (map)

                  The configuration settings for a vertical layout.

                  • TileOrder — (String)

                    Sets the automatic ordering of the video tiles.

                    Possible values include:
                    • "JoinSequence"
                    • "SpeakerSequence"
                  • TilePosition — (String)

                    Sets the position of vertical tiles.

                    Possible values include:
                    • "Left"
                    • "Right"
                  • TileCount — (Integer)

                    The maximum number of tiles to display.

                  • TileAspectRatio — (String)

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VideoAttribute — (map)

                  The attribute settings for the video tiles.

                  • CornerRadius — (Integer)

                    Sets the corner radius of all video tiles.

                  • BorderColor — (String)

                    Defines the border color of all video tiles.

                    Possible values include:
                    • "Black"
                    • "Blue"
                    • "Red"
                    • "Green"
                    • "White"
                    • "Yellow"
                  • HighlightColor — (String)

                    Defines the highlight color for the active video tile.

                    Possible values include:
                    • "Black"
                    • "Blue"
                    • "Red"
                    • "Green"
                    • "White"
                    • "Yellow"
                  • BorderThickness — (Integer)

                    Defines the border thickness for all video tiles.

                • CanvasOrientation — (String)

                  The canvas orientation, Landscape (horizontal) or Portrait (vertical).

                  Possible values include:
                  • "Landscape"
                  • "Portrait"
            • SourceConfiguration — (map)

              The source configuration settings of the media pipeline's configuration object.

              • SelectedVideoStreams — (map)

                The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

                • AttendeeIds — (Array<String>)

                  The attendee IDs of the streams selected for a media pipeline.

                • ExternalUserIds — (Array<String>)

                  The external user IDs of the streams selected for a media pipeline.

        • Sinks — (Array<map>)

          The connector pipeline's data sinks.

          • SinkType — required — (String)

            The sink configuration's sink type.

            Possible values include:
            • "RTMP"
          • RTMPConfiguration — required — (map)

            The sink configuration's RTMP configuration settings.

            • Url — required — (String)

              The URL of the RTMP configuration.

            • AudioChannels — (String)

              The audio channels set for the RTMP configuration.

              Possible values include:
              • "Stereo"
              • "Mono"
            • AudioSampleRate — (String)

              The audio sample rate set for the RTMP configuration. Default: 48000.

        • MediaPipelineId — (String)

          The connector pipeline's ID.

        • MediaPipelineArn — (String)

          The connector pipeline's ARN.

        • Status — (String)

          The connector pipeline's status.

          Possible values include:
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
          • "Paused"
          • "NotStarted"
        • CreatedTimestamp — (Date)

          The time at which the connector pipeline was created.

        • UpdatedTimestamp — (Date)

          The time at which the connector pipeline was last updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
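
A minimal sketch of assembling the request parameters above. The meeting ARN and RTMP endpoint shown are placeholders, not real resources; the helper only builds the params object, so its shape can be inspected before the request is sent:

```javascript
// Builds a minimal params object for createMediaLiveConnectorPipeline.
// The ARN and RTMP URL passed in below are hypothetical placeholder values.
function buildLiveConnectorParams(meetingArn, rtmpUrl) {
  return {
    Sources: [{
      SourceType: 'ChimeSdkMeeting',
      ChimeSdkMeetingLiveConnectorConfiguration: {
        Arn: meetingArn,
        MuxType: 'AudioWithCompositedVideo'
      }
    }],
    Sinks: [{
      SinkType: 'RTMP',
      RTMPConfiguration: {
        Url: rtmpUrl,
        AudioChannels: 'Stereo',
        AudioSampleRate: '48000' // the documented default sample rate
      }
    }]
  };
}

var liveConnectorParams = buildLiveConnectorParams(
  'arn:aws:chime::111122223333:meeting/example',          // placeholder meeting ARN
  'rtmps://example.live-video.net:443/app/streamkey'      // placeholder RTMP endpoint
);
// chimesdkmediapipelines.createMediaLiveConnectorPipeline(liveConnectorParams, callback);
```

The resulting object matches the calling example shown above; pass it as the first argument to createMediaLiveConnectorPipeline.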

createMediaPipelineKinesisVideoStreamPool(params = {}, callback) ⇒ AWS.Request

Creates a Kinesis video stream pool for the media pipeline.

Examples:

Calling the createMediaPipelineKinesisVideoStreamPool operation

var params = {
  PoolName: 'STRING_VALUE', /* required */
  StreamConfiguration: { /* required */
    Region: 'STRING_VALUE', /* required */
    DataRetentionInHours: 'NUMBER_VALUE'
  },
  ClientRequestToken: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaPipelineKinesisVideoStreamPool(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • StreamConfiguration — (map)

      The configuration settings for the video stream.

      • Region — required — (String)

        The Amazon Web Services Region of the video stream.

      • DataRetentionInHours — (Integer)

        The amount of time that data is retained.

    • PoolName — (String)

      The name of the video stream pool.

    • ClientRequestToken — (String)

      The token assigned to the client making the request.

      If a token is not provided, the SDK will use a version 4 UUID.
    • Tags — (Array<map>)

      The tags assigned to the video stream pool.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • KinesisVideoStreamPoolConfiguration — (map)

        The configuration for the Kinesis video stream pool.

        • PoolArn — (String)

          The ARN of the video stream pool configuration.

        • PoolName — (String)

          The name of the video stream pool configuration.

        • PoolId — (String)

          The ID of the video stream pool in the configuration.

        • PoolStatus — (String)

          The status of the video stream pool in the configuration.

          Possible values include:
          • "CREATING"
          • "ACTIVE"
          • "UPDATING"
          • "DELETING"
          • "FAILED"
        • PoolSize — (Integer)

          The size of the video stream pool in the configuration.

        • StreamConfiguration — (map)

          The Kinesis video stream pool configuration object.

          • Region — required — (String)

            The Amazon Web Services Region of the video stream.

          • DataRetentionInHours — (Integer)

            The amount of time that data is retained.

        • CreatedTimestamp — (Date)

          The time at which the configuration was created.

        • UpdatedTimestamp — (Date)

          The time at which the configuration was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
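
A small sketch of building the pool-creation parameters, assuming an illustrative pool name, Region, and retention period (the tag key/value pair is likewise hypothetical):

```javascript
// Builds params for createMediaPipelineKinesisVideoStreamPool.
// All values passed in are illustrative, not real resources.
function buildKvsPoolParams(poolName, region, retentionHours) {
  return {
    PoolName: poolName,
    StreamConfiguration: {
      Region: region,
      DataRetentionInHours: retentionHours
    },
    Tags: [{ Key: 'project', Value: 'demo' }] // hypothetical tag
  };
}

var poolParams = buildKvsPoolParams('example-pool', 'us-east-1', 24);
// chimesdkmediapipelines.createMediaPipelineKinesisVideoStreamPool(poolParams, callback);
```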

createMediaStreamPipeline(params = {}, callback) ⇒ AWS.Request

Creates a streaming media pipeline.

Examples:

Calling the createMediaStreamPipeline operation

var params = {
  Sinks: [ /* required */
    {
      MediaStreamType: MixedAudio | IndividualAudio, /* required */
      ReservedStreamCapacity: 'NUMBER_VALUE', /* required */
      SinkArn: 'STRING_VALUE', /* required */
      SinkType: KinesisVideoStreamPool /* required */
    },
    /* more items */
  ],
  Sources: [ /* required */
    {
      SourceArn: 'STRING_VALUE', /* required */
      SourceType: ChimeSdkMeeting /* required */
    },
    /* more items */
  ],
  ClientRequestToken: 'STRING_VALUE',
  Tags: [
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.createMediaStreamPipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Sources — (Array<map>)

      The data sources for the media pipeline.

      • SourceType — required — (String)

        The type of media stream source.

        Possible values include:
        • "ChimeSdkMeeting"
      • SourceArn — required — (String)

        The ARN of the media stream source.

    • Sinks — (Array<map>)

      The data sink for the media pipeline.

      • SinkArn — required — (String)

        The ARN of the media stream sink.

      • SinkType — required — (String)

        The media stream sink's type.

        Possible values include:
        • "KinesisVideoStreamPool"
      • ReservedStreamCapacity — required — (Integer)

        Specifies the number of streams that the sink can accept.

      • MediaStreamType — required — (String)

        The media stream sink's media stream type.

        Possible values include:
        • "MixedAudio"
        • "IndividualAudio"
    • ClientRequestToken — (String)

      The token assigned to the client making the request.

      If a token is not provided, the SDK will use a version 4 UUID.
    • Tags — (Array<map>)

      The tags assigned to the media pipeline.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaStreamPipeline — (map)

        The requested media pipeline.

        • MediaPipelineId — (String)

          The ID of the media stream pipeline.

        • MediaPipelineArn — (String)

          The ARN of the media stream pipeline.

        • CreatedTimestamp — (Date)

          The time at which the media stream pipeline was created.

        • UpdatedTimestamp — (Date)

          The time at which the media stream pipeline was updated.

        • Status — (String)

          The status of the media stream pipeline.

          Possible values include:
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
          • "Paused"
          • "NotStarted"
        • Sources — (Array<map>)

          The media stream pipeline's data sources.

          • SourceType — required — (String)

            The type of media stream source.

            Possible values include:
            • "ChimeSdkMeeting"
          • SourceArn — required — (String)

            The ARN of the media stream source.

        • Sinks — (Array<map>)

          The media stream pipeline's data sinks.

          • SinkArn — required — (String)

            The ARN of the media stream sink.

          • SinkType — required — (String)

            The media stream sink's type.

            Possible values include:
            • "KinesisVideoStreamPool"
          • ReservedStreamCapacity — required — (Integer)

            Specifies the number of streams that the sink can accept.

          • MediaStreamType — required — (String)

            The media stream sink's media stream type.

            Possible values include:
            • "MixedAudio"
            • "IndividualAudio"

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
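
A sketch of wiring a meeting source to a Kinesis video stream pool sink for this operation. Both ARNs and the reserved capacity below are placeholder values:

```javascript
// Builds params for createMediaStreamPipeline.
// The meeting and pool ARNs are hypothetical placeholders.
function buildMediaStreamParams(meetingArn, poolArn) {
  return {
    Sources: [{
      SourceType: 'ChimeSdkMeeting',
      SourceArn: meetingArn
    }],
    Sinks: [{
      SinkType: 'KinesisVideoStreamPool',
      SinkArn: poolArn,
      MediaStreamType: 'IndividualAudio',
      ReservedStreamCapacity: 10 // placeholder; size for your expected stream count
    }]
  };
}

var streamParams = buildMediaStreamParams(
  'arn:aws:chime::111122223333:meeting/example',                                    // placeholder
  'arn:aws:chime:us-east-1:111122223333:media-pipeline-kinesis-video-stream-pool/example-pool' // placeholder
);
// chimesdkmediapipelines.createMediaStreamPipeline(streamParams, callback);
```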

deleteMediaCapturePipeline(params = {}, callback) ⇒ AWS.Request

Deletes the media pipeline.

Examples:

Calling the deleteMediaCapturePipeline operation

var params = {
  MediaPipelineId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.deleteMediaCapturePipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • MediaPipelineId — (String)

      The ID of the media pipeline being deleted.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
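
As the callback note above says, omitting the callback returns an AWS.Request that must be sent explicitly. A hedged sketch of that pattern, with a small guard that rejects an empty pipeline ID before the request is built (the ID shown is a placeholder, and the send() call is left commented out because it requires live credentials):

```javascript
// Validates the pipeline ID and builds the delete request parameters.
function buildDeleteParams(mediaPipelineId) {
  if (typeof mediaPipelineId !== 'string' || mediaPipelineId.length === 0) {
    throw new Error('MediaPipelineId must be a non-empty string');
  }
  return { MediaPipelineId: mediaPipelineId };
}

var deleteParams = buildDeleteParams('0123abcd-0123-4567-89ab-0123456789ab'); // placeholder ID

// Without a callback, send() initiates the request:
// var request = chimesdkmediapipelines.deleteMediaCapturePipeline(deleteParams);
// request.on('success', function (response) { console.log('deleted'); }).send();
```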

deleteMediaInsightsPipelineConfiguration(params = {}, callback) ⇒ AWS.Request

Deletes the specified configuration settings.

Examples:

Calling the deleteMediaInsightsPipelineConfiguration operation

var params = {
  Identifier: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.deleteMediaInsightsPipelineConfiguration(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be deleted. Valid values include the name and ARN of the media insights pipeline configuration.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

deleteMediaPipeline(params = {}, callback) ⇒ AWS.Request

Deletes the media pipeline.

Examples:

Calling the deleteMediaPipeline operation

var params = {
  MediaPipelineId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.deleteMediaPipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • MediaPipelineId — (String)

      The ID of the media pipeline to delete.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

deleteMediaPipelineKinesisVideoStreamPool(params = {}, callback) ⇒ AWS.Request

Deletes a Kinesis video stream pool.

Examples:

Calling the deleteMediaPipelineKinesisVideoStreamPool operation

var params = {
  Identifier: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.deleteMediaPipelineKinesisVideoStreamPool(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The ID of the pool being deleted.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

getMediaCapturePipeline(params = {}, callback) ⇒ AWS.Request

Gets an existing media pipeline.

Examples:

Calling the getMediaCapturePipeline operation

var params = {
  MediaPipelineId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.getMediaCapturePipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • MediaPipelineId — (String)

      The ID of the pipeline that you want to get.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaCapturePipeline — (map)

        The media pipeline object.

        • MediaPipelineId — (String)

          The ID of a media pipeline.

        • MediaPipelineArn — (String)

          The ARN of the media capture pipeline.

        • SourceType — (String)

          Source type from which media artifacts are saved. You must use ChimeSdkMeeting.

          Possible values include:
          • "ChimeSdkMeeting"
        • SourceArn — (String)

          ARN of the source from which the media artifacts are saved.

        • Status — (String)

          The status of the media pipeline.

          Possible values include:
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
          • "Paused"
          • "NotStarted"
        • SinkType — (String)

          Destination type to which the media artifacts are saved. You must use an S3 Bucket.

          Possible values include:
          • "S3Bucket"
        • SinkArn — (String)

          ARN of the destination to which the media artifacts are saved.

        • CreatedTimestamp — (Date)

          The time at which the pipeline was created, in ISO 8601 format.

        • UpdatedTimestamp — (Date)

          The time at which the pipeline was updated, in ISO 8601 format.

        • ChimeSdkMeetingConfiguration — (map)

          The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting.

          • SourceConfiguration — (map)

            The source configuration for a specified media pipeline.

            • SelectedVideoStreams — (map)

              The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

              • AttendeeIds — (Array<String>)

                The attendee IDs of the streams selected for a media pipeline.

              • ExternalUserIds — (Array<String>)

                The external user IDs of the streams selected for a media pipeline.

          • ArtifactsConfiguration — (map)

            The configuration for the artifacts in an Amazon Chime SDK meeting.

            • Audio — required — (map)

              The configuration for the audio artifacts.

              • MuxType — required — (String)

                The MUX type of the audio artifact configuration object.

                Possible values include:
                • "AudioOnly"
                • "AudioWithActiveSpeakerVideo"
                • "AudioWithCompositedVideo"
            • Video — required — (map)

              The configuration for the video artifacts.

              • State — required — (String)

                Indicates whether the video artifact is enabled or disabled.

                Possible values include:
                • "Enabled"
                • "Disabled"
              • MuxType — (String)

                The MUX type of the video artifact configuration object.

                Possible values include:
                • "VideoOnly"
            • Content — required — (map)

              The configuration for the content artifacts.

              • State — required — (String)

                Indicates whether the content artifact is enabled or disabled.

                Possible values include:
                • "Enabled"
                • "Disabled"
              • MuxType — (String)

                The MUX type of the artifact configuration.

                Possible values include:
                • "ContentOnly"
            • CompositedVideo — (map)

              Enables video compositing.

              • Layout — (String)

                The layout setting, such as GridView in the configuration object.

                Possible values include:
                • "GridView"
              • Resolution — (String)

                The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

                Possible values include:
                • "HD"
                • "FHD"
              • GridViewConfiguration — required — (map)

                The GridView configuration setting.

                • ContentShareLayout — required — (String)

                  Defines the layout of the video tiles when content sharing is enabled.

                  Possible values include:
                  • "PresenterOnly"
                  • "Horizontal"
                  • "Vertical"
                  • "ActiveSpeakerOnly"
                • PresenterOnlyConfiguration — (map)

                  Defines the configuration options for a presenter only video tile.

                  • PresenterPosition — (String)

                    Defines the position of the presenter video tile. Default: TopRight.

                    Possible values include:
                    • "TopLeft"
                    • "TopRight"
                    • "BottomLeft"
                    • "BottomRight"
                • ActiveSpeakerOnlyConfiguration — (map)

                  The configuration settings for an ActiveSpeakerOnly video tile.

                  • ActiveSpeakerPosition — (String)

                    The position of the ActiveSpeakerOnly video tile.

                    Possible values include:
                    • "TopLeft"
                    • "TopRight"
                    • "BottomLeft"
                    • "BottomRight"
                • HorizontalLayoutConfiguration — (map)

                  The configuration settings for a horizontal layout.

                  • TileOrder — (String)

                    Sets the automatic ordering of the video tiles.

                    Possible values include:
                    • "JoinSequence"
                    • "SpeakerSequence"
                  • TilePosition — (String)

                    Sets the position of horizontal tiles.

                    Possible values include:
                    • "Top"
                    • "Bottom"
                  • TileCount — (Integer)

                    The maximum number of video tiles to display.

                  • TileAspectRatio — (String)

                    Specifies the aspect ratio of all video tiles.

                • VerticalLayoutConfiguration — (map)

                  The configuration settings for a vertical layout.

                  • TileOrder — (String)

                    Sets the automatic ordering of the video tiles.

                    Possible values include:
                    • "JoinSequence"
                    • "SpeakerSequence"
                  • TilePosition — (String)

                    Sets the position of vertical tiles.

                    Possible values include:
                    • "Left"
                    • "Right"
                  • TileCount — (Integer)

                    The maximum number of tiles to display.

                  • TileAspectRatio — (String)

                    Sets the aspect ratio of the video tiles, such as 16:9.

                • VideoAttribute — (map)

                  The attribute settings for the video tiles.

                  • CornerRadius — (Integer)

                    Sets the corner radius of all video tiles.

                  • BorderColor — (String)

                    Defines the border color of all video tiles.

                    Possible values include:
                    • "Black"
                    • "Blue"
                    • "Red"
                    • "Green"
                    • "White"
                    • "Yellow"
                  • HighlightColor — (String)

                    Defines the highlight color for the active video tile.

                    Possible values include:
                    • "Black"
                    • "Blue"
                    • "Red"
                    • "Green"
                    • "White"
                    • "Yellow"
                  • BorderThickness — (Integer)

                    Defines the border thickness for all video tiles.

                • CanvasOrientation — (String)

                  The orientation setting, horizontal or vertical.

                  Possible values include:
                  • "Landscape"
                  • "Portrait"

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
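The `Status` field in the response above takes one of the enumerated values listed under `MediaCapturePipeline`. A small helper can classify a response as terminal (no further state transitions expected) when polling with getMediaCapturePipeline; this sketch is illustrative, and the choice of `Failed` and `Stopped` as the terminal set is an assumption drawn from the value list above, not an SDK guarantee:

```javascript
// Illustrative helper: decide whether a getMediaCapturePipeline response
// describes a pipeline in a terminal state. The status strings mirror the
// "Possible values" documented for MediaCapturePipeline.Status.
var TERMINAL_STATUSES = ['Failed', 'Stopped'];

function isPipelineTerminal(data) {
  var pipeline = data && data.MediaCapturePipeline;
  if (!pipeline || !pipeline.Status) return false;
  return TERMINAL_STATUSES.indexOf(pipeline.Status) !== -1;
}
```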

getMediaInsightsPipelineConfiguration(params = {}, callback) ⇒ AWS.Request

Gets the configuration settings for a media insights pipeline.

Examples:

Calling the getMediaInsightsPipelineConfiguration operation

var params = {
  Identifier: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.getMediaInsightsPipelineConfiguration(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the requested resource. Valid values include the name and ARN of the media insights pipeline configuration.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaInsightsPipelineConfiguration — (map)

        The requested media insights pipeline configuration.

        • MediaInsightsPipelineConfigurationName — (String)

          The name of the configuration.

        • MediaInsightsPipelineConfigurationArn — (String)

          The ARN of the configuration.

        • ResourceAccessRoleArn — (String)

          The ARN of the role used by the service to access Amazon Web Services resources.

        • RealTimeAlertConfiguration — (map)

          Lists the rules that trigger a real-time alert.

          • Disabled — (Boolean)

            Turns off real-time alerts.

          • Rules — (Array<map>)

            The rules in the alert. Rules specify the words or phrases that you want to be notified about.

            • Type — required — (String)

              The type of alert rule.

              Possible values include:
              • "KeywordMatch"
              • "Sentiment"
              • "IssueDetection"
            • KeywordMatchConfiguration — (map)

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName — required — (String)

                The name of the keyword match rule.

              • Keywords — required — (Array&lt;String&gt;)

                The keywords or phrases that you want to match.

              • Negate — (Boolean)

                Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

            • SentimentConfiguration — (map)

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName — required — (String)

                The name of the rule in the sentiment configuration.

              • SentimentType — required — (String)

                The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL.

                Possible values include:
                • "NEGATIVE"
              • TimePeriod — required — (Integer)

                Specifies the analysis interval.

            • IssueDetectionConfiguration — (map)

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName — required — (String)

                The name of the issue detection rule.

        • Elements — (Array<map>)

          The elements in the configuration.

          • Type — required — (String)

            The element type.

            Possible values include:
            • "AmazonTranscribeCallAnalyticsProcessor"
            • "VoiceAnalyticsProcessor"
            • "AmazonTranscribeProcessor"
            • "KinesisDataStreamSink"
            • "LambdaFunctionSink"
            • "SqsQueueSink"
            • "SnsTopicSink"
            • "S3RecordingSink"
            • "VoiceEnhancementSink"
          • AmazonTranscribeCallAnalyticsProcessorConfiguration — (map)

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode — required — (String)

              The language code in the configuration.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyName — (String)

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName — (String)

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod — (String)

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with ***, choose mask.

              To delete words, choose remove.

              To flag words without changing them, choose tag.

              Possible values include:
              • "remove"
              • "mask"
              • "tag"
            • LanguageModelName — (String)

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization — (Boolean)

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability — (String)

              Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "high"
              • "medium"
              • "low"
            • ContentIdentificationType — (String)

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • ContentRedactionType — (String)

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • PiiEntityTypes — (String)

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults — (Boolean)

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings — (map)

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation — required — (String)

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn — required — (String)

                The ARN of the role used by Amazon Web Services Transcribe to upload your post call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

              • ContentRedactionOutput — (String)

                The content redaction output settings for a post-call analysis task.

                Possible values include:
                • "redacted"
                • "redacted_and_unredacted"
              • OutputEncryptionKMSKeyId — (String)

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories — (Array<String>)

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

          • AmazonTranscribeProcessorConfiguration — (map)

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode — (String)

              The language code that represents the language spoken in your audio.

              If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyName — (String)

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName — (String)

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod — (String)

              The vocabulary filtering method used in your Call Analytics transcription.

              Possible values include:
              • "remove"
              • "mask"
              • "tag"
            • ShowSpeakerLabel — (Boolean)

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization — (Boolean)

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability — (String)

              The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "high"
              • "medium"
              • "low"
            • ContentIdentificationType — (String)

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • ContentRedactionType — (String)

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • PiiEntityTypes — (String)

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              If you leave this parameter empty, the default behavior is equivalent to ALL.

            • LanguageModelName — (String)

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • FilterPartialResults — (Boolean)

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage — (Boolean)

              Turns language identification on or off.

            • LanguageOptions — (String)

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage — (String)

              The preferred language for the transcription.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyNames — (String)

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames — (String)

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration — (map)

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the sink.

          • S3RecordingSinkConfiguration — (map)

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination — (String)

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat — (String)

              The default file format for the media files sent to the Amazon S3 bucket.

              Possible values include:
              • "Wav"
              • "Opus"
          • VoiceAnalyticsProcessorConfiguration — (map)

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus — (String)

              The status of the speaker search task.

              Possible values include:
              • "Enabled"
              • "Disabled"
            • VoiceToneAnalysisStatus — (String)

              The status of the voice tone analysis task.

              Possible values include:
              • "Enabled"
              • "Disabled"
          • LambdaFunctionSinkConfiguration — (map)

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the sink.

          • SqsQueueSinkConfiguration — (map)

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration — (map)

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration — (map)

            The configuration settings for voice enhancement sink in a media insights pipeline configuration element.

            • Disabled — (Boolean)

              Disables the VoiceEnhancementSinkConfiguration element.

        • MediaInsightsPipelineConfigurationId — (String)

          The ID of the configuration.

        • CreatedTimestamp — (Date)

          The time at which the configuration was created.

        • UpdatedTimestamp — (Date)

          The time at which the configuration was last updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
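The `Elements` array in the response above carries one entry per processor or sink, each tagged with a `Type` from the enumerated list. A helper that extracts those types can be handy when inspecting a configuration; this is an illustrative sketch, and `listElementTypes` is not part of the SDK:

```javascript
// Illustrative helper: list the element types (processors and sinks)
// present in a getMediaInsightsPipelineConfiguration response.
function listElementTypes(data) {
  var config = data && data.MediaInsightsPipelineConfiguration;
  if (!config || !Array.isArray(config.Elements)) return [];
  return config.Elements.map(function (el) { return el.Type; });
}
```

For example, a configuration with a transcribe processor feeding an S3 recording sink would yield `['AmazonTranscribeProcessor', 'S3RecordingSink']`.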

getMediaPipeline(params = {}, callback) ⇒ AWS.Request

Gets an existing media pipeline.

Examples:

Calling the getMediaPipeline operation

var params = {
  MediaPipelineId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.getMediaPipeline(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • MediaPipelineId — (String)

      The ID of the pipeline that you want to get.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaPipeline — (map)

        The media pipeline object.

        • MediaCapturePipeline — (map)

          A pipeline that enables users to capture audio and video.

          • MediaPipelineId — (String)

            The ID of a media pipeline.

          • MediaPipelineArn — (String)

            The ARN of the media capture pipeline.

          • SourceType — (String)

            Source type from which media artifacts are saved. You must use ChimeSdkMeeting.

            Possible values include:
            • "ChimeSdkMeeting"
          • SourceArn — (String)

            ARN of the source from which the media artifacts are saved.

          • Status — (String)

            The status of the media pipeline.

            Possible values include:
            • "Initializing"
            • "InProgress"
            • "Failed"
            • "Stopping"
            • "Stopped"
            • "Paused"
            • "NotStarted"
          • SinkType — (String)

            Destination type to which the media artifacts are saved. You must use an S3 Bucket.

            Possible values include:
            • "S3Bucket"
          • SinkArn — (String)

            ARN of the destination to which the media artifacts are saved.

          • CreatedTimestamp — (Date)

            The time at which the pipeline was created, in ISO 8601 format.

          • UpdatedTimestamp — (Date)

            The time at which the pipeline was updated, in ISO 8601 format.

          • ChimeSdkMeetingConfiguration — (map)

            The configuration for a specified media pipeline. SourceType must be ChimeSdkMeeting.

            • SourceConfiguration — (map)

              The source configuration for a specified media pipeline.

              • SelectedVideoStreams — (map)

                The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

                • AttendeeIds — (Array<String>)

                  The attendee IDs of the streams selected for a media pipeline.

                • ExternalUserIds — (Array<String>)

                  The external user IDs of the streams selected for a media pipeline.

            • ArtifactsConfiguration — (map)

              The configuration for the artifacts in an Amazon Chime SDK meeting.

              • Audio — required — (map)

                The configuration for the audio artifacts.

                • MuxType — required — (String)

                  The MUX type of the audio artifact configuration object.

                  Possible values include:
                  • "AudioOnly"
                  • "AudioWithActiveSpeakerVideo"
                  • "AudioWithCompositedVideo"
              • Video — required — (map)

                The configuration for the video artifacts.

                • State — required — (String)

                  Indicates whether the video artifact is enabled or disabled.

                  Possible values include:
                  • "Enabled"
                  • "Disabled"
                • MuxType — (String)

                  The MUX type of the video artifact configuration object.

                  Possible values include:
                  • "VideoOnly"
              • Content — required — (map)

                The configuration for the content artifacts.

                • State — required — (String)

                  Indicates whether the content artifact is enabled or disabled.

                  Possible values include:
                  • "Enabled"
                  • "Disabled"
                • MuxType — (String)

                  The MUX type of the artifact configuration.

                  Possible values include:
                  • "ContentOnly"
              • CompositedVideo — (map)

                Enables video compositing.

                • Layout — (String)

                  The layout setting, such as GridView in the configuration object.

                  Possible values include:
                  • "GridView"
                • Resolution — (String)

                  The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

                  Possible values include:
                  • "HD"
                  • "FHD"
                • GridViewConfiguration — required — (map)

                  The GridView configuration setting.

                  • ContentShareLayout — required — (String)

                    Defines the layout of the video tiles when content sharing is enabled.

                    Possible values include:
                    • "PresenterOnly"
                    • "Horizontal"
                    • "Vertical"
                    • "ActiveSpeakerOnly"
                  • PresenterOnlyConfiguration — (map)

                    Defines the configuration options for a presenter only video tile.

                    • PresenterPosition — (String)

                      Defines the position of the presenter video tile. Default: TopRight.

                      Possible values include:
                      • "TopLeft"
                      • "TopRight"
                      • "BottomLeft"
                      • "BottomRight"
                  • ActiveSpeakerOnlyConfiguration — (map)

                    The configuration settings for an ActiveSpeakerOnly video tile.

                    • ActiveSpeakerPosition — (String)

                      The position of the ActiveSpeakerOnly video tile.

                      Possible values include:
                      • "TopLeft"
                      • "TopRight"
                      • "BottomLeft"
                      • "BottomRight"
                  • HorizontalLayoutConfiguration — (map)

                    The configuration settings for a horizontal layout.

                    • TileOrder — (String)

                      Sets the automatic ordering of the video tiles.

                      Possible values include:
                      • "JoinSequence"
                      • "SpeakerSequence"
                    • TilePosition — (String)

                      Sets the position of horizontal tiles.

                      Possible values include:
                      • "Top"
                      • "Bottom"
                    • TileCount — (Integer)

                      The maximum number of video tiles to display.

                    • TileAspectRatio — (String)

                      Specifies the aspect ratio of all video tiles.

                  • VerticalLayoutConfiguration — (map)

                    The configuration settings for a vertical layout.

                    • TileOrder — (String)

                      Sets the automatic ordering of the video tiles.

                      Possible values include:
                      • "JoinSequence"
                      • "SpeakerSequence"
                    • TilePosition — (String)

                      Sets the position of vertical tiles.

                      Possible values include:
                      • "Left"
                      • "Right"
                    • TileCount — (Integer)

                      The maximum number of tiles to display.

                    • TileAspectRatio — (String)

                      Sets the aspect ratio of the video tiles, such as 16:9.

                  • VideoAttribute — (map)

                    The attribute settings for the video tiles.

                    • CornerRadius — (Integer)

                      Sets the corner radius of all video tiles.

                    • BorderColor — (String)

                      Defines the border color of all video tiles.

                      Possible values include:
                      • "Black"
                      • "Blue"
                      • "Red"
                      • "Green"
                      • "White"
                      • "Yellow"
                    • HighlightColor — (String)

                      Defines the highlight color for the active video tile.

                      Possible values include:
                      • "Black"
                      • "Blue"
                      • "Red"
                      • "Green"
                      • "White"
                      • "Yellow"
                    • BorderThickness — (Integer)

                      Defines the border thickness for all video tiles.

                  • CanvasOrientation — (String)

                    The orientation setting, horizontal or vertical.

                    Possible values include:
                    • "Landscape"
                    • "Portrait"
        • MediaLiveConnectorPipeline — (map)

          The connector pipeline of the media pipeline.

          • Sources — (Array<map>)

            The connector pipeline's data sources.

            • SourceType — required — (String)

              The source configuration's media source type.

              Possible values include:
              • "ChimeSdkMeeting"
            • ChimeSdkMeetingLiveConnectorConfiguration — required — (map)

              The configuration settings of the connector pipeline.

              • Arn — required — (String)

                The configuration object's Chime SDK meeting ARN.

              • MuxType — required — (String)

                The configuration object's multiplex type.

                Possible values include:
                • "AudioWithCompositedVideo"
                • "AudioWithActiveSpeakerVideo"
              • CompositedVideo — (map)

                The media pipeline's composited video.

                • Layout — (String)

                  The layout setting, such as GridView in the configuration object.

                  Possible values include:
                  • "GridView"
                • Resolution — (String)

                  The video resolution setting in the configuration object. Default: HD at 1280 x 720. FHD resolution: 1920 x 1080.

                  Possible values include:
                  • "HD"
                  • "FHD"
                • GridViewConfiguration — required — (map)

                  The GridView configuration setting.

                  • ContentShareLayout — required — (String)

                    Defines the layout of the video tiles when content sharing is enabled.

                    Possible values include:
                    • "PresenterOnly"
                    • "Horizontal"
                    • "Vertical"
                    • "ActiveSpeakerOnly"
                  • PresenterOnlyConfiguration — (map)

                    Defines the configuration options for a presenter only video tile.

                    • PresenterPosition — (String)

                      Defines the position of the presenter video tile. Default: TopRight.

                      Possible values include:
                      • "TopLeft"
                      • "TopRight"
                      • "BottomLeft"
                      • "BottomRight"
                  • ActiveSpeakerOnlyConfiguration — (map)

                    The configuration settings for an ActiveSpeakerOnly video tile.

                    • ActiveSpeakerPosition — (String)

                      The position of the ActiveSpeakerOnly video tile.

                      Possible values include:
                      • "TopLeft"
                      • "TopRight"
                      • "BottomLeft"
                      • "BottomRight"
                  • HorizontalLayoutConfiguration — (map)

                    The configuration settings for a horizontal layout.

                    • TileOrder — (String)

                      Sets the automatic ordering of the video tiles.

                      Possible values include:
                      • "JoinSequence"
                      • "SpeakerSequence"
                    • TilePosition — (String)

                      Sets the position of horizontal tiles.

                      Possible values include:
                      • "Top"
                      • "Bottom"
                    • TileCount — (Integer)

                      The maximum number of video tiles to display.

                    • TileAspectRatio — (String)

                      Specifies the aspect ratio of all video tiles.

                  • VerticalLayoutConfiguration — (map)

                    The configuration settings for a vertical layout.

                    • TileOrder — (String)

                      Sets the automatic ordering of the video tiles.

                      Possible values include:
                      • "JoinSequence"
                      • "SpeakerSequence"
                    • TilePosition — (String)

                      Sets the position of vertical tiles.

                      Possible values include:
                      • "Left"
                      • "Right"
                    • TileCount — (Integer)

                      The maximum number of tiles to display.

                    • TileAspectRatio — (String)

                      Sets the aspect ratio of the video tiles, such as 16:9.

                  • VideoAttribute — (map)

                    The attribute settings for the video tiles.

                    • CornerRadius — (Integer)

                      Sets the corner radius of all video tiles.

                    • BorderColor — (String)

                      Defines the border color of all video tiles.

                      Possible values include:
                      • "Black"
                      • "Blue"
                      • "Red"
                      • "Green"
                      • "White"
                      • "Yellow"
                    • HighlightColor — (String)

                      Defines the highlight color for the active video tile.

                      Possible values include:
                      • "Black"
                      • "Blue"
                      • "Red"
                      • "Green"
                      • "White"
                      • "Yellow"
                    • BorderThickness — (Integer)

                      Defines the border thickness for all video tiles.

                  • CanvasOrientation — (String)

                    The orientation setting, horizontal or vertical.

                    Possible values include:
                    • "Landscape"
                    • "Portrait"
              • SourceConfiguration — (map)

                The source configuration settings of the media pipeline's configuration object.

                • SelectedVideoStreams — (map)

                  The selected video streams for a specified media pipeline. The number of video streams can't exceed 25.

                  • AttendeeIds — (Array<String>)

                    The attendee IDs of the streams selected for a media pipeline.

                  • ExternalUserIds — (Array<String>)

                    The external user IDs of the streams selected for a media pipeline.

          • Sinks — (Array<map>)

            The connector pipeline's data sinks.

            • SinkType — required — (String)

              The sink configuration's sink type.

              Possible values include:
              • "RTMP"
            • RTMPConfiguration — required — (map)

              The sink configuration's RTMP configuration settings.

              • Url — required — (String)

                The URL of the RTMP configuration.

              • AudioChannels — (String)

                The audio channels set for the RTMP configuration.

                Possible values include:
                • "Stereo"
                • "Mono"
              • AudioSampleRate — (String)

                The audio sample rate set for the RTMP configuration. Default: 48000.

          • MediaPipelineId — (String)

            The connector pipeline's ID.

          • MediaPipelineArn — (String)

            The connector pipeline's ARN.

          • Status — (String)

            The connector pipeline's status.

            Possible values include:
            • "Initializing"
            • "InProgress"
            • "Failed"
            • "Stopping"
            • "Stopped"
            • "Paused"
            • "NotStarted"
          • CreatedTimestamp — (Date)

            The time at which the connector pipeline was created.

          • UpdatedTimestamp — (Date)

            The time at which the connector pipeline was last updated.

        • MediaConcatenationPipeline — (map)

          The media concatenation pipeline in a media pipeline.

          • MediaPipelineId — (String)

            The ID of the media pipeline being concatenated.

          • MediaPipelineArn — (String)

            The ARN of the media pipeline that you specify in the SourceConfiguration object.

          • Sources — (Array<map>)

            The data sources being concatenated.

            • Type — required — (String)

              The type of concatenation source in a configuration object.

              Possible values include:
              • "MediaCapturePipeline"
            • MediaCapturePipelineSourceConfiguration — required — (map)

              The concatenation settings for the media pipeline in a configuration object.

              • MediaPipelineArn — required — (String)

                The media pipeline ARN in the configuration object of a media capture pipeline.

              • ChimeSdkMeetingConfiguration — required — (map)

                The meeting configuration settings in a media capture pipeline configuration object.

                • ArtifactsConfiguration — required — (map)

                  The configuration for the artifacts in an Amazon Chime SDK meeting concatenation.

                  • Audio — required — (map)

                    The configuration for the audio artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                  • Video — required — (map)

                    The configuration for the video artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                      • "Disabled"
                  • Content — required — (map)

                    The configuration for the content artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                      • "Disabled"
                  • DataChannel — required — (map)

                    The configuration for the data channel artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                      • "Disabled"
                  • TranscriptionMessages — required — (map)

                    The configuration for the transcription messages artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                      • "Disabled"
                  • MeetingEvents — required — (map)

                    The configuration for the meeting events artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                      • "Disabled"
                  • CompositedVideo — required — (map)

                    The configuration for the composited video artifacts concatenation.

                    • State — required — (String)

                      Enables or disables the configuration object.

                      Possible values include:
                      • "Enabled"
                      • "Disabled"
          • Sinks — (Array<map>)

            The data sinks of the concatenation pipeline.

            • Type — required — (String)

              The type of data sink in the configuration object.

              Possible values include:
              • "S3Bucket"
            • S3BucketSinkConfiguration — required — (map)

              The configuration settings for an Amazon S3 bucket sink.

              • Destination — required — (String)

                The destination URL of the S3 bucket.

          • Status — (String)

            The status of the concatenation pipeline.

            Possible values include:
            • "Initializing"
            • "InProgress"
            • "Failed"
            • "Stopping"
            • "Stopped"
            • "Paused"
            • "NotStarted"
          • CreatedTimestamp — (Date)

            The time at which the concatenation pipeline was created.

          • UpdatedTimestamp — (Date)

            The time at which the concatenation pipeline was last updated.

        • MediaInsightsPipeline — (map)

          The media insights pipeline of a media pipeline.

          • MediaPipelineId — (String)

            The ID of a media insights pipeline.

          • MediaPipelineArn — (String)

            The ARN of a media insights pipeline.

          • MediaInsightsPipelineConfigurationArn — (String)

            The ARN of a media insights pipeline's configuration settings.

          • Status — (String)

            The status of a media insights pipeline.

            Possible values include:
            • "Initializing"
            • "InProgress"
            • "Failed"
            • "Stopping"
            • "Stopped"
            • "Paused"
            • "NotStarted"
          • KinesisVideoStreamSourceRuntimeConfiguration — (map)

            The configuration settings for a Kinesis runtime video stream in a media insights pipeline.

            • Streams — required — (Array<map>)

              The streams in the source runtime configuration of a Kinesis video stream.

              • StreamArn — required — (String)

                The ARN of the stream.

              • FragmentNumber — (String)

                The unique identifier of the fragment to begin processing.

              • StreamChannelDefinition — required — (map)

                The streaming channel definition in the stream configuration.

                • NumberOfChannels — required — (Integer)

                  The number of channels in a streaming channel.

                • ChannelDefinitions — (Array<map>)

                  The definitions of the channels in a streaming channel.

                  • ChannelId — required — (Integer)

                    The channel ID.

                  • ParticipantRole — (String)

                    Specifies whether the audio in a channel belongs to the AGENT or CUSTOMER.

                    Possible values include:
                    • "AGENT"
                    • "CUSTOMER"
            • MediaEncoding — required — (String)

              Specifies the encoding of your input audio. Supported format: PCM (only signed 16-bit little-endian audio formats, which does not include WAV).

              For more information, see Media formats in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "pcm"
            • MediaSampleRate — required — (Integer)

              The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

              Valid Range: Minimum value of 8000. Maximum value of 48000.

          • MediaInsightsRuntimeMetadata — (map<String>)

            The runtime metadata of a media insights pipeline.

          • KinesisVideoStreamRecordingSourceRuntimeConfiguration — (map)

            The runtime configuration settings for a Kinesis recording video stream in a media insights pipeline.

            • Streams — required — (Array<map>)

              The stream or streams to be recorded.

              • StreamArn — (String)

                The ARN of the recording stream.

            • FragmentSelector — required — (map)

              Describes the timestamp range and timestamp origin of a range of fragments in the Kinesis video stream.

              • FragmentSelectorType — required — (String)

                The origin of the timestamps to use, Server or Producer. For more information, see StartSelectorType in the Amazon Kinesis Video Streams Developer Guide.

                Possible values include:
                • "ProducerTimestamp"
                • "ServerTimestamp"
              • TimestampRange — required — (map)

                The range of timestamps to return.

                • StartTimestamp — required — (Date)

                  The starting timestamp for the specified range.

                • EndTimestamp — required — (Date)

                  The ending timestamp for the specified range.

          • S3RecordingSinkRuntimeConfiguration — (map)

            The runtime configuration of the Amazon S3 bucket that stores recordings in a media insights pipeline.

            • Destination — required — (String)

              The URI of the S3 bucket used as the sink.

            • RecordingFileFormat — required — (String)

              The file format for the media files sent to the Amazon S3 bucket.

              Possible values include:
              • "Wav"
              • "Opus"
          • CreatedTimestamp — (Date)

            The time at which the media insights pipeline was created.

          • ElementStatuses — (Array<map>)

            The statuses that the elements in a media insights pipeline can have during data processing.

            • Type — (String)

              The type of status.

              Possible values include:
              • "AmazonTranscribeCallAnalyticsProcessor"
              • "VoiceAnalyticsProcessor"
              • "AmazonTranscribeProcessor"
              • "KinesisDataStreamSink"
              • "LambdaFunctionSink"
              • "SqsQueueSink"
              • "SnsTopicSink"
              • "S3RecordingSink"
              • "VoiceEnhancementSink"
            • Status — (String)

              The element's status.

              Possible values include:
              • "NotStarted"
              • "NotSupported"
              • "Initializing"
              • "InProgress"
              • "Failed"
              • "Stopping"
              • "Stopped"
              • "Paused"
        • MediaStreamPipeline — (map)

          Designates a media pipeline as a media stream pipeline.

          • MediaPipelineId — (String)

            The ID of the media stream pipeline.

          • MediaPipelineArn — (String)

            The ARN of the media stream pipeline.

          • CreatedTimestamp — (Date)

            The time at which the media stream pipeline was created.

          • UpdatedTimestamp — (Date)

            The time at which the media stream pipeline was updated.

          • Status — (String)

            The status of the media stream pipeline.

            Possible values include:
            • "Initializing"
            • "InProgress"
            • "Failed"
            • "Stopping"
            • "Stopped"
            • "Paused"
            • "NotStarted"
          • Sources — (Array<map>)

            The media stream pipeline's data sources.

            • SourceType — required — (String)

              The type of media stream source.

              Possible values include:
              • "ChimeSdkMeeting"
            • SourceArn — required — (String)

              The ARN of the media stream source.

          • Sinks — (Array<map>)

            The media stream pipeline's data sinks.

            • SinkArn — required — (String)

              The ARN of the media stream sink.

            • SinkType — required — (String)

              The media stream sink's type.

              Possible values include:
              • "KinesisVideoStreamPool"
            • ReservedStreamCapacity — required — (Integer)

              Specifies the number of streams that the sink can accept.

            • MediaStreamType — required — (String)

              The media stream sink's media stream type.

              Possible values include:
              • "MixedAudio"
              • "IndividualAudio"

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
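The data object above is deeply nested, so a small helper that walks it defensively can keep callback code readable. The following is a minimal sketch: the helper and the sample response are illustrative (placeholder field values), not real service output.

```javascript
// Pull the ID and status of a capture pipeline out of a response shaped
// like the data object documented above. Returns null when the response
// holds a different pipeline type (e.g. MediaLiveConnectorPipeline).
function capturePipelineSummary(data) {
  var pipeline = (data.MediaPipeline || {}).MediaCapturePipeline;
  if (!pipeline) return null;
  return { id: pipeline.MediaPipelineId, status: pipeline.Status };
}

// Illustrative sample only; field values are placeholders.
var sampleResponse = {
  MediaPipeline: {
    MediaCapturePipeline: {
      MediaPipelineId: 'pipeline-id-placeholder',
      Status: 'InProgress'
    }
  }
};

console.log(capturePipelineSummary(sampleResponse));
```

The same guard pattern applies to the other pipeline keys (MediaConcatenationPipeline, MediaInsightsPipeline, and so on), since only one of them is populated per response.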

getMediaPipelineKinesisVideoStreamPool(params = {}, callback) ⇒ AWS.Request

Gets a Kinesis video stream pool.

Examples:

Calling the getMediaPipelineKinesisVideoStreamPool operation

var params = {
  Identifier: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.getMediaPipelineKinesisVideoStreamPool(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The ID of the video stream pool.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • KinesisVideoStreamPoolConfiguration — (map)

        The video stream pool configuration object.

        • PoolArn — (String)

          The ARN of the video stream pool configuration.

        • PoolName — (String)

          The name of the video stream pool configuration.

        • PoolId — (String)

          The ID of the video stream pool in the configuration.

        • PoolStatus — (String)

          The status of the video stream pool in the configuration.

          Possible values include:
          • "CREATING"
          • "ACTIVE"
          • "UPDATING"
          • "DELETING"
          • "FAILED"
        • PoolSize — (Integer)

          The size of the video stream pool in the configuration.

        • StreamConfiguration — (map)

          The Kinesis video stream pool configuration object.

          • Region — required — (String)

            The Amazon Web Services Region of the video stream.

          • DataRetentionInHours — (Integer)

            The amount of time that data is retained.

        • CreatedTimestamp — (Date)

          The time at which the configuration was created.

        • UpdatedTimestamp — (Date)

          The time at which the configuration was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
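In SDK v2 the returned AWS.Request also exposes a promise() method, so the same call can be written without a callback. A sketch, with the client injected so it can be exercised against a stub; the pool identifier shown in the comment is a placeholder.

```javascript
// Resolve a pool's PoolStatus via the SDK v2 promise interface.
// `client` is expected to be an AWS.ChimeSDKMediaPipelines instance
// (or any object exposing the same method shape).
function getPoolStatus(client, poolIdentifier) {
  return client
    .getMediaPipelineKinesisVideoStreamPool({ Identifier: poolIdentifier })
    .promise()
    .then(function (data) {
      return data.KinesisVideoStreamPoolConfiguration.PoolStatus;
    });
}

// Against the real service (requires the aws-sdk package and credentials):
//   var AWS = require('aws-sdk');
//   getPoolStatus(new AWS.ChimeSDKMediaPipelines(), 'my-pool-id').then(console.log);
```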

getSpeakerSearchTask(params = {}, callback) ⇒ AWS.Request

Retrieves the details of the specified speaker search task.

Examples:

Calling the getSpeakerSearchTask operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  SpeakerSearchTaskId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.getSpeakerSearchTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • SpeakerSearchTaskId — (String)

      The ID of the speaker search task.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • SpeakerSearchTask — (map)

        The details of the speaker search task.

        • SpeakerSearchTaskId — (String)

          The speaker search task ID.

        • SpeakerSearchTaskStatus — (String)

          The status of the speaker search task.

          Possible values include:
          • "NotStarted"
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
        • CreatedTimestamp — (Date)

          The time at which a speaker search task was created.

        • UpdatedTimestamp — (Date)

          The time at which a speaker search task was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

getVoiceToneAnalysisTask(params = {}, callback) ⇒ AWS.Request

Retrieves the details of a voice tone analysis task.

Examples:

Calling the getVoiceToneAnalysisTask operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  VoiceToneAnalysisTaskId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.getVoiceToneAnalysisTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • VoiceToneAnalysisTaskId — (String)

      The ID of the voice tone analysis task.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • VoiceToneAnalysisTask — (map)

        The details of the voice tone analysis task.

        • VoiceToneAnalysisTaskId — (String)

          The ID of the voice tone analysis task.

        • VoiceToneAnalysisTaskStatus — (String)

          The status of a voice tone analysis task.

          Possible values include:
          • "NotStarted"
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
        • CreatedTimestamp — (Date)

          The time at which a voice tone analysis task was created.

        • UpdatedTimestamp — (Date)

          The time at which a voice tone analysis task was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

listMediaCapturePipelines(params = {}, callback) ⇒ AWS.Request

Returns a list of media capture pipelines.

Examples:

Calling the listMediaCapturePipelines operation

var params = {
  MaxResults: 'NUMBER_VALUE',
  NextToken: 'STRING_VALUE'
};
chimesdkmediapipelines.listMediaCapturePipelines(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • NextToken — (String)

      The token used to retrieve the next page of results.

    • MaxResults — (Integer)

      The maximum number of results to return in a single call. Valid Range: 1 - 99.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaCapturePipelines — (Array<map>)

        The media pipeline objects in the list.

        • MediaPipelineId — (String)

          The ID of the media pipeline in the summary.

        • MediaPipelineArn — (String)

          The ARN of the media pipeline in the summary.

      • NextToken — (String)

        The token used to retrieve the next page of results.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

listMediaInsightsPipelineConfigurations(params = {}, callback) ⇒ AWS.Request

Lists the available media insights pipeline configurations.

Examples:

Calling the listMediaInsightsPipelineConfigurations operation

var params = {
  MaxResults: 'NUMBER_VALUE',
  NextToken: 'STRING_VALUE'
};
chimesdkmediapipelines.listMediaInsightsPipelineConfigurations(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • NextToken — (String)

      The token used to return the next page of results.

    • MaxResults — (Integer)

      The maximum number of results to return in a single call.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaInsightsPipelineConfigurations — (Array<map>)

        The requested list of media insights pipeline configurations.

        • MediaInsightsPipelineConfigurationName — (String)

          The name of the media insights pipeline configuration.

        • MediaInsightsPipelineConfigurationId — (String)

          The ID of the media insights pipeline configuration.

        • MediaInsightsPipelineConfigurationArn — (String)

          The ARN of the media insights pipeline configuration.

      • NextToken — (String)

        The token used to return the next page of results.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

listMediaPipelineKinesisVideoStreamPools(params = {}, callback) ⇒ AWS.Request

Lists the video stream pools in the media pipeline.

Examples:

Calling the listMediaPipelineKinesisVideoStreamPools operation

var params = {
  MaxResults: 'NUMBER_VALUE',
  NextToken: 'STRING_VALUE'
};
chimesdkmediapipelines.listMediaPipelineKinesisVideoStreamPools(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • NextToken — (String)

      The token used to return the next page of results.

    • MaxResults — (Integer)

      The maximum number of results to return in a single call.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • KinesisVideoStreamPools — (Array<map>)

        The list of video stream pools.

        • PoolName — (String)

          The name of the video stream pool.

        • PoolId — (String)

          The ID of the video stream pool.

        • PoolArn — (String)

          The ARN of the video stream pool.

      • NextToken — (String)

        The token used to return the next page of results.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

listMediaPipelines(params = {}, callback) ⇒ AWS.Request

Returns a list of media pipelines.

Examples:

Calling the listMediaPipelines operation

var params = {
  MaxResults: 'NUMBER_VALUE',
  NextToken: 'STRING_VALUE'
};
chimesdkmediapipelines.listMediaPipelines(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • NextToken — (String)

      The token used to retrieve the next page of results.

    • MaxResults — (Integer)

      The maximum number of results to return in a single call. Valid Range: 1 - 99.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaPipelines — (Array<map>)

        The media pipeline objects in the list.

        • MediaPipelineId — (String)

          The ID of the media pipeline in the summary.

        • MediaPipelineArn — (String)

          The ARN of the media pipeline in the summary.

      • NextToken — (String)

        The token used to retrieve the next page of results.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
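Because the list operations cap MaxResults at 99, retrieving every pipeline means following NextToken across pages. The helper below is a sketch, not part of the SDK — listAllPages and its arguments are made-up names — but it works with any callback-style list operation on this page that returns a NextToken, such as listMediaPipelines or listMediaCapturePipelines:

```javascript
// Hypothetical helper, not part of the SDK: drains every page of a
// callback-style list operation whose response has the shape
// { <itemsKey>: [...], NextToken }.
function listAllPages(listFn, itemsKey, done) {
  var collected = [];
  function nextPage(token) {
    var params = { MaxResults: 99 };
    if (token) params.NextToken = token;
    listFn(params, function (err, data) {
      if (err) return done(err);
      collected = collected.concat(data[itemsKey] || []);
      if (data.NextToken) nextPage(data.NextToken);
      else done(null, collected);
    });
  }
  nextPage(null);
}

// Usage sketch with listMediaPipelines:
// listAllPages(
//   chimesdkmediapipelines.listMediaPipelines.bind(chimesdkmediapipelines),
//   'MediaPipelines',
//   function (err, pipelines) { /* pipelines holds every summary */ }
// );
```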

listTagsForResource(params = {}, callback) ⇒ AWS.Request

Lists the tags available for a media pipeline.

Examples:

Calling the listTagsForResource operation

var params = {
  ResourceARN: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.listTagsForResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • ResourceARN — (String)

      The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's region, resource ID, and pipeline ID.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • Tags — (Array<map>)

        The tags associated with the specified media pipeline.

        • Key — required — (String)

          The key half of a tag.

        • Value — required — (String)

          The value half of a tag.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
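The Tags array in the response is a list of { Key, Value } pairs, which is awkward for lookups by key. Flattening it into a plain object can help; tagsToMap below is a hypothetical convenience, not an SDK function:

```javascript
// Hypothetical convenience: flatten the response's Tags array
// ([{ Key, Value }, ...]) into a plain { key: value } object.
function tagsToMap(tags) {
  var map = {};
  (tags || []).forEach(function (tag) {
    map[tag.Key] = tag.Value;
  });
  return map;
}
```

For example, `tagsToMap(data.Tags)['CostCenter']` reads a single tag value out of a listTagsForResource response.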

startSpeakerSearchTask(params = {}, callback) ⇒ AWS.Request

Starts a speaker search task.

Before starting any speaker search tasks, you must provide all notices and obtain all consents from the speaker as required under applicable privacy and biometrics laws, and as required under the AWS service terms for the Amazon Chime SDK.

Examples:

Calling the startSpeakerSearchTask operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  VoiceProfileDomainArn: 'STRING_VALUE', /* required */
  ClientRequestToken: 'STRING_VALUE',
  KinesisVideoStreamSourceTaskConfiguration: {
    ChannelId: 'NUMBER_VALUE', /* required */
    StreamArn: 'STRING_VALUE', /* required */
    FragmentNumber: 'STRING_VALUE'
  }
};
chimesdkmediapipelines.startSpeakerSearchTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • VoiceProfileDomainArn — (String)

      The ARN of the voice profile domain that will store the voice profile.

    • KinesisVideoStreamSourceTaskConfiguration — (map)

      The task configuration for the Kinesis video stream source of the media insights pipeline.

      • StreamArn — required — (String)

        The ARN of the stream.

      • ChannelId — required — (Integer)

        The channel ID.

      • FragmentNumber — (String)

        The unique identifier of the fragment to begin processing.

    • ClientRequestToken — (String)

      The unique identifier for the client request. Use a different token for different speaker search tasks.

      If a token is not provided, the SDK will use a version 4 UUID.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • SpeakerSearchTask — (map)

        The details of the speaker search task.

        • SpeakerSearchTaskId — (String)

          The speaker search task ID.

        • SpeakerSearchTaskStatus — (String)

          The status of the speaker search task.

          Possible values include:
          • "NotStarted"
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
        • CreatedTimestamp — (Date)

          The time at which a speaker search task was created.

        • UpdatedTimestamp — (Date)

          The time at which a speaker search task was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
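startSpeakerSearchTask returns before the task reaches a terminal state, so callers typically poll getSpeakerSearchTask until SpeakerSearchTaskStatus settles. A minimal sketch, assuming a configured client; pollSpeakerSearchTask, TERMINAL_TASK_STATUSES, and the polling interval are illustrative names, not part of the SDK:

```javascript
// Statuses after which the task will not change again.
var TERMINAL_TASK_STATUSES = ['Stopped', 'Failed'];

function isTerminalTaskStatus(status) {
  return TERMINAL_TASK_STATUSES.indexOf(status) !== -1;
}

// Hypothetical polling loop: re-checks the task every intervalMs until it
// reaches a terminal status, then invokes done(err, task).
function pollSpeakerSearchTask(client, identifier, taskId, intervalMs, done) {
  client.getSpeakerSearchTask(
    { Identifier: identifier, SpeakerSearchTaskId: taskId },
    function (err, data) {
      if (err) return done(err);
      var task = data.SpeakerSearchTask;
      if (isTerminalTaskStatus(task.SpeakerSearchTaskStatus)) return done(null, task);
      setTimeout(function () {
        pollSpeakerSearchTask(client, identifier, taskId, intervalMs, done);
      }, intervalMs);
    }
  );
}
```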

startVoiceToneAnalysisTask(params = {}, callback) ⇒ AWS.Request

Starts a voice tone analysis task. For more information about voice tone analysis, see Using Amazon Chime SDK voice analytics in the Amazon Chime SDK Developer Guide.

Before starting any voice tone analysis tasks, you must provide all notices and obtain all consents from the speaker as required under applicable privacy and biometrics laws, and as required under the AWS service terms for the Amazon Chime SDK.

Examples:

Calling the startVoiceToneAnalysisTask operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  LanguageCode: en-US, /* required */
  ClientRequestToken: 'STRING_VALUE',
  KinesisVideoStreamSourceTaskConfiguration: {
    ChannelId: 'NUMBER_VALUE', /* required */
    StreamArn: 'STRING_VALUE', /* required */
    FragmentNumber: 'STRING_VALUE'
  }
};
chimesdkmediapipelines.startVoiceToneAnalysisTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • LanguageCode — (String)

      The language code.

      Possible values include:
      • "en-US"
    • KinesisVideoStreamSourceTaskConfiguration — (map)

      The task configuration for the Kinesis video stream source of the media insights pipeline.

      • StreamArn — required — (String)

        The ARN of the stream.

      • ChannelId — required — (Integer)

        The channel ID.

      • FragmentNumber — (String)

        The unique identifier of the fragment to begin processing.

    • ClientRequestToken — (String)

      The unique identifier for the client request. Use a different token for different voice tone analysis tasks.

      If a token is not provided, the SDK will use a version 4 UUID.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • VoiceToneAnalysisTask — (map)

        The details of the voice tone analysis task.

        • VoiceToneAnalysisTaskId — (String)

          The ID of the voice tone analysis task.

        • VoiceToneAnalysisTaskStatus — (String)

          The status of a voice tone analysis task.

          Possible values include:
          • "NotStarted"
          • "Initializing"
          • "InProgress"
          • "Failed"
          • "Stopping"
          • "Stopped"
        • CreatedTimestamp — (Date)

          The time at which a voice tone analysis task was created.

        • UpdatedTimestamp — (Date)

          The time at which a voice tone analysis task was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

stopSpeakerSearchTask(params = {}, callback) ⇒ AWS.Request

Stops a speaker search task.

Examples:

Calling the stopSpeakerSearchTask operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  SpeakerSearchTaskId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.stopSpeakerSearchTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • SpeakerSearchTaskId — (String)

      The speaker search task ID.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

stopVoiceToneAnalysisTask(params = {}, callback) ⇒ AWS.Request

Stops a voice tone analysis task.

Examples:

Calling the stopVoiceToneAnalysisTask operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  VoiceToneAnalysisTaskId: 'STRING_VALUE' /* required */
};
chimesdkmediapipelines.stopVoiceToneAnalysisTask(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • VoiceToneAnalysisTaskId — (String)

      The ID of the voice tone analysis task.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

tagResource(params = {}, callback) ⇒ AWS.Request

Applies the specified tags to the specified Amazon Chime SDK media pipeline.

Examples:

Calling the tagResource operation

var params = {
  ResourceARN: 'STRING_VALUE', /* required */
  Tags: [ /* required */
    {
      Key: 'STRING_VALUE', /* required */
      Value: 'STRING_VALUE' /* required */
    },
    /* more items */
  ]
};
chimesdkmediapipelines.tagResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • ResourceARN — (String)

      The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's endpoint region, resource ID, and pipeline ID.

    • Tags — (Array<map>)

      The tags associated with the specified media pipeline.

      • Key — required — (String)

        The key half of a tag.

      • Value — required — (String)

        The value half of a tag.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
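tagResource expects tags as an array of { Key, Value } maps, while application code often keeps tags as a plain object. mapToTags below is a hypothetical adapter, not an SDK function:

```javascript
// Hypothetical adapter: convert a plain { key: value } object into the
// Tags array shape ([{ Key, Value }, ...]) that tagResource expects.
function mapToTags(map) {
  return Object.keys(map).map(function (key) {
    return { Key: key, Value: map[key] };
  });
}

// Usage sketch (pipelineArn is assumed to hold a media pipeline ARN):
// chimesdkmediapipelines.tagResource({
//   ResourceARN: pipelineArn,
//   Tags: mapToTags({ team: 'media', env: 'prod' })
// }, function (err, data) { /* ... */ });
```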

untagResource(params = {}, callback) ⇒ AWS.Request

Removes any tags from a media pipeline.

Examples:

Calling the untagResource operation

var params = {
  ResourceARN: 'STRING_VALUE', /* required */
  TagKeys: [ /* required */
    'STRING_VALUE',
    /* more items */
  ]
};
chimesdkmediapipelines.untagResource(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • ResourceARN — (String)

      The ARN of the pipeline that you want to untag.

    • TagKeys — (Array<String>)

      The keys of the tags that you want to remove.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.

updateMediaInsightsPipelineConfiguration(params = {}, callback) ⇒ AWS.Request

Updates the media insights pipeline's configuration settings.

Examples:

Calling the updateMediaInsightsPipelineConfiguration operation

var params = {
  Elements: [ /* required */
    {
      Type: AmazonTranscribeCallAnalyticsProcessor | VoiceAnalyticsProcessor | AmazonTranscribeProcessor | KinesisDataStreamSink | LambdaFunctionSink | SqsQueueSink | SnsTopicSink | S3RecordingSink | VoiceEnhancementSink, /* required */
      AmazonTranscribeCallAnalyticsProcessorConfiguration: {
        LanguageCode: en-US | en-GB | es-US | fr-CA | fr-FR | en-AU | it-IT | de-DE | pt-BR, /* required */
        CallAnalyticsStreamCategories: [
          'STRING_VALUE',
          /* more items */
        ],
        ContentIdentificationType: PII,
        ContentRedactionType: PII,
        EnablePartialResultsStabilization: true || false,
        FilterPartialResults: true || false,
        LanguageModelName: 'STRING_VALUE',
        PartialResultsStability: high | medium | low,
        PiiEntityTypes: 'STRING_VALUE',
        PostCallAnalyticsSettings: {
          DataAccessRoleArn: 'STRING_VALUE', /* required */
          OutputLocation: 'STRING_VALUE', /* required */
          ContentRedactionOutput: redacted | redacted_and_unredacted,
          OutputEncryptionKMSKeyId: 'STRING_VALUE'
        },
        VocabularyFilterMethod: remove | mask | tag,
        VocabularyFilterName: 'STRING_VALUE',
        VocabularyName: 'STRING_VALUE'
      },
      AmazonTranscribeProcessorConfiguration: {
        ContentIdentificationType: PII,
        ContentRedactionType: PII,
        EnablePartialResultsStabilization: true || false,
        FilterPartialResults: true || false,
        IdentifyLanguage: true || false,
        LanguageCode: en-US | en-GB | es-US | fr-CA | fr-FR | en-AU | it-IT | de-DE | pt-BR,
        LanguageModelName: 'STRING_VALUE',
        LanguageOptions: 'STRING_VALUE',
        PartialResultsStability: high | medium | low,
        PiiEntityTypes: 'STRING_VALUE',
        PreferredLanguage: en-US | en-GB | es-US | fr-CA | fr-FR | en-AU | it-IT | de-DE | pt-BR,
        ShowSpeakerLabel: true || false,
        VocabularyFilterMethod: remove | mask | tag,
        VocabularyFilterName: 'STRING_VALUE',
        VocabularyFilterNames: 'STRING_VALUE',
        VocabularyName: 'STRING_VALUE',
        VocabularyNames: 'STRING_VALUE'
      },
      KinesisDataStreamSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      LambdaFunctionSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      S3RecordingSinkConfiguration: {
        Destination: 'STRING_VALUE',
        RecordingFileFormat: Wav | Opus
      },
      SnsTopicSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      SqsQueueSinkConfiguration: {
        InsightsTarget: 'STRING_VALUE'
      },
      VoiceAnalyticsProcessorConfiguration: {
        SpeakerSearchStatus: Enabled | Disabled,
        VoiceToneAnalysisStatus: Enabled | Disabled
      },
      VoiceEnhancementSinkConfiguration: {
        Disabled: true || false
      }
    },
    /* more items */
  ],
  Identifier: 'STRING_VALUE', /* required */
  ResourceAccessRoleArn: 'STRING_VALUE', /* required */
  RealTimeAlertConfiguration: {
    Disabled: true || false,
    Rules: [
      {
        Type: KeywordMatch | Sentiment | IssueDetection, /* required */
        IssueDetectionConfiguration: {
          RuleName: 'STRING_VALUE' /* required */
        },
        KeywordMatchConfiguration: {
          Keywords: [ /* required */
            'STRING_VALUE',
            /* more items */
          ],
          RuleName: 'STRING_VALUE', /* required */
          Negate: true || false
        },
        SentimentConfiguration: {
          RuleName: 'STRING_VALUE', /* required */
          SentimentType: NEGATIVE, /* required */
          TimePeriod: 'NUMBER_VALUE' /* required */
        }
      },
      /* more items */
    ]
  }
};
chimesdkmediapipelines.updateMediaInsightsPipelineConfiguration(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier for the resource to be updated. Valid values include the name and ARN of the media insights pipeline configuration.

    • ResourceAccessRoleArn — (String)

      The ARN of the role used by the service to access Amazon Web Services resources.

    • RealTimeAlertConfiguration — (map)

      The configuration settings for real-time alerts for the media insights pipeline.

      • Disabled — (Boolean)

        Turns off real-time alerts.

      • Rules — (Array<map>)

        The rules in the alert. Rules specify the words or phrases that you want to be notified about.

        • Type — required — (String)

          The type of alert rule.

          Possible values include:
          • "KeywordMatch"
          • "Sentiment"
          • "IssueDetection"
        • KeywordMatchConfiguration — (map)

          Specifies the settings for matching the keywords in a real-time alert rule.

          • RuleName — required — (String)

            The name of the keyword match rule.

          • Keywords — required — (Array&lt;String&gt;)

            The keywords or phrases that you want to match.

          • Negate — (Boolean)

            Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

        • SentimentConfiguration — (map)

          Specifies the settings for predicting sentiment in a real-time alert rule.

          • RuleName — required — (String)

            The name of the rule in the sentiment configuration.

          • SentimentType — required — (String)

            The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL.

            Possible values include:
            • "NEGATIVE"
          • TimePeriod — required — (Integer)

            Specifies the analysis interval.

        • IssueDetectionConfiguration — (map)

          Specifies the issue detection settings for a real-time alert rule.

          • RuleName — required — (String)

            The name of the issue detection rule.

    • Elements — (Array<map>)

      The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis data stream.

      • Type — required — (String)

        The element type.

        Possible values include:
        • "AmazonTranscribeCallAnalyticsProcessor"
        • "VoiceAnalyticsProcessor"
        • "AmazonTranscribeProcessor"
        • "KinesisDataStreamSink"
        • "LambdaFunctionSink"
        • "SqsQueueSink"
        • "SnsTopicSink"
        • "S3RecordingSink"
        • "VoiceEnhancementSink"
      • AmazonTranscribeCallAnalyticsProcessorConfiguration — (map)

        The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

        • LanguageCode — required — (String)

          The language code in the configuration.

          Possible values include:
          • "en-US"
          • "en-GB"
          • "es-US"
          • "fr-CA"
          • "fr-FR"
          • "en-AU"
          • "it-IT"
          • "de-DE"
          • "pt-BR"
        • VocabularyName — (String)

          Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

          If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

          For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterName — (String)

          Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

          If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

          For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterMethod — (String)

          Specifies how to apply a vocabulary filter to a transcript.

          To replace words with ***, choose mask.

          To delete words, choose remove.

          To flag words without changing them, choose tag.

          Possible values include:
          • "remove"
          • "mask"
          • "tag"
        • LanguageModelName — (String)

          Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

          The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

          For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        • EnablePartialResultsStabilization — (Boolean)

          Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • PartialResultsStability — (String)

          Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

          Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "high"
          • "medium"
          • "low"
        • ContentIdentificationType — (String)

          Labels all personally identifiable information (PII) identified in your transcript.

          Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

          You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • ContentRedactionType — (String)

          Redacts all personally identifiable information (PII) identified in your transcript.

          Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

          You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • PiiEntityTypes — (String)

          Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

          To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

          Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

          Length Constraints: Minimum length of 1. Maximum length of 300.

        • FilterPartialResults — (Boolean)

          If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

        • PostCallAnalyticsSettings — (map)

          The settings for a post-call analysis task in an analytics configuration.

          • OutputLocation — required — (String)

            The URL of the Amazon S3 bucket that contains the post-call data.

          • DataAccessRoleArn — required — (String)

            The ARN of the role used by Amazon Web Services Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

          • ContentRedactionOutput — (String)

            The content redaction output settings for a post-call analysis task.

            Possible values include:
            • "redacted"
            • "redacted_and_unredacted"
          • OutputEncryptionKMSKeyId — (String)

            The ID of the KMS (Key Management Service) key used to encrypt the output.

        • CallAnalyticsStreamCategories — (Array<String>)

          By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

      • AmazonTranscribeProcessorConfiguration — (map)

        The transcription processor configuration settings in a media insights pipeline configuration element.

        • LanguageCode — (String)

          The language code that represents the language spoken in your audio.

          If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

          For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "en-US"
          • "en-GB"
          • "es-US"
          • "fr-CA"
          • "fr-FR"
          • "en-AU"
          • "it-IT"
          • "de-DE"
          • "pt-BR"
        • VocabularyName — (String)

          The name of the custom vocabulary that you specified in your Call Analytics request.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterName — (String)

          The name of the custom vocabulary filter that you specified in your Call Analytics request.

          Length Constraints: Minimum length of 1. Maximum length of 200.

        • VocabularyFilterMethod — (String)

          The vocabulary filtering method used in your Call Analytics transcription.

          Possible values include:
          • "remove"
          • "mask"
          • "tag"
        • ShowSpeakerLabel — (Boolean)

          Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

          For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

        • EnablePartialResultsStabilization — (Boolean)

          Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

        • PartialResultsStability — (String)

          The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

          Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

          For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "high"
          • "medium"
          • "low"
        • ContentIdentificationType — (String)

          Labels all personally identifiable information (PII) identified in your transcript.

          Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

          You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • ContentRedactionType — (String)

          Redacts all personally identifiable information (PII) identified in your transcript.

          Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

          You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

          For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

          Possible values include:
          • "PII"
        • PiiEntityTypes — (String)

          The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

          To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

          Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

          If you leave this parameter empty, the default behavior is equivalent to ALL.

        • LanguageModelName — (String)

          The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

          The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

          For more information, see Custom language models in the Amazon Transcribe Developer Guide.

        • FilterPartialResults — (Boolean)

          If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

        • IdentifyLanguage — (Boolean)

          Turns language identification on or off.

        • LanguageOptions — (String)

          The language options for the transcription, such as automatic language detection.

        • PreferredLanguage — (String)

          The preferred language for the transcription.

          Possible values include:
          • "en-US"
          • "en-GB"
          • "es-US"
          • "fr-CA"
          • "fr-FR"
          • "en-AU"
          • "it-IT"
          • "de-DE"
          • "pt-BR"
        • VocabularyNames — (String)

          The names of the custom vocabulary or vocabularies used during transcription.

        • VocabularyFilterNames — (String)

          The names of the custom vocabulary filter or filters used during transcription.

      • KinesisDataStreamSinkConfiguration — (map)

        The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the sink.

      • S3RecordingSinkConfiguration — (map)

        The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

        • Destination — (String)

          The default URI of the Amazon S3 bucket used as the recording sink.

        • RecordingFileFormat — (String)

          The default file format for the media files sent to the Amazon S3 bucket.

          Possible values include:
          • "Wav"
          • "Opus"
      • VoiceAnalyticsProcessorConfiguration — (map)

        The voice analytics configuration settings in a media insights pipeline configuration element.

        • SpeakerSearchStatus — (String)

          The status of the speaker search task.

          Possible values include:
          • "Enabled"
          • "Disabled"
        • VoiceToneAnalysisStatus — (String)

          The status of the voice tone analysis task.

          Possible values include:
          • "Enabled"
          • "Disabled"
      • LambdaFunctionSinkConfiguration — (map)

        The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the sink.

      • SqsQueueSinkConfiguration — (map)

        The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the SQS sink.

      • SnsTopicSinkConfiguration — (map)

        The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

        • InsightsTarget — (String)

          The ARN of the SNS sink.

      • VoiceEnhancementSinkConfiguration — (map)

        The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

        • Disabled — (Boolean)

          Disables the VoiceEnhancementSinkConfiguration element.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • MediaInsightsPipelineConfiguration — (map)

        The updated configuration settings.

        • MediaInsightsPipelineConfigurationName — (String)

          The name of the configuration.

        • MediaInsightsPipelineConfigurationArn — (String)

          The ARN of the configuration.

        • ResourceAccessRoleArn — (String)

          The ARN of the role used by the service to access Amazon Web Services resources.

        • RealTimeAlertConfiguration — (map)

          Lists the rules that trigger a real-time alert.

          • Disabled — (Boolean)

            Turns off real-time alerts.

          • Rules — (Array<map>)

            The rules in the alert. Rules specify the words or phrases that you want to be notified about.

            • Type — required — (String)

              The type of alert rule.

              Possible values include:
              • "KeywordMatch"
              • "Sentiment"
              • "IssueDetection"
            • KeywordMatchConfiguration — (map)

              Specifies the settings for matching the keywords in a real-time alert rule.

              • RuleName — required — (String)

                The name of the keyword match rule.

              • Keywords — required — (Array&lt;String&gt;)

                The keywords or phrases that you want to match.

              • Negate — (Boolean)

                Matches keywords or phrases on their presence or absence. If set to TRUE, the rule matches when all the specified keywords or phrases are absent. Default: FALSE.

            • SentimentConfiguration — (map)

              Specifies the settings for predicting sentiment in a real-time alert rule.

              • RuleName — required — (String)

                The name of the rule in the sentiment configuration.

              • SentimentType — required — (String)

                The type of sentiment, POSITIVE, NEGATIVE, or NEUTRAL.

                Possible values include:
                • "NEGATIVE"
              • TimePeriod — required — (Integer)

                Specifies the analysis interval.

            • IssueDetectionConfiguration — (map)

              Specifies the issue detection settings for a real-time alert rule.

              • RuleName — required — (String)

                The name of the issue detection rule.

        • Elements — (Array<map>)

          The elements in the configuration.

          • Type — required — (String)

            The element type.

            Possible values include:
            • "AmazonTranscribeCallAnalyticsProcessor"
            • "VoiceAnalyticsProcessor"
            • "AmazonTranscribeProcessor"
            • "KinesisDataStreamSink"
            • "LambdaFunctionSink"
            • "SqsQueueSink"
            • "SnsTopicSink"
            • "S3RecordingSink"
            • "VoiceEnhancementSink"
          • AmazonTranscribeCallAnalyticsProcessorConfiguration — (map)

            The analytics configuration settings for transcribing audio in a media insights pipeline configuration element.

            • LanguageCode — required — (String)

              The language code in the configuration.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyName — (String)

              Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

              If the language of the specified custom vocabulary doesn't match the language identified in your media, the custom vocabulary is not applied to your transcription.

              For more information, see Custom vocabularies in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName — (String)

              Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

              If the language of the specified custom vocabulary filter doesn't match the language identified in your media, the vocabulary filter is not applied to your transcription.

              For more information, see Using vocabulary filtering with unwanted words in the Amazon Transcribe Developer Guide.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod — (String)

              Specifies how to apply a vocabulary filter to a transcript.

              To replace words with ***, choose mask.

              To delete words, choose remove.

              To flag words without changing them, choose tag.

              Possible values include:
              • "remove"
              • "mask"
              • "tag"
            • LanguageModelName — (String)

              Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code specified in the transcription request. If the languages don't match, the custom language model isn't applied. Language mismatches don't generate errors or warnings.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization — (Boolean)

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability — (String)

              Specifies the level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "high"
              • "medium"
              • "low"
            • ContentIdentificationType — (String)

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • ContentRedactionType — (String)

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you do, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • PiiEntityTypes — (String)

              Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              Length Constraints: Minimum length of 1. Maximum length of 300.

            • FilterPartialResults — (Boolean)

              If true, UtteranceEvents with IsPartial: true are filtered out of the insights target.

            • PostCallAnalyticsSettings — (map)

              The settings for a post-call analysis task in an analytics configuration.

              • OutputLocation — required — (String)

                The URL of the Amazon S3 bucket that contains the post-call data.

              • DataAccessRoleArn — required — (String)

                The ARN of the role used by Amazon Web Services Transcribe to upload your post-call analysis. For more information, see Post-call analytics with real-time transcriptions in the Amazon Transcribe Developer Guide.

              • ContentRedactionOutput — (String)

                The content redaction output settings for a post-call analysis task.

                Possible values include:
                • "redacted"
                • "redacted_and_unredacted"
              • OutputEncryptionKMSKeyId — (String)

                The ID of the KMS (Key Management Service) key used to encrypt the output.

            • CallAnalyticsStreamCategories — (Array<String>)

              By default, all CategoryEvents are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

          • AmazonTranscribeProcessorConfiguration — (map)

            The transcription processor configuration settings in a media insights pipeline configuration element.

            • LanguageCode — (String)

              The language code that represents the language spoken in your audio.

              If you're unsure of the language spoken in your audio, consider using IdentifyLanguage to enable automatic language identification.

              For a list of languages that real-time Call Analytics supports, see the Supported languages table in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyName — (String)

              The name of the custom vocabulary that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterName — (String)

              The name of the custom vocabulary filter that you specified in your Call Analytics request.

              Length Constraints: Minimum length of 1. Maximum length of 200.

            • VocabularyFilterMethod — (String)

              The vocabulary filtering method used in your Call Analytics transcription.

              Possible values include:
              • "remove"
              • "mask"
              • "tag"
            • ShowSpeakerLabel — (Boolean)

              Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

              For more information, see Partitioning speakers (diarization) in the Amazon Transcribe Developer Guide.

            • EnablePartialResultsStabilization — (Boolean)

              Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

            • PartialResultsStability — (String)

              The level of stability to use when you enable partial results stabilization (EnablePartialResultsStabilization).

              Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

              For more information, see Partial-result stabilization in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "high"
              • "medium"
              • "low"
            • ContentIdentificationType — (String)

              Labels all personally identifiable information (PII) identified in your transcript.

              Content identification is performed at the segment level; PII specified in PiiEntityTypes is flagged upon complete transcription of an audio segment.

              You can’t set ContentIdentificationType and ContentRedactionType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • ContentRedactionType — (String)

              Redacts all personally identifiable information (PII) identified in your transcript.

              Content redaction is performed at the segment level; PII specified in PiiEntityTypes is redacted upon complete transcription of an audio segment.

              You can’t set ContentRedactionType and ContentIdentificationType in the same request. If you set both, your request returns a BadRequestException.

              For more information, see Redacting or identifying personally identifiable information in the Amazon Transcribe Developer Guide.

              Possible values include:
              • "PII"
            • PiiEntityTypes — (String)

              The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select ALL.

              To include PiiEntityTypes in your Call Analytics request, you must also include ContentIdentificationType or ContentRedactionType, but you can't include both.

              Values must be comma-separated and can include: ADDRESS, BANK_ACCOUNT_NUMBER, BANK_ROUTING, CREDIT_DEBIT_CVV, CREDIT_DEBIT_EXPIRY, CREDIT_DEBIT_NUMBER, EMAIL, NAME, PHONE, PIN, SSN, or ALL.

              If you leave this parameter empty, the default behavior is equivalent to ALL.

            • LanguageModelName — (String)

              The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.

              The language of the specified language model must match the language code you specify in your transcription request. If the languages don't match, the custom language model isn't applied. There are no errors or warnings associated with a language mismatch.

              For more information, see Custom language models in the Amazon Transcribe Developer Guide.

            • FilterPartialResults — (Boolean)

              If true, TranscriptEvents with IsPartial: true are filtered out of the insights target.

            • IdentifyLanguage — (Boolean)

              Turns language identification on or off.

            • LanguageOptions — (String)

              The language options for the transcription, such as automatic language detection.

            • PreferredLanguage — (String)

              The preferred language for the transcription.

              Possible values include:
              • "en-US"
              • "en-GB"
              • "es-US"
              • "fr-CA"
              • "fr-FR"
              • "en-AU"
              • "it-IT"
              • "de-DE"
              • "pt-BR"
            • VocabularyNames — (String)

              The names of the custom vocabulary or vocabularies used during transcription.

            • VocabularyFilterNames — (String)

              The names of the custom vocabulary filter or filters used during transcription.

          • KinesisDataStreamSinkConfiguration — (map)

            The configuration settings for the Kinesis Data Stream Sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the sink.

          • S3RecordingSinkConfiguration — (map)

            The configuration settings for the Amazon S3 recording bucket in a media insights pipeline configuration element.

            • Destination — (String)

              The default URI of the Amazon S3 bucket used as the recording sink.

            • RecordingFileFormat — (String)

              The default file format for the media files sent to the Amazon S3 bucket.

              Possible values include:
              • "Wav"
              • "Opus"
          • VoiceAnalyticsProcessorConfiguration — (map)

            The voice analytics configuration settings in a media insights pipeline configuration element.

            • SpeakerSearchStatus — (String)

              The status of the speaker search task.

              Possible values include:
              • "Enabled"
              • "Disabled"
            • VoiceToneAnalysisStatus — (String)

              The status of the voice tone analysis task.

              Possible values include:
              • "Enabled"
              • "Disabled"
          • LambdaFunctionSinkConfiguration — (map)

            The configuration settings for the Amazon Web Services Lambda sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the sink.

          • SqsQueueSinkConfiguration — (map)

            The configuration settings for an SQS queue sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the SQS sink.

          • SnsTopicSinkConfiguration — (map)

            The configuration settings for an SNS topic sink in a media insights pipeline configuration element.

            • InsightsTarget — (String)

              The ARN of the SNS sink.

          • VoiceEnhancementSinkConfiguration — (map)

            The configuration settings for the voice enhancement sink in a media insights pipeline configuration element.

            • Disabled — (Boolean)

              Disables the VoiceEnhancementSinkConfiguration element.

        • MediaInsightsPipelineConfigurationId — (String)

          The ID of the configuration.

        • CreatedTimestamp — (Date)

          The time at which the configuration was created.

        • UpdatedTimestamp — (Date)

          The time at which the configuration was last updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
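The sink configuration elements described above can be assembled client-side before passing them to a configuration call. The following is a minimal, hedged sketch of a helper that builds an array of sink elements; the ARNs and the S3 destination are placeholders, and the `Type` discriminator values follow the element names documented above (verify them against the request syntax of the operation you call):

```javascript
// Sketch: build an Elements array containing common sink configurations.
// All ARNs and bucket paths below are placeholders, not real resources.
function buildSinkElements(kinesisStreamArn, lambdaFunctionArn, s3Destination) {
  return [
    {
      Type: 'KinesisDataStreamSink',
      KinesisDataStreamSinkConfiguration: {
        InsightsTarget: kinesisStreamArn // ARN of the Kinesis Data Stream sink
      }
    },
    {
      Type: 'LambdaFunctionSink',
      LambdaFunctionSinkConfiguration: {
        InsightsTarget: lambdaFunctionArn // ARN of the Lambda sink
      }
    },
    {
      Type: 'S3RecordingSink',
      S3RecordingSinkConfiguration: {
        Destination: s3Destination,   // URI of the recording bucket
        RecordingFileFormat: 'Wav'    // or 'Opus'
      }
    }
  ];
}
```

A helper like this keeps the placeholder resources in one place when the same sink layout is reused across several configuration updates.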

updateMediaInsightsPipelineStatus(params = {}, callback) ⇒ AWS.Request

Updates the status of a media insights pipeline.

Examples:

Calling the updateMediaInsightsPipelineStatus operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  UpdateStatus: Pause | Resume /* required */
};
chimesdkmediapipelines.updateMediaInsightsPipelineStatus(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.

    • UpdateStatus — (String)

      The requested status of the media insights pipeline.

      Possible values include:
      • "Pause"
      • "Resume"

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
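Since UpdateStatus accepts only "Pause" and "Resume", it can help to validate the value before issuing the request. The following sketch (not part of the SDK; the helper name and validation are this example's own) builds the params object and shows promise-based usage in a comment:

```javascript
// Sketch: validate and build params for updateMediaInsightsPipelineStatus.
// 'identifier' may be the ID or ARN of the media insights pipeline.
function buildStatusUpdateParams(identifier, status) {
  if (status !== 'Pause' && status !== 'Resume') {
    throw new Error('UpdateStatus must be "Pause" or "Resume"');
  }
  return {
    Identifier: identifier,
    UpdateStatus: status
  };
}

// Usage with the v2 SDK (requires credentials and a real pipeline):
// var chimesdkmediapipelines = new AWS.ChimeSDKMediaPipelines();
// chimesdkmediapipelines
//   .updateMediaInsightsPipelineStatus(buildStatusUpdateParams(pipelineArn, 'Pause'))
//   .promise()
//   .then(function () { /* pipeline pause requested */ })
//   .catch(function (err) { console.log(err, err.stack); });
```

The same helper is reused with "Resume" to restart a paused pipeline.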

updateMediaPipelineKinesisVideoStreamPool(params = {}, callback) ⇒ AWS.Request

Updates a Kinesis video stream pool in a media pipeline.

Examples:

Calling the updateMediaPipelineKinesisVideoStreamPool operation

var params = {
  Identifier: 'STRING_VALUE', /* required */
  StreamConfiguration: {
    DataRetentionInHours: 'NUMBER_VALUE'
  }
};
chimesdkmediapipelines.updateMediaPipelineKinesisVideoStreamPool(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
});

Parameters:

  • params (Object) (defaults to: {})
    • Identifier — (String)

      The ID of the video stream pool.

    • StreamConfiguration — (map)

      The configuration settings for the video stream.

      • DataRetentionInHours — (Integer)

        The updated amount of time, in hours, that data is retained.

Callback (callback):

  • function(err, data) { ... }

    Called when a response from the service is returned. If a callback is not supplied, you must call AWS.Request.send() on the returned request object to initiate the request.

    Context (this):

    • (AWS.Response)

      the response object containing error, data properties, and the original request object.

    Parameters:

    • err (Error)

      the error object returned from the request. Set to null if the request is successful.

    • data (Object)

      the de-serialized data returned from the request. Set to null if a request error occurs. The data object has the following properties:

      • KinesisVideoStreamPoolConfiguration — (map)

        The video stream pool configuration object.

        • PoolArn — (String)

          The ARN of the video stream pool configuration.

        • PoolName — (String)

          The name of the video stream pool configuration.

        • PoolId — (String)

          The ID of the video stream pool in the configuration.

        • PoolStatus — (String)

          The status of the video stream pool in the configuration.

          Possible values include:
          • "CREATING"
          • "ACTIVE"
          • "UPDATING"
          • "DELETING"
          • "FAILED"
        • PoolSize — (Integer)

          The size of the video stream pool in the configuration.

        • StreamConfiguration — (map)

          The Kinesis video stream pool configuration object.

          • Region — required — (String)

            The Amazon Web Services Region of the video stream.

          • DataRetentionInHours — (Integer)

            The amount of time that data is retained.

        • CreatedTimestamp — (Date)

          The time at which the configuration was created.

        • UpdatedTimestamp — (Date)

          The time at which the configuration was updated.

Returns:

  • (AWS.Request)

    a handle to the operation request for subsequent event callback registration.
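Because DataRetentionInHours is the only mutable field in StreamConfiguration, a small builder can guard against invalid retention values before the request is sent. This is a hedged sketch; the helper name and validation rule (non-negative integer) are this example's assumptions, and the promise-based call in the comment requires credentials and an existing pool:

```javascript
// Sketch: build params for updateMediaPipelineKinesisVideoStreamPool.
// 'identifier' is the ID (or ARN) of the video stream pool to update.
function buildPoolUpdateParams(identifier, retentionHours) {
  if (!Number.isInteger(retentionHours) || retentionHours < 0) {
    throw new Error('DataRetentionInHours must be a non-negative integer');
  }
  return {
    Identifier: identifier,
    StreamConfiguration: {
      DataRetentionInHours: retentionHours
    }
  };
}

// Usage with the v2 SDK (commented; requires credentials):
// chimesdkmediapipelines
//   .updateMediaPipelineKinesisVideoStreamPool(buildPoolUpdateParams('pool-id', 24))
//   .promise()
//   .then(function (data) {
//     // PoolStatus transitions through UPDATING before returning to ACTIVE.
//     console.log(data.KinesisVideoStreamPoolConfiguration.PoolStatus);
//   })
//   .catch(function (err) { console.log(err, err.stack); });
```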