Class: Aws::ChimeSDKMediaPipelines::Client
- Inherits:
-
Seahorse::Client::Base
- Object
- Seahorse::Client::Base
- Aws::ChimeSDKMediaPipelines::Client
- Includes:
- Aws::ClientStubs
- Defined in:
- gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb
Overview
An API client for ChimeSDKMediaPipelines. To construct a client, you need to configure a :region and :credentials.
client = Aws::ChimeSDKMediaPipelines::Client.new(
region: region_name,
credentials: credentials,
# ...
)
For details on configuring region and credentials see the developer guide.
See #initialize for a full list of supported configuration options.
Instance Attribute Summary
Attributes inherited from Seahorse::Client::Base
API Operations collapse
-
#create_media_capture_pipeline(params = {}) ⇒ Types::CreateMediaCapturePipelineResponse
Creates a media pipeline.
-
#create_media_concatenation_pipeline(params = {}) ⇒ Types::CreateMediaConcatenationPipelineResponse
Creates a media concatenation pipeline.
-
#create_media_insights_pipeline(params = {}) ⇒ Types::CreateMediaInsightsPipelineResponse
Creates a media insights pipeline.
-
#create_media_insights_pipeline_configuration(params = {}) ⇒ Types::CreateMediaInsightsPipelineConfigurationResponse
Creates a media insights pipeline configuration.
-
#create_media_live_connector_pipeline(params = {}) ⇒ Types::CreateMediaLiveConnectorPipelineResponse
Creates a media live connector pipeline in an Amazon Chime SDK meeting.
-
#create_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Types::CreateMediaPipelineKinesisVideoStreamPoolResponse
Creates an Amazon Kinesis Video Stream pool for use with media stream pipelines.
-
#create_media_stream_pipeline(params = {}) ⇒ Types::CreateMediaStreamPipelineResponse
Creates a streaming media pipeline.
-
#delete_media_capture_pipeline(params = {}) ⇒ Struct
Deletes the media pipeline.
-
#delete_media_insights_pipeline_configuration(params = {}) ⇒ Struct
Deletes the specified configuration settings.
-
#delete_media_pipeline(params = {}) ⇒ Struct
Deletes the media pipeline.
-
#delete_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Struct
Deletes an Amazon Kinesis Video Stream pool.
-
#get_media_capture_pipeline(params = {}) ⇒ Types::GetMediaCapturePipelineResponse
Gets an existing media pipeline.
-
#get_media_insights_pipeline_configuration(params = {}) ⇒ Types::GetMediaInsightsPipelineConfigurationResponse
Gets the configuration settings for a media insights pipeline.
-
#get_media_pipeline(params = {}) ⇒ Types::GetMediaPipelineResponse
Gets an existing media pipeline.
-
#get_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Types::GetMediaPipelineKinesisVideoStreamPoolResponse
Gets a Kinesis video stream pool.
-
#get_speaker_search_task(params = {}) ⇒ Types::GetSpeakerSearchTaskResponse
Retrieves the details of the specified speaker search task.
-
#get_voice_tone_analysis_task(params = {}) ⇒ Types::GetVoiceToneAnalysisTaskResponse
Retrieves the details of a voice tone analysis task.
-
#list_media_capture_pipelines(params = {}) ⇒ Types::ListMediaCapturePipelinesResponse
Returns a list of media pipelines.
-
#list_media_insights_pipeline_configurations(params = {}) ⇒ Types::ListMediaInsightsPipelineConfigurationsResponse
Lists the available media insights pipeline configurations.
-
#list_media_pipeline_kinesis_video_stream_pools(params = {}) ⇒ Types::ListMediaPipelineKinesisVideoStreamPoolsResponse
Lists the video stream pools in the media pipeline.
-
#list_media_pipelines(params = {}) ⇒ Types::ListMediaPipelinesResponse
Returns a list of media pipelines.
-
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Lists the tags available for a media pipeline.
-
#start_speaker_search_task(params = {}) ⇒ Types::StartSpeakerSearchTaskResponse
Starts a speaker search task.
-
#start_voice_tone_analysis_task(params = {}) ⇒ Types::StartVoiceToneAnalysisTaskResponse
Starts a voice tone analysis task.
-
#stop_speaker_search_task(params = {}) ⇒ Struct
Stops a speaker search task.
-
#stop_voice_tone_analysis_task(params = {}) ⇒ Struct
Stops a voice tone analysis task.
-
#tag_resource(params = {}) ⇒ Struct
Applies the specified tags to a media pipeline.
-
#untag_resource(params = {}) ⇒ Struct
Removes any tags from a media pipeline.
-
#update_media_insights_pipeline_configuration(params = {}) ⇒ Types::UpdateMediaInsightsPipelineConfigurationResponse
Updates the media insights pipeline's configuration settings.
-
#update_media_insights_pipeline_status(params = {}) ⇒ Struct
Updates the status of a media insights pipeline.
-
#update_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Types::UpdateMediaPipelineKinesisVideoStreamPoolResponse
Updates an Amazon Kinesis Video Stream pool in a media pipeline.
Instance Method Summary collapse
-
#initialize(options) ⇒ Client
constructor
A new instance of Client.
Methods included from Aws::ClientStubs
#api_requests, #stub_data, #stub_responses
Methods inherited from Seahorse::Client::Base
add_plugin, api, clear_plugins, define, new, #operation_names, plugins, remove_plugin, set_api, set_plugins
Methods included from Seahorse::Client::HandlerBuilder
#handle, #handle_request, #handle_response
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
Parameters:
- options (Hash)
Options Hash (options):
-
:plugins
(Array<Seahorse::Client::Plugin>)
— default:
[]
—
A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.
-
:credentials
(required, Aws::CredentialProvider)
—
Your AWS credentials. This can be an instance of any one of the following classes:
- Aws::Credentials - Used for configuring static, non-refreshing credentials.
- Aws::SharedCredentials - Used for loading static credentials from a shared file, such as ~/.aws/config.
- Aws::AssumeRoleCredentials - Used when you need to assume a role.
- Aws::AssumeRoleWebIdentityCredentials - Used when you need to assume a role after providing credentials via the web.
- Aws::SSOCredentials - Used for loading credentials from AWS SSO using an access token generated from aws login.
- Aws::ProcessCredentials - Used for loading credentials from a process that outputs to stdout.
- Aws::InstanceProfileCredentials - Used for loading credentials from an EC2 IMDS on an EC2 instance.
- Aws::ECSCredentials - Used for loading credentials from instances running in ECS.
- Aws::CognitoIdentityCredentials - Used for loading credentials from the Cognito Identity service.
When :credentials are not configured directly, the following locations will be searched for credentials:
- Aws.config[:credentials]
- The :access_key_id, :secret_access_key, :session_token, and :account_id options.
- ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'], ENV['AWS_SESSION_TOKEN'], and ENV['AWS_ACCOUNT_ID']
- ~/.aws/credentials
- ~/.aws/config
- EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of Aws::InstanceProfileCredentials or Aws::ECSCredentials to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting ENV['AWS_EC2_METADATA_DISABLED'] to true.
-
:region
(required, String)
—
The AWS region to connect to. The configured :region is used to determine the service :endpoint. When not passed, a default :region is searched for in the following locations:
- Aws.config[:region]
- ENV['AWS_REGION']
- ENV['AMAZON_REGION']
- ENV['AWS_DEFAULT_REGION']
- ~/.aws/credentials
- ~/.aws/config
- :access_key_id (String)
- :account_id (String)
-
:active_endpoint_cache
(Boolean)
— default:
false
—
When set to
true
, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults tofalse
. -
:adaptive_retry_wait_to_fill
(Boolean)
— default:
true
—
Used only in
adaptive
retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a RetryCapacityNotAvailableError and will not retry instead of sleeping.
-
:client_side_monitoring
(Boolean)
— default:
false
—
When
true
, client-side metrics will be collected for all API requests from this client. -
:client_side_monitoring_client_id
(String)
— default:
""
—
Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.
-
:client_side_monitoring_host
(String)
— default:
"127.0.0.1"
—
Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.
-
:client_side_monitoring_port
(Integer)
— default:
31000
—
Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.
-
:client_side_monitoring_publisher
(Aws::ClientSideMonitoring::Publisher)
— default:
Aws::ClientSideMonitoring::Publisher
—
Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.
-
:convert_params
(Boolean)
— default:
true
—
When
true
, an attempt is made to coerce request parameters into the required types. -
:correct_clock_skew
(Boolean)
— default:
true
—
Used only in
standard
and adaptive retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks. -
:defaults_mode
(String)
— default:
"legacy"
—
See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.
-
:disable_host_prefix_injection
(Boolean)
— default:
false
—
Set to true to disable SDK automatically adding host prefix to default service endpoint when available.
-
:disable_request_compression
(Boolean)
— default:
false
—
When set to 'true' the request body will not be compressed for supported operations.
-
:endpoint
(String, URI::HTTPS, URI::HTTP)
—
Normally you should not configure the
:endpoint
option directly. This is normally constructed from the:region
option. Configuring:endpoint
is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:
'http://example.com'
'https://example.com'
'http://example.com:123'
-
:endpoint_cache_max_entries
(Integer)
— default:
1000
—
Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.
-
:endpoint_cache_max_threads
(Integer)
— default:
10
—
Used for the maximum threads in use for polling endpoints to be cached, defaults to 10.
-
:endpoint_cache_poll_interval
(Integer)
— default:
60
—
When :endpoint_discovery and :active_endpoint_cache are enabled, use this option to configure the time interval in seconds between requests that fetch endpoint information. Defaults to 60 seconds.
-
:endpoint_discovery
(Boolean)
— default:
false
—
When set to
true
, endpoint discovery will be enabled for operations when available. -
:ignore_configured_endpoint_urls
(Boolean)
—
Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.
-
:log_formatter
(Aws::Log::Formatter)
— default:
Aws::Log::Formatter.default
—
The log formatter.
-
:log_level
(Symbol)
— default:
:info
—
The log level to send messages to the
:logger
at. -
:logger
(Logger)
—
The Logger instance to send log messages to. If this option is not set, logging will be disabled.
-
:max_attempts
(Integer)
— default:
3
—
An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in
standard
andadaptive
retry modes. -
:profile
(String)
— default:
"default"
—
Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, 'default' is used.
-
:request_min_compression_size_bytes
(Integer)
— default:
10240
—
The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.
-
:retry_backoff
(Proc)
—
A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the
legacy
retry mode. -
:retry_base_delay
(Float)
— default:
0.3
—
The base delay in seconds used by the default backoff function. This option is only used in the
legacy
retry mode. -
:retry_jitter
(Symbol)
— default:
:none
—
A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the
legacy
retry mode. See https://www.awsarchitectureblog.com/2015/03/backoff.html
-
:retry_limit
(Integer)
— default:
3
—
The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the
legacy
retry mode. -
:retry_max_delay
(Integer)
— default:
0
—
The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the
legacy
retry mode. -
:retry_mode
(String)
— default:
"legacy"
—
Specifies which retry algorithm to use. Values are:
- legacy - The pre-existing retry behavior. This is the default value if no retry mode is provided.
- standard - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.
- adaptive - An experimental retry mode that includes all the functionality of standard mode along with automatic client-side throttling. This is a provisional mode that may change behavior in the future.
-
:sdk_ua_app_id
(String)
—
A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.
- :secret_access_key (String)
- :session_token (String)
-
:sigv4a_signing_region_set
(Array)
—
A list of regions that should be signed with SigV4a signing. When not passed, a default
:sigv4a_signing_region_set
is searched for in the following locations:Aws.config[:sigv4a_signing_region_set]
ENV['AWS_SIGV4A_SIGNING_REGION_SET']
~/.aws/config
-
:stub_responses
(Boolean)
— default:
false
—
Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling Aws::ClientStubs#stub_responses. See Aws::ClientStubs for more information.
Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.
-
:telemetry_provider
(Aws::Telemetry::TelemetryProviderBase)
— default:
Aws::Telemetry::NoOpTelemetryProvider
—
Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses NoOpTelemetryProvider, which will not record or emit any telemetry data. The SDK supports the following telemetry providers:
- OpenTelemetry (OTel) - To use the OTel provider, install and require the opentelemetry-sdk gem and then pass in an instance of Aws::Telemetry::OTelProvider for the telemetry provider.
-
:token_provider
(Aws::TokenProvider)
—
A Bearer Token Provider. This can be an instance of any one of the following classes:
- Aws::StaticTokenProvider - Used for configuring static, non-refreshing tokens.
- Aws::SSOTokenProvider - Used for loading tokens from AWS SSO using an access token generated from aws login.

When :token_provider is not configured directly, the Aws::TokenProviderChain will be used to search for tokens configured for your profile in shared configuration files.
-
:use_dualstack_endpoint
(Boolean)
—
When set to true, dualstack enabled endpoints (with .aws TLD) will be used if available.
-
:use_fips_endpoint
(Boolean)
—
When set to true, FIPS-compatible endpoints will be used if available. When a fips region is used, the region is normalized and this config is set to true.
-
:validate_params
(Boolean)
— default:
true
—
When
true
, request parameters are validated before sending the request. -
:endpoint_provider
(Aws::ChimeSDKMediaPipelines::EndpointProvider)
—
The endpoint provider used to resolve endpoints. Any object that responds to
#resolve_endpoint(parameters)
whereparameters
is a Struct similar toAws::ChimeSDKMediaPipelines::EndpointParameters
. -
:http_continue_timeout
(Float)
— default:
1
—
The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has "Expect" header set to "100-continue". Defaults to
nil
which disables this behaviour. This value can safely be set per request on the session. -
:http_idle_timeout
(Float)
— default:
5
—
The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.
-
:http_open_timeout
(Float)
— default:
15
—
The number of seconds to wait when opening an HTTP session before raising a Timeout::Error. This value can safely be set per-request on the session.
-
:http_proxy
(URI::HTTP, String)
—
A proxy to send requests through. Formatted like 'http://proxy.com:123'.
-
:http_read_timeout
(Float)
— default:
60
—
The default number of seconds to wait for response data. This value can safely be set per-request on the session.
-
:http_wire_trace
(Boolean)
— default:
false
—
When
true
, HTTP debug output will be sent to the:logger
. -
:on_chunk_received
(Proc)
—
When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a
content-length
). -
:on_chunk_sent
(Proc)
—
When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.
-
:raise_response_errors
(Boolean)
— default:
true
—
When
true
, response errors are raised. -
:ssl_ca_bundle
(String)
—
Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass
:ssl_ca_bundle
or :ssl_ca_directory, the system default will be used if available.
:ssl_ca_directory
(String)
—
Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass
:ssl_ca_bundle
or :ssl_ca_directory, the system default will be used if available.
:ssl_ca_store
(String)
—
Sets the X509::Store to verify peer certificate.
-
:ssl_cert
(OpenSSL::X509::Certificate)
—
Sets a client certificate when creating http connections.
-
:ssl_key
(OpenSSL::PKey)
—
Sets a client key when creating http connections.
-
:ssl_timeout
(Float)
—
Sets the SSL timeout in seconds.
-
:ssl_verify_peer
(Boolean)
— default:
true
—
When
true
, SSL peer certificates are verified when establishing a connection.
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 444

def initialize(*args)
  super
end
Instance Method Details
#create_media_capture_pipeline(params = {}) ⇒ Types::CreateMediaCapturePipelineResponse
Creates a media pipeline.
Examples:
Request syntax with placeholder values
resp = client.create_media_capture_pipeline({
source_type: "ChimeSdkMeeting", # required, accepts ChimeSdkMeeting
source_arn: "Arn", # required
sink_type: "S3Bucket", # required, accepts S3Bucket
sink_arn: "Arn", # required
client_request_token: "ClientRequestToken",
chime_sdk_meeting_configuration: {
source_configuration: {
selected_video_streams: {
attendee_ids: ["GuidString"],
external_user_ids: ["ExternalUserIdType"],
},
},
artifacts_configuration: {
audio: { # required
mux_type: "AudioOnly", # required, accepts AudioOnly, AudioWithActiveSpeakerVideo, AudioWithCompositedVideo
},
video: { # required
state: "Enabled", # required, accepts Enabled, Disabled
mux_type: "VideoOnly", # accepts VideoOnly
},
content: { # required
state: "Enabled", # required, accepts Enabled, Disabled
mux_type: "ContentOnly", # accepts ContentOnly
},
composited_video: {
layout: "GridView", # accepts GridView
resolution: "HD", # accepts HD, FHD
grid_view_configuration: { # required
content_share_layout: "PresenterOnly", # required, accepts PresenterOnly, Horizontal, Vertical, ActiveSpeakerOnly
presenter_only_configuration: {
presenter_position: "TopLeft", # accepts TopLeft, TopRight, BottomLeft, BottomRight
},
active_speaker_only_configuration: {
active_speaker_position: "TopLeft", # accepts TopLeft, TopRight, BottomLeft, BottomRight
},
horizontal_layout_configuration: {
tile_order: "JoinSequence", # accepts JoinSequence, SpeakerSequence
tile_position: "Top", # accepts Top, Bottom
tile_count: 1,
tile_aspect_ratio: "TileAspectRatio",
},
vertical_layout_configuration: {
tile_order: "JoinSequence", # accepts JoinSequence, SpeakerSequence
tile_position: "Left", # accepts Left, Right
tile_count: 1,
tile_aspect_ratio: "TileAspectRatio",
},
video_attribute: {
corner_radius: 1,
border_color: "Black", # accepts Black, Blue, Red, Green, White, Yellow
highlight_color: "Black", # accepts Black, Blue, Red, Green, White, Yellow
border_thickness: 1,
},
canvas_orientation: "Landscape", # accepts Landscape, Portrait
},
},
},
},
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
})
Response structure
resp.media_capture_pipeline.media_pipeline_id #=> String
resp.media_capture_pipeline.media_pipeline_arn #=> String
resp.media_capture_pipeline.source_type #=> String, one of "ChimeSdkMeeting"
resp.media_capture_pipeline.source_arn #=> String
resp.media_capture_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_capture_pipeline.sink_type #=> String, one of "S3Bucket"
resp.media_capture_pipeline.sink_arn #=> String
resp.media_capture_pipeline.created_timestamp #=> Time
resp.media_capture_pipeline.updated_timestamp #=> Time
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.attendee_ids #=> Array
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.attendee_ids[0] #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.external_user_ids #=> Array
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.external_user_ids[0] #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.audio.mux_type #=> String, one of "AudioOnly", "AudioWithActiveSpeakerVideo", "AudioWithCompositedVideo"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.video.state #=> String, one of "Enabled", "Disabled"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.video.mux_type #=> String, one of "VideoOnly"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.content.state #=> String, one of "Enabled", "Disabled"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.content.mux_type #=> String, one of "ContentOnly"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.layout #=> String, one of "GridView"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.resolution #=> String, one of "HD", "FHD"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.content_share_layout #=> String, one of "PresenterOnly", "Horizontal", "Vertical", "ActiveSpeakerOnly"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.presenter_only_configuration.presenter_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.active_speaker_only_configuration.active_speaker_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_position #=> String, one of "Top", "Bottom"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_count #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_aspect_ratio #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_position #=> String, one of "Left", "Right"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_count #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_aspect_ratio #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.corner_radius #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.border_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.highlight_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.border_thickness #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.canvas_orientation #=> String, one of "Landscape", "Portrait"
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:source_type
(required, String)
—
Source type from which the media artifacts are captured. A Chime SDK Meeting is the only supported source.
-
:source_arn
(required, String)
—
ARN of the source from which the media artifacts are captured.
-
:sink_type
(required, String)
—
Destination type to which the media artifacts are saved. You must use an S3 bucket.
-
:sink_arn
(required, String)
—
The ARN of the sink type.
-
:client_request_token
(String)
—
The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media pipeline request.
A suitable default value is auto-generated. You should normally not need to pass this option.
-
:chime_sdk_meeting_configuration
(Types::ChimeSdkMeetingConfiguration)
—
The configuration for a specified media pipeline.
SourceType
must beChimeSdkMeeting
. -
:tags
(Array<Types::Tag>)
—
The tag key-value pairs.
Returns:
-
(Types::CreateMediaCapturePipelineResponse)
—
Returns a response object which responds to the following methods:
- #media_capture_pipeline => Types::MediaCapturePipeline
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 597

def create_media_capture_pipeline(params = {}, options = {})
  req = build_request(:create_media_capture_pipeline, params)
  req.send_request(options)
end
#create_media_concatenation_pipeline(params = {}) ⇒ Types::CreateMediaConcatenationPipelineResponse
Creates a media concatenation pipeline.
Examples:
Request syntax with placeholder values
resp = client.create_media_concatenation_pipeline({
sources: [ # required
{
type: "MediaCapturePipeline", # required, accepts MediaCapturePipeline
media_capture_pipeline_source_configuration: { # required
media_pipeline_arn: "Arn", # required
chime_sdk_meeting_configuration: { # required
artifacts_configuration: { # required
audio: { # required
state: "Enabled", # required, accepts Enabled
},
video: { # required
state: "Enabled", # required, accepts Enabled, Disabled
},
content: { # required
state: "Enabled", # required, accepts Enabled, Disabled
},
data_channel: { # required
state: "Enabled", # required, accepts Enabled, Disabled
},
transcription_messages: { # required
state: "Enabled", # required, accepts Enabled, Disabled
},
meeting_events: { # required
state: "Enabled", # required, accepts Enabled, Disabled
},
composited_video: { # required
state: "Enabled", # required, accepts Enabled, Disabled
},
},
},
},
},
],
sinks: [ # required
{
type: "S3Bucket", # required, accepts S3Bucket
s3_bucket_sink_configuration: { # required
destination: "Arn", # required
},
},
],
client_request_token: "ClientRequestToken",
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
})
Response structure
resp.media_concatenation_pipeline.media_pipeline_id #=> String
resp.media_concatenation_pipeline.media_pipeline_arn #=> String
resp.media_concatenation_pipeline.sources #=> Array
resp.media_concatenation_pipeline.sources[0].type #=> String, one of "MediaCapturePipeline"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.media_pipeline_arn #=> String
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.audio.state #=> String, one of "Enabled"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.video.state #=> String, one of "Enabled", "Disabled"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.content.state #=> String, one of "Enabled", "Disabled"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.data_channel.state #=> String, one of "Enabled", "Disabled"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.transcription_messages.state #=> String, one of "Enabled", "Disabled"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.meeting_events.state #=> String, one of "Enabled", "Disabled"
resp.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.state #=> String, one of "Enabled", "Disabled"
resp.media_concatenation_pipeline.sinks #=> Array
resp.media_concatenation_pipeline.sinks[0].type #=> String, one of "S3Bucket"
resp.media_concatenation_pipeline.sinks[0].s3_bucket_sink_configuration.destination #=> String
resp.media_concatenation_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_concatenation_pipeline.created_timestamp #=> Time
resp.media_concatenation_pipeline.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :sources (required, Array<Types::ConcatenationSource>) — An object that specifies the sources for the media concatenation pipeline.
- :sinks (required, Array<Types::ConcatenationSink>) — An object that specifies the data sinks for the media concatenation pipeline.
- :client_request_token (String) — The unique identifier for the client request. The token makes the API request idempotent. Use a unique token for each media concatenation pipeline request. A suitable default value is auto-generated; you should normally not need to pass this option.
- :tags (Array<Types::Tag>) — The tags associated with the media concatenation pipeline.
Returns:
- (Types::CreateMediaConcatenationPipelineResponse) — Returns a response object which responds to the following methods:
  - #media_concatenation_pipeline => Types::MediaConcatenationPipeline
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 705

def create_media_concatenation_pipeline(params = {}, options = {})
  req = build_request(:create_media_concatenation_pipeline, params)
  req.send_request(options)
end
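As a concrete illustration of the request shape above, the following sketch builds a params hash for a minimal concatenation pipeline with one media capture pipeline source and one S3 bucket sink. The ARNs are hypothetical placeholders, and the explicit idempotency token is optional since the SDK generates one by default:

```ruby
require 'securerandom'

# Hypothetical ARNs for illustration only -- substitute your own
# media capture pipeline ARN and S3 bucket ARN.
capture_arn = 'arn:aws:chime:us-east-1:111122223333:media-pipeline/example-capture-id'
bucket_arn  = 'arn:aws:s3:::example-concatenation-bucket'

params = {
  sources: [
    {
      type: 'MediaCapturePipeline',
      media_capture_pipeline_source_configuration: {
        media_pipeline_arn: capture_arn,
        chime_sdk_meeting_configuration: {
          artifacts_configuration: {
            audio: { state: 'Enabled' },      # audio accepts only "Enabled"
            video: { state: 'Enabled' },
            content: { state: 'Enabled' },
            data_channel: { state: 'Enabled' },
            transcription_messages: { state: 'Enabled' },
            meeting_events: { state: 'Enabled' },
            composited_video: { state: 'Enabled' }
          }
        }
      }
    }
  ],
  sinks: [
    { type: 'S3Bucket', s3_bucket_sink_configuration: { destination: bucket_arn } }
  ],
  # Explicit idempotency token; omit to let the SDK generate one.
  client_request_token: SecureRandom.uuid
}

# client = Aws::ChimeSDKMediaPipelines::Client.new(region: 'us-east-1')
# resp = client.create_media_concatenation_pipeline(params)
```

Passing the same client_request_token on a retry makes the request idempotent, so a timed-out call can be safely re-sent without creating a second pipeline.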
#create_media_insights_pipeline(params = {}) ⇒ Types::CreateMediaInsightsPipelineResponse
Creates a media insights pipeline.
Examples:
Request syntax with placeholder values
resp = client.create_media_insights_pipeline({
media_insights_pipeline_configuration_arn: "Arn", # required
kinesis_video_stream_source_runtime_configuration: {
streams: [ # required
{
stream_arn: "KinesisVideoStreamArn", # required
fragment_number: "FragmentNumberString",
stream_channel_definition: { # required
number_of_channels: 1, # required
channel_definitions: [
{
channel_id: 1, # required
participant_role: "AGENT", # accepts AGENT, CUSTOMER
},
],
},
},
],
media_encoding: "pcm", # required, accepts pcm
media_sample_rate: 1, # required
},
media_insights_runtime_metadata: {
"NonEmptyString" => "String",
},
kinesis_video_stream_recording_source_runtime_configuration: {
streams: [ # required
{
stream_arn: "KinesisVideoStreamArn",
},
],
fragment_selector: { # required
fragment_selector_type: "ProducerTimestamp", # required, accepts ProducerTimestamp, ServerTimestamp
timestamp_range: { # required
start_timestamp: Time.now, # required
end_timestamp: Time.now, # required
},
},
},
s3_recording_sink_runtime_configuration: {
destination: "Arn", # required
recording_file_format: "Wav", # required, accepts Wav, Opus
},
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
client_request_token: "ClientRequestToken",
})
Response structure
resp.media_insights_pipeline.media_pipeline_id #=> String
resp.media_insights_pipeline.media_pipeline_arn #=> String
resp.media_insights_pipeline.media_insights_pipeline_configuration_arn #=> String
resp.media_insights_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams #=> Array
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_arn #=> String
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].fragment_number #=> String
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.number_of_channels #=> Integer
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.channel_definitions #=> Array
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.channel_definitions[0].channel_id #=> Integer
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.channel_definitions[0].participant_role #=> String, one of "AGENT", "CUSTOMER"
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.media_encoding #=> String, one of "pcm"
resp.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.media_sample_rate #=> Integer
resp.media_insights_pipeline.media_insights_runtime_metadata #=> Hash
resp.media_insights_pipeline.media_insights_runtime_metadata["NonEmptyString"] #=> String
resp.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.streams #=> Array
resp.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.streams[0].stream_arn #=> String
resp.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.fragment_selector.fragment_selector_type #=> String, one of "ProducerTimestamp", "ServerTimestamp"
resp.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.fragment_selector.timestamp_range.start_timestamp #=> Time
resp.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.fragment_selector.timestamp_range.end_timestamp #=> Time
resp.media_insights_pipeline.s3_recording_sink_runtime_configuration.destination #=> String
resp.media_insights_pipeline.s3_recording_sink_runtime_configuration.recording_file_format #=> String, one of "Wav", "Opus"
resp.media_insights_pipeline.created_timestamp #=> Time
resp.media_insights_pipeline.element_statuses #=> Array
resp.media_insights_pipeline.element_statuses[0].type #=> String, one of "AmazonTranscribeCallAnalyticsProcessor", "VoiceAnalyticsProcessor", "AmazonTranscribeProcessor", "KinesisDataStreamSink", "LambdaFunctionSink", "SqsQueueSink", "SnsTopicSink", "S3RecordingSink", "VoiceEnhancementSink"
resp.media_insights_pipeline.element_statuses[0].status #=> String, one of "NotStarted", "NotSupported", "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused"
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :media_insights_pipeline_configuration_arn (required, String) — The ARN of the pipeline's configuration.
- :kinesis_video_stream_source_runtime_configuration (Types::KinesisVideoStreamSourceRuntimeConfiguration) — The runtime configuration for the Kinesis video stream source of the media insights pipeline.
- :media_insights_runtime_metadata (Hash<String,String>) — The runtime metadata for the media insights pipeline. Consists of a key-value map of strings.
- :kinesis_video_stream_recording_source_runtime_configuration (Types::KinesisVideoStreamRecordingSourceRuntimeConfiguration) — The runtime configuration for the Kinesis video recording stream source.
- :s3_recording_sink_runtime_configuration (Types::S3RecordingSinkRuntimeConfiguration) — The runtime configuration for the S3 recording sink. If specified, the settings in this structure override any settings in S3RecordingSinkConfiguration.
- :tags (Array<Types::Tag>) — The tags assigned to the media insights pipeline.
- :client_request_token (String) — The unique identifier for the media insights pipeline request. A suitable default value is auto-generated; you should normally not need to pass this option.
Returns:
- (Types::CreateMediaInsightsPipelineResponse) — Returns a response object which responds to the following methods:
  - #media_insights_pipeline => Types::MediaInsightsPipeline
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 831

def create_media_insights_pipeline(params = {}, options = {})
  req = build_request(:create_media_insights_pipeline, params)
  req.send_request(options)
end
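For orientation, this sketch builds the params for a real-time insights pipeline reading a two-channel call from a Kinesis video stream. The configuration and stream ARNs are hypothetical placeholders; the channel layout (agent on channel 0, customer on channel 1) and the 16 kHz sample rate are illustrative assumptions:

```ruby
# Hypothetical ARNs for illustration -- replace with your own
# media insights pipeline configuration ARN and KVS stream ARN.
config_arn = 'arn:aws:chime:us-east-1:111122223333:media-insights-pipeline-configuration/ExampleConfig'
stream_arn = 'arn:aws:kinesisvideo:us-east-1:111122223333:stream/example-stream/1234567890'

params = {
  media_insights_pipeline_configuration_arn: config_arn,
  kinesis_video_stream_source_runtime_configuration: {
    streams: [
      {
        stream_arn: stream_arn,
        stream_channel_definition: {
          number_of_channels: 2,
          channel_definitions: [
            { channel_id: 0, participant_role: 'AGENT' },
            { channel_id: 1, participant_role: 'CUSTOMER' }
          ]
        }
      }
    ],
    media_encoding: 'pcm',     # only "pcm" is accepted
    media_sample_rate: 16_000  # illustrative 16 kHz audio
  }
}

# client = Aws::ChimeSDKMediaPipelines::Client.new(region: 'us-east-1')
# resp = client.create_media_insights_pipeline(params)
# resp.media_insights_pipeline.media_pipeline_arn
```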
#create_media_insights_pipeline_configuration(params = {}) ⇒ Types::CreateMediaInsightsPipelineConfigurationResponse
Creates a media insights pipeline configuration, a structure that contains the static configurations for a media insights pipeline.
Examples:
Request syntax with placeholder values
resp = client.create_media_insights_pipeline_configuration({
media_insights_pipeline_configuration_name: "MediaInsightsPipelineConfigurationNameString", # required
resource_access_role_arn: "Arn", # required
real_time_alert_configuration: {
disabled: false,
rules: [
{
type: "KeywordMatch", # required, accepts KeywordMatch, Sentiment, IssueDetection
keyword_match_configuration: {
rule_name: "RuleName", # required
keywords: ["Keyword"], # required
negate: false,
},
sentiment_configuration: {
rule_name: "RuleName", # required
sentiment_type: "NEGATIVE", # required, accepts NEGATIVE
time_period: 1, # required
},
issue_detection_configuration: {
rule_name: "RuleName", # required
},
},
],
},
elements: [ # required
{
type: "AmazonTranscribeCallAnalyticsProcessor", # required, accepts AmazonTranscribeCallAnalyticsProcessor, VoiceAnalyticsProcessor, AmazonTranscribeProcessor, KinesisDataStreamSink, LambdaFunctionSink, SqsQueueSink, SnsTopicSink, S3RecordingSink, VoiceEnhancementSink
amazon_transcribe_call_analytics_processor_configuration: {
language_code: "en-US", # required, accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
vocabulary_name: "VocabularyName",
vocabulary_filter_name: "VocabularyFilterName",
vocabulary_filter_method: "remove", # accepts remove, mask, tag
language_model_name: "ModelName",
enable_partial_results_stabilization: false,
partial_results_stability: "high", # accepts high, medium, low
content_identification_type: "PII", # accepts PII
content_redaction_type: "PII", # accepts PII
pii_entity_types: "PiiEntityTypes",
filter_partial_results: false,
post_call_analytics_settings: {
output_location: "String", # required
data_access_role_arn: "String", # required
content_redaction_output: "redacted", # accepts redacted, redacted_and_unredacted
output_encryption_kms_key_id: "String",
},
call_analytics_stream_categories: ["CategoryName"],
},
amazon_transcribe_processor_configuration: {
language_code: "en-US", # accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
vocabulary_name: "VocabularyName",
vocabulary_filter_name: "VocabularyFilterName",
vocabulary_filter_method: "remove", # accepts remove, mask, tag
show_speaker_label: false,
enable_partial_results_stabilization: false,
partial_results_stability: "high", # accepts high, medium, low
content_identification_type: "PII", # accepts PII
content_redaction_type: "PII", # accepts PII
pii_entity_types: "PiiEntityTypes",
language_model_name: "ModelName",
filter_partial_results: false,
identify_language: false,
identify_multiple_languages: false,
language_options: "LanguageOptions",
preferred_language: "en-US", # accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
vocabulary_names: "VocabularyNames",
vocabulary_filter_names: "VocabularyFilterNames",
},
kinesis_data_stream_sink_configuration: {
insights_target: "Arn",
},
s3_recording_sink_configuration: {
destination: "Arn",
recording_file_format: "Wav", # accepts Wav, Opus
},
voice_analytics_processor_configuration: {
speaker_search_status: "Enabled", # accepts Enabled, Disabled
voice_tone_analysis_status: "Enabled", # accepts Enabled, Disabled
},
lambda_function_sink_configuration: {
insights_target: "Arn",
},
sqs_queue_sink_configuration: {
insights_target: "Arn",
},
sns_topic_sink_configuration: {
insights_target: "Arn",
},
voice_enhancement_sink_configuration: {
disabled: false,
},
},
],
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
client_request_token: "ClientRequestToken",
})
Response structure
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_name #=> String
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_arn #=> String
resp.media_insights_pipeline_configuration.resource_access_role_arn #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.disabled #=> Boolean
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules #=> Array
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].type #=> String, one of "KeywordMatch", "Sentiment", "IssueDetection"
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.keywords #=> Array
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.keywords[0] #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.negate #=> Boolean
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.sentiment_type #=> String, one of "NEGATIVE"
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.time_period #=> Integer
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].issue_detection_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.elements #=> Array
resp.media_insights_pipeline_configuration.elements[0].type #=> String, one of "AmazonTranscribeCallAnalyticsProcessor", "VoiceAnalyticsProcessor", "AmazonTranscribeProcessor", "KinesisDataStreamSink", "LambdaFunctionSink", "SqsQueueSink", "SnsTopicSink", "S3RecordingSink", "VoiceEnhancementSink"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_filter_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.language_model_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.enable_partial_results_stabilization #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.partial_results_stability #=> String, one of "high", "medium", "low"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.content_identification_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.content_redaction_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.pii_entity_types #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.filter_partial_results #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.output_location #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.data_access_role_arn #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.content_redaction_output #=> String, one of "redacted", "redacted_and_unredacted"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.output_encryption_kms_key_id #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.call_analytics_stream_categories #=> Array
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.call_analytics_stream_categories[0] #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.show_speaker_label #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.enable_partial_results_stabilization #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.partial_results_stability #=> String, one of "high", "medium", "low"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.content_identification_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.content_redaction_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.pii_entity_types #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_model_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.filter_partial_results #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.identify_language #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.identify_multiple_languages #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_options #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.preferred_language #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_names #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_names #=> String
resp.media_insights_pipeline_configuration.elements[0].kinesis_data_stream_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].s3_recording_sink_configuration.destination #=> String
resp.media_insights_pipeline_configuration.elements[0].s3_recording_sink_configuration.recording_file_format #=> String, one of "Wav", "Opus"
resp.media_insights_pipeline_configuration.elements[0].voice_analytics_processor_configuration.speaker_search_status #=> String, one of "Enabled", "Disabled"
resp.media_insights_pipeline_configuration.elements[0].voice_analytics_processor_configuration.voice_tone_analysis_status #=> String, one of "Enabled", "Disabled"
resp.media_insights_pipeline_configuration.elements[0].lambda_function_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].sqs_queue_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].sns_topic_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].voice_enhancement_sink_configuration.disabled #=> Boolean
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_id #=> String
resp.media_insights_pipeline_configuration.created_timestamp #=> Time
resp.media_insights_pipeline_configuration.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :media_insights_pipeline_configuration_name (required, String) — The name of the media insights pipeline configuration.
- :resource_access_role_arn (required, String) — The ARN of the role used by the service to access Amazon Web Services resources, including Amazon Transcribe and Amazon Transcribe Call Analytics, on the caller's behalf.
- :real_time_alert_configuration (Types::RealTimeAlertConfiguration) — The configuration settings for the real-time alerts in a media insights pipeline configuration.
- :elements (required, Array<Types::MediaInsightsPipelineConfigurationElement>) — The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis Data Stream.
- :tags (Array<Types::Tag>) — The tags assigned to the media insights pipeline configuration.
- :client_request_token (String) — The unique identifier for the media insights pipeline configuration request. A suitable default value is auto-generated; you should normally not need to pass this option.
Returns:
- (Types::CreateMediaInsightsPipelineConfigurationResponse) — Returns a response object which responds to the following methods:
  - #media_insights_pipeline_configuration => Types::MediaInsightsPipelineConfiguration
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1042

def create_media_insights_pipeline_configuration(params = {}, options = {})
  req = build_request(:create_media_insights_pipeline_configuration, params)
  req.send_request(options)
end
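To make the elements structure concrete, this sketch pairs an Amazon Transcribe processor with a Kinesis Data Stream sink that consumes its output. The role and stream ARNs are hypothetical placeholders; a real configuration would use a role that grants the service access to Transcribe and the target stream:

```ruby
# Hypothetical ARNs for illustration -- substitute your own.
role_arn = 'arn:aws:iam::111122223333:role/ExampleResourceAccessRole'
kds_arn  = 'arn:aws:kinesis:us-east-1:111122223333:stream/example-insights-stream'

params = {
  media_insights_pipeline_configuration_name: 'ExampleTranscription',
  resource_access_role_arn: role_arn,
  elements: [
    # Processor element: produces transcription results.
    {
      type: 'AmazonTranscribeProcessor',
      amazon_transcribe_processor_configuration: { language_code: 'en-US' }
    },
    # Sink element: receives the processor's insights.
    {
      type: 'KinesisDataStreamSink',
      kinesis_data_stream_sink_configuration: { insights_target: kds_arn }
    }
  ]
}

# client = Aws::ChimeSDKMediaPipelines::Client.new(region: 'us-east-1')
# resp = client.create_media_insights_pipeline_configuration(params)
# resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_arn
```

The resulting configuration ARN is the static half of the picture; it is then passed as :media_insights_pipeline_configuration_arn when calling #create_media_insights_pipeline with the runtime source settings.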
#create_media_live_connector_pipeline(params = {}) ⇒ Types::CreateMediaLiveConnectorPipelineResponse
Creates a media live connector pipeline in an Amazon Chime SDK meeting.
Examples:
Request syntax with placeholder values
resp = client.create_media_live_connector_pipeline({
sources: [ # required
{
source_type: "ChimeSdkMeeting", # required, accepts ChimeSdkMeeting
chime_sdk_meeting_live_connector_configuration: { # required
arn: "Arn", # required
mux_type: "AudioWithCompositedVideo", # required, accepts AudioWithCompositedVideo, AudioWithActiveSpeakerVideo
composited_video: {
layout: "GridView", # accepts GridView
resolution: "HD", # accepts HD, FHD
grid_view_configuration: { # required
content_share_layout: "PresenterOnly", # required, accepts PresenterOnly, Horizontal, Vertical, ActiveSpeakerOnly
presenter_only_configuration: {
presenter_position: "TopLeft", # accepts TopLeft, TopRight, BottomLeft, BottomRight
},
active_speaker_only_configuration: {
active_speaker_position: "TopLeft", # accepts TopLeft, TopRight, BottomLeft, BottomRight
},
horizontal_layout_configuration: {
tile_order: "JoinSequence", # accepts JoinSequence, SpeakerSequence
tile_position: "Top", # accepts Top, Bottom
tile_count: 1,
tile_aspect_ratio: "TileAspectRatio",
},
vertical_layout_configuration: {
tile_order: "JoinSequence", # accepts JoinSequence, SpeakerSequence
tile_position: "Left", # accepts Left, Right
tile_count: 1,
tile_aspect_ratio: "TileAspectRatio",
},
video_attribute: {
corner_radius: 1,
border_color: "Black", # accepts Black, Blue, Red, Green, White, Yellow
highlight_color: "Black", # accepts Black, Blue, Red, Green, White, Yellow
border_thickness: 1,
},
canvas_orientation: "Landscape", # accepts Landscape, Portrait
},
},
source_configuration: {
selected_video_streams: {
attendee_ids: ["GuidString"],
external_user_ids: ["ExternalUserIdType"],
},
},
},
},
],
sinks: [ # required
{
sink_type: "RTMP", # required, accepts RTMP
rtmp_configuration: { # required
url: "SensitiveString", # required
audio_channels: "Stereo", # accepts Stereo, Mono
audio_sample_rate: "AudioSampleRateOption",
},
},
],
client_request_token: "ClientRequestToken",
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
})
Response structure
resp.media_live_connector_pipeline.sources #=> Array
resp.media_live_connector_pipeline.sources[0].source_type #=> String, one of "ChimeSdkMeeting"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.arn #=> String
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.mux_type #=> String, one of "AudioWithCompositedVideo", "AudioWithActiveSpeakerVideo"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.layout #=> String, one of "GridView"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.resolution #=> String, one of "HD", "FHD"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.content_share_layout #=> String, one of "PresenterOnly", "Horizontal", "Vertical", "ActiveSpeakerOnly"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.presenter_only_configuration.presenter_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.active_speaker_only_configuration.active_speaker_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_position #=> String, one of "Top", "Bottom"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_count #=> Integer
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_aspect_ratio #=> String
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_position #=> String, one of "Left", "Right"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_count #=> Integer
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_aspect_ratio #=> String
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.corner_radius #=> Integer
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.border_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.highlight_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.border_thickness #=> Integer
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.canvas_orientation #=> String, one of "Landscape", "Portrait"
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.attendee_ids #=> Array
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.attendee_ids[0] #=> String
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.external_user_ids #=> Array
resp.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.external_user_ids[0] #=> String
resp.media_live_connector_pipeline.sinks #=> Array
resp.media_live_connector_pipeline.sinks[0].sink_type #=> String, one of "RTMP"
resp.media_live_connector_pipeline.sinks[0].rtmp_configuration.url #=> String
resp.media_live_connector_pipeline.sinks[0].rtmp_configuration.audio_channels #=> String, one of "Stereo", "Mono"
resp.media_live_connector_pipeline.sinks[0].rtmp_configuration.audio_sample_rate #=> String
resp.media_live_connector_pipeline.media_pipeline_id #=> String
resp.media_live_connector_pipeline.media_pipeline_arn #=> String
resp.media_live_connector_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_live_connector_pipeline.created_timestamp #=> Time
resp.media_live_connector_pipeline.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :sources (required, Array<Types::LiveConnectorSourceConfiguration>) — The media live connector pipeline's data sources.
- :sinks (required, Array<Types::LiveConnectorSinkConfiguration>) — The media live connector pipeline's data sinks.
- :client_request_token (String) — The token assigned to the client making the request. A suitable default value is auto-generated; you should normally not need to pass this option.
- :tags (Array<Types::Tag>) — The tags associated with the media live connector pipeline.
Returns:
- (Types::CreateMediaLiveConnectorPipelineResponse) — Returns a response object which responds to the following methods:
  - #media_live_connector_pipeline => Types::MediaLiveConnectorPipeline
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1181

def create_media_live_connector_pipeline(params = {}, options = {})
  req = build_request(:create_media_live_connector_pipeline, params)
  req.send_request(options)
end
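A minimal live connector sketch: stream a meeting's composited grid view to an RTMP endpoint. The meeting ARN and RTMP URL are hypothetical placeholders; the FHD resolution and stereo audio are illustrative choices, not defaults:

```ruby
# Hypothetical values for illustration -- substitute your own
# meeting ARN and streaming endpoint URL (with its stream key).
meeting_arn = 'arn:aws:chime:us-east-1:111122223333:meeting/ExampleMeetingId'
rtmp_url    = 'rtmps://example.live-video.net/app/sk_example_stream_key'

params = {
  sources: [
    {
      source_type: 'ChimeSdkMeeting',
      chime_sdk_meeting_live_connector_configuration: {
        arn: meeting_arn,
        mux_type: 'AudioWithCompositedVideo',
        composited_video: {
          layout: 'GridView',
          resolution: 'FHD',  # or "HD"
          grid_view_configuration: { content_share_layout: 'PresenterOnly' }
        }
      }
    }
  ],
  sinks: [
    {
      sink_type: 'RTMP',
      rtmp_configuration: { url: rtmp_url, audio_channels: 'Stereo' }
    }
  ]
}

# client = Aws::ChimeSDKMediaPipelines::Client.new(region: 'us-east-1')
# resp = client.create_media_live_connector_pipeline(params)
```

Note that the RTMP url is modeled as a SensitiveString because it typically embeds the stream key; avoid logging it.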
#create_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Types::CreateMediaPipelineKinesisVideoStreamPoolResponse
Creates an Amazon Kinesis Video Stream pool for use with media stream pipelines.
If a meeting uses an opt-in Region as its MediaRegion, the KVS stream must be in that same Region. For example, if a meeting uses the af-south-1 Region, the KVS stream must also be in af-south-1. However, if the meeting uses a Region that AWS turns on by default, the KVS stream can be in any available Region, including an opt-in Region. For example, if the meeting uses ca-central-1, the KVS stream can be in eu-west-2, us-east-1, af-south-1, or any other Region that the Amazon Chime SDK supports.
To learn which AWS Region a meeting uses, call the GetMeeting API and use the MediaRegion parameter from the response.
For more information about opt-in Regions, refer to Available Regions in the Amazon Chime SDK Developer Guide, and Specify which AWS Regions your account can use, in the AWS Account Management Reference Guide.
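The Region rule above can be sketched as a small helper. This is an illustrative check only, not part of the SDK, and the opt-in Region list shown is a partial example rather than an authoritative one:

```ruby
# Partial, illustrative list of opt-in Regions (not authoritative).
OPT_IN_REGIONS = %w[af-south-1 ap-east-1 eu-south-1 me-south-1].freeze

# Hypothetical helper: is a KVS stream Region acceptable for a meeting's
# MediaRegion under the rule described above?
def kvs_region_allowed?(meeting_region, stream_region)
  if OPT_IN_REGIONS.include?(meeting_region)
    # Opt-in meeting Region: the stream must be in the same Region.
    stream_region == meeting_region
  else
    # Default-on meeting Region: any Region the Chime SDK supports works.
    true
  end
end

kvs_region_allowed?("af-south-1", "af-south-1")   # => true
kvs_region_allowed?("af-south-1", "us-east-1")    # => false
kvs_region_allowed?("ca-central-1", "af-south-1") # => true
```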
Examples:
Request syntax with placeholder values
resp = client.create_media_pipeline_kinesis_video_stream_pool({
stream_configuration: { # required
region: "AwsRegion", # required
data_retention_in_hours: 1,
},
pool_name: "KinesisVideoStreamPoolName", # required
client_request_token: "ClientRequestToken",
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
})
Response structure
resp.kinesis_video_stream_pool_configuration.pool_arn #=> String
resp.kinesis_video_stream_pool_configuration.pool_name #=> String
resp.kinesis_video_stream_pool_configuration.pool_id #=> String
resp.kinesis_video_stream_pool_configuration.pool_status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "DELETING", "FAILED"
resp.kinesis_video_stream_pool_configuration.pool_size #=> Integer
resp.kinesis_video_stream_pool_configuration.stream_configuration.region #=> String
resp.kinesis_video_stream_pool_configuration.stream_configuration.data_retention_in_hours #=> Integer
resp.kinesis_video_stream_pool_configuration.created_timestamp #=> Time
resp.kinesis_video_stream_pool_configuration.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :stream_configuration (required, Types::KinesisVideoStreamConfiguration) — The configuration settings for the stream.
- :pool_name (required, String) — The name of the pool.
- :client_request_token (String) — The token assigned to the client making the request. A suitable default value is auto-generated. You should normally not need to pass this option.
- :tags (Array<Types::Tag>) — The tags assigned to the stream pool.

Returns:
- (Types::CreateMediaPipelineKinesisVideoStreamPoolResponse) — Returns a response object which responds to the following methods:
  - #kinesis_video_stream_pool_configuration => Types::KinesisVideoStreamPoolConfiguration
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1267

def create_media_pipeline_kinesis_video_stream_pool(params = {}, options = {})
  req = build_request(:create_media_pipeline_kinesis_video_stream_pool, params)
  req.send_request(options)
end
#create_media_stream_pipeline(params = {}) ⇒ Types::CreateMediaStreamPipelineResponse
Creates a streaming media pipeline.
Examples:
Request syntax with placeholder values
resp = client.create_media_stream_pipeline({
sources: [ # required
{
source_type: "ChimeSdkMeeting", # required, accepts ChimeSdkMeeting
source_arn: "Arn", # required
},
],
sinks: [ # required
{
sink_arn: "Arn", # required
sink_type: "KinesisVideoStreamPool", # required, accepts KinesisVideoStreamPool
reserved_stream_capacity: 1, # required
media_stream_type: "MixedAudio", # required, accepts MixedAudio, IndividualAudio
},
],
client_request_token: "ClientRequestToken",
tags: [
{
key: "TagKey", # required
value: "TagValue", # required
},
],
})
Response structure
resp.media_stream_pipeline.media_pipeline_id #=> String
resp.media_stream_pipeline.media_pipeline_arn #=> String
resp.media_stream_pipeline.created_timestamp #=> Time
resp.media_stream_pipeline.updated_timestamp #=> Time
resp.media_stream_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_stream_pipeline.sources #=> Array
resp.media_stream_pipeline.sources[0].source_type #=> String, one of "ChimeSdkMeeting"
resp.media_stream_pipeline.sources[0].source_arn #=> String
resp.media_stream_pipeline.sinks #=> Array
resp.media_stream_pipeline.sinks[0].sink_arn #=> String
resp.media_stream_pipeline.sinks[0].sink_type #=> String, one of "KinesisVideoStreamPool"
resp.media_stream_pipeline.sinks[0].reserved_stream_capacity #=> Integer
resp.media_stream_pipeline.sinks[0].media_stream_type #=> String, one of "MixedAudio", "IndividualAudio"
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :sources (required, Array<Types::MediaStreamSource>) — The data sources for the media pipeline.
- :sinks (required, Array<Types::MediaStreamSink>) — The data sinks for the media pipeline.
- :client_request_token (String) — The token assigned to the client making the request. A suitable default value is auto-generated. You should normally not need to pass this option.
- :tags (Array<Types::Tag>) — The tags assigned to the media pipeline.

Returns:
- (Types::CreateMediaStreamPipelineResponse) — Returns a response object which responds to the following methods:
  - #media_stream_pipeline => Types::MediaStreamPipeline
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1339

def create_media_stream_pipeline(params = {}, options = {})
  req = build_request(:create_media_stream_pipeline, params)
  req.send_request(options)
end
#delete_media_capture_pipeline(params = {}) ⇒ Struct
Deletes the media pipeline.
Examples:
Request syntax with placeholder values
resp = client.delete_media_capture_pipeline({
media_pipeline_id: "GuidString", # required
})
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :media_pipeline_id (required, String) — The ID of the media pipeline being deleted.

Returns:
- (Struct) — Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1361

def delete_media_capture_pipeline(params = {}, options = {})
  req = build_request(:delete_media_capture_pipeline, params)
  req.send_request(options)
end
#delete_media_insights_pipeline_configuration(params = {}) ⇒ Struct
Deletes the specified configuration settings.
Examples:
Request syntax with placeholder values
resp = client.delete_media_insights_pipeline_configuration({
identifier: "NonEmptyString", # required
})
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be deleted. Valid values include the name and ARN of the media insights pipeline configuration.

Returns:
- (Struct) — Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1384

def delete_media_insights_pipeline_configuration(params = {}, options = {})
  req = build_request(:delete_media_insights_pipeline_configuration, params)
  req.send_request(options)
end
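Several operations below take an :identifier that may be either a resource's name or its ARN. A minimal sketch of telling the two apart (the helper name is ours, not part of the SDK):

```ruby
# Hypothetical helper: decide whether an identifier is an ARN or a plain name.
# ARNs always begin with the "arn:" prefix (e.g. "arn:aws:chime:...").
def arn_identifier?(identifier)
  identifier.start_with?("arn:")
end

arn_identifier?("arn:aws:chime:us-east-1:111122223333:media-insights-pipeline-configuration/MyConfig")
# => true
arn_identifier?("MyConfig")
# => false
```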
#delete_media_pipeline(params = {}) ⇒ Struct
Deletes the media pipeline.
Examples:
Request syntax with placeholder values
resp = client.delete_media_pipeline({
media_pipeline_id: "GuidString", # required
})
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :media_pipeline_id (required, String) — The ID of the media pipeline to delete.

Returns:
- (Struct) — Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1406

def delete_media_pipeline(params = {}, options = {})
  req = build_request(:delete_media_pipeline, params)
  req.send_request(options)
end
#delete_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Struct
Deletes an Amazon Kinesis Video Stream pool.
Examples:
Request syntax with placeholder values
resp = client.delete_media_pipeline_kinesis_video_stream_pool({
identifier: "NonEmptyString", # required
})
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the requested resource. Valid values include the name and ARN of the media pipeline Kinesis Video Stream pool.

Returns:
- (Struct) — Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1429

def delete_media_pipeline_kinesis_video_stream_pool(params = {}, options = {})
  req = build_request(:delete_media_pipeline_kinesis_video_stream_pool, params)
  req.send_request(options)
end
#get_media_capture_pipeline(params = {}) ⇒ Types::GetMediaCapturePipelineResponse
Gets an existing media pipeline.
Examples:
Request syntax with placeholder values
resp = client.get_media_capture_pipeline({
media_pipeline_id: "GuidString", # required
})
Response structure
resp.media_capture_pipeline.media_pipeline_id #=> String
resp.media_capture_pipeline.media_pipeline_arn #=> String
resp.media_capture_pipeline.source_type #=> String, one of "ChimeSdkMeeting"
resp.media_capture_pipeline.source_arn #=> String
resp.media_capture_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_capture_pipeline.sink_type #=> String, one of "S3Bucket"
resp.media_capture_pipeline.sink_arn #=> String
resp.media_capture_pipeline.created_timestamp #=> Time
resp.media_capture_pipeline.updated_timestamp #=> Time
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.attendee_ids #=> Array
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.attendee_ids[0] #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.external_user_ids #=> Array
resp.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.external_user_ids[0] #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.audio.mux_type #=> String, one of "AudioOnly", "AudioWithActiveSpeakerVideo", "AudioWithCompositedVideo"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.video.state #=> String, one of "Enabled", "Disabled"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.video.mux_type #=> String, one of "VideoOnly"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.content.state #=> String, one of "Enabled", "Disabled"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.content.mux_type #=> String, one of "ContentOnly"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.layout #=> String, one of "GridView"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.resolution #=> String, one of "HD", "FHD"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.content_share_layout #=> String, one of "PresenterOnly", "Horizontal", "Vertical", "ActiveSpeakerOnly"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.presenter_only_configuration.presenter_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.active_speaker_only_configuration.active_speaker_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_position #=> String, one of "Top", "Bottom"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_count #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_aspect_ratio #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_position #=> String, one of "Left", "Right"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_count #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_aspect_ratio #=> String
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.corner_radius #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.border_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.highlight_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.border_thickness #=> Integer
resp.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.canvas_orientation #=> String, one of "Landscape", "Portrait"
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :media_pipeline_id (required, String) — The ID of the pipeline that you want to get.

Returns:
- (Types::GetMediaCapturePipelineResponse) — Returns a response object which responds to the following methods:
  - #media_capture_pipeline => Types::MediaCapturePipeline
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1492

def get_media_capture_pipeline(params = {}, options = {})
  req = build_request(:get_media_capture_pipeline, params)
  req.send_request(options)
end
#get_media_insights_pipeline_configuration(params = {}) ⇒ Types::GetMediaInsightsPipelineConfigurationResponse
Gets the configuration settings for a media insights pipeline.
Examples:
Request syntax with placeholder values
resp = client.get_media_insights_pipeline_configuration({
identifier: "NonEmptyString", # required
})
Response structure
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_name #=> String
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_arn #=> String
resp.media_insights_pipeline_configuration.resource_access_role_arn #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.disabled #=> Boolean
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules #=> Array
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].type #=> String, one of "KeywordMatch", "Sentiment", "IssueDetection"
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.keywords #=> Array
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.keywords[0] #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.negate #=> Boolean
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.sentiment_type #=> String, one of "NEGATIVE"
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.time_period #=> Integer
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].issue_detection_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.elements #=> Array
resp.media_insights_pipeline_configuration.elements[0].type #=> String, one of "AmazonTranscribeCallAnalyticsProcessor", "VoiceAnalyticsProcessor", "AmazonTranscribeProcessor", "KinesisDataStreamSink", "LambdaFunctionSink", "SqsQueueSink", "SnsTopicSink", "S3RecordingSink", "VoiceEnhancementSink"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_filter_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.language_model_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.enable_partial_results_stabilization #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.partial_results_stability #=> String, one of "high", "medium", "low"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.content_identification_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.content_redaction_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.pii_entity_types #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.filter_partial_results #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.output_location #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.data_access_role_arn #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.content_redaction_output #=> String, one of "redacted", "redacted_and_unredacted"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.output_encryption_kms_key_id #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.call_analytics_stream_categories #=> Array
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.call_analytics_stream_categories[0] #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.show_speaker_label #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.enable_partial_results_stabilization #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.partial_results_stability #=> String, one of "high", "medium", "low"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.content_identification_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.content_redaction_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.pii_entity_types #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_model_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.filter_partial_results #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.identify_language #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.identify_multiple_languages #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_options #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.preferred_language #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_names #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_names #=> String
resp.media_insights_pipeline_configuration.elements[0].kinesis_data_stream_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].s3_recording_sink_configuration.destination #=> String
resp.media_insights_pipeline_configuration.elements[0].s3_recording_sink_configuration.recording_file_format #=> String, one of "Wav", "Opus"
resp.media_insights_pipeline_configuration.elements[0].voice_analytics_processor_configuration.speaker_search_status #=> String, one of "Enabled", "Disabled"
resp.media_insights_pipeline_configuration.elements[0].voice_analytics_processor_configuration.voice_tone_analysis_status #=> String, one of "Enabled", "Disabled"
resp.media_insights_pipeline_configuration.elements[0].lambda_function_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].sqs_queue_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].sns_topic_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].voice_enhancement_sink_configuration.disabled #=> Boolean
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_id #=> String
resp.media_insights_pipeline_configuration.created_timestamp #=> Time
resp.media_insights_pipeline_configuration.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the requested resource. Valid values include the name and ARN of the media insights pipeline configuration.

Returns:
- (Types::GetMediaInsightsPipelineConfigurationResponse) — Returns a response object which responds to the following methods:
  - #media_insights_pipeline_configuration => Types::MediaInsightsPipelineConfiguration
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1583

def get_media_insights_pipeline_configuration(params = {}, options = {})
  req = build_request(:get_media_insights_pipeline_configuration, params)
  req.send_request(options)
end
#get_media_pipeline(params = {}) ⇒ Types::GetMediaPipelineResponse
Gets an existing media pipeline.
Examples:
Request syntax with placeholder values
resp = client.get_media_pipeline({
media_pipeline_id: "GuidString", # required
})
Response structure
resp.media_pipeline.media_capture_pipeline.media_pipeline_id #=> String
resp.media_pipeline.media_capture_pipeline.media_pipeline_arn #=> String
resp.media_pipeline.media_capture_pipeline.source_type #=> String, one of "ChimeSdkMeeting"
resp.media_pipeline.media_capture_pipeline.source_arn #=> String
resp.media_pipeline.media_capture_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_pipeline.media_capture_pipeline.sink_type #=> String, one of "S3Bucket"
resp.media_pipeline.media_capture_pipeline.sink_arn #=> String
resp.media_pipeline.media_capture_pipeline.created_timestamp #=> Time
resp.media_pipeline.media_capture_pipeline.updated_timestamp #=> Time
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.attendee_ids #=> Array
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.attendee_ids[0] #=> String
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.external_user_ids #=> Array
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.source_configuration.selected_video_streams.external_user_ids[0] #=> String
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.audio.mux_type #=> String, one of "AudioOnly", "AudioWithActiveSpeakerVideo", "AudioWithCompositedVideo"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.video.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.video.mux_type #=> String, one of "VideoOnly"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.content.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.content.mux_type #=> String, one of "ContentOnly"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.layout #=> String, one of "GridView"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.resolution #=> String, one of "HD", "FHD"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.content_share_layout #=> String, one of "PresenterOnly", "Horizontal", "Vertical", "ActiveSpeakerOnly"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.presenter_only_configuration.presenter_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.active_speaker_only_configuration.active_speaker_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_position #=> String, one of "Top", "Bottom"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_count #=> Integer
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_aspect_ratio #=> String
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_position #=> String, one of "Left", "Right"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_count #=> Integer
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_aspect_ratio #=> String
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.corner_radius #=> Integer
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.border_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.highlight_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.video_attribute.border_thickness #=> Integer
resp.media_pipeline.media_capture_pipeline.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.grid_view_configuration.canvas_orientation #=> String, one of "Landscape", "Portrait"
resp.media_pipeline.media_live_connector_pipeline.sources #=> Array
resp.media_pipeline.media_live_connector_pipeline.sources[0].source_type #=> String, one of "ChimeSdkMeeting"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.arn #=> String
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.mux_type #=> String, one of "AudioWithCompositedVideo", "AudioWithActiveSpeakerVideo"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.layout #=> String, one of "GridView"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.resolution #=> String, one of "HD", "FHD"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.content_share_layout #=> String, one of "PresenterOnly", "Horizontal", "Vertical", "ActiveSpeakerOnly"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.presenter_only_configuration.presenter_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.active_speaker_only_configuration.active_speaker_position #=> String, one of "TopLeft", "TopRight", "BottomLeft", "BottomRight"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_position #=> String, one of "Top", "Bottom"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_count #=> Integer
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.horizontal_layout_configuration.tile_aspect_ratio #=> String
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_order #=> String, one of "JoinSequence", "SpeakerSequence"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_position #=> String, one of "Left", "Right"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_count #=> Integer
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.vertical_layout_configuration.tile_aspect_ratio #=> String
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.corner_radius #=> Integer
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.border_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.highlight_color #=> String, one of "Black", "Blue", "Red", "Green", "White", "Yellow"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.video_attribute.border_thickness #=> Integer
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.composited_video.grid_view_configuration.canvas_orientation #=> String, one of "Landscape", "Portrait"
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.attendee_ids #=> Array
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.attendee_ids[0] #=> String
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.external_user_ids #=> Array
resp.media_pipeline.media_live_connector_pipeline.sources[0].chime_sdk_meeting_live_connector_configuration.source_configuration.selected_video_streams.external_user_ids[0] #=> String
resp.media_pipeline.media_live_connector_pipeline.sinks #=> Array
resp.media_pipeline.media_live_connector_pipeline.sinks[0].sink_type #=> String, one of "RTMP"
resp.media_pipeline.media_live_connector_pipeline.sinks[0].rtmp_configuration.url #=> String
resp.media_pipeline.media_live_connector_pipeline.sinks[0].rtmp_configuration.audio_channels #=> String, one of "Stereo", "Mono"
resp.media_pipeline.media_live_connector_pipeline.sinks[0].rtmp_configuration.audio_sample_rate #=> String
resp.media_pipeline.media_live_connector_pipeline.media_pipeline_id #=> String
resp.media_pipeline.media_live_connector_pipeline.media_pipeline_arn #=> String
resp.media_pipeline.media_live_connector_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_pipeline.media_live_connector_pipeline.created_timestamp #=> Time
resp.media_pipeline.media_live_connector_pipeline.updated_timestamp #=> Time
resp.media_pipeline.media_concatenation_pipeline.media_pipeline_id #=> String
resp.media_pipeline.media_concatenation_pipeline.media_pipeline_arn #=> String
resp.media_pipeline.media_concatenation_pipeline.sources #=> Array
resp.media_pipeline.media_concatenation_pipeline.sources[0].type #=> String, one of "MediaCapturePipeline"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.media_pipeline_arn #=> String
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.audio.state #=> String, one of "Enabled"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.video.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.content.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.data_channel.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.transcription_messages.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.meeting_events.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_concatenation_pipeline.sources[0].media_capture_pipeline_source_configuration.chime_sdk_meeting_configuration.artifacts_configuration.composited_video.state #=> String, one of "Enabled", "Disabled"
resp.media_pipeline.media_concatenation_pipeline.sinks #=> Array
resp.media_pipeline.media_concatenation_pipeline.sinks[0].type #=> String, one of "S3Bucket"
resp.media_pipeline.media_concatenation_pipeline.sinks[0].s3_bucket_sink_configuration.destination #=> String
resp.media_pipeline.media_concatenation_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_pipeline.media_concatenation_pipeline.created_timestamp #=> Time
resp.media_pipeline.media_concatenation_pipeline.updated_timestamp #=> Time
resp.media_pipeline.media_insights_pipeline.media_pipeline_id #=> String
resp.media_pipeline.media_insights_pipeline.media_pipeline_arn #=> String
resp.media_pipeline.media_insights_pipeline.media_insights_pipeline_configuration_arn #=> String
resp.media_pipeline.media_insights_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams #=> Array
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_arn #=> String
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].fragment_number #=> String
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.number_of_channels #=> Integer
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.channel_definitions #=> Array
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.channel_definitions[0].channel_id #=> Integer
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.streams[0].stream_channel_definition.channel_definitions[0].participant_role #=> String, one of "AGENT", "CUSTOMER"
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.media_encoding #=> String, one of "pcm"
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_source_runtime_configuration.media_sample_rate #=> Integer
resp.media_pipeline.media_insights_pipeline.media_insights_runtime_metadata #=> Hash
resp.media_pipeline.media_insights_pipeline.media_insights_runtime_metadata["NonEmptyString"] #=> String
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.streams #=> Array
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.streams[0].stream_arn #=> String
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.fragment_selector.fragment_selector_type #=> String, one of "ProducerTimestamp", "ServerTimestamp"
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.fragment_selector.timestamp_range.start_timestamp #=> Time
resp.media_pipeline.media_insights_pipeline.kinesis_video_stream_recording_source_runtime_configuration.fragment_selector.timestamp_range.end_timestamp #=> Time
resp.media_pipeline.media_insights_pipeline.s3_recording_sink_runtime_configuration.destination #=> String
resp.media_pipeline.media_insights_pipeline.s3_recording_sink_runtime_configuration.recording_file_format #=> String, one of "Wav", "Opus"
resp.media_pipeline.media_insights_pipeline.created_timestamp #=> Time
resp.media_pipeline.media_insights_pipeline.element_statuses #=> Array
resp.media_pipeline.media_insights_pipeline.element_statuses[0].type #=> String, one of "AmazonTranscribeCallAnalyticsProcessor", "VoiceAnalyticsProcessor", "AmazonTranscribeProcessor", "KinesisDataStreamSink", "LambdaFunctionSink", "SqsQueueSink", "SnsTopicSink", "S3RecordingSink", "VoiceEnhancementSink"
resp.media_pipeline.media_insights_pipeline.element_statuses[0].status #=> String, one of "NotStarted", "NotSupported", "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused"
resp.media_pipeline.media_stream_pipeline.media_pipeline_id #=> String
resp.media_pipeline.media_stream_pipeline.media_pipeline_arn #=> String
resp.media_pipeline.media_stream_pipeline.created_timestamp #=> Time
resp.media_pipeline.media_stream_pipeline.updated_timestamp #=> Time
resp.media_pipeline.media_stream_pipeline.status #=> String, one of "Initializing", "InProgress", "Failed", "Stopping", "Stopped", "Paused", "NotStarted"
resp.media_pipeline.media_stream_pipeline.sources #=> Array
resp.media_pipeline.media_stream_pipeline.sources[0].source_type #=> String, one of "ChimeSdkMeeting"
resp.media_pipeline.media_stream_pipeline.sources[0].source_arn #=> String
resp.media_pipeline.media_stream_pipeline.sinks #=> Array
resp.media_pipeline.media_stream_pipeline.sinks[0].sink_arn #=> String
resp.media_pipeline.media_stream_pipeline.sinks[0].sink_type #=> String, one of "KinesisVideoStreamPool"
resp.media_pipeline.media_stream_pipeline.sinks[0].reserved_stream_capacity #=> Integer
resp.media_pipeline.media_stream_pipeline.sinks[0].media_stream_type #=> String, one of "MixedAudio", "IndividualAudio"
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :media_pipeline_id (required, String) — The ID of the pipeline that you want to get.

Returns:
- (Types::GetMediaPipelineResponse) — Returns a response object which responds to the following methods:
  - #media_pipeline => Types::MediaPipeline

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1739

def get_media_pipeline(params = {}, options = {})
  req = build_request(:get_media_pipeline, params)
  req.send_request(options)
end
#get_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Types::GetMediaPipelineKinesisVideoStreamPoolResponse
Gets a Kinesis video stream pool.
Examples:
Request syntax with placeholder values
resp = client.get_media_pipeline_kinesis_video_stream_pool({
identifier: "NonEmptyString", # required
})
Response structure
resp.kinesis_video_stream_pool_configuration.pool_arn #=> String
resp.kinesis_video_stream_pool_configuration.pool_name #=> String
resp.kinesis_video_stream_pool_configuration.pool_id #=> String
resp.kinesis_video_stream_pool_configuration.pool_status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "DELETING", "FAILED"
resp.kinesis_video_stream_pool_configuration.pool_size #=> Integer
resp.kinesis_video_stream_pool_configuration.stream_configuration.region #=> String
resp.kinesis_video_stream_pool_configuration.stream_configuration.data_retention_in_hours #=> Integer
resp.kinesis_video_stream_pool_configuration.created_timestamp #=> Time
resp.kinesis_video_stream_pool_configuration.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the requested resource. Valid values include the name and ARN of the media pipeline Kinesis Video Stream pool.

Returns:
- (Types::GetMediaPipelineKinesisVideoStreamPoolResponse) — Returns a response object which responds to the following methods:
  - #kinesis_video_stream_pool_configuration => Types::KinesisVideoStreamPoolConfiguration

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1776

def get_media_pipeline_kinesis_video_stream_pool(params = {}, options = {})
  req = build_request(:get_media_pipeline_kinesis_video_stream_pool, params)
  req.send_request(options)
end
#get_speaker_search_task(params = {}) ⇒ Types::GetSpeakerSearchTaskResponse
Retrieves the details of the specified speaker search task.
Examples:
Request syntax with placeholder values
resp = client.get_speaker_search_task({
identifier: "NonEmptyString", # required
speaker_search_task_id: "GuidString", # required
})
Response structure
resp.speaker_search_task.speaker_search_task_id #=> String
resp.speaker_search_task.speaker_search_task_status #=> String, one of "NotStarted", "Initializing", "InProgress", "Failed", "Stopping", "Stopped"
resp.speaker_search_task.created_timestamp #=> Time
resp.speaker_search_task.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
- :speaker_search_task_id (required, String) — The ID of the speaker search task.

Returns:
- (Types::GetSpeakerSearchTaskResponse) — Returns a response object which responds to the following methods:
  - #speaker_search_task => Types::SpeakerSearchTask

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1812

def get_speaker_search_task(params = {}, options = {})
  req = build_request(:get_speaker_search_task, params)
  req.send_request(options)
end
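Because a started task moves through the statuses above asynchronously, callers typically poll get_speaker_search_task until it reaches a terminal status. A minimal sketch of that loop; FakeTaskClient here is a hypothetical stand-in for the real client (which needs a region and credentials), and the status sequence it returns is taken from the documented enum values:

```ruby
# Statuses after which polling should stop.
TERMINAL_STATUSES = %w[Stopped Failed].freeze

# Poll until the task reaches a terminal status or attempts run out.
def wait_for_task(client, identifier, task_id, max_attempts: 10, delay: 0)
  max_attempts.times do
    task = client.get_speaker_search_task(
      identifier: identifier, speaker_search_task_id: task_id
    ).speaker_search_task
    status = task.speaker_search_task_status
    return status if TERMINAL_STATUSES.include?(status)
    sleep(delay)
  end
  raise "task #{task_id} did not finish after #{max_attempts} attempts"
end

# Hypothetical client that walks through the documented status values.
FakeTask = Struct.new(:speaker_search_task_status)
FakeResponse = Struct.new(:speaker_search_task)

class FakeTaskClient
  def initialize
    @statuses = %w[NotStarted Initializing InProgress Stopped]
  end

  def get_speaker_search_task(identifier:, speaker_search_task_id:)
    status = @statuses.size > 1 ? @statuses.shift : @statuses.first
    FakeResponse.new(FakeTask.new(status))
  end
end

status = wait_for_task(FakeTaskClient.new, "pipeline-id", "task-id")
puts status # => Stopped
```

With the real client, you would pass the media insights pipeline's ID or ARN as identifier and a nonzero delay between attempts.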
#get_voice_tone_analysis_task(params = {}) ⇒ Types::GetVoiceToneAnalysisTaskResponse
Retrieves the details of a voice tone analysis task.
Examples:
Request syntax with placeholder values
resp = client.get_voice_tone_analysis_task({
identifier: "NonEmptyString", # required
voice_tone_analysis_task_id: "GuidString", # required
})
Response structure
resp.voice_tone_analysis_task.voice_tone_analysis_task_id #=> String
resp.voice_tone_analysis_task.voice_tone_analysis_task_status #=> String, one of "NotStarted", "Initializing", "InProgress", "Failed", "Stopping", "Stopped"
resp.voice_tone_analysis_task.created_timestamp #=> Time
resp.voice_tone_analysis_task.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
- :voice_tone_analysis_task_id (required, String) — The ID of the voice tone analysis task.

Returns:
- (Types::GetVoiceToneAnalysisTaskResponse) — Returns a response object which responds to the following methods:
  - #voice_tone_analysis_task => Types::VoiceToneAnalysisTask

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1848

def get_voice_tone_analysis_task(params = {}, options = {})
  req = build_request(:get_voice_tone_analysis_task, params)
  req.send_request(options)
end
#list_media_capture_pipelines(params = {}) ⇒ Types::ListMediaCapturePipelinesResponse
Returns a list of media pipelines.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Request syntax with placeholder values
resp = client.list_media_capture_pipelines({
next_token: "String",
max_results: 1,
})
Response structure
resp.media_capture_pipelines #=> Array
resp.media_capture_pipelines[0].media_pipeline_id #=> String
resp.media_capture_pipelines[0].media_pipeline_arn #=> String
resp.next_token #=> String
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :next_token (String) — The token used to retrieve the next page of results.
- :max_results (Integer) — The maximum number of results to return in a single call. Valid Range: 1 - 99.

Returns:
- (Types::ListMediaCapturePipelinesResponse) — Returns a response object which responds to the following methods:
  - #media_capture_pipelines => Array<Types::MediaCapturePipelineSummary>
  - #next_token => String

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1887

def list_media_capture_pipelines(params = {}, options = {})
  req = build_request(:list_media_capture_pipelines, params)
  req.send_request(options)
end
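The pageable list_* responses are driven by a next_token loop; the SDK's Enumerable paging does this for you, but the underlying pattern can be sketched with a hypothetical in-memory client (FakePagingClient and its five canned pipeline names are illustrative, not part of the SDK):

```ruby
# Manual next_token pagination, the pattern behind the SDK's pageable responses.
FakePage = Struct.new(:media_capture_pipelines, :next_token)

class FakePagingClient
  PIPELINES = (1..5).map { |i| "pipeline-#{i}" }.freeze

  # Returns up to max_results items, resuming at the offset encoded in next_token.
  def list_media_capture_pipelines(next_token: nil, max_results: 2)
    offset = next_token.to_i
    page = PIPELINES[offset, max_results] || []
    more = offset + max_results < PIPELINES.size ? (offset + max_results).to_s : nil
    FakePage.new(page, more)
  end
end

client = FakePagingClient.new
all = []
token = nil
loop do
  resp = client.list_media_capture_pipelines(next_token: token, max_results: 2)
  all.concat(resp.media_capture_pipelines)
  token = resp.next_token
  break if token.nil?
end
puts all.size # => 5
```

With the real client you would normally skip this loop and rely on the pageable response's own page enumeration instead.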
#list_media_insights_pipeline_configurations(params = {}) ⇒ Types::ListMediaInsightsPipelineConfigurationsResponse
Lists the available media insights pipeline configurations.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Request syntax with placeholder values
resp = client.list_media_insights_pipeline_configurations({
next_token: "String",
max_results: 1,
})
Response structure
resp.media_insights_pipeline_configurations #=> Array
resp.media_insights_pipeline_configurations[0].media_insights_pipeline_configuration_name #=> String
resp.media_insights_pipeline_configurations[0].media_insights_pipeline_configuration_id #=> String
resp.media_insights_pipeline_configurations[0].media_insights_pipeline_configuration_arn #=> String
resp.next_token #=> String
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :next_token (String) — The token used to return the next page of results.
- :max_results (Integer) — The maximum number of results to return in a single call.

Returns:
- (Types::ListMediaInsightsPipelineConfigurationsResponse) — Returns a response object which responds to the following methods:
  - #media_insights_pipeline_configurations => Array<Types::MediaInsightsPipelineConfigurationSummary>
  - #next_token => String

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1926

def list_media_insights_pipeline_configurations(params = {}, options = {})
  req = build_request(:list_media_insights_pipeline_configurations, params)
  req.send_request(options)
end
#list_media_pipeline_kinesis_video_stream_pools(params = {}) ⇒ Types::ListMediaPipelineKinesisVideoStreamPoolsResponse
Lists the video stream pools in the media pipeline.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Request syntax with placeholder values
resp = client.list_media_pipeline_kinesis_video_stream_pools({
next_token: "String",
max_results: 1,
})
Response structure
resp.kinesis_video_stream_pools #=> Array
resp.kinesis_video_stream_pools[0].pool_name #=> String
resp.kinesis_video_stream_pools[0].pool_id #=> String
resp.kinesis_video_stream_pools[0].pool_arn #=> String
resp.next_token #=> String
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :next_token (String) — The token used to return the next page of results.
- :max_results (Integer) — The maximum number of results to return in a single call.

Returns:
- (Types::ListMediaPipelineKinesisVideoStreamPoolsResponse) — Returns a response object which responds to the following methods:
  - #kinesis_video_stream_pools => Array<Types::KinesisVideoStreamPoolSummary>
  - #next_token => String

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 1965

def list_media_pipeline_kinesis_video_stream_pools(params = {}, options = {})
  req = build_request(:list_media_pipeline_kinesis_video_stream_pools, params)
  req.send_request(options)
end
#list_media_pipelines(params = {}) ⇒ Types::ListMediaPipelinesResponse
Returns a list of media pipelines.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Request syntax with placeholder values
resp = client.list_media_pipelines({
next_token: "String",
max_results: 1,
})
Response structure
resp.media_pipelines #=> Array
resp.media_pipelines[0].media_pipeline_id #=> String
resp.media_pipelines[0].media_pipeline_arn #=> String
resp.next_token #=> String
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :next_token (String) — The token used to retrieve the next page of results.
- :max_results (Integer) — The maximum number of results to return in a single call. Valid Range: 1 - 99.

Returns:
- (Types::ListMediaPipelinesResponse) — Returns a response object which responds to the following methods:
  - #media_pipelines => Array<Types::MediaPipelineSummary>
  - #next_token => String

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2004

def list_media_pipelines(params = {}, options = {})
  req = build_request(:list_media_pipelines, params)
  req.send_request(options)
end
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Lists the tags available for a media pipeline.
Examples:
Request syntax with placeholder values
resp = client.list_tags_for_resource({
resource_arn: "AmazonResourceName", # required
})
Response structure
resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :resource_arn (required, String) — The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's region, resource ID, and pipeline ID.

Returns:
- (Types::ListTagsForResourceResponse)

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2035

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end
#start_speaker_search_task(params = {}) ⇒ Types::StartSpeakerSearchTaskResponse
Starts a speaker search task.
Before starting any speaker search tasks, you must provide all notices and obtain all consents from the speaker as required under applicable privacy and biometrics laws, and as required under the AWS service terms for the Amazon Chime SDK.
Examples:
Request syntax with placeholder values
resp = client.start_speaker_search_task({
identifier: "NonEmptyString", # required
voice_profile_domain_arn: "Arn", # required
kinesis_video_stream_source_task_configuration: {
stream_arn: "KinesisVideoStreamArn", # required
channel_id: 1, # required
fragment_number: "FragmentNumberString",
},
client_request_token: "ClientRequestToken",
})
Response structure
resp.speaker_search_task.speaker_search_task_id #=> String
resp.speaker_search_task.speaker_search_task_status #=> String, one of "NotStarted", "Initializing", "InProgress", "Failed", "Stopping", "Stopped"
resp.speaker_search_task.created_timestamp #=> Time
resp.speaker_search_task.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
- :voice_profile_domain_arn (required, String) — The ARN of the voice profile domain that will store the voice profile.
- :kinesis_video_stream_source_task_configuration (Types::KinesisVideoStreamSourceTaskConfiguration) — The task configuration for the Kinesis video stream source of the media insights pipeline.
- :client_request_token (String) — The unique identifier for the client request. Use a different token for different speaker search tasks. A suitable default value is auto-generated. You should normally not need to pass this option.

Returns:
- (Types::StartSpeakerSearchTaskResponse) — Returns a response object which responds to the following methods:
  - #speaker_search_task => Types::SpeakerSearchTask

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2097

def start_speaker_search_task(params = {}, options = {})
  req = build_request(:start_speaker_search_task, params)
  req.send_request(options)
end
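The :client_request_token option makes retries idempotent: reusing the same token on a retry identifies it as the same task rather than starting a second one. A sketch of generating and reusing a token with Ruby's stdlib SecureRandom; the parameter values below are placeholders, not real resource identifiers:

```ruby
require "securerandom"

# One token per logical task; reuse it on every retry of that task.
client_request_token = SecureRandom.uuid

start_params = {
  identifier: "pipeline-id-placeholder",
  voice_profile_domain_arn: "voice-profile-domain-arn-placeholder",
  client_request_token: client_request_token,
}

# A retry of the same logical request passes the identical token.
retry_params = start_params.merge(client_request_token: client_request_token)
puts retry_params[:client_request_token] == start_params[:client_request_token] # => true
```

Since the SDK auto-generates a token when you omit this option, passing one explicitly mainly matters when your own retry logic sits above the client.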
#start_voice_tone_analysis_task(params = {}) ⇒ Types::StartVoiceToneAnalysisTaskResponse
Starts a voice tone analysis task. For more information about voice tone analysis, see Using Amazon Chime SDK voice analytics in the Amazon Chime SDK Developer Guide.
Before starting any voice tone analysis tasks, you must provide all notices and obtain all consents from the speaker as required under applicable privacy and biometrics laws, and as required under the AWS service terms for the Amazon Chime SDK.
Examples:
Request syntax with placeholder values
resp = client.start_voice_tone_analysis_task({
identifier: "NonEmptyString", # required
language_code: "en-US", # required, accepts en-US
kinesis_video_stream_source_task_configuration: {
stream_arn: "KinesisVideoStreamArn", # required
channel_id: 1, # required
fragment_number: "FragmentNumberString",
},
client_request_token: "ClientRequestToken",
})
Response structure
resp.voice_tone_analysis_task.voice_tone_analysis_task_id #=> String
resp.voice_tone_analysis_task.voice_tone_analysis_task_status #=> String, one of "NotStarted", "Initializing", "InProgress", "Failed", "Stopping", "Stopped"
resp.voice_tone_analysis_task.created_timestamp #=> Time
resp.voice_tone_analysis_task.updated_timestamp #=> Time
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
- :language_code (required, String) — The language code.
- :kinesis_video_stream_source_task_configuration (Types::KinesisVideoStreamSourceTaskConfiguration) — The task configuration for the Kinesis video stream source of the media insights pipeline.
- :client_request_token (String) — The unique identifier for the client request. Use a different token for different voice tone analysis tasks. A suitable default value is auto-generated. You should normally not need to pass this option.

Returns:
- (Types::StartVoiceToneAnalysisTaskResponse) — Returns a response object which responds to the following methods:
  - #voice_tone_analysis_task => Types::VoiceToneAnalysisTask

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2162

def start_voice_tone_analysis_task(params = {}, options = {})
  req = build_request(:start_voice_tone_analysis_task, params)
  req.send_request(options)
end
#stop_speaker_search_task(params = {}) ⇒ Struct
Stops a speaker search task.
Examples:
Request syntax with placeholder values
resp = client.stop_speaker_search_task({
identifier: "NonEmptyString", # required
speaker_search_task_id: "GuidString", # required
})
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
- :speaker_search_task_id (required, String) — The speaker search task ID.

Returns:
- (Struct) — Returns an empty response.

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2189

def stop_speaker_search_task(params = {}, options = {})
  req = build_request(:stop_speaker_search_task, params)
  req.send_request(options)
end
#stop_voice_tone_analysis_task(params = {}) ⇒ Struct
Stops a voice tone analysis task.
Examples:
Request syntax with placeholder values
resp = client.stop_voice_tone_analysis_task({
identifier: "NonEmptyString", # required
voice_tone_analysis_task_id: "GuidString", # required
})
Parameters:
- params (Hash) (defaults to: {})

Options Hash (params):
- :identifier (required, String) — The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
- :voice_tone_analysis_task_id (required, String) — The ID of the voice tone analysis task.

Returns:
- (Struct) — Returns an empty response.

See Also:
- AWS API Documentation

# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2216

def stop_voice_tone_analysis_task(params = {}, options = {})
  req = build_request(:stop_voice_tone_analysis_task, params)
  req.send_request(options)
end
#tag_resource(params = {}) ⇒ Struct
Applies the specified tags to the specified media pipeline. The pipeline's ARN consists of its endpoint region, resource ID, and pipeline ID.
Examples:
Request syntax with placeholder values
resp = client.tag_resource({
resource_arn: "AmazonResourceName", # required
tags: [ # required
{
key: "TagKey", # required
value: "TagValue", # required
},
],
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:resource_arn
(required, String)
—
The ARN of the media pipeline associated with any tags. The ARN consists of the pipeline's endpoint region, resource ID, and pipeline ID.
-
:tags
(required, Array<Types::Tag>)
—
The tags associated with the specified media pipeline.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2250

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end
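The :tags parameter takes an array of { key:, value: } hashes. A minimal pure-Ruby sketch of building that shape from an ordinary hash (the helper name and the tag values are hypothetical, not part of the SDK):

```ruby
# Hypothetical helper: convert a plain Ruby hash into the
# [{ key: ..., value: ... }, ...] shape expected by :tags.
def to_tag_list(tags)
  tags.map { |k, v| { key: k.to_s, value: v.to_s } }
end

tag_list = to_tag_list("team" => "media", "env" => "prod")
# tag_list is now suitable for
# tag_resource(resource_arn: pipeline_arn, tags: tag_list)
```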
#untag_resource(params = {}) ⇒ Struct
Removes any tags from a media pipeline.
Examples:
Request syntax with placeholder values
resp = client.untag_resource({
resource_arn: "AmazonResourceName", # required
tag_keys: ["TagKey"], # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:resource_arn
(required, String)
—
The ARN of the pipeline that you want to untag.
-
:tag_keys
(required, Array<String>)
—
The keys of the tags that you want to remove.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2276

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end
#update_media_insights_pipeline_configuration(params = {}) ⇒ Types::UpdateMediaInsightsPipelineConfigurationResponse
Updates the media insights pipeline's configuration settings.
Examples:
Request syntax with placeholder values
resp = client.update_media_insights_pipeline_configuration({
identifier: "NonEmptyString", # required
resource_access_role_arn: "Arn", # required
real_time_alert_configuration: {
disabled: false,
rules: [
{
type: "KeywordMatch", # required, accepts KeywordMatch, Sentiment, IssueDetection
keyword_match_configuration: {
rule_name: "RuleName", # required
keywords: ["Keyword"], # required
negate: false,
},
sentiment_configuration: {
rule_name: "RuleName", # required
sentiment_type: "NEGATIVE", # required, accepts NEGATIVE
time_period: 1, # required
},
issue_detection_configuration: {
rule_name: "RuleName", # required
},
},
],
},
elements: [ # required
{
type: "AmazonTranscribeCallAnalyticsProcessor", # required, accepts AmazonTranscribeCallAnalyticsProcessor, VoiceAnalyticsProcessor, AmazonTranscribeProcessor, KinesisDataStreamSink, LambdaFunctionSink, SqsQueueSink, SnsTopicSink, S3RecordingSink, VoiceEnhancementSink
amazon_transcribe_call_analytics_processor_configuration: {
language_code: "en-US", # required, accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
vocabulary_name: "VocabularyName",
vocabulary_filter_name: "VocabularyFilterName",
vocabulary_filter_method: "remove", # accepts remove, mask, tag
language_model_name: "ModelName",
enable_partial_results_stabilization: false,
partial_results_stability: "high", # accepts high, medium, low
content_identification_type: "PII", # accepts PII
content_redaction_type: "PII", # accepts PII
pii_entity_types: "PiiEntityTypes",
filter_partial_results: false,
post_call_analytics_settings: {
output_location: "String", # required
data_access_role_arn: "String", # required
content_redaction_output: "redacted", # accepts redacted, redacted_and_unredacted
output_encryption_kms_key_id: "String",
},
call_analytics_stream_categories: ["CategoryName"],
},
amazon_transcribe_processor_configuration: {
language_code: "en-US", # accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
vocabulary_name: "VocabularyName",
vocabulary_filter_name: "VocabularyFilterName",
vocabulary_filter_method: "remove", # accepts remove, mask, tag
show_speaker_label: false,
enable_partial_results_stabilization: false,
partial_results_stability: "high", # accepts high, medium, low
content_identification_type: "PII", # accepts PII
content_redaction_type: "PII", # accepts PII
pii_entity_types: "PiiEntityTypes",
language_model_name: "ModelName",
filter_partial_results: false,
identify_language: false,
identify_multiple_languages: false,
language_options: "LanguageOptions",
preferred_language: "en-US", # accepts en-US, en-GB, es-US, fr-CA, fr-FR, en-AU, it-IT, de-DE, pt-BR
vocabulary_names: "VocabularyNames",
vocabulary_filter_names: "VocabularyFilterNames",
},
kinesis_data_stream_sink_configuration: {
insights_target: "Arn",
},
s3_recording_sink_configuration: {
destination: "Arn",
recording_file_format: "Wav", # accepts Wav, Opus
},
voice_analytics_processor_configuration: {
speaker_search_status: "Enabled", # accepts Enabled, Disabled
voice_tone_analysis_status: "Enabled", # accepts Enabled, Disabled
},
lambda_function_sink_configuration: {
insights_target: "Arn",
},
sqs_queue_sink_configuration: {
insights_target: "Arn",
},
sns_topic_sink_configuration: {
insights_target: "Arn",
},
voice_enhancement_sink_configuration: {
disabled: false,
},
},
],
})
Response structure
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_name #=> String
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_arn #=> String
resp.media_insights_pipeline_configuration.resource_access_role_arn #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.disabled #=> Boolean
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules #=> Array
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].type #=> String, one of "KeywordMatch", "Sentiment", "IssueDetection"
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.keywords #=> Array
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.keywords[0] #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].keyword_match_configuration.negate #=> Boolean
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.sentiment_type #=> String, one of "NEGATIVE"
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].sentiment_configuration.time_period #=> Integer
resp.media_insights_pipeline_configuration.real_time_alert_configuration.rules[0].issue_detection_configuration.rule_name #=> String
resp.media_insights_pipeline_configuration.elements #=> Array
resp.media_insights_pipeline_configuration.elements[0].type #=> String, one of "AmazonTranscribeCallAnalyticsProcessor", "VoiceAnalyticsProcessor", "AmazonTranscribeProcessor", "KinesisDataStreamSink", "LambdaFunctionSink", "SqsQueueSink", "SnsTopicSink", "S3RecordingSink", "VoiceEnhancementSink"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_filter_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.language_model_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.enable_partial_results_stabilization #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.partial_results_stability #=> String, one of "high", "medium", "low"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.content_identification_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.content_redaction_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.pii_entity_types #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.filter_partial_results #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.output_location #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.data_access_role_arn #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.content_redaction_output #=> String, one of "redacted", "redacted_and_unredacted"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.post_call_analytics_settings.output_encryption_kms_key_id #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.call_analytics_stream_categories #=> Array
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_call_analytics_processor_configuration.call_analytics_stream_categories[0] #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_code #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_method #=> String, one of "remove", "mask", "tag"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.show_speaker_label #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.enable_partial_results_stabilization #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.partial_results_stability #=> String, one of "high", "medium", "low"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.content_identification_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.content_redaction_type #=> String, one of "PII"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.pii_entity_types #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_model_name #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.filter_partial_results #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.identify_language #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.identify_multiple_languages #=> Boolean
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.language_options #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.preferred_language #=> String, one of "en-US", "en-GB", "es-US", "fr-CA", "fr-FR", "en-AU", "it-IT", "de-DE", "pt-BR"
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_names #=> String
resp.media_insights_pipeline_configuration.elements[0].amazon_transcribe_processor_configuration.vocabulary_filter_names #=> String
resp.media_insights_pipeline_configuration.elements[0].kinesis_data_stream_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].s3_recording_sink_configuration.destination #=> String
resp.media_insights_pipeline_configuration.elements[0].s3_recording_sink_configuration.recording_file_format #=> String, one of "Wav", "Opus"
resp.media_insights_pipeline_configuration.elements[0].voice_analytics_processor_configuration.speaker_search_status #=> String, one of "Enabled", "Disabled"
resp.media_insights_pipeline_configuration.elements[0].voice_analytics_processor_configuration.voice_tone_analysis_status #=> String, one of "Enabled", "Disabled"
resp.media_insights_pipeline_configuration.elements[0].lambda_function_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].sqs_queue_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].sns_topic_sink_configuration.insights_target #=> String
resp.media_insights_pipeline_configuration.elements[0].voice_enhancement_sink_configuration.disabled #=> Boolean
resp.media_insights_pipeline_configuration.media_insights_pipeline_configuration_id #=> String
resp.media_insights_pipeline_configuration.created_timestamp #=> Time
resp.media_insights_pipeline_configuration.updated_timestamp #=> Time
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:identifier
(required, String)
—
The unique identifier for the resource to be updated. Valid values include the name and ARN of the media insights pipeline configuration.
-
:resource_access_role_arn
(required, String)
—
The ARN of the role used by the service to access Amazon Web Services resources.
-
:real_time_alert_configuration
(Types::RealTimeAlertConfiguration)
—
The configuration settings for real-time alerts for the media insights pipeline.
-
:elements
(required, Array<Types::MediaInsightsPipelineConfigurationElement>)
—
The elements in the request, such as a processor for Amazon Transcribe or a sink for a Kinesis data stream.
Returns:
-
(Types::UpdateMediaInsightsPipelineConfigurationResponse)
—
Returns a response object which responds to the following methods:
- #media_insights_pipeline_configuration => Types::MediaInsightsPipelineConfiguration
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2469

def update_media_insights_pipeline_configuration(params = {}, options = {})
  req = build_request(:update_media_insights_pipeline_configuration, params)
  req.send_request(options)
end
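As a sketch of the :elements parameter, here is a minimal two-element pipeline: an Amazon Transcribe processor feeding a Kinesis Data Stream sink. The stream ARN and account ID below are placeholders, not real resources:

```ruby
# Minimal :elements sketch: one processor, one sink.
# The insights_target ARN is a placeholder.
elements = [
  {
    type: "AmazonTranscribeProcessor",
    amazon_transcribe_processor_configuration: {
      language_code: "en-US",
      show_speaker_label: true,
    },
  },
  {
    type: "KinesisDataStreamSink",
    kinesis_data_stream_sink_configuration: {
      insights_target: "arn:aws:kinesis:us-east-1:111122223333:stream/ExampleStream",
    },
  },
]
```

Pass this array as :elements alongside the required :identifier and :resource_access_role_arn parameters.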
#update_media_insights_pipeline_status(params = {}) ⇒ Struct
Updates the status of a media insights pipeline.
Examples:
Request syntax with placeholder values
resp = client.update_media_insights_pipeline_status({
identifier: "NonEmptyString", # required
update_status: "Pause", # required, accepts Pause, Resume
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:identifier
(required, String)
—
The unique identifier of the resource to be updated. Valid values include the ID and ARN of the media insights pipeline.
-
:update_status
(required, String)
—
The requested status of the media insights pipeline.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2496

def update_media_insights_pipeline_status(params = {}, options = {})
  req = build_request(:update_media_insights_pipeline_status, params)
  req.send_request(options)
end
#update_media_pipeline_kinesis_video_stream_pool(params = {}) ⇒ Types::UpdateMediaPipelineKinesisVideoStreamPoolResponse
Updates an Amazon Kinesis Video Stream pool in a media pipeline.
Examples:
Request syntax with placeholder values
resp = client.update_media_pipeline_kinesis_video_stream_pool({
identifier: "NonEmptyString", # required
stream_configuration: {
data_retention_in_hours: 1,
},
})
Response structure
resp.kinesis_video_stream_pool_configuration.pool_arn #=> String
resp.kinesis_video_stream_pool_configuration.pool_name #=> String
resp.kinesis_video_stream_pool_configuration.pool_id #=> String
resp.kinesis_video_stream_pool_configuration.pool_status #=> String, one of "CREATING", "ACTIVE", "UPDATING", "DELETING", "FAILED"
resp.kinesis_video_stream_pool_configuration.pool_size #=> Integer
resp.kinesis_video_stream_pool_configuration.stream_configuration.region #=> String
resp.kinesis_video_stream_pool_configuration.stream_configuration.data_retention_in_hours #=> Integer
resp.kinesis_video_stream_pool_configuration.created_timestamp #=> Time
resp.kinesis_video_stream_pool_configuration.updated_timestamp #=> Time
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:identifier
(required, String)
—
The unique identifier of the requested resource. Valid values include the name and ARN of the media pipeline Kinesis Video Stream pool.
-
:stream_configuration
(Types::KinesisVideoStreamConfigurationUpdate)
—
The configuration settings for the video stream.
Returns:
-
(Types::UpdateMediaPipelineKinesisVideoStreamPoolResponse)
—
Returns a response object which responds to the following methods:
- #kinesis_video_stream_pool_configuration => Types::KinesisVideoStreamPoolConfiguration
See Also:
# File 'gems/aws-sdk-chimesdkmediapipelines/lib/aws-sdk-chimesdkmediapipelines/client.rb', line 2539

def update_media_pipeline_kinesis_video_stream_pool(params = {}, options = {})
  req = build_request(:update_media_pipeline_kinesis_video_stream_pool, params)
  req.send_request(options)
end
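A sketch of building the update params above. The pool identifier is a placeholder, and the client-side check that the retention value is a positive integer is a hypothetical sanity check, not an API-enforced limit:

```ruby
# Sketch of update_media_pipeline_kinesis_video_stream_pool params.
# retention check below is a hypothetical client-side guard.
retention_hours = 24

unless retention_hours.is_a?(Integer) && retention_hours.positive?
  raise ArgumentError, "data_retention_in_hours must be a positive Integer"
end

params = {
  identifier: "example-pool-name", # placeholder pool name or ARN
  stream_configuration: { data_retention_in_hours: retention_hours },
}
```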