CfnStreamProcessor
- class aws_cdk.aws_rekognition.CfnStreamProcessor(scope, id, *, kinesis_video_stream, role_arn, bounding_box_regions_of_interest=None, connected_home_settings=None, data_sharing_preference=None, face_search_settings=None, kinesis_data_stream=None, kms_key_id=None, name=None, notification_channel=None, polygon_regions_of_interest=None, s3_destination=None, tags=None)
Bases:
CfnResource
The AWS::Rekognition::StreamProcessor type creates a stream processor used to detect and recognize faces or to detect connected home labels in a streaming video. Amazon Rekognition Video is a consumer of live video from Amazon Kinesis Video Streams. There are two different settings for stream processors in Amazon Rekognition, one for detecting faces and one for connected home features.
If you are creating a stream processor for detecting faces, you provide a Kinesis video stream (input) and a Kinesis data stream (output). You also specify the face recognition criteria in FaceSearchSettings, for example the collection containing the faces that you want to recognize.
If you are creating a stream processor for detection of connected home labels, you provide a Kinesis video stream for input, and for output an Amazon S3 bucket and an Amazon SNS topic. You can also provide a KMS key ID to encrypt the data sent to your Amazon S3 bucket. You specify what you want to detect in ConnectedHomeSettings, such as people, packages, and pets.
You can also specify where in the frame you want Amazon Rekognition to monitor with BoundingBoxRegionsOfInterest and PolygonRegionsOfInterest. The Name is used to manage the stream processor and is its identifier. The AWS::Rekognition::StreamProcessor resource creates a stream processor in the same Region where you create the AWS CloudFormation stack.
For more information, see CreateStreamProcessor.
- See:
- CloudformationResource:
AWS::Rekognition::StreamProcessor
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

# polygon_regions_of_interest: Any

cfn_stream_processor = rekognition.CfnStreamProcessor(self, "MyCfnStreamProcessor",
    kinesis_video_stream=rekognition.CfnStreamProcessor.KinesisVideoStreamProperty(
        arn="arn"
    ),
    role_arn="roleArn",

    # the properties below are optional
    bounding_box_regions_of_interest=[rekognition.CfnStreamProcessor.BoundingBoxProperty(
        height=123,
        left=123,
        top=123,
        width=123
    )],
    connected_home_settings=rekognition.CfnStreamProcessor.ConnectedHomeSettingsProperty(
        labels=["labels"],

        # the properties below are optional
        min_confidence=123
    ),
    data_sharing_preference=rekognition.CfnStreamProcessor.DataSharingPreferenceProperty(
        opt_in=False
    ),
    face_search_settings=rekognition.CfnStreamProcessor.FaceSearchSettingsProperty(
        collection_id="collectionId",

        # the properties below are optional
        face_match_threshold=123
    ),
    kinesis_data_stream=rekognition.CfnStreamProcessor.KinesisDataStreamProperty(
        arn="arn"
    ),
    kms_key_id="kmsKeyId",
    name="name",
    notification_channel=rekognition.CfnStreamProcessor.NotificationChannelProperty(
        arn="arn"
    ),
    polygon_regions_of_interest=polygon_regions_of_interest,
    s3_destination=rekognition.CfnStreamProcessor.S3DestinationProperty(
        bucket_name="bucketName",

        # the properties below are optional
        object_key_prefix="objectKeyPrefix"
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )]
)
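As a more focused sketch, the snippet below configures a connected home stream processor that writes results to an Amazon S3 bucket and publishes notifications to an Amazon SNS topic, as described above. All ARNs, bucket names, and prefixes are hypothetical placeholders, not resources defined by this module. Example:
# A sketch of a connected home configuration; replace the placeholder ARNs,
# bucket name, and prefix with your own resources.
from aws_cdk import aws_rekognition as rekognition

connected_home_processor = rekognition.CfnStreamProcessor(self, "ConnectedHomeProcessor",
    kinesis_video_stream=rekognition.CfnStreamProcessor.KinesisVideoStreamProperty(
        arn="arn:aws:kinesisvideo:us-east-1:111122223333:stream/front-door/1234567890"
    ),
    role_arn="arn:aws:iam::111122223333:role/RekognitionStreamProcessorRole",
    connected_home_settings=rekognition.CfnStreamProcessor.ConnectedHomeSettingsProperty(
        labels=["PERSON", "PACKAGE"],
        min_confidence=80
    ),
    s3_destination=rekognition.CfnStreamProcessor.S3DestinationProperty(
        bucket_name="my-rekognition-results-bucket",
        object_key_prefix="front-door/"
    ),
    notification_channel=rekognition.CfnStreamProcessor.NotificationChannelProperty(
        arn="arn:aws:sns:us-east-1:111122223333:stream-processor-notifications"
    )
)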
- Parameters:
scope (Construct) – Scope in which this resource is defined.
id (str) – Construct identifier for this resource (unique in its scope).
kinesis_video_stream (Union[IResolvable, KinesisVideoStreamProperty, Dict[str, Any]]) – The Kinesis video stream that provides the source of the streaming video for an Amazon Rekognition Video stream processor. For more information, see KinesisVideoStream.
role_arn (str) – The ARN of the IAM role that allows access to the stream processor. The IAM role provides Rekognition read permissions to the Kinesis stream. It also provides write permissions to an Amazon S3 bucket and Amazon Simple Notification Service topic for a connected home stream processor. This is required for both face search and connected home stream processors. For information about constraints, see the RoleArn section of CreateStreamProcessor.
bounding_box_regions_of_interest (Union[IResolvable, Sequence[Union[IResolvable, BoundingBoxProperty, Dict[str, Any]]], None]) – List of BoundingBox objects, each of which denotes a region of interest on screen. For more information, see the BoundingBox field of RegionOfInterest.
connected_home_settings (Union[IResolvable, ConnectedHomeSettingsProperty, Dict[str, Any], None]) – Connected home settings to use on a streaming video. You can use a stream processor for connected home features and select what you want the stream processor to detect, such as people or pets. When the stream processor has started, one notification is sent for each object class specified. For more information, see the ConnectedHome section of StreamProcessorSettings.
data_sharing_preference (Union[IResolvable, DataSharingPreferenceProperty, Dict[str, Any], None]) – Allows you to opt in or opt out of sharing data with Rekognition to improve model performance. You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams. For more information, see StreamProcessorDataSharingPreference.
face_search_settings (Union[IResolvable, FaceSearchSettingsProperty, Dict[str, Any], None]) – The input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor. For more information regarding the contents of the parameters, see FaceSearchSettings.
kinesis_data_stream (Union[IResolvable, KinesisDataStreamProperty, Dict[str, Any], None]) – Amazon Rekognition's Video Stream Processor takes a Kinesis video stream as input. This is the Amazon Kinesis Data Streams instance to which the Amazon Rekognition stream processor streams the analysis results. This must be created within the constraints specified at KinesisDataStream.
kms_key_id (Optional[str]) – The identifier for your Amazon Key Management Service key (Amazon KMS key). This is an optional parameter for connected home stream processors, used to encrypt results and data published to your Amazon S3 bucket. For more information, see the KMSKeyId section of CreateStreamProcessor.
name (Optional[str]) – The Name attribute specifies the name of the stream processor, and it must be within the constraints described in the Name section of StreamProcessor. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the stream processor name.
notification_channel (Union[IResolvable, NotificationChannelProperty, Dict[str, Any], None]) – The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation. Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete. For more information, see StreamProcessorNotificationChannel.
polygon_regions_of_interest (Any) – A set of ordered lists of Point objects. Each entry of the set contains a polygon denoting a region of interest on the screen. Each polygon is an ordered list of Point objects. For more information, see the Polygon field of RegionOfInterest.
s3_destination (Union[IResolvable, S3DestinationProperty, Dict[str, Any], None]) – The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation. For more information, see the S3Destination section of StreamProcessorOutput.
tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]]) – A set of tags (key-value pairs) that you want to attach to the stream processor. For more information, see the Tags section of CreateStreamProcessor.
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).
- Parameters:
path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
target (CfnResource) –
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
key (str) –
value (Any) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- Return type:
None
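A one-line usage sketch, assuming the cfn_stream_processor instance from the example above; the metadata key and value are hypothetical. Example:
# Attach template-level metadata to the synthesized resource.
cfn_stream_processor.add_metadata("Purpose", "front-door-monitoring")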
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).
If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.
To include a literal . in the property name, prefix it with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.
For example:
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides Example:
"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.
- Parameters:
path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).
- Parameters:
property_path (str) – The path of the property.
value (Any) – The value.
- Return type:
None
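For instance, a raw override can set a property on the synthesized AWS::Rekognition::StreamProcessor resource directly; the property path below is a sketch based on this resource's CloudFormation property names. Example:
# A minimal sketch: overriding DataSharingPreference.OptIn in the synthesized
# template, bypassing the typed data_sharing_preference prop.
cfn_stream_processor.add_property_override("DataSharingPreference.OptIn", True)

# Equivalent long form using add_override with the "Properties." prefix.
cfn_stream_processor.add_override("Properties.DataSharingPreference.OptIn", True)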
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:
- Parameters:
policy (Optional[RemovalPolicy]) –
apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- See:
- Return type:
None
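A minimal usage sketch, assuming the cfn_stream_processor instance from the example above. Example:
from aws_cdk import RemovalPolicy

# Keep the stream processor in the account if it is removed from the stack.
cfn_stream_processor.apply_removal_policy(RemovalPolicy.RETAIN)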
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.
Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.
- Parameters:
attribute_name (str) – The name of the attribute.
type_hint (Optional[ResolutionTypeHint]) –
- Return type:
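A brief usage sketch, assuming the cfn_stream_processor instance from the example above; the generated accessors (attr_arn, attr_status, attr_status_message) are usually preferable to calling get_att directly. Example:
from aws_cdk import Token

# Two equivalent ways to reference the Status attribute as a string token.
status_token = Token.as_string(cfn_stream_processor.get_att("Status"))
status_attr = cfn_stream_processor.attr_status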
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
key (str) –
- See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/metadata-section-structure.html
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- Return type:
Any
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
inspector (TreeInspector) – Tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack, CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
target (CfnResource) –
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
target (CfnResource) – The dependency to replace.
new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::Rekognition::StreamProcessor'
- attr_arn
Amazon Resource Name for the newly created stream processor.
- CloudformationAttribute:
Arn
- attr_status
Current status of the Amazon Rekognition stream processor.
- CloudformationAttribute:
Status
- attr_status_message
Detailed status message about the stream processor.
- CloudformationAttribute:
StatusMessage
- bounding_box_regions_of_interest
List of BoundingBox objects, each of which denotes a region of interest on screen.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- connected_home_settings
Connected home settings to use on a streaming video.
- creation_stack
return:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- data_sharing_preference
Allows you to opt in or opt out of sharing data with Rekognition to improve model performance.
- face_search_settings
The input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
- kinesis_data_stream
Amazon Rekognition’s Video Stream Processor takes a Kinesis video stream as input.
- kinesis_video_stream
The Kinesis video stream that provides the source of the streaming video for an Amazon Rekognition Video stream processor.
- kms_key_id
The identifier for your Amazon Key Management Service key (Amazon KMS key).
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).
- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- name
The Name attribute specifies the name of the stream processor, and it must be within the constraints described in the Name section of StreamProcessor (https://docs.aws.amazon.com/rekognition/latest/APIReference/API_StreamProcessor). If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the stream processor name.
- node
The tree node.
- notification_channel
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
- polygon_regions_of_interest
A set of ordered lists of Point (https://docs.aws.amazon.com/rekognition/latest/APIReference/API_Point) objects. For more information, see the Polygon field of RegionOfInterest.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.
If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- role_arn
The ARN of the IAM role that allows access to the stream processor.
- s3_destination
The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation.
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
Tag Manager which manages the tags for this resource.
- tags_raw
A set of tags (key-value pairs) that you want to attach to the stream processor.
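As a small usage sketch (assuming a Stack scope, self, and the cfn_stream_processor instance from the example above), the ARN and status attributes can be surfaced as stack outputs. Example:
from aws_cdk import CfnOutput

# Export the stream processor's ARN and current status from the stack.
CfnOutput(self, "StreamProcessorArn", value=cfn_stream_processor.attr_arn)
CfnOutput(self, "StreamProcessorStatus", value=cfn_stream_processor.attr_status)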
Static Methods
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized CloudFormation template).
Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.
- Parameters:
x (Any) –
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
x (Any) –
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.
Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.
Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof, and using this type-testing method instead.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
BoundingBoxProperty
- class CfnStreamProcessor.BoundingBoxProperty(*, height, left, top, width)
Bases:
object
Identifies the bounding box around the label, face, text, or personal protective equipment.
The left (x-coordinate) and top (y-coordinate) are coordinates representing the top and left sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
The top and left values returned are ratios of the overall image size. For example, if the input image is 700x200 pixels, and the top-left coordinate of the bounding box is 350x50 pixels, the API returns a left value of 0.5 (350/700) and a top value of 0.25 (50/200).
The width and height values represent the dimensions of the bounding box as a ratio of the overall image dimension. For example, if the input image is 700x200 pixels, and the bounding box width is 70 pixels, the width returned is 0.1. For more information, see BoundingBox.
The bounding box coordinates can have negative values. For example, if Amazon Rekognition is able to detect a face that is at the image edge and is only partially visible, the service can return coordinates that are outside the image bounds and, depending on the image edge, you might get negative values or values greater than 1 for the left or top values.
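A minimal sketch of the ratio arithmetic described above, assuming a 700x200 pixel frame and a pixel-space box at (350, 50) with a width of 70 and a height of 50. Example:
from aws_cdk import aws_rekognition as rekognition

frame_width, frame_height = 700, 200

# Convert pixel coordinates into the ratio values BoundingBoxProperty expects.
region = rekognition.CfnStreamProcessor.BoundingBoxProperty(
    left=350 / frame_width,     # 0.5
    top=50 / frame_height,      # 0.25
    width=70 / frame_width,     # 0.1
    height=50 / frame_height    # 0.25
)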
- Parameters:
height (Union[int, float]) – Height of the bounding box as a ratio of the overall image height.
left (Union[int, float]) – Left coordinate of the bounding box as a ratio of overall image width.
top (Union[int, float]) – Top coordinate of the bounding box as a ratio of overall image height.
width (Union[int, float]) – Width of the bounding box as a ratio of the overall image width.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

bounding_box_property = rekognition.CfnStreamProcessor.BoundingBoxProperty(
    height=123,
    left=123,
    top=123,
    width=123
)
Attributes
- height
Height of the bounding box as a ratio of the overall image height.
- left
Left coordinate of the bounding box as a ratio of overall image width.
- top
Top coordinate of the bounding box as a ratio of overall image height.
- width
Width of the bounding box as a ratio of the overall image width.
ConnectedHomeSettingsProperty
- class CfnStreamProcessor.ConnectedHomeSettingsProperty(*, labels, min_confidence=None)
Bases:
object
Connected home settings to use on a streaming video.
Defining the settings is required in the request parameter for CreateStreamProcessor. Including this setting in the CreateStreamProcessor request lets you use the stream processor for connected home features. You can then select what you want the stream processor to detect, such as people or pets.
When the stream processor has started, one notification is sent for each object class specified. For example, if packages and pets are selected, one SNS notification is published the first time a package is detected and one SNS notification is published the first time a pet is detected. An end-of-session summary is also published. For more information, see the ConnectedHome section of StreamProcessorSettings.
- Parameters:
labels (Sequence[str]) – Specifies what you want to detect in the video, such as people, packages, or pets. The current valid labels you can include in this list are: "PERSON", "PET", "PACKAGE", and "ALL".
min_confidence (Union[int, float, None]) – The minimum confidence required to label an object in the video.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

connected_home_settings_property = rekognition.CfnStreamProcessor.ConnectedHomeSettingsProperty(
    labels=["labels"],

    # the properties below are optional
    min_confidence=123
)
Attributes
- labels
Specifies what you want to detect in the video, such as people, packages, or pets.
The current valid labels you can include in this list are: “PERSON”, “PET”, “PACKAGE”, and “ALL”.
- min_confidence
The minimum confidence required to label an object in the video.
DataSharingPreferenceProperty
- class CfnStreamProcessor.DataSharingPreferenceProperty(*, opt_in)
Bases:
object
Allows you to opt in or opt out of sharing data with Rekognition to improve model performance.
You can choose this option at the account level or on a per-stream basis. Note that if you opt out at the account level, this setting is ignored on individual streams. For more information, see StreamProcessorDataSharingPreference .
- Parameters:
opt_in (Union[bool, IResolvable]) – Describes the opt-in status applied to a stream processor's data sharing policy.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

data_sharing_preference_property = rekognition.CfnStreamProcessor.DataSharingPreferenceProperty(
    opt_in=False
)
Attributes
- opt_in
Describes the opt-in status applied to a stream processor’s data sharing policy.
FaceSearchSettingsProperty
- class CfnStreamProcessor.FaceSearchSettingsProperty(*, collection_id, face_match_threshold=None)
Bases:
object
The input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
FaceSearchSettings is a request parameter for CreateStreamProcessor. For more information, see FaceSearchSettings.
- Parameters:
collection_id (str) – The ID of a collection that contains faces that you want to search for.
face_match_threshold (Union[int, float, None]) – Minimum face match confidence score that must be met to return a result for a recognized face. The default is 80. 0 is the lowest confidence. 100 is the highest confidence. Values between 0 and 100 are accepted, and values lower than 80 are set to 80.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

face_search_settings_property = rekognition.CfnStreamProcessor.FaceSearchSettingsProperty(
    collection_id="collectionId",

    # the properties below are optional
    face_match_threshold=123
)
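A brief sketch with more realistic values; the collection ID is a hypothetical placeholder for a face collection you have already created. Example:
from aws_cdk import aws_rekognition as rekognition

# Match against an existing face collection, returning only matches at 90% confidence or above.
face_search = rekognition.CfnStreamProcessor.FaceSearchSettingsProperty(
    collection_id="my-face-collection",
    face_match_threshold=90
)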
Attributes
- collection_id
The ID of a collection that contains faces that you want to search for.
- face_match_threshold
Minimum face match confidence score that must be met to return a result for a recognized face.
The default is 80. 0 is the lowest confidence. 100 is the highest confidence. Values between 0 and 100 are accepted, and values lower than 80 are set to 80.
KinesisDataStreamProperty
- class CfnStreamProcessor.KinesisDataStreamProperty(*, arn)
Bases:
object
Amazon Rekognition Video Stream Processor takes as input a Kinesis video stream (Input) and a Kinesis data stream (Output).
This is the Amazon Kinesis Data Streams instance to which the Amazon Rekognition stream processor streams the analysis results. This must be created within the constraints specified at KinesisDataStream .
- Parameters:
arn (str) – ARN of the output Amazon Kinesis Data Streams stream.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

kinesis_data_stream_property = rekognition.CfnStreamProcessor.KinesisDataStreamProperty(
    arn="arn"
)
Attributes
- arn
ARN of the output Amazon Kinesis Data Streams stream.
KinesisVideoStreamProperty
- class CfnStreamProcessor.KinesisVideoStreamProperty(*, arn)
Bases:
object
The Kinesis video stream that provides the source of the streaming video for an Amazon Rekognition Video stream processor.
For more information, see KinesisVideoStream .
- Parameters:
arn (str) – ARN of the Kinesis video stream that streams the source video.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

kinesis_video_stream_property = rekognition.CfnStreamProcessor.KinesisVideoStreamProperty(
    arn="arn"
)
Attributes
- arn
ARN of the Kinesis video stream that streams the source video.
NotificationChannelProperty
- class CfnStreamProcessor.NotificationChannelProperty(*, arn)
Bases:
object
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the object detection results and completion status of a video analysis operation.
Amazon Rekognition publishes a notification the first time an object of interest or a person is detected in the video stream. Amazon Rekognition also publishes an end-of-session notification with a summary when the stream processing session is complete. For more information, see StreamProcessorNotificationChannel.
- Parameters:
arn (str) – The ARN of the SNS topic that receives notifications.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

notification_channel_property = rekognition.CfnStreamProcessor.NotificationChannelProperty(
    arn="arn"
)
Attributes
- arn
The ARN of the SNS topic that receives notifications.
PointProperty
- class CfnStreamProcessor.PointProperty(*, x, y)
Bases:
object
The X and Y coordinates of a point on an image or video frame.
The X and Y values are ratios of the overall image size or video resolution. For example, if the input image is 700x200 and the values are X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
An array of Point objects, Polygon, is returned by DetectText and by DetectCustomLabels, or used to define regions of interest in Amazon Rekognition Video operations such as CreateStreamProcessor. Polygon represents a fine-grained polygon around a detected item. For more information, see Geometry.
- Parameters:
x (Union[int, float]) – The value of the X coordinate for a point on a Polygon.
y (Union[int, float]) – The value of the Y coordinate for a point on a Polygon.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

point_property = rekognition.CfnStreamProcessor.PointProperty(
    x=123,
    y=123
)
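Since polygon_regions_of_interest on CfnStreamProcessor is typed as Any, one plausible shape (an assumption mirroring the RegionOfInterest API, not something this module enforces) is a list of polygons, each an ordered list of PointProperty objects. Example:
from aws_cdk import aws_rekognition as rekognition

# A single rectangular region of interest expressed as an ordered list of points
# (ratios of frame width/height); the values are illustrative.
door_polygon = [
    rekognition.CfnStreamProcessor.PointProperty(x=0.2, y=0.1),
    rekognition.CfnStreamProcessor.PointProperty(x=0.6, y=0.1),
    rekognition.CfnStreamProcessor.PointProperty(x=0.6, y=0.9),
    rekognition.CfnStreamProcessor.PointProperty(x=0.2, y=0.9),
]

polygon_regions_of_interest = [door_polygon]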
Attributes
- x
The value of the X coordinate for a point on a Polygon.
- y
The value of the Y coordinate for a point on a Polygon.
S3DestinationProperty
- class CfnStreamProcessor.S3DestinationProperty(*, bucket_name, object_key_prefix=None)
Bases:
object
The Amazon S3 bucket location to which Amazon Rekognition publishes the detailed inference results of a video analysis operation.
These results include the name of the stream processor resource, the session ID of the stream processing session, and labeled timestamps and bounding boxes for detected labels. For more information, see S3Destination .
- Parameters:
bucket_name (str) – Describes the destination Amazon Simple Storage Service (Amazon S3) bucket name of a stream processor's exports.
object_key_prefix (Optional[str]) – Describes the destination Amazon Simple Storage Service (Amazon S3) object keys of a stream processor's exports.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_rekognition as rekognition

s3_destination_property = rekognition.CfnStreamProcessor.S3DestinationProperty(
    bucket_name="bucketName",

    # the properties below are optional
    object_key_prefix="objectKeyPrefix"
)
Attributes
- bucket_name
Describes the destination Amazon Simple Storage Service (Amazon S3) bucket name of a stream processor’s exports.
- object_key_prefix
Describes the destination Amazon Simple Storage Service (Amazon S3) object keys of a stream processor’s exports.