Gets the unsafe content analysis results for an Amazon Rekognition Video analysis started by StartContentModeration.
Unsafe content analysis of a video is an asynchronous operation. You start analysis by calling StartContentModeration, which returns a job identifier (JobId). When the analysis finishes, Amazon Rekognition Video publishes a completion status to the Amazon Simple Notification Service topic registered in the initial call to StartContentModeration. To get the results of the unsafe content analysis, first check that the status value published to the Amazon SNS topic is SUCCEEDED. If so, call GetContentModeration and pass the job identifier (JobId) from the initial call to StartContentModeration. For more information, see Working with Stored Videos in the Amazon Rekognition Developer Guide.
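In PowerShell, this flow maps to the Start-REKContentModeration and Get-REKContentModeration cmdlets. A minimal sketch, assuming a video stored in Amazon S3; the bucket, object key, SNS topic ARN, and IAM role ARN below are placeholders, and waiting for the SNS notification is elided:

    # Start unsafe content analysis of a video stored in Amazon S3.
    # The bucket, key, topic ARN, and role ARN are placeholder values.
    $jobId = Start-REKContentModeration `
        -Video_S3Object_Bucket 'my-video-bucket' `
        -Video_S3Object_Name 'videos/sample.mp4' `
        -NotificationChannel_SNSTopicArn 'arn:aws:sns:us-east-1:123456789012:RekognitionTopic' `
        -NotificationChannel_RoleArn 'arn:aws:iam::123456789012:role/RekognitionSNSRole'

    # ... wait for a SUCCEEDED status message on the registered SNS topic ...

    # Retrieve the unsafe content labels detected in the video.
    Get-REKContentModeration -JobId $jobId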
GetContentModeration returns detected unsafe content labels, and the time they are detected, in an array, ModerationLabels, of ContentModerationDetection objects.

By default, the moderated labels are returned sorted by time, in milliseconds from the start of the video. You can also sort them by moderated label by specifying NAME for the SortBy input parameter.
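For example (a sketch, assuming the cmdlet's -SortBy parameter maps to the API's SortBy input and reusing the $jobId from the earlier call):

    # Group the detected labels by moderation label name rather than
    # the default timestamp ordering.
    Get-REKContentModeration -JobId $jobId -SortBy NAME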
Since video analysis can return a large number of results, use the MaxResults parameter to limit the number of labels returned in a single call to GetContentModeration. If there are more results than specified in MaxResults, the value of NextToken in the operation response contains a pagination token for getting the next set of results. To get the next page of results, call GetContentModeration and populate the NextToken request parameter with the value of NextToken returned from the previous call to GetContentModeration.

For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
In the AWS.Tools.Rekognition module, this cmdlet automatically pages all available results to the pipeline - parameters related to iteration are only needed if you want to manually control the paginated output. To disable autopagination, use -NoAutoIteration.
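A sketch of manually controlled paging under stated assumptions: the API's MaxResults and NextToken are assumed to surface as the cmdlet's -MaxResult and -NextToken parameters, and -Select '*' is used to return the full service response so the loop can read the NextToken value:

    # Page through the results manually, 100 labels per request.
    $nextToken = $null
    do {
        $response = Get-REKContentModeration -JobId $jobId `
            -MaxResult 100 -NextToken $nextToken `
            -NoAutoIteration -Select '*'

        # Each element is a ContentModerationDetection: a timestamp in
        # milliseconds plus the detected moderation label.
        foreach ($detection in $response.ModerationLabels) {
            '{0} ms: {1} ({2:N1}%)' -f $detection.Timestamp,
                $detection.ModerationLabel.Name,
                $detection.ModerationLabel.Confidence
        }

        $nextToken = $response.NextToken
    } while ($nextToken)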