
Recognizing Faces in a Streaming Video

Amazon Rekognition Video can search a collection for faces that match faces detected in a streaming video. For more information about collections, see Searching Faces in a Collection. The following procedure describes the steps you take to recognize faces in a streaming video.

Prerequisites

To run this procedure, you need to have the AWS SDK for Java installed. For more information, see Getting Started with Amazon Rekognition. The AWS account you use must have access permissions to the Amazon Rekognition API. For more information, see Amazon Rekognition API Permissions: Actions, Permissions, and Resources Reference.

To recognize faces in a video stream (AWS SDK)

  1. If you haven't already, create an IAM service role that gives Amazon Rekognition Video access to your Kinesis video streams and your Kinesis data streams. Note the role's ARN. For more information, see Giving Access to Your Kinesis Video Streams and Kinesis Data Streams.

  2. Create a collection and note the collection identifier you used.

  3. Index the faces that you want to search for into the collection you created in step 2. Example code for steps 2 and 3 follows this procedure.

  4. Create a Kinesis video stream and note the stream's Amazon Resource Name (ARN).

  5. Create a Kinesis data stream. Give the stream a name that starts with AmazonRekognition, and note the stream's ARN. Example code for steps 4 and 5 follows this procedure.

  6. Create the stream processor. Pass the following as parameters to CreateStreamProcessor: a name of your choosing, the Kinesis video stream ARN (step 4), the Kinesis data stream ARN (step 5), the IAM service role ARN (step 1), and the collection identifier (step 2).

  7. Start the stream processor by using the stream processor name that you chose in step 6. Example code for steps 6 and 7 follows this procedure.

  8. Use the PutMedia operation to stream the source video into the Kinesis video stream that you created in step 4. For more information, see PutMedia API Example.

  9. Consume the analysis output that Amazon Rekognition Video sends to the Kinesis data stream (step 5). Example code follows this procedure.
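
The following is a minimal sketch of steps 2 and 3, assuming the AWS SDK for Java 1.x client builders. The collection identifier, S3 bucket, object key, and external image ID are example values; substitute your own.

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CreateCollectionRequest;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.IndexFacesRequest;
import com.amazonaws.services.rekognition.model.IndexFacesResult;
import com.amazonaws.services.rekognition.model.S3Object;

public class CreateAndPopulateCollection {
    public static void main(String[] args) {
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        // Step 2: create the collection and note the collection identifier.
        String collectionId = "MyFaceCollection"; // example identifier
        rekognition.createCollection(new CreateCollectionRequest()
                .withCollectionId(collectionId));

        // Step 3: index a face from an image stored in Amazon S3 into the collection.
        IndexFacesResult result = rekognition.indexFaces(new IndexFacesRequest()
                .withCollectionId(collectionId)
                .withExternalImageId("person-1") // example ID returned with face matches
                .withImage(new Image().withS3Object(new S3Object()
                        .withBucket("my-bucket")      // example bucket
                        .withName("person-1.jpg")))); // example object key

        System.out.println("Indexed " + result.getFaceRecords().size() + " face(s).");
    }
}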
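
The following is a minimal sketch of steps 4 and 5 with the AWS SDK for Java. The stream names are example values; the Kinesis data stream name starts with AmazonRekognition, as step 5 requires.

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesisvideo.AmazonKinesisVideo;
import com.amazonaws.services.kinesisvideo.AmazonKinesisVideoClientBuilder;

public class CreateStreams {
    public static void main(String[] args) {
        AmazonKinesisVideo kinesisVideo = AmazonKinesisVideoClientBuilder.defaultClient();
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();

        // Step 4: create the Kinesis video stream and note its ARN.
        String videoStreamArn = kinesisVideo.createStream(
                new com.amazonaws.services.kinesisvideo.model.CreateStreamRequest()
                        .withStreamName("MyRekognitionVideoStream") // example name
                        .withDataRetentionInHours(24))
                .getStreamARN();
        System.out.println("Kinesis video stream ARN: " + videoStreamArn);

        // Step 5: create the Kinesis data stream. The name starts with AmazonRekognition.
        String dataStreamName = "AmazonRekognitionMyDataStream"; // example name
        kinesis.createStream(new com.amazonaws.services.kinesis.model.CreateStreamRequest()
                .withStreamName(dataStreamName)
                .withShardCount(1));

        // CreateStream for Kinesis Data Streams doesn't return an ARN, so describe the
        // stream to note it. (The stream can take a short time to become ACTIVE.)
        String dataStreamArn = kinesis.describeStream(dataStreamName)
                .getStreamDescription().getStreamARN();
        System.out.println("Kinesis data stream ARN: " + dataStreamArn);
    }
}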
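
The following is a minimal sketch of steps 6 and 7. The stream processor name and face match threshold are example values, and the ARN strings are placeholders for the values that you noted in steps 1, 4, and 5.

import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.CreateStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.FaceSearchSettings;
import com.amazonaws.services.rekognition.model.KinesisDataStream;
import com.amazonaws.services.rekognition.model.KinesisVideoStream;
import com.amazonaws.services.rekognition.model.StartStreamProcessorRequest;
import com.amazonaws.services.rekognition.model.StreamProcessorInput;
import com.amazonaws.services.rekognition.model.StreamProcessorOutput;
import com.amazonaws.services.rekognition.model.StreamProcessorSettings;

public class CreateAndStartStreamProcessor {
    public static void main(String[] args) {
        AmazonRekognition rekognition = AmazonRekognitionClientBuilder.defaultClient();

        String processorName  = "MyStreamProcessor";                  // step 6: name of your choosing
        String videoStreamArn = "arn:aws:kinesisvideo:...";           // step 4: Kinesis video stream ARN
        String dataStreamArn  = "arn:aws:kinesis:...";                // step 5: Kinesis data stream ARN
        String roleArn        = "arn:aws:iam::111122223333:role/..."; // step 1: IAM service role ARN
        String collectionId   = "MyFaceCollection";                   // step 2: collection identifier

        // Step 6: create the stream processor with face search settings.
        rekognition.createStreamProcessor(new CreateStreamProcessorRequest()
                .withName(processorName)
                .withInput(new StreamProcessorInput()
                        .withKinesisVideoStream(new KinesisVideoStream().withArn(videoStreamArn)))
                .withOutput(new StreamProcessorOutput()
                        .withKinesisDataStream(new KinesisDataStream().withArn(dataStreamArn)))
                .withRoleArn(roleArn)
                .withSettings(new StreamProcessorSettings()
                        .withFaceSearch(new FaceSearchSettings()
                                .withCollectionId(collectionId)
                                .withFaceMatchThreshold(85F)))); // example threshold

        // Step 7: start processing video from the Kinesis video stream.
        rekognition.startStreamProcessor(new StartStreamProcessorRequest()
                .withName(processorName));
    }
}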
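
The following is a minimal sketch of step 9 that reads the face search results from the Kinesis data stream with the low-level Kinesis client; a production consumer would more likely use the Kinesis Client Library and process every shard. The stream name is an example value.

import com.amazonaws.services.kinesis.AmazonKinesis;
import com.amazonaws.services.kinesis.AmazonKinesisClientBuilder;
import com.amazonaws.services.kinesis.model.GetRecordsRequest;
import com.amazonaws.services.kinesis.model.GetRecordsResult;
import com.amazonaws.services.kinesis.model.GetShardIteratorRequest;
import com.amazonaws.services.kinesis.model.Record;
import com.amazonaws.services.kinesis.model.Shard;

import java.nio.charset.StandardCharsets;

public class ReadFaceSearchResults {
    public static void main(String[] args) throws InterruptedException {
        AmazonKinesis kinesis = AmazonKinesisClientBuilder.defaultClient();
        String dataStreamName = "AmazonRekognitionMyDataStream"; // step 5: data stream name

        // Read from the first shard of the data stream.
        Shard shard = kinesis.describeStream(dataStreamName)
                .getStreamDescription().getShards().get(0);

        String shardIterator = kinesis.getShardIterator(new GetShardIteratorRequest()
                .withStreamName(dataStreamName)
                .withShardId(shard.getShardId())
                .withShardIteratorType("TRIM_HORIZON"))
                .getShardIterator();

        while (shardIterator != null) {
            GetRecordsResult result = kinesis.getRecords(new GetRecordsRequest()
                    .withShardIterator(shardIterator)
                    .withLimit(100));

            // Each record is a JSON document that describes the faces detected in a
            // fragment of the video and any matches found in the collection.
            for (Record record : result.getRecords()) {
                String json = StandardCharsets.UTF_8.decode(record.getData()).toString();
                System.out.println(json);
            }

            shardIterator = result.getNextShardIterator();
            Thread.sleep(1000); // stay under the per-shard GetRecords limits
        }
    }
}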
