Working with streaming videos

You can use Amazon Rekognition Video to detect and recognize faces in streaming video. A typical use case is when you want to detect a known face in a video stream. Amazon Rekognition Video uses Amazon Kinesis Video Streams to receive and process a video stream. The analysis results are output from Amazon Rekognition Video to a Kinesis data stream and then read by your client application. Amazon Rekognition Video provides a stream processor (CreateStreamProcessor) that you can use to start and manage the analysis of streaming video.
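As a sketch of how the stream processor fits together, the following Python (boto3) example creates and starts a face-search stream processor. The processor name, ARNs, and collection ID are placeholders you would replace with your own resources; `build_stream_processor_params` is a helper introduced here for illustration, not part of the AWS SDK.

```python
def build_stream_processor_params(name, video_stream_arn, data_stream_arn,
                                  role_arn, collection_id, threshold=80.0):
    """Assemble the request for rekognition.create_stream_processor."""
    return {
        "Name": name,
        "Input": {"KinesisVideoStream": {"Arn": video_stream_arn}},
        "Output": {"KinesisDataStream": {"Arn": data_stream_arn}},
        "RoleArn": role_arn,
        "Settings": {
            "FaceSearch": {
                "CollectionId": collection_id,
                "FaceMatchThreshold": threshold,
            }
        },
    }

if __name__ == "__main__":
    import boto3  # requires AWS credentials and network access

    rekognition = boto3.client("rekognition")
    params = build_stream_processor_params(
        name="my-stream-processor",                   # placeholder name
        video_stream_arn="arn:aws:kinesisvideo:...",  # placeholder ARN
        data_stream_arn="arn:aws:kinesis:...",        # placeholder ARN
        role_arn="arn:aws:iam::...",                  # placeholder ARN
        collection_id="my-face-collection",           # placeholder
    )
    rekognition.create_stream_processor(**params)
    # Begin analyzing the video stream; results go to the Kinesis data stream.
    rekognition.start_stream_processor(Name=params["Name"])
```

The stream processor keeps running until you call StopStreamProcessor (and DeleteStreamProcessor to remove it).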


Note: The Amazon Rekognition Video streaming API is available only in the following Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), EU (Frankfurt), and EU (Ireland).

The following diagram shows how Amazon Rekognition Video detects and recognizes faces in a streaming video.

To use Amazon Rekognition Video with streaming video, your application needs the following:

- A Kinesis video stream for sending streaming video to Amazon Rekognition Video.
- An Amazon Rekognition Video stream processor to manage the analysis of the streaming video.
- A Kinesis data stream to which Amazon Rekognition Video writes the analysis results.
- A consumer that reads the analysis results from the Kinesis data stream.
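A minimal sketch of provisioning the two Kinesis streams with boto3 follows. The name prefix and retention settings are assumptions for illustration; `stream_names` is a hypothetical naming helper, not an AWS API.

```python
def stream_names(prefix):
    """Derive paired names for the video stream and data stream
    (an assumed naming convention, not required by AWS)."""
    return f"{prefix}-video-stream", f"{prefix}-data-stream"

if __name__ == "__main__":
    import boto3  # requires AWS credentials and network access

    video_name, data_name = stream_names("face-search")  # placeholder prefix

    # Kinesis video stream: carries the source video to Rekognition Video.
    boto3.client("kinesisvideo").create_stream(
        StreamName=video_name,
        DataRetentionInHours=24,
    )
    # Kinesis data stream: receives the Rekognition Video analysis results.
    boto3.client("kinesis").create_stream(StreamName=data_name, ShardCount=1)
```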

This section contains information about writing an application that creates the Kinesis video stream and the Kinesis data stream, streams video into Amazon Rekognition Video, and consumes the analysis results. If you are streaming from a Matroska (MKV) encoded file, you can use the PutMedia operation to stream the source video into the Kinesis video stream that you created. For more information, see PutMedia API Example. Otherwise, you can use GStreamer, a third-party multimedia framework, with the GStreamer plugin provided by the Amazon Kinesis Video Streams Producer SDK to stream video from a device camera.
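To consume the analysis results, your client reads records from the Kinesis data stream and parses the JSON that Rekognition Video writes. The sketch below assumes the FaceSearchResponse/MatchedFaces layout of Rekognition Video output records in simplified form; the stream name and shard ID are placeholders (a real consumer would enumerate shards, for example with list_shards, or use the Kinesis Client Library).

```python
import json

def matched_face_ids(record_data):
    """Extract matched face IDs from one Rekognition Video analysis record.
    Assumes the FaceSearchResponse/MatchedFaces shape of the output JSON."""
    result = json.loads(record_data)
    ids = []
    for face in result.get("FaceSearchResponse", []):
        for match in face.get("MatchedFaces", []):
            ids.append(match["Face"]["FaceId"])
    return ids

if __name__ == "__main__":
    import boto3  # requires AWS credentials and network access

    kinesis = boto3.client("kinesis")
    it = kinesis.get_shard_iterator(
        StreamName="my-data-stream",        # placeholder stream name
        ShardId="shardId-000000000000",     # placeholder shard ID
        ShardIteratorType="LATEST",
    )["ShardIterator"]
    while True:
        resp = kinesis.get_records(ShardIterator=it, Limit=100)
        for rec in resp["Records"]:
            print(matched_face_ids(rec["Data"]))
        it = resp["NextShardIterator"]
```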