Amazon Rekognition
Developer Guide

Storage-Based API Operations: Storing Faces and Searching Face Matches

Amazon Rekognition supports the IndexFaces operation, which detects faces in an input image and persists information about their facial features in a server-side database. This is an example of a storage-based API operation because the service persists information on the server.

To store facial information, you must first create a face collection in one of the AWS Regions in your account. You specify this face collection when you call the IndexFaces operation. After you create a face collection and store facial feature information for the detected faces, you can search the collection for face matches.
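The create-then-index workflow can be sketched with the AWS SDK for Python (boto3). This is a minimal, hypothetical example: the collection name, bucket, object key, and employee ID are illustrative assumptions, not values from this guide. The function accepts any client exposing the Rekognition API shape, so it can be exercised without live credentials.

```python
def index_badge_image(client, collection_id, bucket, key, employee_id):
    """Detect faces in an S3-hosted badge image and store their feature
    vectors in the given face collection via IndexFaces.

    `client` is a Rekognition client (e.g. boto3.client("rekognition")).
    Returns the FaceId values under which the vectors were stored.
    """
    response = client.index_faces(
        CollectionId=collection_id,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        ExternalImageId=employee_id,      # lets matches map back to a person
        DetectionAttributes=["DEFAULT"],  # return only the default attributes
    )
    # Each FaceRecord describes one detected face and the stored vector's ID.
    return [record["Face"]["FaceId"] for record in response["FaceRecords"]]


# Hypothetical usage (requires boto3 and configured AWS credentials):
#   import boto3
#   client = boto3.client("rekognition")
#   client.create_collection(CollectionId="badge-faces")  # one-time setup
#   face_ids = index_badge_image(
#       client, "badge-faces", "my-badge-bucket", "badges/jane.png", "jane")
```

Passing the client in as a parameter, rather than constructing it inside the function, keeps the sketch testable and lets callers reuse one client across many IndexFaces calls.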


The service does not persist actual image bytes. Instead, the underlying detection algorithm first detects the faces in the input image, extracts the facial features of each face into a feature vector, and then stores each vector in the database. Amazon Rekognition uses these feature vectors when performing face matches.

For example, you might create a face collection to store scanned badge images using the IndexFaces operation, which extracts faces and stores them as searchable feature vectors. When an employee enters the building, an image of the employee's face is captured and sent to the SearchFacesByImage operation. If the face match produces a sufficiently high similarity score (say, 99%), you can authenticate the employee.
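The authentication step above can be sketched as follows. This is a hedged example, not the guide's own code: the threshold value and return convention are assumptions, and `client` is again any object exposing the Rekognition SearchFacesByImage call (such as a boto3 client). FaceMatchThreshold tells the service to discard matches below the given similarity percentage.

```python
def authenticate_face(client, collection_id, image_bytes, threshold=99.0):
    """Search the face collection for the best match to a captured image.

    Calls SearchFacesByImage with the raw image bytes. Returns the
    ExternalImageId of the strongest match at or above `threshold`
    percent similarity, or None if nothing in the collection qualifies.
    """
    response = client.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,  # service filters out weaker matches
        MaxFaces=1,                    # only the single best match is needed
    )
    matches = response["FaceMatches"]
    if not matches:
        return None  # no stored face was similar enough: deny access
    return matches[0]["Face"].get("ExternalImageId")


# Hypothetical usage (requires boto3 and configured AWS credentials):
#   import boto3
#   client = boto3.client("rekognition")
#   with open("captured.jpg", "rb") as f:
#       who = authenticate_face(client, "badge-faces", f.read())
#   if who is not None:
#       print(f"Authenticated employee: {who}")
```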