
Storing Faces in a Face Collection: The IndexFaces Operation

After you create a face collection, you can store faces in it. Amazon Rekognition provides the IndexFaces operation, which detects faces in the input image (JPEG or PNG) and adds them to the specified face collection. For more information about collections, see Managing Face Collections. After you persist faces, you can search the face collection for face matches.
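
For example, here is a minimal sketch of calling IndexFaces with the AWS SDK for Python (Boto3). The collection ID, bucket name, and object key are placeholder values, the collection is assumed to already exist, and your AWS credentials and Region are assumed to be configured:

import boto3

rekognition = boto3.client("rekognition")

# Index the faces found in an image stored in Amazon S3.
response = rekognition.index_faces(
    CollectionId="my-face-collection",   # placeholder collection ID
    Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
    ExternalImageId="photo.jpg",         # optional ID of your choosing
    DetectionAttributes=["DEFAULT"],
)

# Each FaceRecord pairs the persisted Face metadata with the
# FaceDetail attributes returned only in this response.
for record in response["FaceRecords"]:
    print(record["Face"]["FaceId"], record["Face"]["Confidence"])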

Important

Amazon Rekognition does not save the actual faces detected. Instead, the underlying detection algorithm first detects the faces in the input image, extracts facial features for each face, and then stores the feature information in a database. Then, Amazon Rekognition uses this information in subsequent operations such as searching a face collection for matching faces.

For each face, the IndexFaces operation persists the following information:

  • Multidimensional facial features – IndexFaces uses facial analysis to extract multidimensional information about the facial features and stores the information in the face collection. You cannot access this information directly. However, Amazon Rekognition uses this information when searching a face collection for face matches.


  • Metadata – The metadata for each face includes a bounding box, the confidence level that the bounding box contains a face, IDs assigned by Amazon Rekognition (a face ID and an image ID), and an external image ID (if you provided one in the request). This information is returned to you in the response to the IndexFaces API call. For an example, see the Face element in the following example response.

    The service returns this metadata in response to the following API calls:


    • ListFaces

    • Search faces operations – The responses for SearchFaces and SearchFacesByImage return the confidence in the match for each matching face, along with the metadata of the matched face. (A sketch of retrieving this stored metadata with ListFaces follows this list.)
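
As a minimal sketch (reusing the placeholder collection ID from the earlier example), you could retrieve the stored metadata with ListFaces like this:

import boto3

rekognition = boto3.client("rekognition")

# List every face stored in the collection, page by page.
paginator = rekognition.get_paginator("list_faces")
for page in paginator.paginate(CollectionId="my-face-collection"):
    for face in page["Faces"]:
        # FaceId and ImageId are assigned by Amazon Rekognition;
        # ExternalImageId is present only if you supplied one.
        print(face["FaceId"], face["ImageId"], face.get("ExternalImageId"))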

In addition to the preceding information, which the API persists in the face collection, the API also returns face details that are not persisted in the collection (see the FaceDetail element in the following example response).

Note

DetectFaces returns the same information, so you don't need to call both DetectFaces and IndexFaces for the same image.

{ "FaceRecords": [ { "FaceDetail": { "BoundingBox": { "Width": 0.6154, "Top": 0.2442, "Left": 0.1765, "Height": 0.4692 }, "Landmarks": [ { "Y": 0.41730427742004395, "X": 0.36835095286369324, "Type": "eyeLeft" }, { "Y": 0.4281611740589142, "X": 0.5960656404495239, "Type": "eyeRight" }, { "Y": 0.5349795818328857, "X": 0.47817257046699524, "Type": "nose" }, { "Y": 0.5721957683563232, "X": 0.352621465921402, "Type": "mouthLeft" }, { "Y": 0.5792245864868164, "X": 0.5936088562011719, "Type": "mouthRight" } ], "Pose": { "Yaw": 1.8526556491851807, "Roll": 3.623055934906006, "Pitch": -10.605680465698242 }, "Quality": { "Sharpness": 130.0, "Brightness": 49.129302978515625 }, "Confidence": 99.99968719482422 }, "Face": { "BoundingBox": { "Width": 0.6154, "Top": 0.2442, "Left": 0.1765, "Height": 0.4692 }, "FaceId": "84de1c86-5059-53f2-a432-34ebb704615d", "Confidence": 99.9997, "ImageId": "d38ebf91-1a11-58fc-ba42-f978b3f32f60" } } ], "OrientationCorrection": "ROTATE_0" }