Amazon Rekognition
Developer Guide

Comparing Faces

To compare a face in the source image with each face in the target image, use the CompareFaces operation.

Note

If the source image contains more than one face, the service detects the largest face and uses it for comparison.

To specify the minimum level of confidence in the match that you want returned in the response, use SimilarityThreshold in the request. For more information, see CompareFaces.
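As a sketch of how the request fits together, the following uses the AWS SDK for Python (Boto3) with images stored in Amazon S3. The bucket and object names are hypothetical:

```python
def build_request(source_bucket, source_key, target_bucket, target_key,
                  threshold=80.0):
    """Build CompareFaces request parameters for S3-hosted images."""
    return {
        "SourceImage": {"S3Object": {"Bucket": source_bucket, "Name": source_key}},
        "TargetImage": {"S3Object": {"Bucket": target_bucket, "Name": target_key}},
        "SimilarityThreshold": threshold,  # minimum similarity to report a match
    }

def compare_faces(params):
    """Call CompareFaces and return the matched and unmatched faces."""
    import boto3  # imported here so build_request works without the SDK

    client = boto3.client("rekognition")
    response = client.compare_faces(**params)
    return response["FaceMatches"], response["UnmatchedFaces"]

# Hypothetical bucket and object names.
request = build_request("my-bucket", "source.jpg", "my-bucket", "target.jpg",
                        threshold=90.0)
```

Faces below the similarity threshold are returned in UnmatchedFaces rather than FaceMatches.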

The API returns an array of face matches, source face information, image orientation, and an array of unmatched faces. The following is an example response.

{
    "FaceMatches": [{
        "Face": {
            "BoundingBox": {
                "Width": 0.5521978139877319,
                "Top": 0.1203877404332161,
                "Left": 0.23626373708248138,
                "Height": 0.3126954436302185
            },
            "Confidence": 99.98751068115234,
            "Pose": {
                "Yaw": -82.36799621582031,
                "Roll": -62.13221740722656,
                "Pitch": 0.8652129173278809
            },
            "Quality": {
                "Sharpness": 99.99880981445312,
                "Brightness": 54.49755096435547
            },
            "Landmarks": [{
                "Y": 0.2996366024017334,
                "X": 0.41685718297958374,
                "Type": "eyeLeft"
            }, {
                "Y": 0.2658946216106415,
                "X": 0.4414493441581726,
                "Type": "eyeRight"
            }, {
                "Y": 0.3465650677680969,
                "X": 0.48636093735694885,
                "Type": "nose"
            }, {
                "Y": 0.30935320258140564,
                "X": 0.6251809000968933,
                "Type": "mouthLeft"
            }, {
                "Y": 0.26942989230155945,
                "X": 0.6454493403434753,
                "Type": "mouthRight"
            }]
        },
        "Similarity": 100.0
    }],
    "SourceImageOrientationCorrection": "ROTATE_90",
    "TargetImageOrientationCorrection": "ROTATE_90",
    "UnmatchedFaces": [{
        "BoundingBox": {
            "Width": 0.4890109896659851,
            "Top": 0.6566604375839233,
            "Left": 0.10989011079072952,
            "Height": 0.278298944234848
        },
        "Confidence": 99.99992370605469,
        "Pose": {
            "Yaw": 51.51519012451172,
            "Roll": -110.32493591308594,
            "Pitch": -2.322134017944336
        },
        "Quality": {
            "Sharpness": 99.99671173095703,
            "Brightness": 57.23163986206055
        },
        "Landmarks": [{
            "Y": 0.8288310766220093,
            "X": 0.3133862614631653,
            "Type": "eyeLeft"
        }, {
            "Y": 0.7632885575294495,
            "X": 0.28091415762901306,
            "Type": "eyeRight"
        }, {
            "Y": 0.7417283654212952,
            "X": 0.3631140887737274,
            "Type": "nose"
        }, {
            "Y": 0.8081989884376526,
            "X": 0.48565614223480225,
            "Type": "mouthLeft"
        }, {
            "Y": 0.7548204660415649,
            "X": 0.46090251207351685,
            "Type": "mouthRight"
        }]
    }],
    "SourceImageFace": {
        "BoundingBox": {
            "Width": 0.5521978139877319,
            "Top": 0.1203877404332161,
            "Left": 0.23626373708248138,
            "Height": 0.3126954436302185
        },
        "Confidence": 99.98751068115234
    }
}
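A response like this can be inspected programmatically. As a sketch, the following Python snippet walks the FaceMatches array of a trimmed copy of the example response:

```python
# Trimmed copy of the example response above, for illustration.
response = {
    "FaceMatches": [{
        "Face": {
            "BoundingBox": {"Width": 0.5522, "Top": 0.1204,
                            "Left": 0.2363, "Height": 0.3127},
            "Confidence": 99.9875,
        },
        "Similarity": 100.0,
    }],
    "UnmatchedFaces": [],
}

# Report each matched face's similarity and where it sits in the target image.
for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"Similarity {match['Similarity']:.1f}%, "
          f"face at Left={box['Left']:.2f}, Top={box['Top']:.2f}")

print(f"{len(response['UnmatchedFaces'])} unmatched face(s)")
```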

In the response, note the following:

  • Face match information – The example shows that one face match was found in the target image. For that face match, it provides a bounding box and a confidence value (the level of confidence that Amazon Rekognition has that the bounding box contains a face). The similarity score of 100.0 indicates how similar the faces are. The face match information also includes an array of landmark locations.

    If multiple faces match, the FaceMatches array includes all of the face matches.

  • Source face information – The response includes information about the face from the source image that was used for comparison, including the bounding box and confidence value.

  • Image orientation information – The response includes the orientation correction for the source and target images. Your application needs this information to display the images correctly and to find the true location of the matched face in the target image.

  • Unmatched face information – The example shows one face that Amazon Rekognition found in the target image that didn't match the face analyzed in the source image. For that face, it provides a bounding box and a confidence value, indicating the level of confidence that Amazon Rekognition has that the bounding box contains a face. The face information also includes an array of landmark locations.

    If Amazon Rekognition finds multiple faces that don't match, the UnmatchedFaces array includes all of the faces that didn't match.
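One detail worth noting when you use these results: the BoundingBox values are ratios of the overall image width and height, so drawing a box around a face requires the image's pixel dimensions. A minimal sketch (the image size is hypothetical):

```python
def box_to_pixels(box, image_width, image_height):
    """Convert a ratio-based BoundingBox to pixel coordinates."""
    return {
        "Left": round(box["Left"] * image_width),
        "Top": round(box["Top"] * image_height),
        "Width": round(box["Width"] * image_width),
        "Height": round(box["Height"] * image_height),
    }

# Bounding box of the matched face from the example response,
# scaled to a hypothetical 1920x1080 target image.
match_box = {"Width": 0.5522, "Top": 0.1204, "Left": 0.2363, "Height": 0.3127}
pixels = box_to_pixels(match_box, image_width=1920, image_height=1080)
```

If an orientation correction such as ROTATE_90 applies, rotate the image (or transform the coordinates) before drawing the box.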