Amazon Rekognition
Developer Guide

Example 2: Storing Faces

This section provides working examples of storing faces in a collection, using both the AWS CLI and the AWS SDK for Java.

For information about the collections and storing faces API operations, see Storage-Based API Operations: Storing Faces and Searching Face Matches.

Storing Faces: Using the AWS CLI

The following index-faces AWS CLI command detects faces in the input image and, for each face, extracts facial features and stores the feature information in a database. In addition, the command stores metadata for each detected face in the specified face collection.

aws rekognition index-faces \
    --image '{"S3Object":{"Bucket":"bucket","Name":"S3ObjectKey"}}' \
    --collection-id "collection-id" \
    --region us-east-1 \
    --profile adminuser

For more information, see Storing Faces in a Face Collection: The IndexFaces Operation.

In the following example response, note the following:

  • Information in the FaceDetail element is not persisted on the server; it is returned only as part of this response. The FaceDetail element includes five facial landmarks (see the Landmarks element), pose, and quality.

  • Information in the Face element is the face metadata that is persisted on the server. This is the same information that the ListFaces operation returns in its response.

{ "FaceRecords": [ { "FaceDetail": { "BoundingBox": { "Width": 0.6154, "Top": 0.2442, "Left": 0.1765, "Height": 0.4692 }, "Landmarks": [ { "Y": 0.41730427742004395, "X": 0.36835095286369324, "Type": "eyeLeft" }, { "Y": 0.4281611740589142, "X": 0.5960656404495239, "Type": "eyeRight" }, { "Y": 0.5349795818328857, "X": 0.47817257046699524, "Type": "nose" }, { "Y": 0.5721957683563232, "X": 0.352621465921402, "Type": "mouthLeft" }, { "Y": 0.5792245864868164, "X": 0.5936088562011719, "Type": "mouthRight" } ], "Pose": { "Yaw": 1.8526556491851807, "Roll": 3.623055934906006, "Pitch": -10.605680465698242 }, "Quality": { "Sharpness": 130.0, "Brightness": 49.129302978515625 }, "Confidence": 99.99968719482422 }, "Face": { "BoundingBox": { "Width": 0.6154, "Top": 0.2442, "Left": 0.1765, "Height": 0.4692 }, "FaceId": "84de1c86-5059-53f2-a432-34ebb704615d", "Confidence": 99.9997, "ImageId": "d38ebf91-1a11-58fc-ba42-f978b3f32f60" } } ], "OrientationCorrection": "ROTATE_0" }

The following index-faces command specifies two optional parameters:

  • --detection-attributes parameter to request all facial attributes in the response.

  • --external-image-id parameter to specify an ID to associate with all faces in the image. You can use this ID on the client side. For example, you might maintain a client-side index of images and the faces in each image (one way to build such an index is sketched after the Java example later in this section).

aws rekognition index-faces \
    --image '{"S3Object":{"Bucket":"bucketname","Name":"object-key"}}' \
    --collection-id "collection-id" \
    --detection-attributes "ALL" \
    --external-image-id "example-image.jpg" \
    --region us-east-1 \
    --profile adminuser

In the following example response, note the additional information in the FaceDetail element, which is not persisted on the server:

  • 25 facial landmarks (compared to only five in the preceding example)

  • Nine facial attributes (for example, eyeglasses and beard)

  • Emotions (see the Emotions element)

The Face element provides the metadata that is persisted on the server.

{ "FaceRecords": [ { "FaceDetail": { "Confidence": 99.99968719482422, "Eyeglasses": { "Confidence": 99.94019317626953, "Value": false }, "Sunglasses": { "Confidence": 99.62261199951172, "Value": false }, "Gender": { "Confidence": 99.92701721191406, "Value": "Male" }, "Pose": { "Yaw": 1.8526556491851807, "Roll": 3.623055934906006, "Pitch": -10.605680465698242 }, "Emotions": [ { "Confidence": 99.38518524169922, "Type": "HAPPY" }, { "Confidence": 1.1799871921539307, "Type": "ANGRY" }, { "Confidence": 1.0325908660888672, "Type": "CONFUSED" } ], "EyesOpen": { "Confidence": 54.15227508544922, "Value": false }, "Quality": { "Sharpness": 130.0, "Brightness": 49.129302978515625 }, "BoundingBox": { "Width": 0.6153846383094788, "Top": 0.24423076212406158, "Left": 0.17654477059841156, "Height": 0.4692307710647583 }, "Smile": { "Confidence": 99.8236083984375, "Value": true }, "MouthOpen": { "Confidence": 88.39942169189453, "Value": true }, "Landmarks": [ { "Y": 0.41730427742004395, "X": 0.36835095286369324, "Type": "eyeLeft" }, { "Y": 0.4281611740589142, "X": 0.5960656404495239, "Type": "eyeRight" }, { "Y": 0.5349795818328857, "X": 0.47817257046699524, "Type": "nose" }, { "Y": 0.5721957683563232, "X": 0.352621465921402, "Type": "mouthLeft" }, { "Y": 0.5792245864868164, "X": 0.5936088562011719, "Type": "mouthRight" }, { "Y": 0.4163532555103302, "X": 0.3697868585586548, "Type": "leftPupil" }, { "Y": 0.42626339197158813, "X": 0.6037314534187317, "Type": "rightPupil" }, { "Y": 0.38954615592956543, "X": 0.27343833446502686, "Type": "leftEyeBrowLeft" }, { "Y": 0.3775958716869354, "X": 0.35098740458488464, "Type": "leftEyeBrowRight" }, { "Y": 0.39108505845069885, "X": 0.433648943901062, "Type": "leftEyeBrowUp" }, { "Y": 0.3952394127845764, "X": 0.5416828989982605, "Type": "rightEyeBrowLeft" }, { "Y": 0.38667190074920654, "X": 0.6171167492866516, "Type": "rightEyeBrowRight" }, { "Y": 0.40419116616249084, "X": 0.6827319264411926, "Type": "rightEyeBrowUp" }, { "Y": 0.41925403475761414, "X": 0.32195475697517395, "Type": "leftEyeLeft" }, { "Y": 0.4225293695926666, "X": 0.41227561235427856, "Type": "leftEyeRight" }, { "Y": 0.4096950888633728, "X": 0.3705553412437439, "Type": "leftEyeUp" }, { "Y": 0.4213259816169739, "X": 0.36738231778144836, "Type": "leftEyeDown" }, { "Y": 0.4294262230396271, "X": 0.5498995184898376, "Type": "rightEyeLeft" }, { "Y": 0.4327501356601715, "X": 0.6390777826309204, "Type": "rightEyeRight" }, { "Y": 0.42076829075813293, "X": 0.5977370738983154, "Type": "rightEyeUp" }, { "Y": 0.4326271116733551, "X": 0.5959710478782654, "Type": "rightEyeDown" }, { "Y": 0.5411174893379211, "X": 0.4253743588924408, "Type": "noseLeft" }, { "Y": 0.5450678467750549, "X": 0.5309309959411621, "Type": "noseRight" }, { "Y": 0.5795656442642212, "X": 0.47389525175094604, "Type": "mouthUp" }, { "Y": 0.6466911435127258, "X": 0.47393468022346497, "Type": "mouthDown" } ], "Mustache": { "Confidence": 99.75302124023438, "Value": false }, "Beard": { "Confidence": 89.82911682128906, "Value": false } }, "Face": { "BoundingBox": { "Width": 0.6153846383094788, "Top": 0.24423076212406158, "Left": 0.17654477059841156, "Height": 0.4692307710647583 }, "FaceId": "407b95a5-f8f7-50c7-bf86-27c9ba5c6931", "ExternalImageId": "example-image.jpg", "Confidence": 99.99968719482422, "ImageId": "af554b0d-fcb2-56e8-9658-69aec6c901be" } } ], "OrientationCorrection": "ROTATE_0" }

You can use the list-faces command to get a list of faces in a collection:

aws rekognition list-faces \
    --collection-id "collection-id" \
    --region us-east-1 \
    --profile adminuser

The command returns the faces in the collection along with a NextToken value in the response. You can use this token in a subsequent request (by adding the --next-token parameter to the AWS CLI command) to fetch the next set of faces, as shown next.
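For example, if a response includes a NextToken value, a follow-up call like the following fetches the next page of faces. The token string here is a placeholder; use the NextToken value returned by the previous call.

aws rekognition list-faces \
    --collection-id "collection-id" \
    --next-token "token-from-previous-response" \
    --region us-east-1 \
    --profile adminuser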

Storing Faces: Using the AWS SDK for Java

The following AWS SDK for Java example stores two faces in a collection and then lists the faces in the collection. Before you run the code, update it with an S3 bucket name, two object keys (.jpg objects), and an Amazon Rekognition face collection ID.

import java.util.List;

import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.Face;
import com.amazonaws.services.rekognition.model.FaceRecord;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.services.rekognition.model.IndexFacesRequest;
import com.amazonaws.services.rekognition.model.IndexFacesResult;
import com.amazonaws.services.rekognition.model.ListFacesRequest;
import com.amazonaws.services.rekognition.model.ListFacesResult;
import com.amazonaws.services.rekognition.model.S3Object;
import com.fasterxml.jackson.databind.ObjectMapper;

public class IndexAndListFacesExample {

    public static final String COLLECTION_ID = "collectionid";
    public static final String S3_BUCKET = "S3Bucket";

    public static void main(String[] args) throws Exception {
        AWSCredentials credentials;
        try {
            credentials = new ProfileCredentialsProvider("AdminUser").getCredentials();
        } catch (Exception e) {
            throw new AmazonClientException(
                "Cannot load the credentials from the credential profiles file. "
                + "Please make sure that your credentials file is at the correct "
                + "location (/Users/userid/.aws/credentials), and is in valid format.", e);
        }

        ObjectMapper objectMapper = new ObjectMapper();

        AmazonRekognition amazonRekognition = AmazonRekognitionClientBuilder
            .standard()
            .withRegion(Regions.US_WEST_2)
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .build();

        // 1. Index the first image and print the face IDs that were added.
        Image image = getImageUtil(S3_BUCKET, "image1.jpg");
        String externalImageId = "image1.jpg";
        IndexFacesResult indexFacesResult =
            callIndexFaces(COLLECTION_ID, externalImageId, "ALL", image, amazonRekognition);
        System.out.println(externalImageId + " added");
        List<FaceRecord> faceRecords = indexFacesResult.getFaceRecords();
        for (FaceRecord faceRecord : faceRecords) {
            System.out.println("Face detected: Faceid is " + faceRecord.getFace().getFaceId());
        }

        // 2. Index the second image.
        Image image2 = getImageUtil(S3_BUCKET, "image2.jpg");
        String externalImageId2 = "image2.jpg";
        indexFacesResult = callIndexFaces(COLLECTION_ID, externalImageId2, "ALL", image2, amazonRekognition);
        System.out.println(externalImageId2 + " added");
        faceRecords = indexFacesResult.getFaceRecords();
        for (FaceRecord faceRecord : faceRecords) {
            System.out.println("Face detected. Faceid is " + faceRecord.getFace().getFaceId());
        }

        // 3. Page through the faces with ListFaces, using the NextToken
        //    from each response to request the next page.
        ListFacesResult listFacesResult = null;
        System.out.println("Faces in collection " + COLLECTION_ID);
        String paginationToken = null;
        do {
            if (listFacesResult != null) {
                paginationToken = listFacesResult.getNextToken();
            }
            listFacesResult = callListFaces(COLLECTION_ID, 1, paginationToken, amazonRekognition);
            List<Face> faces = listFacesResult.getFaces();
            for (Face face : faces) {
                System.out.println(objectMapper.writerWithDefaultPrettyPrinter()
                    .writeValueAsString(face));
            }
        } while (listFacesResult != null && listFacesResult.getNextToken() != null);
    }

    private static IndexFacesResult callIndexFaces(String collectionId, String externalImageId,
            String attributes, Image image, AmazonRekognition amazonRekognition) {
        IndexFacesRequest indexFacesRequest = new IndexFacesRequest()
            .withImage(image)
            .withCollectionId(collectionId)
            .withExternalImageId(externalImageId)
            .withDetectionAttributes(attributes);
        return amazonRekognition.indexFaces(indexFacesRequest);
    }

    private static ListFacesResult callListFaces(String collectionId, int limit,
            String paginationToken, AmazonRekognition amazonRekognition) {
        ListFacesRequest listFacesRequest = new ListFacesRequest()
            .withCollectionId(collectionId)
            .withMaxResults(limit)
            .withNextToken(paginationToken);
        return amazonRekognition.listFaces(listFacesRequest);
    }

    private static Image getImageUtil(String bucket, String key) {
        return new Image()
            .withS3Object(new S3Object()
                .withBucket(bucket)
                .withName(key));
    }
}
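Because the example supplies an external image ID for each indexed image, a client application can map each image to the faces that were detected in it. The following is a minimal sketch of one way to build such a client-side index from IndexFacesResult objects. The ClientSideFaceIndex class and the buildImageIndex helper are illustrative names, not part of the AWS SDK.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.amazonaws.services.rekognition.model.FaceRecord;
import com.amazonaws.services.rekognition.model.IndexFacesResult;

public class ClientSideFaceIndex {

    // Hypothetical helper: given IndexFaces results keyed by the external
    // image ID that was passed to each request, builds a map from each
    // external image ID to the face IDs detected in that image.
    public static Map<String, List<String>> buildImageIndex(
            Map<String, IndexFacesResult> resultsByExternalImageId) {
        Map<String, List<String>> faceIdsByImage = new HashMap<>();
        for (Map.Entry<String, IndexFacesResult> entry : resultsByExternalImageId.entrySet()) {
            List<String> faceIds = new ArrayList<>();
            for (FaceRecord record : entry.getValue().getFaceRecords()) {
                faceIds.add(record.getFace().getFaceId());
            }
            faceIdsByImage.put(entry.getKey(), faceIds);
        }
        return faceIdsByImage;
    }
}

In the main method above, you could populate the input map by storing each IndexFacesResult under its external image ID as it is returned.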