AWSRekognitionIndexFacesRequest Class Reference

Inherits from AWSRequest : AWSModel : AWSMTLModel
Declared in AWSRekognitionModel.h


collectionId

The ID of an existing collection to which you want to add the faces that are detected in the input image.

@property (nonatomic, strong) NSString *collectionId

Declared In

AWSRekognitionModel.h


detectionAttributes

A list of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete.

If you provide both, ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).

@property (nonatomic, strong) NSArray<NSString*> *detectionAttributes

Declared In

AWSRekognitionModel.h


externalImageId

The ID you want to assign to all the faces detected in the image.

@property (nonatomic, strong) NSString *externalImageId

Declared In

AWSRekognitionModel.h


image

Provides the source image either as bytes or an S3 object.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

You may need to Base64-encode the image bytes depending on the language you are using and whether or not you are using the AWS SDK. For more information, see example4.

If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see manage-access-resource-policies.

@property (nonatomic, strong) AWSRekognitionImage *image

Declared In

AWSRekognitionModel.h
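
Putting these properties together, the following is a minimal sketch of building and sending an IndexFaces request with the AWS Mobile SDK for iOS. The collection ID, external image ID, and S3 bucket/key are placeholder values; the sketch assumes a default AWSRekognition client has already been configured with credentials and a region.

```objectivec
// Reference the image by S3 object (the bucket must be in the same
// region you use for Amazon Rekognition operations).
AWSRekognitionS3Object *s3Object = [AWSRekognitionS3Object new];
s3Object.bucket = @"my-bucket";           // placeholder bucket name
s3Object.name = @"photos/group.jpg";      // placeholder object key

AWSRekognitionImage *image = [AWSRekognitionImage new];
image.s3Object = s3Object;

AWSRekognitionIndexFacesRequest *request = [AWSRekognitionIndexFacesRequest new];
request.collectionId = @"my-collection";  // must be an existing collection
request.externalImageId = @"group-photo-1"; // assigned to every detected face
request.detectionAttributes = @[@"ALL"];  // return all facial attributes
request.image = image;

AWSRekognition *rekognition = [AWSRekognition defaultRekognition];
[[rekognition indexFaces:request] continueWithBlock:^id _Nullable(AWSTask *task) {
    if (task.error) {
        NSLog(@"IndexFaces failed: %@", task.error);
    } else {
        AWSRekognitionIndexFacesResponse *response = task.result;
        NSLog(@"Indexed %lu face(s)", (unsigned long)response.faceRecords.count);
    }
    return nil;
}];
```

To pass the image as bytes instead, set `image.bytes` to an NSData containing the image and leave `s3Object` nil.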