AWSRekognitionSearchFacesByImageRequest Class Reference

Inherits from AWSRequest : AWSModel : AWSMTLModel
Declared in AWSRekognitionModel.h
AWSRekognitionModel.m

  collectionId

ID of the collection to search.

@property (nonatomic, strong) NSString *collectionId

Declared In

AWSRekognitionModel.h

  faceMatchThreshold

(Optional) Specifies the minimum confidence a face match must have to be returned. For example, a threshold of 70 excludes any matches whose confidence is below 70%.

@property (nonatomic, strong) NSNumber *faceMatchThreshold

Declared In

AWSRekognitionModel.h
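
As a hedged illustration of how these two properties might be set (the collection name is a placeholder and the threshold value is just an example, not a recommendation):

#import <AWSRekognition/AWSRekognition.h>

// Build the request and restrict the search to one collection,
// dropping matches whose confidence is below 90%.
AWSRekognitionSearchFacesByImageRequest *request = [AWSRekognitionSearchFacesByImageRequest new];
request.collectionId = @"my-face-collection";   // placeholder collection ID
request.faceMatchThreshold = @90;               // NSNumber percentage, 0-100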

  image

Provides the source image either as bytes or an S3 object.

The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.

You may need to Base64-encode the image bytes depending on the language you are using and whether or not you are using the AWS SDK. For more information, see the Amazon Rekognition Developer Guide.

If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.

For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see the topic on managing access with resource policies in the Amazon Rekognition Developer Guide.

@property (nonatomic, strong) AWSRekognitionImage *image

Declared In

AWSRekognitionModel.h
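
Continuing the sketch above, and assuming the generated AWSRekognitionImage and AWSRekognitionS3Object model classes expose bytes, S3Object, bucket, and name properties, the source image can be supplied either way (the bucket and key names are placeholders):

// Option 1: reference an image that is already stored in Amazon S3.
// The bucket must be in the same region as the Rekognition client.
AWSRekognitionS3Object *s3Object = [AWSRekognitionS3Object new];
s3Object.bucket = @"my-photo-bucket";       // placeholder bucket name
s3Object.name = @"uploads/portrait.jpg";    // placeholder object key

AWSRekognitionImage *sourceImage = [AWSRekognitionImage new];
sourceImage.S3Object = s3Object;

// Option 2: pass the image bytes directly instead, e.g.
// sourceImage.bytes = UIImageJPEGRepresentation(photo, 0.9);

request.image = sourceImage;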

  maxFaces

Maximum number of faces to return. The operation returns up to this many matching faces, choosing those with the highest confidence in the match.

@property (nonatomic, strong) NSNumber *maxFaces

Declared In

AWSRekognitionModel.h
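
To complete the sketch, a hypothetical submission of the request. This assumes the default AWSServiceManager configuration has already been set up and that the generated AWSRekognition client provides defaultRekognition and searchFacesByImage:, which returns an AWSTask.

request.maxFaces = @5;   // return at most the 5 highest-confidence matches

AWSRekognition *rekognition = [AWSRekognition defaultRekognition];
[[rekognition searchFacesByImage:request] continueWithBlock:^id _Nullable(AWSTask * _Nonnull task) {
    if (task.error) {
        NSLog(@"SearchFacesByImage failed: %@", task.error);
    } else {
        AWSRekognitionSearchFacesByImageResponse *response = task.result;
        NSLog(@"Found %lu face match(es)", (unsigned long)response.faceMatches.count);
    }
    return nil;   // no follow-on task
}];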