AWSRekognitionSearchFacesByImageRequest Class Reference
Inherits from: AWSRequest : AWSModel : AWSMTLModel
ID of the collection to search.
@property (nonatomic, strong) NSString *collectionId
(Optional) Specifies the minimum confidence that a face match must have in order to be returned. For example, with a threshold of 70, no matches with a confidence lower than 70% are returned.
@property (nonatomic, strong) NSNumber *faceMatchThreshold
Provides the source image either as bytes or an S3 object.
The region for the S3 bucket containing the S3 object must match the region you use for Amazon Rekognition operations.
You may need to Base64-encode the image bytes depending on the language you are using and whether or not you are using the AWS SDK. For more information, see the image specifications in the Amazon Rekognition Developer Guide.
If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes using the Bytes property is not supported. You must first upload the image to an Amazon S3 bucket and then call the operation using the S3Object property.
For Amazon Rekognition to process an S3 object, the user must have permission to access the S3 object. For more information, see the resource-based policies topic in the Amazon Rekognition Developer Guide.
@property (nonatomic, strong) AWSRekognitionImage *image
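As a sketch of the S3Object path described above, the image property can be populated with a reference to an object in an S3 bucket (the bucket and key names here are placeholders):

```objc
// Reference an image stored in S3 rather than passing raw bytes.
// "my-bucket" and "photos/query-face.jpg" are hypothetical values.
AWSRekognitionS3Object *s3Object = [AWSRekognitionS3Object new];
s3Object.bucket = @"my-bucket";
s3Object.name = @"photos/query-face.jpg";

AWSRekognitionImage *image = [AWSRekognitionImage new];
image.s3Object = s3Object;
```

The bucket must reside in the same region as the Amazon Rekognition endpoint being called, and the caller needs read permission on the object.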
Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
@property (nonatomic, strong) NSNumber *maxFaces
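Putting the properties together, a request might be constructed as follows. This is a minimal sketch: the collection ID and image are placeholders, and it assumes a default AWSRekognition client has already been configured:

```objc
AWSRekognitionSearchFacesByImageRequest *request =
    [AWSRekognitionSearchFacesByImageRequest new];
request.collectionId = @"my-face-collection";   // hypothetical collection ID
request.faceMatchThreshold = @70;               // drop matches below 70% confidence
request.maxFaces = @5;                          // return at most 5 matches
request.image = image;                          // an AWSRekognitionImage built as above

// Invoke the operation via the default service client.
AWSRekognition *rekognition = [AWSRekognition defaultRekognition];
[[rekognition searchFacesByImage:request]
    continueWithBlock:^id(AWSTask<AWSRekognitionSearchFacesByImageResponse *> *task) {
        if (task.error) {
            NSLog(@"SearchFacesByImage failed: %@", task.error);
        } else {
            NSLog(@"Found %lu face matches",
                  (unsigned long)task.result.faceMatches.count);
        }
        return nil;
    }];
```

Because maxFaces caps the result set by match confidence, raising faceMatchThreshold and lowering maxFaces together is a common way to keep only high-quality matches.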