AWS Tools for Windows PowerShell
Command Reference

Synopsis

Calls the Amazon Rekognition IndexFaces API operation.

Syntax

Add-REKDetectedFacesToCollection
-CollectionId <String>
-ImageBucket <String>
-ImageContent <Byte[]>
-DetectionAttribute <String[]>
-ExternalImageId <String>
-MaxFace <Int32>
-ImageName <String>
-QualityFilter <QualityFilter>
-ImageVersion <String>
-Force <SwitchParameter>

Description

Detects faces in the input image and adds them to the specified collection. Amazon Rekognition doesn't save the actual faces that are detected. Instead, the underlying detection algorithm first detects the faces in the input image. For each face, the algorithm extracts facial features into a feature vector and stores it in the backend database. Amazon Rekognition uses feature vectors when it performs face match and search operations using the SearchFaces and SearchFacesByImage operations. For more information, see Adding Faces to a Collection in the Amazon Rekognition Developer Guide. To get the number of faces in a collection, call DescribeCollection.

If you're using version 1.0 of the face detection model, IndexFaces indexes the 15 largest faces in the input image. Later versions of the face detection model index the 100 largest faces in the input image. If you're using version 4 or later of the face model, image orientation information is not returned in the OrientationCorrection field. To determine which version of the model you're using, call DescribeCollection and supply the collection ID. You can also get the model version from the value of FaceModelVersion in the response from IndexFaces. For more information, see Model Versioning in the Amazon Rekognition Developer Guide.

If you provide the optional ExternalImageId for the input image, Amazon Rekognition associates this ID with all faces that it detects. When you call the ListFaces operation, the response returns the external ID. You can use this external image ID to create a client-side index to associate the faces with each image. You can then use the index to find all faces in an image.

You can specify the maximum number of faces to index with the MaxFaces input parameter. This is useful when you want to index the largest faces in an image and don't want to index smaller faces, such as those belonging to people standing in the background.

The QualityFilter input parameter allows you to filter out detected faces that don't meet the required quality bar chosen by Amazon Rekognition. The quality bar is based on a variety of common use cases. By default, IndexFaces filters detected faces. You can also explicitly filter detected faces by specifying AUTO for the value of QualityFilter. If you do not want to filter detected faces, specify NONE. To use quality filtering, you need a collection associated with version 3 of the face model. To get the version of the face model associated with a collection, call DescribeCollection.

Information about faces detected in an image, but not indexed, is returned in an array of UnindexedFace objects, UnindexedFaces. Faces aren't indexed for reasons such as:
  • The number of faces detected exceeds the value of the MaxFaces request parameter.
  • The face is too small compared to the image dimensions.
  • The face is too blurry.
  • The image is too dark.
  • The face has an extreme pose.
In response, the IndexFaces operation returns an array of metadata for all detected faces, FaceRecords. This includes:
  • The bounding box, BoundingBox, of the detected face.
  • A confidence value, Confidence, which indicates the confidence that the bounding box contains a face.
  • A face ID, FaceId, assigned by the service for each face that's detected and stored.
  • An image ID, ImageId, assigned by the service for the input image.
If you request all facial attributes (by using the DetectionAttribute parameter), Amazon Rekognition returns detailed facial attributes, such as facial landmarks (for example, the location of the eyes and mouth) and other facial attributes like gender. If you provide the same image, specify the same collection, and use the same external ID in the IndexFaces operation, Amazon Rekognition doesn't save duplicate face metadata.

The input image is passed either as base64-encoded image bytes or as a reference to an image in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file.

This operation requires permissions to perform the rekognition:IndexFaces action.
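As an illustration, the following sketch indexes the faces found in an S3-hosted JPEG into an existing collection. The collection name, bucket, object key, and external ID are placeholder values, and the example assumes the AWS Tools for PowerShell module is already imported and credentials are configured.

# Index faces from an image stored in Amazon S3 into an existing collection.
# "my-collection", "amzn-s3-demo-bucket", and "photos/group.jpg" are placeholders.
$response = Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" `
    -ImageName "photos/group.jpg" `
    -ExternalImageId "group.jpg"

# Each FaceRecord pairs the stored face (FaceId, ImageId) with its detection metadata.
$response.FaceRecords | ForEach-Object { $_.Face.FaceId }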

Parameters

-CollectionId <String>
The ID of an existing collection to which you want to add the faces that are detected in the input images.
Required? False
Position? 1
Accept pipeline input? True (ByValue, ByPropertyName)
-DetectionAttribute <String[]>
An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete. If you provide both, ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).
Required? False
Position? Named
Accept pipeline input? False
Aliases DetectionAttributes
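For example, a request for the full attribute set might look like the following sketch; the collection, bucket, and key names are placeholders.

# Request all facial attributes instead of the default subset.
Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg" `
    -DetectionAttribute "ALL"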
-ExternalImageId <String>
The ID you want to assign to all the faces detected in the image.
Required? False
Position? Named
Accept pipeline input? True (ByPropertyName)
-Force <SwitchParameter>
This parameter overrides confirmation prompts to force the cmdlet to continue its operation. This parameter should always be used with caution.
Required? False
Position? Named
Accept pipeline input? False
-ImageBucket <String>
Name of the S3 bucket.
Required? False
Position? Named
Accept pipeline input? False
-ImageContent <Byte[]>
Blob of image bytes up to 5 MB.
Required? False
Position? Named
Accept pipeline input? False
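As a sketch, the image bytes can be read from a local file and passed directly; the file path and collection name are placeholders.

# Read a local JPEG into a byte array and pass it as the image content.
$bytes = [System.IO.File]::ReadAllBytes("C:\images\face.jpg")
Add-REKDetectedFacesToCollection -CollectionId "my-collection" -ImageContent $bytes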
-ImageName <String>
S3 object key name.
Required? False
Position? Named
Accept pipeline input? False
-ImageVersion <String>
If the bucket has versioning enabled, you can specify the object version.
Required? False
Position? Named
Accept pipeline input? False
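For a versioning-enabled bucket, the object version can be supplied alongside the bucket and key, as in the following sketch; the version ID shown is a placeholder.

# Index faces from a specific version of an S3 object.
Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg" `
    -ImageVersion "EXAMPLE-OBJECT-VERSION-ID"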
-MaxFace <Int32>
The maximum number of faces to index. The value of MaxFaces must be greater than or equal to 1. IndexFaces returns no more than 100 detected faces in an image, even if you specify a larger value for MaxFaces. If IndexFaces detects more faces than the value of MaxFaces, the faces with the lowest quality are filtered out first. If there are still more faces than the value of MaxFaces, the faces with the smallest bounding boxes are filtered out (up to the number that's needed to satisfy the value of MaxFaces). Information about the unindexed faces is available in the UnindexedFaces array. The faces that are returned by IndexFaces are sorted by the largest face bounding box size to the smallest size, in descending order. MaxFaces can be used with a collection associated with any version of the face model.
Required? False
Position? Named
Accept pipeline input? False
Aliases MaxFaces
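The following sketch limits indexing to the two largest, highest-quality faces and then inspects the reasons the remaining faces were skipped; all names are placeholders.

# Index at most two faces; the rest are reported in UnindexedFaces with a reason.
$response = Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg" `
    -MaxFace 2
$response.UnindexedFaces | ForEach-Object { $_.Reasons }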
-QualityFilter <QualityFilter>
A filter that specifies how much filtering is done to identify faces that are detected with low quality. Filtered faces aren't indexed. If you specify AUTO, filtering prioritizes the identification of faces that don't meet the required quality bar chosen by Amazon Rekognition. The quality bar is based on a variety of common use cases. Low-quality detections can occur for a number of reasons. Some examples are an object that's misidentified as a face, a face that's too blurry, or a face with a pose that's too extreme to use. If you specify NONE, no filtering is performed. The default value is AUTO. To use quality filtering, the collection you are using must be associated with version 3 of the face model.
Required? False
Position? Named
Accept pipeline input? False
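For instance, quality filtering can be disabled so that every detected face is considered for indexing, as in this sketch; names are placeholders.

# Disable the quality filter; low-quality detections are not filtered out before indexing.
Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg" `
    -QualityFilter "NONE"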

Common Credential and Region Parameters

-AccessKey <String>
The AWS access key for the user account. This can be a temporary access key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? False
-Credential <AWSCredentials>
An AWSCredentials object instance containing access and secret key information, and optionally a token for session-based credentials.
Required? False
Position? Named
Accept pipeline input? False
-ProfileLocation <String>

Used to specify the name and location of the ini-format credential file (shared with the AWS CLI and other AWS SDKs).

If this optional parameter is omitted this cmdlet will search the encrypted credential file used by the AWS SDK for .NET and AWS Toolkit for Visual Studio first. If the profile is not found then the cmdlet will search in the ini-format credential file at the default location: (user's home directory)\.aws\credentials. Note that the encrypted credential file is not supported on all platforms. It will be skipped when searching for profiles on Windows Nano Server, Mac, and Linux platforms.

If this parameter is specified then this cmdlet will only search the ini-format credential file at the location given.

As the current folder can vary in a shell or during script execution, it is recommended that you specify a fully qualified path instead of a relative path.

Required? False
Position? Named
Accept pipeline input? False
-ProfileName <String>
The user-defined name of an AWS credentials or SAML-based role profile containing credential information. The profile is expected to be found in the secure credential file shared with the AWS SDK for .NET and AWS Toolkit for Visual Studio. You can also specify the name of a profile stored in the .ini-format credential file used with the AWS CLI and other AWS SDKs.
Required? False
Position? Named
Accept pipeline input? False
-NetworkCredential <PSCredential>
Used with SAML-based authentication when ProfileName references a SAML role profile. Contains the network credentials to be supplied during authentication with the configured identity provider's endpoint. This parameter is not required if the user's default network identity can or should be used during authentication.
Required? False
Position? Named
Accept pipeline input? False
-SecretKey <String>
The AWS secret key for the user account. This can be a temporary secret key if the corresponding session token is supplied to the -SessionToken parameter.
Required? False
Position? Named
Accept pipeline input? False
-SessionToken <String>
The session token if the access and secret keys are temporary session-based credentials.
Required? False
Position? Named
Accept pipeline input? False
-Region <String>
The system name of the AWS region in which the operation should be invoked. For example, us-east-1 or eu-west-1.
Required? False
Position? Named
Accept pipeline input? False
-EndpointUrl <String>

The endpoint to make the call against.

Note: This parameter is primarily for internal AWS use; it is not required and should not be specified for normal usage. The cmdlets normally determine which endpoint to call based on the region specified with the -Region parameter or set as the default in the shell (via Set-DefaultAWSRegion). Only specify this parameter if you must direct the call to a specific custom endpoint.

Required? False
Position? Named
Accept pipeline input? False
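These credential and region parameters can be combined with any invocation of the cmdlet. As a sketch, the following call uses a stored credential profile and an explicit region; the profile, collection, bucket, and key names are placeholders.

# Invoke the operation with a named credential profile and an explicit region.
Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg" `
    -ProfileName "my-profile" -Region "us-east-1"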

Inputs

You can pipe a String object to this cmdlet for the CollectionId parameter.
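For example, the collection ID can be supplied from the pipeline, as in this sketch (names are placeholders):

# The collection ID is bound from the pipeline by value.
"my-collection" | Add-REKDetectedFacesToCollection `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg"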

Outputs

This cmdlet returns an Amazon.Rekognition.Model.IndexFacesResponse object containing multiple properties. The object can also be referenced from properties attached to the cmdlet entry in the $AWSHistory stack.
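As a sketch of working with the response object and the history stack (names are placeholders; the $AWSHistory example assumes the default response-recording behavior):

$response = Add-REKDetectedFacesToCollection -CollectionId "my-collection" `
    -ImageBucket "amzn-s3-demo-bucket" -ImageName "photos/group.jpg"

# Properties of the returned IndexFacesResponse object.
$response.FaceModelVersion
$response.FaceRecords.Count

# The most recent service response can also be retrieved from the $AWSHistory stack.
$AWSHistory.LastServiceResponse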

Supported Version

AWS Tools for PowerShell: 2.x.y.z