Amazon Rekognition
Developer Guide

Use Cases that Involve Public Safety

In addition to the recommendations listed in Best Practices for Sensors, Input Images, and Videos and Guidance for using IndexFaces, use the following best practices when deploying face detection and recognition systems in use cases that involve public safety:

  • Use confidence thresholds of 99% or higher to reduce errors and false positives.

  • Involve human reviewers to verify results received from a face detection or recognition system, and don't make decisions based on system output without additional human review. Face detection and recognition systems should serve as a tool to help narrow the field and allow humans to expeditiously review and consider options.

  • Be transparent about the use of face detection and recognition systems in these use cases. Wherever possible, inform end users and subjects about the use of these systems, obtain consent for such use, and provide a mechanism where end users and subjects can provide feedback to improve the system.

When you use facial recognition in law enforcement scenarios, you should use confidence thresholds of 99% or higher and not make decisions based solely on the predictions returned from facial recognition software. A human should confirm facial recognition software predictions and also ensure that a person’s civil rights aren’t violated.

For example, for any law enforcement use of facial recognition to identify a person of interest in a criminal investigation, law enforcement agents should manually review the match before making any decision to interview or detain the individual. In all cases, facial recognition matches should be viewed in the context of other compelling evidence, and shouldn't be used as the sole determinant for taking action. However, if facial recognition is used for non-law-enforcement scenarios (for example, for unlocking a phone or authenticating an employee’s identity to access a secure, private office building), these decisions wouldn't require a manual audit because they wouldn't impinge on an individual’s civil rights.
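The threshold-plus-human-review pattern described above can be sketched in Python with the AWS SDK for Python (Boto3). This is a minimal illustration, assuming Boto3 is installed and AWS credentials are configured; the helper names (search_face_matches, to_review_queue) and the MaxFaces cap are illustrative choices, not part of the Amazon Rekognition API:

```python
def search_face_matches(collection_id, image_bytes, threshold=99):
    """Search a face collection with a high FaceMatchThreshold so that
    Rekognition returns only candidates at 99% similarity or above."""
    import boto3  # imported here so the pure helper below works offline
    client = boto3.client("rekognition")
    response = client.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": image_bytes},
        FaceMatchThreshold=threshold,  # 99 or higher for these use cases
        MaxFaces=5,  # illustrative cap on the candidate set
    )
    return response["FaceMatches"]

def to_review_queue(face_matches):
    """Turn raw matches into items for a human reviewer. Matches are
    leads to narrow the field; no action is taken without human review."""
    return [
        {"face_id": m["Face"]["FaceId"], "similarity": m["Similarity"]}
        for m in face_matches
    ]
```

The key point is that search_face_matches never drives a decision directly; its output only populates a queue that a human reviewer works through.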

If you're planning to use a face detection or face recognition system for use cases that involve public safety, you should employ the best practices mentioned previously. In addition, you should consult published resources on the use of face recognition. These include the Face Recognition Policy Development Template For Use In Criminal Intelligence and Investigative Activities provided by the Bureau of Justice Assistance of the Department of Justice. The template provides several facial recognition and biometric-related resources and is designed to provide law enforcement and public safety agencies with a framework for developing face recognition policies that comply with applicable laws, reduce privacy risks, and establish entity accountability and oversight. Additional resources include Best Privacy Practices for Commercial Use of Facial Recognition by the National Telecommunications and Information Administration and Best Practices for Common Uses of Facial Recognition by the staff of the Federal Trade Commission. Other resources may be developed and published in the future, and you should continuously educate yourself on this important topic.

As a reminder, you must comply with all applicable laws in your use of AWS services, and you may not use any AWS service in a manner that violates the rights of others or may be harmful to others. This means that you may not use AWS services for public safety use cases in a way that illegally discriminates against a person or violates a person's due process, privacy, or civil liberties. You should obtain appropriate legal advice as necessary to review any legal requirements or questions regarding your use case.

Using Amazon Rekognition to Help Public Safety

Amazon Rekognition can help in public safety and law enforcement scenarios, such as finding lost children, combating human trafficking, or preventing crimes. In public safety and law enforcement scenarios, consider the following:

  • Use Amazon Rekognition as the first step in finding possible matches. The responses from Amazon Rekognition face operations allow you to quickly get a set of potential matches for further consideration.

  • Don’t use Amazon Rekognition responses to make autonomous decisions for scenarios that require analysis by a human. An example of this is determining who committed a crime. Instead, have a human review the responses, and use that information to inform further decisions.

  • Use a 99% similarity threshold for scenarios where highly accurate face similarity matches are necessary. An example of this is authenticating access to a building.

  • When civil rights are a concern, such as use cases involving law enforcement, use confidence thresholds of 99% or higher and employ human review of facial recognition predictions to ensure that a person’s civil rights aren't violated.

  • Use a similarity threshold lower than 99% for scenarios that benefit from a larger set of potential matches. An example of this is finding missing persons. If necessary, you can use the Similarity response attribute to determine how similar potential matches are to the person you want to recognize. 

  • Have a plan for false-positive face matches that are returned by Amazon Rekognition. For example, improve matching by using multiple images of the same person when you build the index with the IndexFaces operation. For more information, see Guidance for using IndexFaces.
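The last two points above can be sketched in Python with the AWS SDK for Python (Boto3). This is an illustrative sketch, assuming Boto3 is installed and AWS credentials are configured; the helper names, the one-image-per-call enrollment loop, and the example threshold values are assumptions for illustration:

```python
def filter_by_similarity(face_matches, min_similarity):
    """Keep matches at or above the threshold chosen for the scenario:
    99 for authentication-style uses, lower (for example, 80) when a
    larger candidate set helps, such as searching for missing persons."""
    return [m for m in face_matches if m["Similarity"] >= min_similarity]

def index_person(collection_id, external_image_id, image_bytes_list):
    """Index several images of the same person to improve later matching
    and reduce false positives (see Guidance for using IndexFaces)."""
    import boto3  # imported here so filter_by_similarity works offline
    client = boto3.client("rekognition")
    for image_bytes in image_bytes_list:
        client.index_faces(
            CollectionId=collection_id,
            Image={"Bytes": image_bytes},
            ExternalImageId=external_image_id,
            MaxFaces=1,  # assumes one face per enrollment photo
            QualityFilter="AUTO",
        )
```

Raising min_similarity shrinks the candidate set and reduces false positives; lowering it widens the net at the cost of more candidates for human reviewers to examine.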

In other use cases (such as social media), we recommend that you use your best judgment to assess whether the Amazon Rekognition results need human review. Also, depending on your application's requirements, the similarity threshold can be lower.