Detecting faces in a stored video

Amazon Rekognition Video can detect faces in videos that are stored in an Amazon S3 bucket, and it provides the following information:

  • The time or times faces are detected in a video.

  • The location of faces in the video frame at the time they were detected.

  • Facial landmarks, such as the position of the left eye.

  • Additional attributes, as explained on the Guidelines on face attributes page.

Amazon Rekognition Video face detection in stored videos is an asynchronous operation. To start the detection of faces in a video, call StartFaceDetection. Amazon Rekognition Video publishes the completion status of the video analysis to an Amazon Simple Notification Service (Amazon SNS) topic. If the video analysis is successful, you can call GetFaceDetection to get the results of the video analysis. For more information about starting video analysis and getting the results, see Calling Amazon Rekognition Video operations.
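To see the shape of this asynchronous flow end to end, here is a minimal sketch that polls GetFaceDetection for the job status instead of waiting for the Amazon SNS notification. It assumes a boto3 client with configured credentials and Region; bucket-name and video-name are placeholder values, and the Amazon SNS/Amazon SQS pattern referenced below is the preferred approach for production use.

    # A minimal sketch of the asynchronous flow, assuming a configured boto3
    # client. 'bucket-name' and 'video-name' are placeholders. Polling is shown
    # for brevity; prefer the Amazon SNS/Amazon SQS pattern for production.
    import time
    import boto3

    rek = boto3.client('rekognition')

    # Start the asynchronous face detection job and keep the job ID.
    start = rek.start_face_detection(
        Video={'S3Object': {'Bucket': 'bucket-name', 'Name': 'video-name'}})
    job_id = start['JobId']

    # Poll until the job leaves IN_PROGRESS (it ends as SUCCEEDED or FAILED).
    while True:
        result = rek.get_face_detection(JobId=job_id)
        if result['JobStatus'] != 'IN_PROGRESS':
            break
        time.sleep(5)

    print('Job status: ' + result['JobStatus'])

Once the job status is SUCCEEDED, the same GetFaceDetection call returns the detected faces, as shown in the examples in this procedure.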

This procedure expands on the code in Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK), which uses an Amazon Simple Queue Service (Amazon SQS) queue to get the completion status of a video analysis request.

To detect faces in a video stored in an Amazon S3 bucket (SDK)
  1. Perform Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK).

  2. Add the following code to the class VideoDetect that you created in step 1.

    AWS CLI
    • In the following code example, change bucket-name and video-name to the names of the Amazon S3 bucket and video file that you specified in step 2.

    • Change region-name to the AWS Region that you're using, and replace the value of profile-name with the name of your developer profile.

    • Change topic-ARN to the ARN of the Amazon SNS topic that you created in step 3 of Setting up Amazon Rekognition Video.

    • Change role-ARN to the ARN of the IAM service role that you created in step 7 of Setting up Amazon Rekognition Video.

    aws rekognition start-face-detection --video '{"S3Object":{"Bucket":"bucket-name","Name":"video-name"}}' \
        --notification-channel '{"SNSTopicArn":"topic-ARN","RoleArn":"role-ARN"}' \
        --region region-name --profile profile-name

    If you're accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with backslashes (that is, \) to deal with any parser errors that you might encounter. For an example, see the following:

    aws rekognition start-face-detection --video "{\"S3Object\":{\"Bucket\":\"bucket-name\",\"Name\":\"video-name\"}}" --notification-channel "{\"SNSTopicArn\":\"topic-ARN\",\"RoleArn\":\"role-ARN\"}" --region region-name --profile profile-name

    After you run the StartFaceDetection operation and have the job ID number, run the following GetFaceDetection operation and supply the job ID number:

    aws rekognition get-face-detection --job-id job-id-number --profile profile-name
    Java
    //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    //SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

    private static void StartFaceDetection(String bucket, String video) throws Exception {

        NotificationChannel channel = new NotificationChannel()
                .withSNSTopicArn(snsTopicArn)
                .withRoleArn(roleArn);

        StartFaceDetectionRequest req = new StartFaceDetectionRequest()
                .withVideo(new Video()
                        .withS3Object(new S3Object()
                                .withBucket(bucket)
                                .withName(video)))
                .withNotificationChannel(channel);

        StartFaceDetectionResult startLabelDetectionResult = rek.startFaceDetection(req);
        startJobId = startLabelDetectionResult.getJobId();
    }

    private static void GetFaceDetectionResults() throws Exception {

        int maxResults = 10;
        String paginationToken = null;
        GetFaceDetectionResult faceDetectionResult = null;

        do {
            if (faceDetectionResult != null) {
                paginationToken = faceDetectionResult.getNextToken();
            }

            faceDetectionResult = rek.getFaceDetection(new GetFaceDetectionRequest()
                    .withJobId(startJobId)
                    .withNextToken(paginationToken)
                    .withMaxResults(maxResults));

            VideoMetadata videoMetaData = faceDetectionResult.getVideoMetadata();

            System.out.println("Format: " + videoMetaData.getFormat());
            System.out.println("Codec: " + videoMetaData.getCodec());
            System.out.println("Duration: " + videoMetaData.getDurationMillis());
            System.out.println("FrameRate: " + videoMetaData.getFrameRate());

            //Show faces, confidence, and detection times
            List<FaceDetection> faces = faceDetectionResult.getFaces();

            for (FaceDetection face : faces) {
                long seconds = face.getTimestamp() / 1000;
                System.out.print("Sec: " + Long.toString(seconds) + " ");
                System.out.println(face.getFace().toString());
                System.out.println();
            }
        } while (faceDetectionResult != null && faceDetectionResult.getNextToken() != null);
    }

    In the function main, replace the lines:

    StartLabelDetection(bucket, video);

    if (GetSQSMessageSuccess()==true)
        GetLabelDetectionResults();

    with:

    StartFaceDetection(bucket, video);

    if (GetSQSMessageSuccess()==true)
        GetFaceDetectionResults();
    Java V2

    This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.

    //snippet-start:[rekognition.java2.recognize_video_faces.import]
    import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.rekognition.RekognitionClient;
    import software.amazon.awssdk.services.rekognition.model.*;
    import java.util.List;
    //snippet-end:[rekognition.java2.recognize_video_faces.import]

    /**
     * Before running this Java V2 code example, set up your development environment, including your credentials.
     *
     * For more information, see the following documentation topic:
     *
     * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
     */
    public class VideoDetectFaces {
        private static String startJobId = "";

        public static void main(String[] args) {
            final String usage = "\n" +
                "Usage: " +
                "   <bucket> <video> <topicArn> <roleArn>\n\n" +
                "Where:\n" +
                "   bucket - The name of the bucket in which the video is located (for example, myBucket). \n\n" +
                "   video - The name of the video (for example, people.mp4). \n\n" +
                "   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic. \n\n" +
                "   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use. \n\n";

            if (args.length != 4) {
                System.out.println(usage);
                System.exit(1);
            }

            String bucket = args[0];
            String video = args[1];
            String topicArn = args[2];
            String roleArn = args[3];

            Region region = Region.US_EAST_1;
            RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .credentialsProvider(ProfileCredentialsProvider.create("profile-name"))
                .build();

            NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

            StartFaceDetection(rekClient, channel, bucket, video);
            GetFaceResults(rekClient);
            System.out.println("This example is done!");
            rekClient.close();
        }

        // snippet-start:[rekognition.java2.recognize_video_faces.main]
        public static void StartFaceDetection(RekognitionClient rekClient,
                                              NotificationChannel channel,
                                              String bucket,
                                              String video) {
            try {
                S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

                Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

                StartFaceDetectionRequest faceDetectionRequest = StartFaceDetectionRequest.builder()
                    .jobTag("Faces")
                    .faceAttributes(FaceAttributes.ALL)
                    .notificationChannel(channel)
                    .video(vidOb)
                    .build();

                StartFaceDetectionResponse startLabelDetectionResult = rekClient.startFaceDetection(faceDetectionRequest);
                startJobId = startLabelDetectionResult.jobId();

            } catch(RekognitionException e) {
                System.out.println(e.getMessage());
                System.exit(1);
            }
        }

        public static void GetFaceResults(RekognitionClient rekClient) {
            try {
                String paginationToken = null;
                GetFaceDetectionResponse faceDetectionResponse = null;
                boolean finished = false;
                String status;
                int yy = 0;

                do {
                    if (faceDetectionResponse != null)
                        paginationToken = faceDetectionResponse.nextToken();

                    GetFaceDetectionRequest recognitionRequest = GetFaceDetectionRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .maxResults(10)
                        .build();

                    // Wait until the job succeeds
                    while (!finished) {
                        faceDetectionResponse = rekClient.getFaceDetection(recognitionRequest);
                        status = faceDetectionResponse.jobStatusAsString();

                        if (status.compareTo("SUCCEEDED") == 0)
                            finished = true;
                        else {
                            System.out.println(yy + " status is: " + status);
                            Thread.sleep(1000);
                        }
                        yy++;
                    }

                    finished = false;

                    // Proceed when the job is done - otherwise VideoMetadata is null
                    VideoMetadata videoMetaData = faceDetectionResponse.videoMetadata();
                    System.out.println("Format: " + videoMetaData.format());
                    System.out.println("Codec: " + videoMetaData.codec());
                    System.out.println("Duration: " + videoMetaData.durationMillis());
                    System.out.println("FrameRate: " + videoMetaData.frameRate());
                    System.out.println("Job");

                    // Show face information
                    List<FaceDetection> faces = faceDetectionResponse.faces();
                    for (FaceDetection face : faces) {
                        String age = face.face().ageRange().toString();
                        String smile = face.face().smile().toString();
                        System.out.println("The detected face is estimated to be " + age + " years old.");
                        System.out.println("There is a smile: " + smile);
                    }

                } while (faceDetectionResponse != null && faceDetectionResponse.nextToken() != null);

            } catch(RekognitionException | InterruptedException e) {
                System.out.println(e.getMessage());
                System.exit(1);
            }
        }
        // snippet-end:[rekognition.java2.recognize_video_faces.main]
    }
    Python
    #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #SPDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

    # ============== Faces===============
    def StartFaceDetection(self):
        response = self.rek.start_face_detection(
            Video={'S3Object': {'Bucket': self.bucket, 'Name': self.video}},
            NotificationChannel={'RoleArn': self.roleArn, 'SNSTopicArn': self.snsTopicArn})

        self.startJobId = response['JobId']
        print('Start Job Id: ' + self.startJobId)

    def GetFaceDetectionResults(self):
        maxResults = 10
        paginationToken = ''
        finished = False

        while finished == False:
            response = self.rek.get_face_detection(JobId=self.startJobId,
                                                   MaxResults=maxResults,
                                                   NextToken=paginationToken)

            print('Codec: ' + response['VideoMetadata']['Codec'])
            print('Duration: ' + str(response['VideoMetadata']['DurationMillis']))
            print('Format: ' + response['VideoMetadata']['Format'])
            print('Frame rate: ' + str(response['VideoMetadata']['FrameRate']))
            print()

            for faceDetection in response['Faces']:
                print('Face: ' + str(faceDetection['Face']))
                print('Confidence: ' + str(faceDetection['Face']['Confidence']))
                print('Timestamp: ' + str(faceDetection['Timestamp']))
                print()

            if 'NextToken' in response:
                paginationToken = response['NextToken']
            else:
                finished = True

    In the function main, replace the lines:

    analyzer.StartLabelDetection()
    if analyzer.GetSQSMessageSuccess()==True:
        analyzer.GetLabelDetectionResults()

    with:

    analyzer.StartFaceDetection()
    if analyzer.GetSQSMessageSuccess()==True:
        analyzer.GetFaceDetectionResults()
    Note

    If you've already run a video example other than Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK), the function name to replace is different.

  3. Run the code. Information about the faces that were detected in the video is shown.

GetFaceDetection operation response

GetFaceDetection returns an array (Faces) that contains information about the faces detected in the video. An array element, FaceDetection, exists for each time a face is detected in the video. The array elements are returned sorted by time, in milliseconds, since the start of the video.

The following example is a partial JSON response from GetFaceDetection. In the response, note the following:

  • Bounding box – The coordinates of the bounding box that surrounds the face.

  • Confidence – The level of confidence that the bounding box contains a face.

  • Face landmarks – An array of face landmarks. For each landmark (such as the left eye, right eye, and mouth), the response provides the x and y coordinates.

  • Face attributes – A set of face attributes, which includes: AgeRange, Beard, Emotions, Eyeglasses, EyesOpen, Gender, Mustache, MouthOpen, Smile, and Sunglasses. The value can be of different types, such as a Boolean (whether a person is wearing sunglasses) or a string (whether the person is male or female). In addition, for most attributes, the response also provides a confidence in the detected value for the attribute. Note that while the FaceOccluded and EyeDirection attributes are supported when using DetectFaces, they aren't supported when analyzing videos with StartFaceDetection and GetFaceDetection.

  • Timestamp – The time the face was detected in the video.

  • Page information – The example shows one page of face detection information. You can specify how many person elements to return in the MaxResults input parameter for GetFaceDetection. If more results than MaxResults exist, GetFaceDetection returns a token (NextToken) that's used to get the next page of results, as shown in the sketch after this list. For more information, see Getting Amazon Rekognition Video analysis results.

  • Video information – The response includes information about the video format (VideoMetadata) in each page of information that's returned by GetFaceDetection.

  • Quality – Describes the brightness and the sharpness of the face.

  • Pose – Describes the rotation of the face inside the image.
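As an illustration of these fields, here is a minimal sketch that pages through a GetFaceDetection response and prints the timestamp, bounding box, confidence, and quality of each detected face. It assumes a configured boto3 client and a job ID returned by StartFaceDetection; print_face_details is a hypothetical helper, and the field names match the JSON example that follows.

    # A sketch of paging through GetFaceDetection results, assuming boto3 and
    # a job ID returned by StartFaceDetection. Field names match the JSON below.
    import boto3

    rek = boto3.client('rekognition')

    def print_face_details(job_id):
        pagination_token = ''
        finished = False
        while not finished:
            response = rek.get_face_detection(JobId=job_id, MaxResults=10,
                                              NextToken=pagination_token)
            for detection in response['Faces']:
                face = detection['Face']
                # Timestamp is milliseconds from the start of the video;
                # array elements arrive sorted by this value.
                print('Timestamp (ms): ' + str(detection['Timestamp']))
                print('  BoundingBox: ' + str(face['BoundingBox']))
                print('  Confidence: ' + str(face['Confidence']))
                print('  Quality: ' + str(face['Quality']))
            # NextToken is present only while more pages remain.
            if 'NextToken' in response:
                pagination_token = response['NextToken']
            else:
                finished = True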

{ "Faces": [ { "Face": { "BoundingBox": { "Height": 0.23000000417232513, "Left": 0.42500001192092896, "Top": 0.16333332657814026, "Width": 0.12937499582767487 }, "Confidence": 99.97504425048828, "Landmarks": [ { "Type": "eyeLeft", "X": 0.46415066719055176, "Y": 0.2572723925113678 }, { "Type": "eyeRight", "X": 0.5068183541297913, "Y": 0.23705792427062988 }, { "Type": "nose", "X": 0.49765899777412415, "Y": 0.28383663296699524 }, { "Type": "mouthLeft", "X": 0.487221896648407, "Y": 0.3452930748462677 }, { "Type": "mouthRight", "X": 0.5142884850502014, "Y": 0.33167609572410583 } ], "Pose": { "Pitch": 15.966927528381348, "Roll": -15.547388076782227, "Yaw": 11.34195613861084 }, "Quality": { "Brightness": 44.80223083496094, "Sharpness": 99.95819854736328 } }, "Timestamp": 0 }, { "Face": { "BoundingBox": { "Height": 0.20000000298023224, "Left": 0.029999999329447746, "Top": 0.2199999988079071, "Width": 0.11249999701976776 }, "Confidence": 99.85971069335938, "Landmarks": [ { "Type": "eyeLeft", "X": 0.06842322647571564, "Y": 0.3010137975215912 }, { "Type": "eyeRight", "X": 0.10543643683195114, "Y": 0.29697132110595703 }, { "Type": "nose", "X": 0.09569807350635529, "Y": 0.33701086044311523 }, { "Type": "mouthLeft", "X": 0.0732642263174057, "Y": 0.3757539987564087 }, { "Type": "mouthRight", "X": 0.10589495301246643, "Y": 0.3722417950630188 } ], "Pose": { "Pitch": -0.5589138865470886, "Roll": -5.1093974113464355, "Yaw": 18.69594955444336 }, "Quality": { "Brightness": 43.052337646484375, "Sharpness": 99.68138885498047 } }, "Timestamp": 0 }, { "Face": { "BoundingBox": { "Height": 0.2177777737379074, "Left": 0.7593749761581421, "Top": 0.13333334028720856, "Width": 0.12250000238418579 }, "Confidence": 99.63436889648438, "Landmarks": [ { "Type": "eyeLeft", "X": 0.8005779385566711, "Y": 0.20915353298187256 }, { "Type": "eyeRight", "X": 0.8391435146331787, "Y": 0.21049551665782928 }, { "Type": "nose", "X": 0.8191410899162292, "Y": 0.2523227035999298 }, { "Type": "mouthLeft", "X": 0.8093273043632507, "Y": 0.29053622484207153 }, { "Type": "mouthRight", "X": 0.8366993069648743, "Y": 0.29101791977882385 } ], "Pose": { "Pitch": 3.165884017944336, "Roll": 1.4182015657424927, "Yaw": -11.151537895202637 }, "Quality": { "Brightness": 28.910892486572266, "Sharpness": 97.61507415771484 } }, "Timestamp": 0 }....... ], "JobStatus": "SUCCEEDED", "NextToken": "i7fj5XPV/fwviXqz0eag9Ow332Jd5G8ZGWf7hooirD/6V1qFmjKFOQZ6QPWUiqv29HbyuhMNqQ==", "VideoMetadata": { "Codec": "h264", "DurationMillis": 67301, "FileExtension": "mp4", "Format": "QuickTime / MOV", "FrameHeight": 1080, "FrameRate": 29.970029830932617, "FrameWidth": 1920 } }