People pathing

Amazon Rekognition Video can create a track of the path that people take in a video and provide information such as:

  • The location of the person in the video frame at the time their path was tracked.

  • Detected facial landmarks, such as the position of the left eye.

Amazon Rekognition Video people pathing in stored videos is an asynchronous operation. To start the pathing of people in a video, call StartPersonTracking. Amazon Rekognition Video publishes the completion status of the video analysis to an Amazon Simple Notification Service topic. If the video analysis is successful, call GetPersonTracking to get the results of the video analysis. For information about calling the Amazon Rekognition Video API operations, see Calling Amazon Rekognition Video operations.
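
As a minimal sketch of this flow, the following AWS SDK for Python (Boto3) code starts a job and polls JobStatus directly instead of waiting for the Amazon SNS notification. The bucket and video names are hypothetical, and the full examples later in this topic use the notification channel instead.

import time

import boto3

rek = boto3.client('rekognition')

# Start the asynchronous people-pathing job. NotificationChannel is omitted
# here, so completion is detected by polling rather than through Amazon SNS.
job = rek.start_person_tracking(
    Video={'S3Object': {'Bucket': 'amzn-s3-demo-bucket', 'Name': 'people.mp4'}})

# Poll until the analysis finishes. GetPersonTracking reports JobStatus as
# IN_PROGRESS, SUCCEEDED, or FAILED.
while True:
    result = rek.get_person_tracking(JobId=job['JobId'])
    if result['JobStatus'] != 'IN_PROGRESS':
        break
    time.sleep(5)

print('Job ended with status: ' + result['JobStatus'])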

The following procedure shows how to track the paths of people through a video stored in an Amazon S3 bucket. The example expands on the code in Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK), which uses an Amazon Simple Queue Service queue to get the completion status of a video analysis request.

To detect people in a video stored in an Amazon S3 bucket (SDK)
  1. Perform Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK).

  2. Add the following code to the class VideoDetect that you created in step 1.

    Java
    //Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    //PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

    //Persons========================================================================
    private static void StartPersonDetection(String bucket, String video) throws Exception{

        NotificationChannel channel= new NotificationChannel()
                .withSNSTopicArn(snsTopicArn)
                .withRoleArn(roleArn);

        StartPersonTrackingRequest req = new StartPersonTrackingRequest()
                .withVideo(new Video()
                        .withS3Object(new S3Object()
                                .withBucket(bucket)
                                .withName(video)))
                .withNotificationChannel(channel);

        StartPersonTrackingResult startPersonDetectionResult = rek.startPersonTracking(req);
        startJobId=startPersonDetectionResult.getJobId();
    }

    private static void GetPersonDetectionResults() throws Exception{

        int maxResults=10;
        String paginationToken=null;
        GetPersonTrackingResult personTrackingResult=null;

        do{
            if (personTrackingResult !=null){
                paginationToken = personTrackingResult.getNextToken();
            }

            personTrackingResult = rek.getPersonTracking(new GetPersonTrackingRequest()
                    .withJobId(startJobId)
                    .withNextToken(paginationToken)
                    .withSortBy(PersonTrackingSortBy.TIMESTAMP)
                    .withMaxResults(maxResults));

            VideoMetadata videoMetaData=personTrackingResult.getVideoMetadata();

            System.out.println("Format: " + videoMetaData.getFormat());
            System.out.println("Codec: " + videoMetaData.getCodec());
            System.out.println("Duration: " + videoMetaData.getDurationMillis());
            System.out.println("FrameRate: " + videoMetaData.getFrameRate());

            //Show persons, confidence and detection times
            List<PersonDetection> detectedPersons= personTrackingResult.getPersons();

            for (PersonDetection detectedPerson: detectedPersons) {
                long seconds=detectedPerson.getTimestamp()/1000;
                System.out.print("Sec: " + Long.toString(seconds) + " ");
                System.out.println("Person Identifier: " + detectedPerson.getPerson().getIndex());
                System.out.println();
            }
        } while (personTrackingResult !=null && personTrackingResult.getNextToken() != null);
    }

    In the function main, replace the lines:

    StartLabelDetection(bucket, video);

    if (GetSQSMessageSuccess()==true)
        GetLabelDetectionResults();

    with:

    StartPersonDetection(bucket, video);

    if (GetSQSMessageSuccess()==true)
        GetPersonDetectionResults();
    Java V2

    This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.

    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.rekognition.RekognitionClient;
    import software.amazon.awssdk.services.rekognition.model.S3Object;
    import software.amazon.awssdk.services.rekognition.model.NotificationChannel;
    import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingRequest;
    import software.amazon.awssdk.services.rekognition.model.Video;
    import software.amazon.awssdk.services.rekognition.model.StartPersonTrackingResponse;
    import software.amazon.awssdk.services.rekognition.model.RekognitionException;
    import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingResponse;
    import software.amazon.awssdk.services.rekognition.model.GetPersonTrackingRequest;
    import software.amazon.awssdk.services.rekognition.model.VideoMetadata;
    import software.amazon.awssdk.services.rekognition.model.PersonDetection;
    import java.util.List;

    /**
     * Before running this Java V2 code example, set up your development
     * environment, including your credentials.
     *
     * For more information, see the following documentation topic:
     *
     * https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html
     */
    public class VideoPersonDetection {
        private static String startJobId = "";

        public static void main(String[] args) {
            final String usage = """

                Usage: <bucket> <video> <topicArn> <roleArn>

                Where:
                   bucket - The name of the bucket in which the video is located (for example, myBucket).\s
                   video - The name of video (for example, people.mp4).\s
                   topicArn - The ARN of the Amazon Simple Notification Service (Amazon SNS) topic.\s
                   roleArn - The ARN of the AWS Identity and Access Management (IAM) role to use.\s
                """;

            if (args.length != 4) {
                System.out.println(usage);
                System.exit(1);
            }

            String bucket = args[0];
            String video = args[1];
            String topicArn = args[2];
            String roleArn = args[3];
            Region region = Region.US_EAST_1;
            RekognitionClient rekClient = RekognitionClient.builder()
                .region(region)
                .build();

            NotificationChannel channel = NotificationChannel.builder()
                .snsTopicArn(topicArn)
                .roleArn(roleArn)
                .build();

            startPersonLabels(rekClient, channel, bucket, video);
            getPersonDetectionResults(rekClient);
            System.out.println("This example is done!");
            rekClient.close();
        }

        public static void startPersonLabels(RekognitionClient rekClient,
                                             NotificationChannel channel,
                                             String bucket,
                                             String video) {
            try {
                S3Object s3Obj = S3Object.builder()
                    .bucket(bucket)
                    .name(video)
                    .build();

                Video vidOb = Video.builder()
                    .s3Object(s3Obj)
                    .build();

                StartPersonTrackingRequest personTrackingRequest = StartPersonTrackingRequest.builder()
                    .jobTag("DetectingLabels")
                    .video(vidOb)
                    .notificationChannel(channel)
                    .build();

                StartPersonTrackingResponse labelDetectionResponse = rekClient.startPersonTracking(personTrackingRequest);
                startJobId = labelDetectionResponse.jobId();

            } catch (RekognitionException e) {
                System.out.println(e.getMessage());
                System.exit(1);
            }
        }

        public static void getPersonDetectionResults(RekognitionClient rekClient) {
            try {
                String paginationToken = null;
                GetPersonTrackingResponse personTrackingResult = null;
                boolean finished = false;
                String status;
                int yy = 0;

                do {
                    if (personTrackingResult != null)
                        paginationToken = personTrackingResult.nextToken();

                    GetPersonTrackingRequest recognitionRequest = GetPersonTrackingRequest.builder()
                        .jobId(startJobId)
                        .nextToken(paginationToken)
                        .maxResults(10)
                        .build();

                    // Wait until the job succeeds
                    while (!finished) {
                        personTrackingResult = rekClient.getPersonTracking(recognitionRequest);
                        status = personTrackingResult.jobStatusAsString();

                        if (status.compareTo("SUCCEEDED") == 0)
                            finished = true;
                        else {
                            System.out.println(yy + " status is: " + status);
                            Thread.sleep(1000);
                        }
                        yy++;
                    }
                    finished = false;

                    // Proceed when the job is done - otherwise VideoMetadata is null.
                    VideoMetadata videoMetaData = personTrackingResult.videoMetadata();
                    System.out.println("Format: " + videoMetaData.format());
                    System.out.println("Codec: " + videoMetaData.codec());
                    System.out.println("Duration: " + videoMetaData.durationMillis());
                    System.out.println("FrameRate: " + videoMetaData.frameRate());
                    System.out.println("Job");

                    List<PersonDetection> detectedPersons = personTrackingResult.persons();
                    for (PersonDetection detectedPerson : detectedPersons) {
                        long seconds = detectedPerson.timestamp() / 1000;
                        System.out.print("Sec: " + seconds + " ");
                        System.out.println("Person Identifier: " + detectedPerson.person().index());
                        System.out.println();
                    }

                } while (personTrackingResult != null && personTrackingResult.nextToken() != null);

            } catch (RekognitionException | InterruptedException e) {
                System.out.println(e.getMessage());
                System.exit(1);
            }
        }
    }
    Python
    #Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
    #PDX-License-Identifier: MIT-0 (For details, see https://github.com/awsdocs/amazon-rekognition-developer-guide/blob/master/LICENSE-SAMPLECODE.)

    # ============== People pathing ===============
    def StartPersonPathing(self):
        response=self.rek.start_person_tracking(Video={'S3Object': {'Bucket': self.bucket, 'Name': self.video}},
            NotificationChannel={'RoleArn': self.roleArn, 'SNSTopicArn': self.snsTopicArn})

        self.startJobId=response['JobId']
        print('Start Job Id: ' + self.startJobId)

    def GetPersonPathingResults(self):
        maxResults = 10
        paginationToken = ''
        finished = False

        while finished == False:
            response = self.rek.get_person_tracking(JobId=self.startJobId,
                                                    MaxResults=maxResults,
                                                    NextToken=paginationToken)

            print('Codec: ' + response['VideoMetadata']['Codec'])
            print('Duration: ' + str(response['VideoMetadata']['DurationMillis']))
            print('Format: ' + response['VideoMetadata']['Format'])
            print('Frame rate: ' + str(response['VideoMetadata']['FrameRate']))
            print()

            for personDetection in response['Persons']:
                print('Index: ' + str(personDetection['Person']['Index']))
                print('Timestamp: ' + str(personDetection['Timestamp']))
                print()

            if 'NextToken' in response:
                paginationToken = response['NextToken']
            else:
                finished = True

    In the function main, replace the lines:

    analyzer.StartLabelDetection()
    if analyzer.GetSQSMessageSuccess()==True:
        analyzer.GetLabelDetectionResults()

    with:

    analyzer.StartPersonPathing()
    if analyzer.GetSQSMessageSuccess()==True:
        analyzer.GetPersonPathingResults()
    CLI

    Run the following AWS CLI command to start people pathing in a video.

    aws rekognition start-person-tracking --video '{"S3Object":{"Bucket":"bucket-name","Name":"video-name"}}' \
    --notification-channel '{"SNSTopicArn":"topic-ARN","RoleArn":"role-ARN"}' \
    --region region-name --profile profile-name

    Update the following values:

    • Change bucket-name and video-name to the Amazon S3 bucket name and video file name that you specified in step 2.

    • Change region-name to the AWS Region that you're using.

    • Replace the value of profile-name in the line that creates the Rekognition session with the name of your developer profile.

    • Change topic-ARN to the ARN of the Amazon SNS topic that you created in step 3 of Configuring Amazon Rekognition Video.

    • Change role-ARN to the ARN of the Amazon Rekognition Video service role that you created in step 7 of Configuring Amazon Rekognition Video.

    If you are accessing the CLI on a Windows device, use double quotes instead of single quotes and escape the inner double quotes with backslash (that is, \) to address any parser errors you might encounter. For an example, see the following:

    aws rekognition start-person-tracking --video "{\"S3Object\":{\"Bucket\":\"bucket-name\",\"Name\":\"video-name\"}}" \
    --notification-channel "{\"SNSTopicArn\":\"topic-ARN\",\"RoleArn\":\"role-ARN\"}" \
    --region region-name --profile profile-name

    After running the preceding code example, copy the returned jobID and supply it to the following GetPersonTracking command, replacing job-id-number with the jobID that you received earlier:

    aws rekognition get-person-tracking --job-id job-id-number
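
    Optionally, the CLI's global --query option can trim the output to just the indexes and timestamps of tracked people. This filter is illustrative and not part of the procedure:

    aws rekognition get-person-tracking --job-id job-id-number --query 'Persons[*].{Index:Person.Index,Timestamp:Timestamp}'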
    Note

    If you've already run a video example other than Analyzing a video stored in an Amazon S3 bucket with Java or Python (SDK), the code to replace might be different.

  3. Run the code. The unique identifier for each tracked person is displayed, along with the time, in seconds, at which the person's path was tracked.

GetPersonTracking operation response

GetPersonTracking returns an array, Persons, of PersonDetection objects that contain details about people tracked in the video and when their paths are tracked.

You can sort Persons by using the SortBy input parameter. Specify TIMESTAMP to sort the elements by the time people's paths are tracked in the video. Specify INDEX to sort by people tracked in the video. Within each person's set of results, the elements are sorted by descending confidence in the accuracy of the path tracking. By default, Persons is returned sorted by TIMESTAMP; a sketch of requesting INDEX order instead follows the example response at the end of this section. The following is an example JSON response from GetPersonTracking. The results are sorted by the time, in milliseconds since the start of the video, that people's paths are tracked in the video. In the response, note the following:

  • Person information: The PersonDetection array element contains information about the detected person. For example, the time the person was detected (Timestamp), the position of the person in the video frame at the time they were detected (BoundingBox), and how confident Amazon Rekognition Video is that the person was detected correctly (Confidence).

    Facial features aren't returned at every timestamp at which a person's path is tracked. Furthermore, in some circumstances a tracked person's body might not be visible, in which case only their face location is returned.

  • Paging information: The example shows one page of person detection information. You can specify how many person elements to return in the MaxResults input parameter for GetPersonTracking. If more results than MaxResults exist, GetPersonTracking returns a token (NextToken) that's used to get the next page of results; the sketch after the example response shows this pagination loop. For more information, see Getting Amazon Rekognition Video analysis results.

  • Index: A unique identifier that identifies the person throughout the video.

  • Video information: The response includes information about the video format (VideoMetadata) in each page of information returned by GetPersonTracking.

{ "JobStatus": "SUCCEEDED", "NextToken": "AcDymG0fSSoaI6+BBYpka5wVlqttysSPP8VvWcujMDluj1QpFo/vf+mrMoqBGk8eUEiFlllR6g==", "Persons": [ { "Person": { "BoundingBox": { "Height": 0.8787037134170532, "Left": 0.00572916679084301, "Top": 0.12129629403352737, "Width": 0.21666666865348816 }, "Face": { "BoundingBox": { "Height": 0.20000000298023224, "Left": 0.029999999329447746, "Top": 0.2199999988079071, "Width": 0.11249999701976776 }, "Confidence": 99.85971069335938, "Landmarks": [ { "Type": "eyeLeft", "X": 0.06842322647571564, "Y": 0.3010137975215912 }, { "Type": "eyeRight", "X": 0.10543643683195114, "Y": 0.29697132110595703 }, { "Type": "nose", "X": 0.09569807350635529, "Y": 0.33701086044311523 }, { "Type": "mouthLeft", "X": 0.0732642263174057, "Y": 0.3757539987564087 }, { "Type": "mouthRight", "X": 0.10589495301246643, "Y": 0.3722417950630188 } ], "Pose": { "Pitch": -0.5589138865470886, "Roll": -5.1093974113464355, "Yaw": 18.69594955444336 }, "Quality": { "Brightness": 43.052337646484375, "Sharpness": 99.68138885498047 } }, "Index": 0 }, "Timestamp": 0 }, { "Person": { "BoundingBox": { "Height": 0.9074074029922485, "Left": 0.24791666865348816, "Top": 0.09259258955717087, "Width": 0.375 }, "Face": { "BoundingBox": { "Height": 0.23000000417232513, "Left": 0.42500001192092896, "Top": 0.16333332657814026, "Width": 0.12937499582767487 }, "Confidence": 99.97504425048828, "Landmarks": [ { "Type": "eyeLeft", "X": 0.46415066719055176, "Y": 0.2572723925113678 }, { "Type": "eyeRight", "X": 0.5068183541297913, "Y": 0.23705792427062988 }, { "Type": "nose", "X": 0.49765899777412415, "Y": 0.28383663296699524 }, { "Type": "mouthLeft", "X": 0.487221896648407, "Y": 0.3452930748462677 }, { "Type": "mouthRight", "X": 0.5142884850502014, "Y": 0.33167609572410583 } ], "Pose": { "Pitch": 15.966927528381348, "Roll": -15.547388076782227, "Yaw": 11.34195613861084 }, "Quality": { "Brightness": 44.80223083496094, "Sharpness": 99.95819854736328 } }, "Index": 1 }, "Timestamp": 0 }..... ], "VideoMetadata": { "Codec": "h264", "DurationMillis": 67301, "FileExtension": "mp4", "Format": "QuickTime / MOV", "FrameHeight": 1080, "FrameRate": 29.970029830932617, "FrameWidth": 1920 } }