
Upload and test mobile device packages with Device Farm using an AWS SDK - AWS SDK Code Examples

More SDK examples are available in the AWS Doc SDK Examples GitHub repository.

This page is a machine translation. If there is any conflict between the translation and the original English content, the English version takes precedence.


Upload and test mobile device packages with Device Farm using an AWS SDK

The following code example shows how to upload and test mobile device packages with Device Farm.

Python
SDK for Python (Boto3)
Note

There's more on GitHub. Find the complete example and learn how to set it up and run it in the AWS Code Examples Repository.

Upload a compiled Android application and test package to Device Farm, start a test, wait for the test to complete, and then report the results.

import boto3
import os
import requests
import string
import random
import datetime
import time

# Update this dict with your own values before you run the example:
config = {
    # This is our app under test.
    "appFilePath": "app-debug.apk",
    "projectArn": "arn:aws:devicefarm:us-west-2:111222333444:project:581f5703-e040-4ac9-b7ae-0ba007bfb8e6",
    # Since we care about the most popular devices, we'll use a curated pool.
    "testSpecArn": "arn:aws:devicefarm:us-west-2::upload:20fcf771-eae3-4137-aa76-92e17fb3131b",
    "poolArn": "arn:aws:devicefarm:us-west-2::devicepool:4a869d91-6f17-491f-9a95-0a601aee2406",
    "namePrefix": "MyAppTest",
    # This is our test package. This tutorial won't go into how to make these.
    "testPackage": "tests.zip",
}

client = boto3.client("devicefarm")

unique = (
    config["namePrefix"]
    + "-"
    + (datetime.date.today().isoformat())
    + ("".join(random.sample(string.ascii_letters, 8)))
)
print(
    f"The unique identifier for this run is '{unique}'. All uploads will be prefixed "
    f"with this."
)


def upload_df_file(filename, type_, mime="application/octet-stream"):
    upload_response = client.create_upload(
        projectArn=config["projectArn"],
        name=unique + "_" + os.path.basename(filename),
        type=type_,
        contentType=mime,
    )
    upload_arn = upload_response["upload"]["arn"]
    # Extract the URL of the upload and use Requests to upload it.
    upload_url = upload_response["upload"]["url"]
    with open(filename, "rb") as file_stream:
        print(
            f"Uploading {filename} to Device Farm as "
            f"{upload_response['upload']['name']}... ",
            end="",
        )
        put_req = requests.put(
            upload_url, data=file_stream, headers={"content-type": mime}
        )
        print(" done")
        if not put_req.ok:
            raise Exception(f"Couldn't upload. Requests says: {put_req.reason}")
    started = datetime.datetime.now()
    while True:
        print(
            f"Upload of {filename} in state {upload_response['upload']['status']} "
            f"after " + str(datetime.datetime.now() - started)
        )
        if upload_response["upload"]["status"] == "FAILED":
            raise Exception(
                f"The upload failed processing. Device Farm says the reason is: \n"
                f"{upload_response['upload']['message']}"
            )
        if upload_response["upload"]["status"] == "SUCCEEDED":
            break
        time.sleep(5)
        upload_response = client.get_upload(arn=upload_arn)
    print("")
    return upload_arn


our_upload_arn = upload_df_file(config["appFilePath"], "ANDROID_APP")
our_test_package_arn = upload_df_file(
    config["testPackage"], "APPIUM_PYTHON_TEST_PACKAGE"
)
print(our_upload_arn, our_test_package_arn)

response = client.schedule_run(
    projectArn=config["projectArn"],
    appArn=our_upload_arn,
    devicePoolArn=config["poolArn"],
    name=unique,
    test={
        "type": "APPIUM_PYTHON",
        "testSpecArn": config["testSpecArn"],
        "testPackageArn": our_test_package_arn,
    },
)
run_arn = response["run"]["arn"]
start_time = datetime.datetime.now()
print(f"Run {unique} is scheduled as arn {run_arn} ")

state = "UNKNOWN"
try:
    while True:
        response = client.get_run(arn=run_arn)
        state = response["run"]["status"]
        if state == "COMPLETED" or state == "ERRORED":
            break
        else:
            print(
                f" Run {unique} in state {state}, total "
                f"time {datetime.datetime.now() - start_time}"
            )
            time.sleep(10)
except:
    client.stop_run(arn=run_arn)
    exit(1)
print(f"Tests finished in state {state} after {datetime.datetime.now() - start_time}")

# Pull all the logs.
jobs_response = client.list_jobs(arn=run_arn)
# Save the output somewhere, using the unique value.
save_path = os.path.join(os.getcwd(), "results", unique)
os.mkdir(save_path)
# Save the last run information.
for job in jobs_response["jobs"]:
    job_name = job["name"]
    os.makedirs(os.path.join(save_path, job_name), exist_ok=True)
    # Get each suite within the job.
    suites = client.list_suites(arn=job["arn"])["suites"]
    for suite in suites:
        for test in client.list_tests(arn=suite["arn"])["tests"]:
            # Get the artifacts.
            for artifact_type in ["FILE", "SCREENSHOT", "LOG"]:
                artifacts = client.list_artifacts(type=artifact_type, arn=test["arn"])[
                    "artifacts"
                ]
                for artifact in artifacts:
                    # Replace `:` because it has a special meaning in Windows & macOS.
                    path_to = os.path.join(
                        save_path,
                        job_name,
                        suite["name"],
                        test["name"].replace(":", "_"),
                    )
                    os.makedirs(path_to, exist_ok=True)
                    filename = (
                        artifact["type"]
                        + "_"
                        + artifact["name"]
                        + "."
                        + artifact["extension"]
                    )
                    artifact_save_path = os.path.join(path_to, filename)
                    print(f"Downloading {artifact_save_path}")
                    with open(artifact_save_path, "wb") as fn:
                        with requests.get(
                            artifact["url"], allow_redirects=True
                        ) as request:
                            fn.write(request.content)
print("Finished")
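The example above waits on two long-running operations the same way: poll an API until the resource reaches a terminal state (the upload loop checks for SUCCEEDED/FAILED, the run loop for COMPLETED/ERRORED). That pattern can be factored into a small reusable helper. This is a sketch, not part of the AWS sample; `get_state` is any zero-argument callable you supply, and the timeout is an added safeguard the original loop does not have:

```python
import time


def wait_for_terminal_state(get_state, terminal=("COMPLETED", "ERRORED"),
                            poll_seconds=10, timeout_seconds=3600):
    """Call get_state() repeatedly until it returns a state in `terminal`.

    Returns the terminal state, or raises TimeoutError if the deadline passes.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        state = get_state()
        if state in terminal:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(f"state still not terminal after {timeout_seconds}s")
```

With Device Farm, you could pass something like `lambda: client.get_run(arn=run_arn)["run"]["status"]` as `get_state`, or the corresponding `get_upload` lookup for the upload loop.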