
Starting your Amazon Lookout for Vision model

Before you can use an Amazon Lookout for Vision model to detect anomalies, you must first start the model. You start a model by calling the StartModel API and passing the following (a minimal sketch follows this list):

  • ProjectName: The name of the project that contains the model that you want to start.

  • ModelVersion: The version of the model that you want to start.

  • MinInferenceUnits: The minimum number of inference units. For more information, see Inference units.

  • (Optional) MaxInferenceUnits: The maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model. For more information, see Automatically scaling inference units.
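
For example, the following minimal Boto3 sketch passes these values to StartModel. It assumes default AWS credentials; the project name, model version, and inference unit counts are placeholder values that you would replace with your own.

    # A minimal sketch, assuming Boto3 and default AWS credentials.
    # "my-project", "1", and the inference unit counts are placeholder values.
    import boto3

    lookoutvision_client = boto3.client("lookoutvision")
    lookoutvision_client.start_model(
        ProjectName="my-project",
        ModelVersion="1",
        MinInferenceUnits=1,
        MaxInferenceUnits=2)  # Optional; omit to disable automatic scaling.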

The Amazon Lookout for Vision console provides example code that you can use to start and stop a model.

Note

You are charged for the amount of time that your model runs. To stop a running model, see Stopping your Amazon Lookout for Vision model.
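
As a quick reference, stopping a model with the SDK is a single call to StopModel. The following is a minimal Boto3 sketch; the project name and model version are placeholder values.

    # A minimal sketch, assuming Boto3 and default AWS credentials.
    # "my-project" and "1" are placeholder values.
    import boto3

    boto3.client("lookoutvision").stop_model(
        ProjectName="my-project", ModelVersion="1")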

You can use the AWS SDK to find running models across all of the AWS Regions in which Lookout for Vision is available. For example code, see find_running_models.py.
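
If you don't need the full example, the following Boto3 sketch shows the general approach: enumerate the Regions where Lookout for Vision is available, list each project's models, and report the versions whose status is HOSTED. Pagination and per-Region error handling are omitted for brevity.

    # A minimal sketch, assuming Boto3 and default AWS credentials.
    # Pagination and per-Region error handling are omitted for brevity.
    import boto3

    session = boto3.session.Session()
    for region in session.get_available_regions("lookoutvision"):
        client = session.client("lookoutvision", region_name=region)
        for project in client.list_projects()["Projects"]:
            project_name = project["ProjectName"]
            for model in client.list_models(ProjectName=project_name)["Models"]:
                description = client.describe_model(
                    ProjectName=project_name,
                    ModelVersion=model["ModelVersion"])["ModelDescription"]
                if description["Status"] == "HOSTED":
                    print(f"{region}: {project_name} model version "
                          f"{model['ModelVersion']} is running.")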

Starting your model (console)

The Amazon Lookout for Vision console provides an AWS CLI command that you can use to start a model. After the model starts, you can start detecting anomalies in images. For more information, see Detecting anomalies in an image.

To start your model (console)
  1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.

  2. Open the Amazon Lookout for Vision console at https://console.aws.amazon.com/lookoutvision/.

  3. Choose Get started.

  4. In the left navigation pane, choose Projects.

  5. On the Projects resources page, choose the project that contains the trained model that you want to start.

  6. In the Models section, choose the model that you want to start.

  7. On the model's details page, choose Use model and then choose Integrate API to the cloud.

    Tip

    If you want to deploy your model to an edge device, choose Create model packaging job. For more information, see Packaging your Amazon Lookout for Vision model.

  8. Under AWS CLI commands, copy the AWS CLI command that calls start-model.

  9. At the command prompt, enter the start-model command that you copied in the previous step. If you are using the lookoutvision profile to get credentials, add the --profile lookoutvision-access parameter.

  10. In the console, choose Models in the left navigation pane.

  11. Check the Status column for the current status of the model. When the status is Hosted, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.

Starting your Amazon Lookout for Vision model (SDK)

You start a model by calling the StartModel operation.

A model might take a while to start. You can check the current status by calling DescribeModel. For more information, see Viewing your models.
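
For example, a single DescribeModel call returns the model's current status. The following minimal Boto3 sketch uses placeholder project and version names; the full examples later in this procedure show how to poll until hosting succeeds or fails.

    # A minimal sketch, assuming Boto3 and default AWS credentials.
    # "my-project" and "1" are placeholder values.
    import boto3

    response = boto3.client("lookoutvision").describe_model(
        ProjectName="my-project", ModelVersion="1")
    print(response["ModelDescription"]["Status"])  # For example, STARTING_HOSTING or HOSTED.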

To start your model (SDK)
  1. If you haven't already done so, install and configure the AWS CLI and the AWS SDKs. For more information, see Step 4: Set up the AWS CLI and AWS SDKs.

  2. Use the following example code to start a model.

    CLI

    Change the following values:

    • project-name to the name of the project that contains the model that you want to start.

    • model-version to the version of the model that you want to start.

    • --min-inference-units to the number of inference units that you want to use.

    • (Optional) --max-inference-units to the maximum number of inference units that Amazon Lookout for Vision can use to automatically scale the model.

    aws lookoutvision start-model --project-name "project name" \
      --model-version model version \
      --min-inference-units minimum number of units \
      --max-inference-units max number of units \
      --profile lookoutvision-access
    Python

    This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.

    @staticmethod
    def start_model(
            lookoutvision_client, project_name, model_version, min_inference_units,
            max_inference_units=None):
        """
        Starts the hosting of a Lookout for Vision model.

        :param lookoutvision_client: A Boto3 Lookout for Vision client.
        :param project_name: The name of the project that contains the version of
                             the model that you want to start hosting.
        :param model_version: The version of the model that you want to start hosting.
        :param min_inference_units: The number of inference units to use for hosting.
        :param max_inference_units: (Optional) The maximum number of inference units that
                                    Lookout for Vision can use to automatically scale the
                                    model.
        """
        try:
            logger.info(
                "Starting model version %s for project %s", model_version, project_name)
            if max_inference_units is None:
                lookoutvision_client.start_model(
                    ProjectName=project_name,
                    ModelVersion=model_version,
                    MinInferenceUnits=min_inference_units)
            else:
                lookoutvision_client.start_model(
                    ProjectName=project_name,
                    ModelVersion=model_version,
                    MinInferenceUnits=min_inference_units,
                    MaxInferenceUnits=max_inference_units)

            print("Starting hosting...")

            status = ""
            finished = False

            # Wait until hosted or failed.
            while finished is False:
                model_description = lookoutvision_client.describe_model(
                    ProjectName=project_name, ModelVersion=model_version)
                status = model_description["ModelDescription"]["Status"]

                if status == "STARTING_HOSTING":
                    logger.info("Host starting in progress...")
                    time.sleep(10)
                    continue

                if status == "HOSTED":
                    logger.info("Model is hosted and ready for use.")
                    finished = True
                    continue

                logger.info("Model hosting failed and the model can't be used.")
                finished = True

            if status != "HOSTED":
                logger.error("Error hosting model: %s", status)
                raise Exception(f"Error hosting model: {status}")
        except ClientError:
            logger.exception("Couldn't host model.")
            raise
    Java V2

    This code is taken from the AWS Documentation SDK examples GitHub repository. See the full example here.

    /**
     * Starts hosting an Amazon Lookout for Vision model. Returns when the model has
     * started or if hosting fails. You are charged for the amount of time that a
     * model is hosted. To stop hosting a model, use the StopModel operation.
     *
     * @param lfvClient   An Amazon Lookout for Vision client.
     * @param projectName The name of the project that contains the model that you
     *                    want to host.
     * @modelVersion      The version of the model that you want to host.
     * @minInferenceUnits The number of inference units to use for hosting.
     * @maxInferenceUnits The maximum number of inference units that Lookout for
     *                    Vision can use for automatically scaling the model. If the
     *                    value is null, automatic scaling doesn't happen.
     * @return ModelDescription The description of the model, which includes the
     *         model hosting status.
     */
    public static ModelDescription startModel(LookoutVisionClient lfvClient, String projectName,
            String modelVersion, Integer minInferenceUnits, Integer maxInferenceUnits)
            throws LookoutVisionException, InterruptedException {

        logger.log(Level.INFO, "Starting Model version {0} for project {1}.",
                new Object[] { modelVersion, projectName });

        StartModelRequest startModelRequest = null;

        if (maxInferenceUnits == null) {
            startModelRequest = StartModelRequest.builder().projectName(projectName).modelVersion(modelVersion)
                    .minInferenceUnits(minInferenceUnits).build();
        } else {
            startModelRequest = StartModelRequest.builder().projectName(projectName).modelVersion(modelVersion)
                    .minInferenceUnits(minInferenceUnits).maxInferenceUnits(maxInferenceUnits).build();
        }

        // Start hosting the model.
        lfvClient.startModel(startModelRequest);

        DescribeModelRequest describeModelRequest = DescribeModelRequest.builder().projectName(projectName)
                .modelVersion(modelVersion).build();

        ModelDescription modelDescription = null;

        boolean finished = false;
        // Wait until model is hosted or failure occurs.
        do {

            modelDescription = lfvClient.describeModel(describeModelRequest).modelDescription();

            switch (modelDescription.status()) {

                case HOSTED:
                    logger.log(Level.INFO, "Model version {0} for project {1} is running.",
                            new Object[] { modelVersion, projectName });
                    finished = true;
                    break;

                case STARTING_HOSTING:
                    logger.log(Level.INFO, "Model version {0} for project {1} is starting.",
                            new Object[] { modelVersion, projectName });

                    TimeUnit.SECONDS.sleep(60);

                    break;

                case HOSTING_FAILED:
                    logger.log(Level.SEVERE, "Hosting failed for model version {0} for project {1}.",
                            new Object[] { modelVersion, projectName });
                    finished = true;
                    break;

                default:
                    logger.log(Level.SEVERE,
                            "Unexpected error when hosting model version {0} for project {1}: {2}.",
                            new Object[] { projectName, modelVersion, modelDescription.status() });
                    finished = true;
                    break;

            }

        } while (!finished);

        logger.log(Level.INFO, "Finished starting model version {0} for project {1} status: {2}",
                new Object[] { modelVersion, projectName, modelDescription.statusMessage() });

        return modelDescription;

    }
  3. If the output of the code is Model is hosted and ready for use, you can use the model to detect anomalies in images. For more information, see Detecting anomalies in an image.
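
Once the model is hosted, detecting anomalies in an image is a single DetectAnomalies call. The following minimal Boto3 sketch uses placeholder project, version, and file names.

    # A minimal sketch, assuming Boto3 and default AWS credentials.
    # "my-project", "1", and "image.jpg" are placeholder values.
    import boto3

    lookoutvision_client = boto3.client("lookoutvision")
    with open("image.jpg", "rb") as image_file:
        response = lookoutvision_client.detect_anomalies(
            ProjectName="my-project",
            ModelVersion="1",
            ContentType="image/jpeg",
            Body=image_file.read())

    result = response["DetectAnomalyResult"]
    print(f"Anomalous: {result['IsAnomalous']}, confidence: {result['Confidence']}")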