
Automate Amazon Lookout for Vision training and deployment for anomaly detection

Created by Michael Wallner (AWS), Gabriel Rodriguez Garcia (AWS), Kangkang Wang (AWS), Shukhrat Khodjaev (AWS), Sanjay Ashok (AWS), Yassine Zaafouri (AWS), and Gabriel Zylka (AWS)

Code repository: automated-silicon-wafer-anomaly-detection-using-amazon-lookout-for-vision

Environment: Production

Technologies: Machine learning & AI; Cloud-native; DevOps

AWS services: AWS CloudFormation; AWS CodeBuild; AWS CodeCommit; AWS CodePipeline; AWS Lambda; Amazon Lookout for Vision

Summary

This pattern helps you automate the training and deployment of Amazon Lookout for Vision machine learning models for visual inspection. Although this pattern concentrates on anomaly detection for silicon wafers, you can adapt the solution for use in a wide range of products and industries.

In 2020, the annual capacity of one of the largest semiconductor manufacturers in the world exceeded 12 million 12-inch equivalent wafers. To ensure the quality and reliability of these wafers, visual inspection is an essential step in the production process. The traditional methods of visual inspection, such as manual sampling or the use of outdated, legacy tools that rely on statistical measures, can be time-consuming and inefficient. Given the scale of this process and its importance to the broader semiconductor industry, there is a significant opportunity to optimize and automate visual inspection by using advanced artificial intelligence (AI) technologies.

Lookout for Vision helps streamline the image and object inspection process, reducing the need for costly and inconsistent manual inspection. This solution improves quality control, facilitates accurate defect and damage assessment, and ensures compliance with industry standards. Additionally, you can automate the Lookout for Vision inspection process, without specialized machine learning expertise.

Using this solution, you can integrate your computer vision model into any system. For instance, you might integrate a model into a website where users upload images and analyze them for defects. The following image shows an example of a silicon wafer with scratch defects from a chemical mechanical polishing (CMP) process. You can use Lookout for Vision to detect these anomalies. For example, Lookout for Vision detected anomalies in this image with 99.04% confidence.

Silicon wafer with scratch defects
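
For illustration, the following is a minimal sketch of how an application could send an image to a hosted Lookout for Vision model by using the AWS SDK for Python (Boto3). The project name, model version, and image file name are placeholders that depend on your deployment.

import boto3

# Placeholder values — replace with your project name, hosted model version, and image file.
PROJECT_NAME = "silicon-wafer-anomaly-detection"
MODEL_VERSION = "1"

client = boto3.client("lookoutvision")

with open("wafer-sample.png", "rb") as image_file:
    response = client.detect_anomalies(
        ProjectName=PROJECT_NAME,
        ModelVersion=MODEL_VERSION,
        Body=image_file.read(),
        ContentType="image/png",
    )

result = response["DetectAnomalyResult"]
print(f"Anomalous: {result['IsAnomalous']}, confidence: {result['Confidence']:.2%}")
Python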

This solution is based on the code and the use case described in the Build an event-based tracking solution using Amazon Lookout for Vision blog post. This solution modifies the original code to enable CI/CD pipeline automation and to integrate the open source Amazon Lookout for Vision Python SDK (GitHub). For more information about the Python SDK, see the Build, train, and deploy Amazon Lookout for Vision models using the Python SDK blog post.
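
The repository's pipeline code uses the open source Python SDK; as a rough, simplified sketch of the train-and-deploy flow that the pipeline automates, the following example uses the standard Boto3 Lookout for Vision client instead. The project name, bucket, and manifest key are assumptions, and the repository's actual code differs in its details.

import time

import boto3

client = boto3.client("lookoutvision")

# Placeholder names — the pipeline derives its own values from its configuration.
project = "silicon-wafer-anomaly-detection"
bucket = "my-training-image-bucket"

# Create a project and a training dataset from a manifest file in Amazon S3.
client.create_project(ProjectName=project)
client.create_dataset(
    ProjectName=project,
    DatasetType="train",
    DatasetSource={
        "GroundTruthManifest": {
            "S3Object": {"Bucket": bucket, "Key": "manifests/train.manifest"}
        }
    },
)

# Start training a model version; training runs asynchronously.
model = client.create_model(
    ProjectName=project,
    OutputConfig={"S3Location": {"Bucket": bucket, "Prefix": "model-output/"}},
)
version = model["ModelMetadata"]["ModelVersion"]

# Wait until training finishes, then host the model for inference.
while client.describe_model(ProjectName=project, ModelVersion=version)["ModelDescription"]["Status"] == "TRAINING":
    time.sleep(60)
client.start_model(ProjectName=project, ModelVersion=version, MinInferenceUnits=1)
Python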

Prerequisites and limitations

Prerequisites

Architecture

Target architecture

Architecture diagram of this solution

This architecture illustrates the automated building, training, and deployment of Amazon Lookout for Vision models through a CI/CD pipeline. The diagram shows the following workflow:

  1. The code is stored in an AWS CodeCommit repository. Developers can modify the code, change input images, or add other steps to the automation pipeline.

  2. After you deploy the solution or update the main branch of the CodeCommit repository, AWS CodePipeline automatically pushes the code into AWS CodeBuild.

  3. CodeBuild uses the Lookout for Vision Python SDK to train and deploy the image classification model. The images used for training are stored in an Amazon Simple Storage Service (Amazon S3) bucket. CodeBuild automatically downloads these images during the build. To customize the solution to your needs, you can import your own images.

  4. The Lookout for Vision model is exposed to end users through AWS Lambda; a minimal handler sketch follows this list. However, you are not limited to this approach. You can also deploy Lookout for Vision at the edge on IoT devices, or you can run it as a batch process on a schedule to generate predictions.
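
As one possible shape for step 4, the following is a minimal Lambda handler sketch that forwards an image stored in Amazon S3 to the hosted model. The event fields, project name, and model version are assumptions for illustration, not the exact interface of the function in the repository.

import boto3

s3 = boto3.client("s3")
lookout = boto3.client("lookoutvision")

def handler(event, context):
    # Assumed event shape: {"bucket": "...", "key": "...", "project": "...", "model_version": "1"}
    image = s3.get_object(Bucket=event["bucket"], Key=event["key"])["Body"].read()
    response = lookout.detect_anomalies(
        ProjectName=event["project"],
        ModelVersion=event.get("model_version", "1"),
        Body=image,
        ContentType="image/png",
    )
    result = response["DetectAnomalyResult"]
    return {"is_anomalous": result["IsAnomalous"], "confidence": result["Confidence"]}
Python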

Tools

AWS services

  • AWS CodeBuild is a fully managed build service that helps you compile source code, run unit tests, and produce artifacts that are ready to deploy.

  • AWS CodeCommit is a version control service that helps you privately store and manage Git repositories, without needing to manage your own source control system.

  • AWS CodePipeline helps you quickly model and configure the different stages of a software release and automate the steps required to release software changes continuously.

  • AWS Key Management Service (AWS KMS) helps you create and control cryptographic keys to help protect your data.

  • AWS Lambda is a compute service that helps you run code without needing to provision or manage servers. It runs your code only when needed and scales automatically, so you pay only for the compute time that you use.

  • Amazon Lookout for Vision uses computer vision to find visual defects in industrial products, accurately and at scale.

  • Amazon Simple Storage Service (Amazon S3) is a cloud-based object storage service that helps you store, protect, and retrieve any amount of data.

Code repository

The code for this pattern is available in the GitHub Automate Amazon Lookout for Vision training and deployment for Silicon Wafer Anomaly Detection repository.

Best practices

When you run the code as an experiment, make sure that you stop the hosted Amazon Lookout for Vision model afterward, because it incurs charges until it's stopped.
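
You can stop the hosted model from the console, from the AWS CLI, or with a short Boto3 call such as the following sketch. The project name and model version are placeholders.

import boto3

client = boto3.client("lookoutvision")

# Replace with your project name and hosted model version.
client.stop_model(ProjectName="silicon-wafer-anomaly-detection", ModelVersion="1")
Python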

Epics

Task | Description | Skills required

Clone the GitHub repository.

Clone the GitHub Automate Amazon Lookout for Vision training and deployment for Silicon Wafer Anomaly Detection repository to your local workstation.

git clone https://github.com/aws-samples/automated-silicon-wafer-anomaly-detection-using-amazon-lookout-for-vision.git

Bash

Create a virtual environment.

Enter the following command to create a virtual environment on your local workstation.

python3 -m venv .venv
Python

Install dependencies.

After the virtual environment is created and activated (see the activation task for your operating system), enter the following command to install the required dependencies into it.

pip install -r requirements.txt
Python

(macOS or Linux users only) Activate the virtual environment.

After the initialization is completed and the virtual environment is created, use the following command to activate the virtual environment.

source .venv/bin/activate
Bash

(Windows users only) Activate the virtual environment.

After the initialization is completed and the virtual environment is created, use the following command to activate the virtual environment.

.venv\Scripts\activate.bat
PowerShell

Deploy the stack.

  1. In the AWS CDK CLI, enter the following command to synthesize the AWS CloudFormation template.

    cdk synth
  2. Enter the following command to deploy the CloudFormation stack.

    cdk deploy --all --require-approval never

    The --all flag deploys all components at once, and --require-approval never removes the need to approve each component deployment manually.

AWS administrator
Task | Description | Skills required

Enter an example test event.

  1. Open the Functions page of the Lambda console.

  2. Choose the amazon-lookout-for-vision-project-lambda function.

  3. Choose the Test tab.

  4. Under Test event, choose Create new event.

  5. Enter the following test event.

    { "tbd": "tbd" }

  6. Choose Test.

  7. To review the test results, under Execution result, expand Details.
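
If you prefer not to use the console, the following sketch invokes the same function with Boto3. The function name matches the function above, and the payload is the placeholder test event; adjust both to your deployment.

import json

import boto3

client = boto3.client("lambda")

# Invoke the deployed function with the placeholder test event and print its response.
response = client.invoke(
    FunctionName="amazon-lookout-for-vision-project-lambda",
    Payload=json.dumps({"tbd": "tbd"}),
)
print(json.loads(response["Payload"].read()))
Python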

General AWS

Related resources

AWS documentation

AWS blog posts