Troubleshoot EC2 Image Builder
EC2 Image Builder integrates with AWS monitoring and troubleshooting services to help you diagnose image build issues. Image Builder tracks and displays the progress for each step in the image building process. Additionally, Image Builder can export logs to an Amazon S3 location that you provide.
For advanced troubleshooting, you can run predefined commands and scripts using AWS Systems Manager Run Command.
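For example, if a failed build or test instance is still running, you can use Run Command to gather diagnostic information from it. The following is a minimal sketch that assumes a Linux build instance; the instance ID and the shell commands are placeholders, and AWS-RunShellScript is a standard Systems Manager document.

# Run diagnostic commands on the build or test instance.
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids "i-1234567890abcdef0" \
    --parameters 'commands=["df -h","systemctl status amazon-ssm-agent"]'

# Retrieve the output by using the CommandId value that send-command returns.
aws ssm list-command-invocations --command-id "command-id-from-previous-output" --details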
Troubleshoot pipeline builds
If an Image Builder pipeline build fails, Image Builder returns an error message that describes the failure. Image Builder also returns a Systems Manager execution ID in the failure message, such as the one in the following example.
Systems Manager execution 'aaaaaaaa-bbbb-cccc-dddd-example12345' failed with status…
Image Builder uses AWS Systems Manager Automation to orchestrate image build actions. To review additional details to help troubleshoot a build failure, search the Systems Manager Automation console for the execution ID provided by Image Builder and review the Automation execution.
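If you prefer the AWS CLI, you can retrieve the same Automation details that the console shows. This is a minimal sketch that reuses the execution ID from the example error message; substitute the ID from your own failure message.

# Get the overall status and failure message for the Automation execution.
aws ssm get-automation-execution \
    --automation-execution-id "aaaaaaaa-bbbb-cccc-dddd-example12345"

# List the individual steps to identify which one failed and why.
aws ssm describe-automation-step-executions \
    --automation-execution-id "aaaaaaaa-bbbb-cccc-dddd-example12345"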
All build activity is also logged in AWS CloudTrail if it is enabled in your account. Filter CloudTrail events by the source imagebuilder.amazonaws.com, or search for the Amazon EC2 instance ID that is returned in the execution log to see more details about the pipeline execution.
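For example, the following sketch shows equivalent lookups with the AWS CLI. The instance ID is a placeholder for the one in your execution log, and lookup-events returns only management events from the last 90 days.

# List recent events that Image Builder recorded in CloudTrail.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=imagebuilder.amazonaws.com \
    --max-results 20

# Look up events that reference the EC2 build instance by its instance ID.
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-1234567890abcdef0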
By default, Image Builder terminates the Amazon EC2 build or test instance that is running when the pipeline fails. To retain your build or test instance for troubleshooting, you can change the instance settings in the infrastructure configuration resource that your pipeline uses.
To change the instance settings in the console, you must clear the Terminate instance on failure check box located in the Troubleshooting settings section of your infrastructure configuration resource.
You can also change the instance settings with the update-infrastructure-configuration command in the AWS CLI. Set the terminateInstanceOnFailure value to false in the JSON file that the command references with the --cli-input-json parameter. For details, see Update an infrastructure configuration.
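The following is a minimal sketch of that update. The ARN, instance profile name, and file name are placeholders for your own resources; include any other settings that you want to keep, because the update applies the values in the JSON file.

# infra-config.json (hypothetical file name)
{
    "infrastructureConfigurationArn": "arn:aws:imagebuilder:us-west-2:123456789012:infrastructure-configuration/my-example-infrastructure-configuration",
    "instanceProfileName": "my-image-builder-instance-profile",
    "terminateInstanceOnFailure": false
}

# Apply the change so that failed build and test instances are retained for troubleshooting.
aws imagebuilder update-infrastructure-configuration --cli-input-json file://infra-config.json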
The logs that you send to your S3 bucket show the steps and error messages for activity on the EC2 instance during the image build process. The logs include log outputs from the component manager, the definitions of the components that were run, and the detailed output (in JSON) of all of the steps taken on the instance. If you encounter an issue, you should review these files, starting with application.log, to diagnose the cause of the problem on the instance.
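For example, you can copy the logs locally to review them. This sketch assumes the bucket name and key prefix that you configured for logging in your infrastructure configuration; the actual key structure under your prefix varies by image and build.

# List the log objects that Image Builder wrote to your logging bucket.
aws s3 ls s3://amzn-s3-demo-bucket/image-builder-logs/ --recursive

# Copy the logs locally, then start with application.log to diagnose the failure.
aws s3 cp s3://amzn-s3-demo-bucket/image-builder-logs/ ./build-logs --recursive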
Troubleshooting scenarios
This section describes common troubleshooting scenarios in detail. Each scenario includes a description of the failure, its possible causes, and suggested solutions.
Description
The pipeline build fails with "AccessDenied: Access Denied status code: 403".
Cause
Possible causes include:
- The instance profile does not have the required permissions to access APIs or component resources.
- The instance profile role is missing permissions that are required for logging to Amazon S3. Most commonly, this occurs when the instance profile role does not have PutObject permissions for your S3 buckets.
Solution
Depending on the cause, this issue can be resolved as follows:
- Instance profile is missing managed policies – Add the missing policies to your instance profile role. Then run the pipeline again.
- Instance profile is missing write permissions for S3 bucket – Add a policy to your instance profile role that grants PutObject permissions to write to your S3 bucket (see the example policy after this list). Then run the pipeline again.
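The following is a minimal sketch of a policy statement that grants the instance profile role permission to write logs to an S3 bucket. The bucket name and key prefix are placeholders; scope the Resource element to match your own logging location.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/image-builder-logs/*"
        }
    ]
}

You can attach a statement like this to the instance profile role as an inline policy, for example with the aws iam put-role-policy command.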
Description
The pipeline build fails with "status = 'TimedOut'" and "failure message = 'Step timed out while step is verifying the Systems Manager Agent availability on the target instance(s)'".
Cause
Possible causes include:
- The instance that was launched to perform the build operations and to run components was not able to access the Systems Manager endpoint.
- The instance profile does not have the required permissions.
Solution
Depending on the possible cause, this issue can be resolved as follows:
- Access issue, private subnet – If you are building in a private subnet, make sure that you have set up PrivateLink endpoints for Systems Manager, Image Builder, and, if you want logging, Amazon S3/CloudWatch (see the example commands after this list). For more information about setting up PrivateLink endpoints, see VPC endpoints concepts (AWS PrivateLink).
- Missing permissions – Add the following managed policies to the IAM role that your instance profile uses (the commands after this list show one way to attach them):
  - EC2InstanceProfileForImageBuilder
  - EC2InstanceProfileForImageBuilderECRContainerBuilds
  - AmazonSSMManagedInstanceCore
For more information about the Image Builder service-linked role, see Using service-linked roles for EC2 Image Builder.
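The following commands sketch both fixes. The role name, VPC, subnet, and security group IDs, and the Region in the endpoint service name are placeholders for your own resources.

# Attach the managed policies to the IAM role that your instance profile uses.
aws iam attach-role-policy --role-name my-image-builder-instance-role \
    --policy-arn arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilder
aws iam attach-role-policy --role-name my-image-builder-instance-role \
    --policy-arn arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilderECRContainerBuilds
aws iam attach-role-policy --role-name my-image-builder-instance-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# For builds in a private subnet, create an interface endpoint for Systems Manager.
# Repeat for the other services that you need, such as ssmmessages, ec2messages,
# imagebuilder, logs, and an S3 gateway endpoint.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc123de456f7890 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-west-2.ssm \
    --subnet-ids subnet-0abc123de456f7890 \
    --security-group-ids sg-0abc123de456f7890 \
    --private-dns-enabled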
Description
When the instance type used to build an Image Builder Windows AMI does not match the instance type that is used to launch from the AMI, an issue can occur where non-root volumes are offline at launch. This primarily happens when the build instance is using a newer architecture than the launch instance.
The following example demonstrates what happens when an Image Builder AMI is built on an EC2 Nitro instance type and launched on an EC2 Xen instance:
Build instance type: m5.large (Nitro)
Launch instance type: t2.medium (Xen)
PS C:\Users\Administrator> get-disk

Number Friendly Name Serial Number        Health Status Operational Status Total Size Partition Style
------ ------------- -------------        ------------- ------------------ ---------- ---------------
0      AWS PVDISK    vol0abc12d34e567f8a9 Healthy       Online                  30 GB MBR
1      AWS PVDISK    vol1bcd23e45f678a9b0 Healthy       Offline                  8 GB MBR
Cause
By default, Windows does not automatically bring newly discovered disks online and format them. When the instance type changes on EC2, the underlying driver also changes, so Windows treats the existing volumes as newly discovered disks.
Solution
We recommend that you build your Windows AMI on the same type of system as the instance types that you intend to launch from it. Do not include instance types that are built on different systems in your infrastructure configuration. If any of the instance types that you specify use the Nitro System, then they should all use the Nitro System.
For more information about instances that are built on the Nitro system, see Instances built on the Nitro System in the Amazon EC2 User Guide for Windows Instances.
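One way to check which instance types run on the Nitro System is to query the EC2 DescribeInstanceTypes API, as in this sketch; the instance types shown are only examples.

# List instance types whose hypervisor is Nitro.
aws ec2 describe-instance-types \
    --filters "Name=hypervisor,Values=nitro" \
    --query "InstanceTypes[].InstanceType" --output text

# Check the hypervisor for specific candidate instance types.
aws ec2 describe-instance-types \
    --instance-types m5.large t2.medium \
    --query "InstanceTypes[].[InstanceType,Hypervisor]" --output table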
Description
You are using a CIS hardened base image and the build fails.
Cause
When the /tmp directory is mounted as noexec, it can cause Image Builder to fail.
Solution
Choose a different location for your working directory in the
workingDirectory
field of the image recipe. For more
information, see the
ImageRecipe
data type description.
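The following is a minimal sketch of how the workingDirectory field appears in the JSON for the create-image-recipe command. The recipe name, version, parent image, and component are placeholders, and /var/tmp is only an example; choose a directory that is not mounted as noexec on your image.

# recipe.json (hypothetical file name)
{
    "name": "my-cis-hardened-recipe",
    "semanticVersion": "1.0.0",
    "parentImage": "arn:aws:imagebuilder:us-west-2:aws:image/amazon-linux-2-x86/x.x.x",
    "components": [
        { "componentArn": "arn:aws:imagebuilder:us-west-2:aws:component/update-linux/x.x.x" }
    ],
    "workingDirectory": "/var/tmp"
}

aws imagebuilder create-image-recipe --cli-input-json file://recipe.json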
Description
Systems Manager Automation shows a failure in the AssertInventoryCollection automation step.
Cause
You or your organization might have created a Systems Manager State Manager association that collects inventory information for EC2 instances. If enhanced image metadata collection is enabled for your Image Builder pipeline (this is the default), Image Builder attempts to create a new inventory association for the build instance. However, Systems Manager does not allow multiple inventory associations for managed instances, and prevents a new association if one already exists. This causes the operation to fail, and results in a failed pipeline build.
Solution
To resolve this issue, turn off enhanced image metadata collection, using one of the following methods:
- Update your image pipeline in the console to clear the Enable enhanced metadata collection check box. Save your changes and run a pipeline build. For more information about updating your AMI or container image pipelines from the EC2 Image Builder console, see the pipeline update topics in this guide.
- You can also update your image pipeline with the update-image-pipeline command in the AWS CLI. To do this, include the EnhancedImageMetadataEnabled property in your JSON file, set to false. The following example shows the property set to false.

{
    "name": "MyWindows2019Pipeline",
    "description": "Builds Windows 2019 Images",
    "enhancedImageMetadataEnabled": false,
    "imageRecipeArn": "arn:aws:imagebuilder:us-west-2:123456789012:image-recipe/my-example-recipe/2020.12.03",
    "infrastructureConfigurationArn": "arn:aws:imagebuilder:us-west-2:123456789012:infrastructure-configuration/my-example-infrastructure-configuration",
    "distributionConfigurationArn": "arn:aws:imagebuilder:us-west-2:123456789012:distribution-configuration/my-example-distribution-configuration",
    "imageTestsConfiguration": {
        "imageTestsEnabled": true,
        "timeoutMinutes": 60
    },
    "schedule": {
        "scheduleExpression": "cron(0 0 * * SUN *)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE"
    },
    "status": "ENABLED"
}
To prevent this from happening for new pipelines, clear the Enable enhanced metadata collection check box when you create a new pipeline using the EC2 Image Builder console, or set the value of the EnhancedImageMetadataEnabled property in your JSON file to false when you create your pipeline using the AWS CLI.