Build a pipeline for hardened container images using EC2 Image Builder and Terraform
Created by Mike Saintcross (AWS) and Andrew Ranes (AWS)
Summary
This pattern builds an EC2 Image Builder pipeline that produces a hardened Amazon Linux 2 base container image, using Terraform as the infrastructure as code (IaC) tool to provision the supporting infrastructure.
The build includes two Amazon EventBridge rules. One rule starts the container image pipeline when the Amazon Inspector finding is High or Critical so that non-secure images are replaced. This rule requires both Amazon Inspector and Amazon Elastic Container Registry (Amazon ECR) enhanced scanning to be enabled. The other rule sends notifications to an Amazon Simple Queue Service (Amazon SQS) queue after a successful image push to the Amazon ECR repository, to help you use the latest container images.
Note
Amazon Linux 2 is nearing end of support. For more information, see the Amazon Linux 2 FAQs.
Prerequisites and limitations
Prerequisites
An AWS account that you can deploy the infrastructure in.
AWS Command Line Interface (AWS CLI) installed for setting your AWS credentials for local deployment.
Terraform downloaded and set up by following the instructions in the Terraform documentation.
Git (if you’re provisioning from a local machine).
A role within the AWS account that you can use to create AWS resources.
All variables defined in the .tfvars file. Or, you can define all variables when you apply the Terraform configuration.
Limitations
This solution creates an Amazon Virtual Private Cloud (Amazon VPC) infrastructure that includes a NAT gateway and an internet gateway for internet connectivity from its private subnet. You cannot use VPC endpoints, because the bootstrap process by AWS Task Orchestrator and Executor (AWSTOE) installs AWS CLI version 2 from the internet.
Product versions
Amazon Linux 2
AWS CLI version 1.1 or later
Architecture
Target technology stack
This pattern creates 43 resources, including:
Two Amazon Simple Storage Service (Amazon S3) buckets: one for the pipeline component files and one for server access and Amazon VPC flow logs
A virtual private cloud (VPC) that contains a public subnet, a private subnet, route tables, a NAT gateway, and an internet gateway
An EC2 Image Builder pipeline, recipe, and components
A container image
An AWS Key Management Service (AWS KMS) key for image encryption
An SQS queue
Three roles: one to run the EC2 Image Builder pipeline, one instance profile for EC2 Image Builder, and one for EventBridge rules
Two EventBridge rules
Terraform module structure
For the source code, see the GitHub repository Terraform EC2 Image Builder Container Hardening Pipeline. The module has the following structure:
├── components.tf
├── config.tf
├── dist-config.tf
├── files
│   └── assumption-policy.json
├── hardening-pipeline.tfvars
├── image.tf
├── infr-config.tf
├── infra-network-config.tf
├── kms-key.tf
├── main.tf
├── outputs.tf
├── pipeline.tf
├── recipes.tf
├── roles.tf
├── sec-groups.tf
├── trigger-build.tf
└── variables.tf
Module details
components.tf contains an Amazon S3 upload resource to upload the contents of the /files directory. You can also modularly add custom component YAML files here.
/files contains the .yml files that define the components used in components.tf.
image.tf contains the definitions for the base image operating system. This is where you can modify the definitions for a different base image pipeline.
infr-config.tf and dist-config.tf contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image.
infra-network-config.tf contains the minimum VPC infrastructure to deploy the container image into.
hardening-pipeline.tfvars contains the Terraform variables to be used at apply time.
pipeline.tf creates and manages an EC2 Image Builder pipeline in Terraform.
recipes.tf is where you can specify different mixtures of components to create container recipes.
roles.tf contains the AWS Identity and Access Management (IAM) policy definitions for the Amazon Elastic Compute Cloud (Amazon EC2) instance profile and pipeline deployment role.
trigger-build.tf contains the EventBridge rules and SQS queue resources.
Target architecture
The diagram illustrates the following workflow:
EC2 Image Builder builds a container image by using the defined recipe, which installs operating system updates and applies the RHEL Medium STIG to the Amazon Linux 2 base image.
The hardened image is published to a private Amazon ECR registry, and an EventBridge rule sends a message to an SQS queue when the image has been published successfully.
If Amazon Inspector is configured for enhanced scanning, it scans the Amazon ECR registry.
If Amazon Inspector generates a Critical or High severity finding for the image, an EventBridge rule triggers the EC2 Image Builder pipeline to run again and publish a newly hardened image.
Automation and scale
This pattern describes how to provision the infrastructure and build the pipeline on your computer. However, it is intended to be used at scale. Instead of deploying the Terraform modules locally, you can use them in a multi-account environment, such as an AWS Control Tower with Account Factory for Terraform environment. In that case, you should use a backend state S3 bucket to manage Terraform state files instead of managing the configuration state locally.
For scaled use, deploy the solution to one central account, such as a Shared Services or Common Services account, from a Control Tower or landing zone account model, and grant consumer accounts permission to access the Amazon ECR repository and AWS KMS key. For more information about the setup, see the re:Post article How can I allow a secondary account to push or pull images in my Amazon ECR image repository?
For example, in an account vending machine or Account Factory for Terraform, add permissions to each account baseline or account customization baseline to provide access to that Amazon ECR repository and encryption key. After the container image pipeline is deployed, you can modify it by using EC2 Image Builder features such as components, which help you package more components into the Docker build.
The AWS KMS key that is used to encrypt the container image should be shared across the accounts that the image is intended to be used in.
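For example, the following AWS CLI sketch shows one way to grant a consumer account permission to pull images from the Amazon ECR repository; access to the AWS KMS key is granted similarly through its key policy. The account ID, repository name, and file name below are illustrative placeholders, not values from this pattern.

```bash
# Illustrative only: allow a consumer account (111122223333 is a placeholder) to pull
# images from the hardened-image repository. Replace the placeholders with your values.
cat > ecr-cross-account-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowConsumerAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
EOF

# Attach the policy to the repository that the pipeline publishes to.
aws ecr set-repository-policy \
  --repository-name <your-ecr-repository-name> \
  --policy-text file://ecr-cross-account-policy.json
```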
You can add support for other images by duplicating the entire Terraform module and modifying the following recipes.tf attributes:
Modify parent_image = "amazonlinux:latest" to another image type.
Modify repository_name to point to an existing Amazon ECR repository. This creates another pipeline that deploys a different parent image type to your existing Amazon ECR repository.
Tools
Terraform (IaC provisioning)
Git (if provisioning locally)
AWS CLI version 1 or version 2 (if provisioning locally)
Code
The code for this pattern is in the GitHub repository Terraform EC2 Image Builder Container Hardening Pipeline.
Epics
Task | Description | Skills required |
---|---|---|
Set up local credentials. | Set up your AWS temporary credentials for the account that you are deploying into. (See the credential and clone sketch after this table.) | AWS DevOps |
Clone the repository. | Clone the Terraform EC2 Image Builder Container Hardening Pipeline GitHub repository to your local machine. (See the credential and clone sketch after this table.) | AWS DevOps |
Update variables. | Update the variables in the hardening-pipeline.tfvars file to match your target account and environment. (An illustrative sketch of the file follows this table.) | AWS DevOps |
Initialize Terraform. | After you update your variable values, you can initialize the Terraform configuration directory. Initializing a configuration directory downloads and installs the AWS provider, which is defined in the configuration. You should see a message that says Terraform has been successfully initialized and identifies the version of the provider that was installed. (See the Terraform command sketch after this table.) | AWS DevOps |
Deploy the infrastructure and create a container image. | Initialize, validate, and apply the Terraform modules to the environment by using the variables defined in your .tfvars file. (See the Terraform command sketch after this table.) | AWS DevOps |
Customize the container. | You can create a new version of a container recipe after EC2 Image Builder deploys the pipeline and initial recipe. You can add any of the 31+ components available within EC2 Image Builder to customize the container build. For more information, see the Components section of Create a new version of a container recipe in the EC2 Image Builder documentation. | AWS administrator |
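The commands below are a minimal sketch of the credential and clone steps. They assume that you export temporary credentials as environment variables and clone over HTTPS; the placeholder values and repository URL are not part of this pattern and must be replaced with your own.

```bash
# Export AWS temporary credentials for the deployment account (values are placeholders).
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
export AWS_SESSION_TOKEN="<session-token>"

# Confirm that the credentials resolve to the account you intend to deploy into.
aws sts get-caller-identity

# Clone the pattern's repository and change into it.
git clone <terraform-ec2-image-builder-container-hardening-pipeline-repository-url>
cd <repository-directory>
```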
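The following is an illustrative sketch of a hardening-pipeline.tfvars file. The variable names and values shown here are assumptions for illustration only; use the variables that are actually declared in variables.tf and documented in the repository.

```bash
# Illustrative only: in practice, edit the existing hardening-pipeline.tfvars in place.
# Variable names and values are hypothetical; check variables.tf for the real ones.
cat > hardening-pipeline.tfvars <<'EOF'
account_id    = "123456789012"                # deployment account ID (placeholder)
aws_region    = "us-east-1"                   # Region to deploy into
vpc_cidr      = "10.0.0.0/16"                 # CIDR for the VPC this pattern creates
kms_key_alias = "image-builder-container-key" # alias for the image encryption key
image_name    = "hardened-al2-container"      # name for the hardened container image
EOF
```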
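A minimal sketch of the initialize, validate, and deploy commands, assuming you run them from the repository root with the .tfvars file described above:

```bash
# Download the AWS provider and initialize the working directory.
terraform init

# Check that the configuration is syntactically valid and internally consistent.
terraform validate

# Review the planned changes, then create the resources and start the first image build.
terraform plan -var-file="hardening-pipeline.tfvars"
terraform apply -var-file="hardening-pipeline.tfvars"
```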
Task | Description | Skills required |
---|---|---|
Validate AWS infrastructure provisioning. | After you have successfully completed your first Terraform apply, if you’re provisioning locally, you can confirm that the infrastructure, the pipeline, and the initial container image were created. (See the validation commands after this table.) | AWS DevOps |
Validate individual AWS infrastructure resources. | To validate the individual resources that were deployed, if you’re provisioning locally, you can list the resources tracked in your Terraform state. The listing returns 43 resources. (See the command after this table.) | AWS DevOps |
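A sketch of the local validation commands, assuming you kept the default local state and that outputs.tf defines the module's outputs:

```bash
# List every resource tracked in the Terraform state; this pattern creates 43.
terraform state list

# Count the resources directly.
terraform state list | wc -l

# Show the values exported by outputs.tf.
terraform output
```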
Task | Description | Skills required |
---|---|---|
Remove the infrastructure and container image. | When you’ve finished working with your Terraform configuration, you can destroy the resources that it created. (See the command after this table.) | AWS DevOps |
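A sketch of the cleanup command, assuming the same .tfvars file that you used for deployment:

```bash
# Destroy every resource that this Terraform configuration created.
terraform destroy -var-file="hardening-pipeline.tfvars"
```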
Troubleshooting
Issue | Solution |
---|---|
Error validating provider credentials | When you run the Terraform plan or apply command, you might receive an error validating provider credentials. This error is caused by the expiration of the security token for the credentials used in your local machine’s configuration. To resolve the error, see Set and view configuration settings in the AWS CLI documentation. (See also the sketch after this table.) |
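One way to recover, assuming you authenticate with temporary credentials exported as environment variables (the values shown are placeholders):

```bash
# Re-export a fresh set of temporary credentials, then rerun the Terraform command.
export AWS_ACCESS_KEY_ID="<new-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<new-secret-access-key>"
export AWS_SESSION_TOKEN="<new-session-token>"

# Verify that the new credentials work before rerunning Terraform.
aws sts get-caller-identity
```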
Related resources
Terraform EC2 Image Builder Container Hardening Pipeline (GitHub repository)
AWS Control Tower Account Factory for Terraform (AWS blog post)
Backend state S3 bucket (Terraform documentation)
Installing or updating the latest version of the AWS CLI (AWS CLI documentation)