Build a pipeline for hardened container images using EC2 Image Builder and Terraform - AWS Prescriptive Guidance

Build a pipeline for hardened container images using EC2 Image Builder and Terraform

Created by Mike Saintcross (AWS) and Andrew Ranes (AWS)

Code repository: Terraform EC2 Image Builder Container Hardening Pipeline

Environment: Production

Source: Packer, Chef, or Pure Ansible

Target: EC2 Image Builder

R Type: Re-architect

Workload: Open-source

Technologies: Security, identity, compliance; DevOps

AWS services: Amazon Elastic Container Registry (Amazon ECR); EC2 Image Builder

Summary

This pattern builds an EC2 Image Builder pipeline that produces a hardened Amazon Linux 2 base container image. Terraform is used as an infrastructure as code (IaC) tool to configure and provision the infrastructure that is used to create hardened container images. The recipe helps you deploy a Docker-based Amazon Linux 2 container image that has been hardened according to Red Hat Enterprise Linux (RHEL) 7 STIG Version 3 Release 7 ‒ Medium. (See STIG-Build-Linux-Medium version 2022.2.1 in the Linux STIG components section of the EC2 Image Builder documentation.) This is referred to as a golden container image.
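
For reference, the recipe consumes AWS-managed hardening content as a component ARN. The following is a minimal sketch of that ARN form; the "aws" account segment denotes an Amazon-managed component, and the 2022.2.1 version suffix shown here is an assumption that can differ by Region:

locals {
  # Illustrative only: ARN of the AWS-managed STIG Medium component.
  stig_medium_component_arn = "arn:aws:imagebuilder:${var.aws_region}:aws:component/stig-build-linux-medium/2022.2.1"
}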

The build includes two Amazon EventBridge rules. One rule starts the container image pipeline again when Amazon Inspector reports a High or Critical severity finding for the image, so that non-secure images are replaced. This rule requires both Amazon Inspector and Amazon Elastic Container Registry (Amazon ECR) enhanced scanning to be enabled. The other rule sends a notification to an Amazon Simple Queue Service (Amazon SQS) queue after each successful image push to the Amazon ECR repository, to help you use the latest container images.
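
A minimal sketch of the rebuild rule in Terraform, assuming hypothetical resource names for the pipeline and the EventBridge role (the repository's actual definitions live in trigger-build.tf):

resource "aws_cloudwatch_event_rule" "rebuild_on_finding" {
  name = "example-rebuild-on-inspector-finding"

  # Match High or Critical Amazon Inspector findings.
  event_pattern = jsonencode({
    source        = ["aws.inspector2"]
    "detail-type" = ["Inspector2 Finding"]
    detail = {
      severity = ["HIGH", "CRITICAL"]
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule = aws_cloudwatch_event_rule.rebuild_on_finding.name
  arn  = aws_imagebuilder_image_pipeline.this.arn # hypothetical pipeline resource name

  # The role must allow imagebuilder:StartImagePipelineExecution.
  role_arn = aws_iam_role.eventbridge.arn # hypothetical role resource name
}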

Prerequisites and limitations

Prerequisites

  • An AWS account that you can deploy the infrastructure in.

  • AWS Command Line Interface (AWS CLI) installed for setting your AWS credentials for local deployment.

  • Terraform downloaded and set up by following the instructions in the Terraform documentation.

  • Git (if you’re provisioning from a local machine).

  • A role within the AWS account that you can use to create AWS resources.

  • All variables defined in the .tfvars file. Alternatively, you can define all variables when you apply the Terraform configuration.

Limitations

Product versions

  • Amazon Linux 2

  • AWS CLI version 1.1 or later

Architecture

Target technology stack

This pattern creates 43 resources, including:

  • Two Amazon Simple Storage Service (Amazon S3) buckets: one for the pipeline component files and one for server access and Amazon VPC flow logs

  • An Amazon ECR repository

  • A virtual private cloud (VPC) that contains a public subnet, a private subnet, route tables, a NAT gateway, and an internet gateway

  • An EC2 Image Builder pipeline, recipe, and components

  • A container image

  • An AWS Key Management Service (AWS KMS) key for image encryption

  • An SQS queue

  • Three roles: one to run the EC2 Image Builder pipeline, one instance profile for EC2 Image Builder, and one for EventBridge rules

  • Two EventBridge rules

Terraform module structure

For the source code, see the GitHub repository Terraform EC2 Image Builder Container Hardening Pipeline.

├── components.tf
├── config.tf
├── dist-config.tf
├── files
│   └── assumption-policy.json
├── hardening-pipeline.tfvars
├── image.tf
├── infr-config.tf
├── infra-network-config.tf
├── kms-key.tf
├── main.tf
├── outputs.tf
├── pipeline.tf
├── recipes.tf
├── roles.tf
├── sec-groups.tf
├── trigger-build.tf
└── variables.tf

Module details

  • components.tf contains an Amazon S3 upload resource that uploads the contents of the /files directory. You can also add custom component YAML files here in a modular way (see the sketch after this list).

  • /files contains the .yml files that define the components used in components.tf.

  • image.tf contains the definitions for the base image operating system. This is where you can modify the definitions for a different base image pipeline.

  • infr-config.tf and dist-config.tf contain the resources for the minimum AWS infrastructure needed to spin up and distribute the image.

  • infra-network-config.tf contains the minimum VPC infrastructure to deploy the container image into.

  • hardening-pipeline.tfvars contains the Terraform variables to be used at apply time.

  • pipeline.tf creates and manages an EC2 Image Builder pipeline in Terraform.

  • recipes.tf is where you can specify different combinations of components to create container recipes.

  • roles.tf contains the AWS Identity and Access Management (IAM) policy definitions for the Amazon Elastic Compute Cloud (Amazon EC2) instance profile and pipeline deployment role.

  • trigger-build.tf contains the EventBridge rules and SQS queue resources.
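
For example, a custom component file placed in /files can be uploaded through components.tf with an aws_s3_object resource. This is a minimal sketch; the file name is hypothetical, and the bucket comes from the aws_s3_ami_resources_bucket variable:

resource "aws_s3_object" "custom_component" {
  bucket = var.aws_s3_ami_resources_bucket
  key    = "files/custom-component.yml"                 # hypothetical file name
  source = "${path.module}/files/custom-component.yml"
  etag   = filemd5("${path.module}/files/custom-component.yml") # re-upload on change
}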

Target architecture

Architecture and workflow for building a pipeline for hardened container images

The diagram illustrates the following workflow:

  1. EC2 Image Builder builds a container image by using the defined recipe, which installs operating system updates and applies the RHEL Medium STIG to the Amazon Linux 2 base image.

  2. The hardened image is published to a private Amazon ECR repository, and an EventBridge rule sends a message to an SQS queue when the image has been published successfully (see the sketch after this list).

  3. If Amazon Inspector is configured for enhanced scanning, it scans the Amazon ECR registry.

  4. If Amazon Inspector generates a Critical or High severity finding for the image, an EventBridge rule triggers the EC2 Image Builder pipeline to run again and publish a newly hardened image.
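
A minimal sketch of the notification rule from step 2, assuming a hypothetical SQS queue resource name; the event pattern matches successful pushes to the pipeline's repository:

resource "aws_cloudwatch_event_rule" "image_pushed" {
  name = "example-image-push-notification"

  event_pattern = jsonencode({
    source        = ["aws.ecr"]
    "detail-type" = ["ECR Image Action"]
    detail = {
      "action-type"     = ["PUSH"]
      result            = ["SUCCESS"]
      "repository-name" = [var.ecr_name]
    }
  })
}

resource "aws_cloudwatch_event_target" "notify_queue" {
  rule = aws_cloudwatch_event_rule.image_pushed.name
  arn  = aws_sqs_queue.image_updates.arn # hypothetical queue resource name
  # The queue's resource policy must allow events.amazonaws.com to send messages.
}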

Automation and scale

  • This pattern describes how to provision the infrastructure and build the pipeline on your computer. However, it is intended to be used at scale. Instead of deploying the Terraform modules locally, you can use them in a multi-account environment, such as AWS Control Tower with Account Factory for Terraform (AFT). In that case, you should use a backend state S3 bucket to manage Terraform state files instead of managing the configuration state locally.

  • For scaled use, deploy the solution to one central account, such as a Shared Services or Common Services account, from a Control Tower or landing zone account model, and grant consumer accounts permission to access the Amazon ECR repository and AWS KMS key. For more information about the setup, see the re:Post article How can I allow a secondary account to push or pull images in my Amazon ECR image repository? For example, in an account vending machine or Account Factory for Terraform, add permissions to each account baseline or account customization baseline to provide access to that Amazon ECR repository and encryption key.

  • After the container image pipeline is deployed, you can modify it by using EC2 Image Builder features such as components, which help you package more components into the Docker build.

  • The AWS KMS key that is used to encrypt the container image should be shared across the accounts that the image is intended to be used in.

  • You can add support for other images by duplicating the entire Terraform module and modifying the following recipes.tf attributes, as shown in the sketch after this list:

    • Modify parent_image = "amazonlinux:latest" to another image type.

    • Modify repository_name to point to an existing Amazon ECR repository. This creates another pipeline that deploys a different parent image type to your existing Amazon ECR repository.
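
For example, a duplicated recipe in the new module might look like the following sketch. The parent image, repository name, component ARN, and Dockerfile path are illustrative, not the repository's exact code:

resource "aws_imagebuilder_container_recipe" "alternate" {
  name           = "example-alternate-recipe"
  version        = var.recipe_version
  container_type = "DOCKER"
  parent_image   = "ubuntu:latest" # changed from "amazonlinux:latest"

  target_repository {
    repository_name = "<EXISTING-ECR-REPO-NAME>" # point at an existing repository
    service         = "ECR"
  }

  component {
    # x.x.x resolves to the latest published version of the managed component.
    component_arn = "arn:aws:imagebuilder:${var.aws_region}:aws:component/update-linux/x.x.x"
  }

  dockerfile_template_data = file("${path.module}/files/Dockerfile") # hypothetical path
}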

Tools

  • Terraform (IaC provisioning)

  • Git (if provisioning locally)

  • AWS CLI version 1 or version 2 (if provisioning locally)

Code

The code for this pattern is in the GitHub repository Terraform EC2 Image Builder Container Hardening Pipeline. To use the sample code, follow the instructions in the next section.

Epics

Task | Description | Skills required

Set up local credentials.

Set up your AWS temporary credentials.

  1. Check whether the AWS CLI is installed:

    $ aws --version
    aws-cli/1.16.249 Python/3.6.8...
  2. Run aws configure and provide the following values:

    $ aws configure
    AWS Access Key ID [*************xxxx]: <Your AWS access key ID>
    AWS Secret Access Key [**************xxxx]: <Your AWS secret access key>
    Default region name [us-east-1]: <Your desired Region for deployment>
    Default output format [None]: <Your desired output format>
AWS DevOps

Clone the repository.

  1. Clone the repository that’s provided with this pattern. You can use HTTPS or Secure Shell (SSH).

    HTTPS:

    git clone https://github.com/aws-samples/terraform-ec2-image-builder-container-hardening-pipeline

    SSH:

    git clone git@github.com:aws-samples/terraform-ec2-image-builder-container-hardening-pipeline.git
  2. Navigate to your local directory that contains this solution:

    cd terraform-ec2-image-builder-container-hardening-pipeline
AWS DevOps

Update variables.

Update the variables in the hardening-pipeline.tfvars file to match your environment and your desired configuration. You must provide your own account_id, and you should also modify the rest of the variables to fit your desired deployment. All variables are required.

account_id                   = "<DEPLOYMENT-ACCOUNT-ID>"
aws_region                   = "us-east-1"
vpc_name                     = "example-hardening-pipeline-vpc"
kms_key_alias                = "image-builder-container-key"
ec2_iam_role_name            = "example-hardening-instance-role"
hardening_pipeline_role_name = "example-hardening-pipeline-role"
aws_s3_ami_resources_bucket  = "example-hardening-ami-resources-bucket-0123"
image_name                   = "example-hardening-al2-container-image"
ecr_name                     = "example-hardening-container-repo"
recipe_version               = "1.0.0"
ebs_root_vol_size            = 10

Here’s a description of each variable:

  • account_id ‒ The AWS account number that you want to deploy the solution into.

  • aws_region ‒ The AWS Region that you want to deploy the solution into.

  • vpc_name ‒ The name for your VPC infrastructure.

  • kms_key_alias ‒ The AWS KMS key name to be used by the EC2 Image Builder infrastructure configuration.

  • ec2_iam_role_name ‒ The name for the role that will be used as the EC2 instance profile.

  • hardening_pipeline_role_name ‒ The name for the role that will be used to deploy the hardening pipeline.

  • aws_s3_ami_resources_bucket ‒ The name for an S3 bucket that will host all files necessary to build the pipeline and container images.

  • image_name ‒ The container image name. This value must be between 3 and 50 characters, and it can contain alphanumeric characters and hyphens only (a hypothetical validation sketch follows this list).

  • ecr_name ‒ The name of the Amazon ECR repository to store the container images in.

  • recipe_version ‒ The version of the image recipe. The default value is 1.0.0.

  • ebs_root_vol_size ‒ The size (in gigabytes) of the Amazon Elastic Block Store (Amazon EBS) root volume. The default value is 10 gigabytes.
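
The image_name constraint can be enforced at plan time with a Terraform validation block. The following is a hypothetical sketch; the repository may not declare the variable this way:

variable "image_name" {
  type        = string
  description = "The container image name."

  validation {
    # First character alphanumeric, then 2-49 more alphanumerics or hyphens (3-50 total).
    condition     = can(regex("^[a-z0-9][a-z0-9-]{2,49}$", var.image_name))
    error_message = "The image_name value must be 3-50 characters: lowercase alphanumerics and hyphens only."
  }
}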

AWS DevOps

Initialize Terraform.

After you update your variable values, you can initialize the Terraform configuration directory. Initializing a configuration directory downloads and installs the AWS provider, which is defined in the configuration.

terraform init

You should see a message that says Terraform has been successfully initialized and identifies the version of the provider that was installed.

AWS DevOps

Deploy the infrastructure and create a container image.

Use the following command to initialize, validate, and apply the Terraform modules to the environment by using the variables defined in your .tfvars file:

terraform init && terraform validate && terraform apply -var-file *.tfvars -auto-approve
AWS DevOps

Customize the container.

You can create a new version of a container recipe after EC2 Image Builder deploys the pipeline and initial recipe.

You can add any of the 31+ components available within EC2 Image Builder to customize the container build. For more information, see the Components section of Create a new version of a container recipe in the EC2 Image Builder documentation.
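
For example, appending an AWS-managed component is a one-block change inside the aws_imagebuilder_container_recipe resource in recipes.tf. This sketch assumes the aws-cli-version-2-linux managed component; the x.x.x suffix resolves to its latest published version:

# Add inside the recipe resource in recipes.tf:
component {
  component_arn = "arn:aws:imagebuilder:${var.aws_region}:aws:component/aws-cli-version-2-linux/x.x.x"
}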

AWS administrator
Task | Description | Skills required

Validate AWS infrastructure provisioning.

After your first Terraform apply command completes successfully, if you're provisioning locally, you should see the following snippet in your local machine's terminal:

Apply complete! Resources: 43 added, 0 changed, 0 destroyed.
AWS DevOps

Validate individual AWS infrastructure resources.

To validate the individual resources that were deployed, if you’re provisioning locally, you can run the following command:

terraform state list

This command returns a list of 43 resources.

AWS DevOps
Task | Description | Skills required

Remove the infrastructure and container image.

When you’ve finished working with your Terraform configuration, you can run the following command to remove resources:

terraform init && terraform validate && terraform destroy -var-file *.tfvars -auto-approve
AWS DevOps

Troubleshooting

Issue | Solution

Error validating provider credentials

When you run the Terraform apply or destroy command from your local machine, you might encounter an error similar to the following:

Error: configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: operation error STS: GetCallerIdentity, https response error StatusCode: 403, RequestID: 123456a9-fbc1-40ed-b8d8-513d0133ba7f, api error InvalidClientTokenId: The security token included in the request is invalid.

This error is caused by the expiration of the security token for the credentials used in your local machine’s configuration.

To resolve the error, see Set and view configuration settings in the AWS CLI documentation.

Related resources