Package software.amazon.awscdk.pipelines

CDK Pipelines

---

cdk-constructs: Stable


A construct library for painless Continuous Delivery of CDK applications.

This module contains two sets of APIs: an original and a modern version of CDK Pipelines. The modern API has been updated to be easier to work with and customize, and will be the preferred API going forward. The original version of the API is still available for backwards compatibility, but we recommend migrating to the new version if possible.

Compared to the original API, the modern API: has more sensible defaults; is more flexible; supports parallel deployments; supports multiple synth inputs; allows more control of CodeBuild project generation; supports deployment engines other than CodePipeline.

The README for the original API, as well as a migration guide, can be found in our GitHub repository.

At a glance

Deploying your application continuously starts by defining a MyApplicationStage, a subclass of Stage that contains the stacks that make up a single copy of your application.

You then define a Pipeline, instantiate as many instances of MyApplicationStage as you want for your test and production environments, with different parameters for each, and call pipeline.addStage() for each of them. You can deploy to the same account and Region, or to a different one, with the same amount of code. The CDK Pipelines library takes care of the details.

CDK Pipelines supports multiple deployment engines (see below), and comes with a deployment engine that deploys CDK apps using AWS CodePipeline. To use the CodePipeline engine, define a CodePipeline construct. The following example creates a CodePipeline that deploys an application from GitHub:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 /** The stacks for our app are defined in my-stacks.ts.  The internals of these
   * stacks aren't important, except that DatabaseStack exposes an attribute
   * "table" for a database table it defines, and ComputeStack accepts a reference
   * to this table in its properties.
   */
 import static java.util.Arrays.asList;

 import lib.my.stacks.DatabaseStack;
 import lib.my.stacks.ComputeStack;
 import software.amazon.awscdk.core.Construct;
 import software.amazon.awscdk.core.Environment;
 import software.amazon.awscdk.core.Stage;
 import software.amazon.awscdk.core.Stack;
 import software.amazon.awscdk.core.StackProps;
 import software.amazon.awscdk.core.StageProps;
 import software.amazon.awscdk.pipelines.CodePipeline;
 import software.amazon.awscdk.pipelines.CodePipelineSource;
 import software.amazon.awscdk.pipelines.ConnectionSourceOptions;
 import software.amazon.awscdk.pipelines.ShellStep;
 
 /**
  * Stack to hold the pipeline
  */
 public class MyPipelineStack extends Stack {
     public MyPipelineStack(Construct scope, String id) {
         this(scope, id, null);
     }
 
     public MyPipelineStack(Construct scope, String id, StackProps props) {
         super(scope, id, props);
 
         CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
                 .synth(ShellStep.Builder.create("Synth")
                         // Use a connection created using the AWS console to authenticate to GitHub
                         // Other sources are available.
                         .input(CodePipelineSource.connection("my-org/my-app", "main", ConnectionSourceOptions.builder()
                                 .connectionArn("arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41")
                                 .build()))
                         .commands(asList("npm ci", "npm run build", "npx cdk synth"))
                         .build())
                 .build();
 
         // 'MyApplication' is defined below. Call `addStage` as many times as
         // necessary with any account and region (may be different from the
         // pipeline's).
         pipeline.addStage(new MyApplication(this, "Prod", StageProps.builder()
                 .env(Environment.builder()
                         .account("123456789012")
                         .region("eu-west-1")
                         .build())
                 .build()));
     }
 }
 
 /**
  * Your application
  *
  * May consist of one or more Stacks (here, two)
  *
  * By declaring our DatabaseStack and our ComputeStack inside a Stage,
  * we make sure they are deployed together, or not at all.
  */
 public class MyApplication extends Stage {
     public MyApplication(Construct scope, String id) {
         this(scope, id, null);
     }
 
     public MyApplication(Construct scope, String id, StageProps props) {
         super(scope, id, props);
 
         DatabaseStack dbStack = new DatabaseStack(this, "Database");
         ComputeStack.Builder.create(this, "Compute")
                 .table(dbStack.getTable())
                 .build();
     }
 }
 
 // In your main file
 new MyPipelineStack(app, "PipelineStack", StackProps.builder()
         .env(Environment.builder()
                 .account("123456789012")
                 .region("eu-west-1")
                 .build())
         .build());
 

The pipeline is self-mutating, which means that if you add new application stages in the source code, or new stacks to MyApplication, the pipeline will automatically reconfigure itself to deploy those new stages and stacks.

(Note that you have to bootstrap all environments before the above code will work; see the section CDK Environment Bootstrapping below.)

CDK Versioning

This library uses prerelease features of the CDK framework, which can be enabled by adding the following to cdk.json:

 {
   // ...
   "context": {
     "@aws-cdk/core:newStyleStackSynthesis": true
   }
 }
 

Provisioning the pipeline

To provision the pipeline you have defined, make sure the target environment has been bootstrapped (see below), and then deploy the PipelineStack once. Afterwards, the pipeline will keep itself up-to-date.

Important: be sure to git commit and git push before deploying the Pipeline stack using cdk deploy!

The reason is that the pipeline will start deploying and self-mutating right away based on the sources in the repository, so the sources it finds in there should be the ones you want it to find.

Run the following commands to get the pipeline going:

 $ git commit -a
 $ git push
 $ cdk deploy PipelineStack
 

Administrative permissions to the account are only necessary up until this point. We recommend you shed access to these credentials after doing this.

Working on the pipeline

The self-mutation feature of the Pipeline might at times get in the way of the pipeline development workflow. Each change to the pipeline must be pushed to git; otherwise, after the pipeline has been updated using cdk deploy, it will automatically revert to the state found in git.

To make development more convenient, the self-mutation feature can be turned off temporarily, by passing the selfMutation: false property, for example:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 // Modern API
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .selfMutation(false)
         // ...
         .build();
 
 // Original API
 CdkPipeline pipeline = CdkPipeline.Builder.create(this, "Pipeline")
         .selfMutating(false)
         // ...
         .build();
 

Defining the pipeline

This section of the documentation describes the AWS CodePipeline engine, which comes with this library. If you want to use a different deployment engine, read the section Using a different deployment engine below.

Synth and sources

To define a pipeline, instantiate a CodePipeline construct from the @aws-cdk/pipelines module. It takes one argument, a synth step, which is expected to produce the CDK Cloud Assembly as its single output (the contents of the cdk.out directory after running cdk synth). "Steps" are arbitrary actions in the pipeline, typically used to run scripts or commands.

For the synth, use a ShellStep and specify the commands necessary to install dependencies, the CDK CLI, build your project and run cdk synth; the specific commands required will depend on the programming language you are using. For a typical NPM-based project, the synth will look like this:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 // 'source' is the pipeline's repository source, e.g. as returned by one of the
 // CodePipelineSource factory methods (see "CodePipeline Sources" below)
 IFileSetProducer source;
 
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .synth(ShellStep.Builder.create("Synth")
                 .input(source)
                 .commands(asList("npm ci", "npm run build", "npx cdk synth"))
                 .build())
         .build();
 

The pipeline assumes that your ShellStep will produce a cdk.out directory in the root, containing the CDK cloud assembly. If your CDK project lives in a subdirectory, be sure to adjust the primaryOutputDirectory to match:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .synth(ShellStep.Builder.create("Synth")
                 .input(source)
                 .commands(asList("cd mysubdir", "npm ci", "npm run build", "npx cdk synth"))
                 .primaryOutputDirectory("mysubdir/cdk.out")
                 .build())
         .build();
 

The underlying @aws-cdk/aws-codepipeline.Pipeline construct will be produced when app.synth() is called. You can also force it to be produced earlier by calling pipeline.buildPipeline(). After you've called that method, you can inspect the constructs that were produced by accessing the properties of the pipeline object.
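As a sketch (uncompiled, like the other examples here), forcing the build early and then adjusting one of the generated constructs could look like the following; the getPipeline() and getArtifactBucket() accessors are assumptions based on the Java bindings of this module and of @aws-cdk/aws-codepipeline:

```java
 // Example sketch, not compiled. Force the underlying CodePipeline to be
 // produced now instead of waiting for app.synth().
 pipeline.buildPipeline();

 // Inspect (or adjust) the constructs that were generated. For example,
 // delete the artifact bucket together with the pipeline stack.
 software.amazon.awscdk.services.codepipeline.Pipeline innerPipeline = pipeline.getPipeline();
 innerPipeline.getArtifactBucket().applyRemovalPolicy(RemovalPolicy.DESTROY);
```

Calling buildPipeline() is only needed if you want to inspect or modify the generated resources in the same synthesis pass; otherwise the pipeline is built automatically.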

Commands for other languages and package managers

The commands you pass to new ShellStep will be very similar to the commands you run on your own workstation to install dependencies and synth your CDK project. Here are some (non-exhaustive) examples for what those commands might look like in a number of different situations.

For Yarn, the install commands are different:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .synth(ShellStep.Builder.create("Synth")
                 .input(source)
                 .commands(asList("yarn install --frozen-lockfile", "yarn build", "npx cdk synth"))
                 .build())
         .build();
 

For Python projects, remember to install the CDK CLI globally (as there is no package.json to automatically install it for you):

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .synth(ShellStep.Builder.create("Synth")
                 .input(source)
                 .commands(asList("pip install -r requirements.txt", "npm install -g aws-cdk", "cdk synth"))
                 .build())
         .build();
 

For Java projects, remember to install the CDK CLI globally (as there is no package.json to automatically install it for you); the Maven compilation step is automatically executed for you when you run cdk synth:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .synth(ShellStep.Builder.create("Synth")
                 .input(source)
                 .commands(asList("npm install -g aws-cdk", "cdk synth"))
                 .build())
         .build();
 

You can adapt these examples to your own situation.

CodePipeline Sources

In CodePipeline, Sources define where the source of your application lives. When a change to the source is detected, the pipeline will start executing. Source objects can be created by factory methods on the CodePipelineSource class:

GitHub, GitHub Enterprise, BitBucket using a connection

The recommended way of connecting to GitHub or BitBucket is by using a connection. You will first use the AWS Console to authenticate to the source control provider, and then use the connection ARN in your pipeline definition:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipelineSource.connection("org/repo", "branch", ConnectionSourceOptions.builder()
         .connectionArn("arn:aws:codestar-connections:us-east-1:222222222222:connection/7d2469ff-514a-4e4f-9003-5ca4a43cdc41")
         .build());
 

GitHub using OAuth

You can also authenticate to GitHub using a personal access token. This expects that you've created a personal access token and stored it in Secrets Manager. By default, the source object will look for a secret named github-token, but you can change the name. The token should have the repo and admin:repo_hook scopes.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipelineSource.gitHub("org/repo", "branch", GitHubSourceOptions.builder()
         // This is optional
         .authentication(SecretValue.secretsManager("my-token"))
         .build());
 

CodeCommit

You can use a CodeCommit repository as the source. Either create or import the CodeCommit repository and then use CodePipelineSource.codeCommit to reference it:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 IRepository repository = Repository.fromRepositoryName(this, "Repository", "my-repository");
 CodePipelineSource.codeCommit(repository, "main");
 

S3

You can use a zip file in S3 as the source of the pipeline. The pipeline will be triggered every time the file in S3 is changed:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 IBucket bucket = Bucket.fromBucketName(this, "Bucket", "my-bucket");
 CodePipelineSource.s3(bucket, "my/source.zip");
 

Additional inputs

ShellStep allows passing in more than one input: additional inputs will be placed in the directories you specify. Any step that produces an output file set can be used as an input, such as a CodePipelineSource, but also other ShellSteps:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 ShellStep prebuild = ShellStep.Builder.create("Prebuild")
         .input(CodePipelineSource.gitHub("myorg/repo1", "main"))
         .primaryOutputDirectory("./build")
         .commands(asList("./build.sh"))
         .build();
 
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .synth(ShellStep.Builder.create("Synth")
                 .input(CodePipelineSource.gitHub("myorg/repo2", "main"))
                 .additionalInputs(Map.of(
                         "subdir", CodePipelineSource.gitHub("myorg/repo3", "main"),
                         "../siblingdir", prebuild))
                 .commands(asList("./build.sh"))
                 .build())
         .build();
 

CDK application deployments

After you have defined the pipeline and the synth step, you can add one or more CDK Stages which will be deployed to their target environments. To do so, call pipeline.addStage() on the Stage object:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 // Do this as many times as necessary with any account and region
 // (account and region may differ from the pipeline's).
 pipeline.addStage(MyApplicationStage.Builder.create(this, "Prod")
         .env(Environment.builder()
                 .account("123456789012")
                 .region("eu-west-1")
                 .build())
         .build());
 

CDK Pipelines will automatically discover all Stacks in the given Stage object, determine their dependency order, and add appropriate actions to the pipeline to publish the assets referenced in those stacks and deploy the stacks in the right order.

If the Stacks are targeted at an environment in a different AWS account or Region and that environment has been bootstrapped, CDK Pipelines will transparently make sure the IAM roles are set up correctly and any requisite replication Buckets are created.

Deploying in parallel

By default, all applications added to CDK Pipelines by calling addStage() will be deployed in sequence, one after the other. If you have a lot of stages, you can speed up the pipeline by choosing to deploy some stages in parallel. You do this by calling addWave() instead of addStage(): a wave is a set of stages that are all deployed in parallel instead of sequentially. Waves themselves are still deployed in sequence. For example, the following will deploy two copies of your application to eu-west-1 and eu-central-1 in parallel:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Wave europeWave = pipeline.addWave("Europe");
 europeWave.addStage(MyApplicationStage.Builder.create(this, "Ireland")
         .env(Environment.builder().region("eu-west-1").build())
         .build());
 europeWave.addStage(MyApplicationStage.Builder.create(this, "Germany")
         .env(Environment.builder().region("eu-central-1").build())
         .build());
 

Deploying to other accounts / encrypting the Artifact Bucket

CDK Pipelines can transparently deploy to other Regions and other accounts (provided those target environments have been bootstrapped). However, deploying to another account requires one additional piece of configuration: you need to enable crossAccountKeys: true when creating the pipeline.

This will encrypt the artifact bucket(s), but incurs a cost for maintaining the KMS key.

Example:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         // Encrypt artifacts, required for cross-account deployments
         .crossAccountKeys(true)
         .build();
 

Validation

Every addStage() and addWave() command takes additional options. As part of these options, you can specify pre and post steps, which are arbitrary steps that run before or after the contents of the stage or wave, respectively. You can use these to add validations like manual or automated gates to your pipeline. We recommend putting manual approval gates in the set of pre steps, and automated approval gates in the set of post steps.

The following example shows both an automated approval in the form of a ShellStep, and a manual approval in the form of a ManualApprovalStep added to the pipeline. Both must pass in order to promote from the PreProd to the Prod environment:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 MyApplicationStage preprod = MyApplicationStage.Builder.create(this, "PreProd")
         // ...
         .build();
 MyApplicationStage prod = MyApplicationStage.Builder.create(this, "Prod")
         // ...
         .build();
 
 pipeline.addStage(preprod, AddStageOpts.builder()
         .post(asList(
             ShellStep.Builder.create("Validate Endpoint")
                     .commands(asList("curl -Ssf https://my.webservice.com/"))
                     .build()))
         .build());
 pipeline.addStage(prod, AddStageOpts.builder()
         .pre(asList(
             new ManualApprovalStep("PromoteToProd")))
         .build());
 

You can also specify steps to be executed at the stack level. To achieve this, you can specify the stack and step via the stackSteps property:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 pipeline.addStage(prod, AddStageOpts.builder()
         .stackSteps(asList(
             StackSteps.builder()
                     .stack(prod.getStack1())
                     .pre(asList(new ManualApprovalStep("Pre-Stack Check"))) // Executed before stack is prepared
                     .changeSet(asList(new ManualApprovalStep("ChangeSet Approval"))) // Executed after stack is prepared but before the stack is deployed
                     .post(asList(new ManualApprovalStep("Post-Deploy Check")))
                     .build(),
             StackSteps.builder()
                     .stack(prod.getStack2())
                     .post(asList(new ManualApprovalStep("Post-Deploy Check")))
                     .build()))
         .build());
 

Using CloudFormation Stack Outputs in approvals

Because many CloudFormation deployments result in the generation of resources with unpredictable names, validations have support for reading back CloudFormation Outputs after a deployment. This makes it possible to pass (for example) the generated URL of a load balancer to the test set.

To use Stack Outputs, expose the CfnOutput object you're interested in, and pass it to envFromCfnOutputs of the ShellStep:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 public class MyApplicationStage extends Stage {
     // Expose the CfnOutput of interest as a property of the Stage
     public final CfnOutput loadBalancerAddress;
     // ... constructor that defines the stacks and assigns loadBalancerAddress ...
 }
 
 MyApplicationStage lbApp = new MyApplicationStage(this, "MyApp");
 pipeline.addStage(lbApp, AddStageOpts.builder()
         .post(asList(
             ShellStep.Builder.create("HitEndpoint")
                     .envFromCfnOutputs(Map.of(
                             // Make the load balancer address available as $URL inside the commands
                             "URL", lbApp.loadBalancerAddress))
                     .commands(asList("curl -Ssf $URL"))
                     .build()))
         .build());
 

Running scripts compiled during the synth step

As part of a validation, you probably want to run a test suite that's more elaborate than what can be expressed in a couple of lines of shell script. You can bring additional files into the shell script validation by supplying the input or additionalInputs property of ShellStep. The input can be produced by the Synth step, or come from a source or any other build step.

Here's an example that captures an additional output directory in the synth step and runs tests from there:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 ShellStep synth = ShellStep.Builder.create("Synth")
         // ... input and commands as shown earlier ...
         .build();
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline").synth(synth).build();
 
 // 'stage' is an application stage (Stage subclass) defined elsewhere
 pipeline.addStage(stage, AddStageOpts.builder()
         .post(asList(
             ShellStep.Builder.create("Approve")
                     // Use the contents of the 'integ' directory from the synth step as the input
                     .input(synth.addOutputDirectory("integ"))
                     .commands(asList("cd integ && ./run.sh"))
                     .build()))
         .build());
 

Customizing CodeBuild Projects

CDK pipelines will generate CodeBuild projects for each ShellStep you use, and it will also generate CodeBuild projects to publish assets and perform the self-mutation of the pipeline. To control the various aspects of the CodeBuild projects that get generated, use a CodeBuildStep instead of a ShellStep. This class has a number of properties that allow you to customize various aspects of the projects:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodeBuildStep.Builder.create("Synth")
         // ...standard ShellStep props...
         .commands(asList())
         .env(Map.of())
 
         // If you are using a CodeBuildStep explicitly, set the 'cdk.out' directory
         // to be the synth step's output.
         .primaryOutputDirectory("cdk.out")
 
         // Control the name of the project
         .projectName("MyProject")
 
         // Control parts of the BuildSpec other than the regular 'build' and 'install' commands
         .partialBuildSpec(BuildSpec.fromObject(Map.of(
                 "version", "0.2")))
 
         // Control the build environment
         .buildEnvironment(BuildEnvironment.builder()
                 .computeType(ComputeType.LARGE)
                 .build())
 
         // Control Elastic Network Interface creation
         .vpc(vpc)
         .subnetSelection(SubnetSelection.builder().subnetType(SubnetType.PRIVATE).build())
         .securityGroups(asList(mySecurityGroup))
 
         // Additional policy statements for the execution role
         .rolePolicyStatements(asList(
             PolicyStatement.Builder.create().build()))
         .build();
 

You can also configure defaults for all CodeBuild projects by passing codeBuildDefaults, or just for the synth, asset publishing, and self-mutation projects by passing synthCodeBuildDefaults, assetPublishingCodeBuildDefaults, or selfMutationCodeBuildDefaults:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline.Builder.create(this, "Pipeline")
         // ...
 
         // Defaults for all CodeBuild projects
         .codeBuildDefaults(CodeBuildOptions.builder()
                 // Prepend commands and configuration to all projects
                 .partialBuildSpec(BuildSpec.fromObject(Map.of(
                         "version", "0.2")))
 
                 // Control the build environment
                 .buildEnvironment(BuildEnvironment.builder()
                         .computeType(ComputeType.LARGE)
                         .build())
 
                 // Control Elastic Network Interface creation
                 .vpc(vpc)
                 .subnetSelection(SubnetSelection.builder().subnetType(SubnetType.PRIVATE).build())
                 .securityGroups(asList(mySecurityGroup))
 
                 // Additional policy statements for the execution role
                 .rolePolicy(asList(
                     PolicyStatement.Builder.create().build()))
                 .build())
 
         .synthCodeBuildDefaults(CodeBuildOptions.builder().build())
         .assetPublishingCodeBuildDefaults(CodeBuildOptions.builder().build())
         .selfMutationCodeBuildDefaults(CodeBuildOptions.builder().build())
         .build();
 

Arbitrary CodePipeline actions

If you want to add a type of CodePipeline action to the CDK Pipeline that doesn't have a matching class yet, you can define your own step class that extends Step and implements ICodePipelineActionFactory.

Here's an example that adds a Jenkins step:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 public class MyJenkinsStep extends Step implements ICodePipelineActionFactory {
     private final JenkinsProvider provider;
     private final FileSet input;
 
     public MyJenkinsStep(JenkinsProvider provider, FileSet input) {
         super("MyJenkinsStep");
         this.provider = provider;
         this.input = input;
     }
 
     @Override
     public CodePipelineActionFactoryResult produceAction(IStage stage, ProduceActionOptions options) {
 
         // This is where you control what type of Action gets added to the
         // CodePipeline
         stage.addAction(JenkinsAction.Builder.create()
                 // Copy 'actionName' and 'runOrder' from the options
                 .actionName(options.getActionName())
                 .runOrder(options.getRunOrder())
 
                 // Jenkins-specific configuration
                 .type(JenkinsActionType.TEST)
                 .jenkinsProvider(this.provider)
                 .projectName("MyJenkinsProject")
 
                 // Translate the FileSet into a codepipeline.Artifact
                 .inputs(asList(options.getArtifacts().toCodePipeline(this.input)))
                 .build());
 
         return CodePipelineActionFactoryResult.builder().runOrdersConsumed(1).build();
     }
 }
 

Using Docker in the pipeline

Docker can be used in 3 different places in the pipeline:

- If your application stages use Docker image assets, Docker will run in the asset publishing projects.
- If your Pipeline stack itself uses Docker image assets, Docker will run in the self-mutation project.
- If your synth step uses bundled file assets, Docker will run in the synth project.

For the first case, you don't need to do anything special. For the other two cases, you need to make sure that privileged mode is enabled on the correct CodeBuild projects, so that Docker can run correctly. The following sections describe how to do that.

You may also need to authenticate to Docker registries to avoid being throttled. See the section Authenticating to Docker registries below for information on how to do that.

Using Docker image assets in the pipeline

If your PipelineStack is using Docker image assets (as opposed to the application stacks the pipeline is deploying), for example by the use of LinuxBuildImage.fromAsset(), you need to pass dockerEnabledForSelfMutation: true to the pipeline. For example:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         // ...
 
         // Turn this on because the pipeline uses Docker image assets
         .dockerEnabledForSelfMutation(true)
         .build();
 
 pipeline.addWave("MyWave", WaveOptions.builder()
         .post(asList(
             CodeBuildStep.Builder.create("RunApproval")
                     .commands(asList("command-from-image"))
                     .buildEnvironment(BuildEnvironment.builder()
                             // The use of a Docker image asset in the pipeline requires turning on
                             // 'dockerEnabledForSelfMutation'.
                             .buildImage(LinuxBuildImage.fromAsset(this, "Image", DockerImageAssetProps.builder()
                                     .directory("./docker-image")
                                     .build()))
                             .build())
                     .build()))
         .build());
 

Important: You must turn on the dockerEnabledForSelfMutation flag, commit and allow the pipeline to self-update before adding the actual Docker asset.

Using bundled file assets

If you are using asset bundling anywhere (such as automatically done for you if you add a construct like @aws-cdk/aws-lambda-nodejs), you need to pass dockerEnabledForSynth: true to the pipeline. For example:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         // ...
 
         // Turn this on because the application uses bundled file assets
         .dockerEnabledForSynth(true)
         .build();
 

Important: You must turn on the dockerEnabledForSynth flag, commit and allow the pipeline to self-update before adding the actual Docker asset.

Authenticating to Docker registries

You can specify credentials to use for authenticating to Docker registries as part of the pipeline definition. This can be useful if any Docker image assets — in the pipeline or any of the application stages — require authentication, either due to being in a different environment (e.g., ECR repo) or to avoid throttling (e.g., DockerHub).

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 ISecret dockerHubSecret = Secret.fromSecretCompleteArn(this, "DHSecret", "arn:aws:...");
 ISecret customRegSecret = Secret.fromSecretCompleteArn(this, "CRSecret", "arn:aws:...");
 IRepository repo1 = Repository.fromRepositoryArn(this, "Repo1", "arn:aws:ecr:eu-west-1:0123456789012:repository/Repo1");
 IRepository repo2 = Repository.fromRepositoryArn(this, "Repo2", "arn:aws:ecr:eu-west-1:0123456789012:repository/Repo2");
 
 CodePipeline pipeline = CodePipeline.Builder.create(this, "Pipeline")
         .dockerCredentials(asList(
             DockerCredential.dockerHub(dockerHubSecret),
             DockerCredential.customRegistry("dockerregistry.example.com", customRegSecret),
             DockerCredential.ecr(asList(repo1, repo2))))
         .build();
 

For authenticating to Docker registries that require a username and password combination (like DockerHub), create a Secrets Manager Secret with fields named username and secret, and import it (the field names can be customized).

Authentication to ECR repositories is done using the execution role of the relevant CodeBuild job. Both types of credentials can be provided with an optional role to assume before requesting the credentials.
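For example, assuming a role before the credentials are fetched might look like the following sketch; myDockerRole is a hypothetical pre-existing IAM role, and the assumeRole option on ExternalDockerCredentialOptions is an assumption based on this module's API:

```java
 // Example sketch, not compiled. 'myDockerRole' is a hypothetical role that
 // has read access to the DockerHub secret.
 ISecret dockerHubSecret = Secret.fromSecretCompleteArn(this, "DHSecret", "arn:aws:...");
 IRole myDockerRole = Role.fromRoleArn(this, "DockerRole",
         "arn:aws:iam::123456789012:role/MyDockerRole");

 // Assume 'myDockerRole' before reading the DockerHub secret
 DockerCredential creds = DockerCredential.dockerHub(dockerHubSecret, ExternalDockerCredentialOptions.builder()
         .assumeRole(myDockerRole)
         .build());
```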

By default, the Docker credentials provided to the pipeline will be available to the Synth, Self-Update, and Asset Publishing actions within the pipeline. The scope of the credentials can be limited via the DockerCredentialUsage option.

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 ISecret dockerHubSecret = Secret.fromSecretCompleteArn(this, "DHSecret", "arn:aws:...");
 // Only the image asset publishing actions will be granted read access to the secret.
 DockerCredential creds = DockerCredential.dockerHub(dockerHubSecret, ExternalDockerCredentialOptions.builder()
         .usages(asList(DockerCredentialUsage.ASSET_PUBLISHING))
         .build());
 

CDK Environment Bootstrapping

An environment is an (account, region) pair where you want to deploy a CDK stack (see Environments in the CDK Developer Guide). In a Continuous Deployment pipeline, there are at least two environments involved: the environment where the pipeline is provisioned, and the environment where you want to deploy the application (or different stages of the application). These can be the same, though best practices recommend you isolate your different application stages from each other in different AWS accounts or regions.

Before you can provision the pipeline, you have to bootstrap the environment you want to create it in. If you are deploying your application to different environments, you also have to bootstrap those and be sure to add a trust relationship.

After you have bootstrapped an environment and created a pipeline that deploys to it, it's important that you don't delete the stack or change its Qualifier, or future deployments to this environment will fail. If you want to upgrade the bootstrap stack to a newer version, do that by updating it in-place.

This library requires the modern bootstrapping stack, which has been updated specifically to support cross-account continuous delivery. Starting in CDK v2, this new bootstrapping stack will become the default, but for now it is still opt-in.

The commands below assume you are running cdk bootstrap in a directory where cdk.json contains the "@aws-cdk/core:newStyleStackSynthesis": true setting in its context, which will switch to the new bootstrapping stack automatically.

If run from another directory, be sure to run the bootstrap command with the environment variable CDK_NEW_BOOTSTRAP=1 set.

To bootstrap an environment for provisioning the pipeline:

 $ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
     [--profile admin-profile-1] \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     aws://111111111111/us-east-1
 

To bootstrap a different environment into which CDK applications will be deployed by a pipeline in account 111111111111:

 $ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
     [--profile admin-profile-2] \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     --trust 111111111111 \
     aws://222222222222/us-east-2
 

If you only want to trust an account to do lookups (e.g., when your CDK application has a Vpc.fromLookup() call), use the option --trust-for-lookup:

 $ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
     [--profile admin-profile-2] \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     --trust-for-lookup 111111111111 \
     aws://222222222222/us-east-2
 

Be aware that anyone who has access to the trusted accounts effectively has all permissions conferred by the configured CloudFormation execution policies, allowing them to do things like read arbitrary S3 buckets and create arbitrary infrastructure in the bootstrapped account. Restrict the list of --trust accounts, or restrict the policies configured by --cloudformation-execution-policies.


Security tip: we recommend that you use administrative credentials for an account only to bootstrap it and provision the initial pipeline. Afterwards, access to administrative credentials should be dropped as soon as possible.


On the use of AdministratorAccess: The use of the AdministratorAccess policy ensures that your pipeline can deploy every type of AWS resource to your account. Make sure you trust all the code and dependencies that make up your CDK app. Check with the appropriate department within your organization to decide on the proper policy to use.

If your policy includes permissions to create or attach permissions to a role, developers can escalate their privileges by attaching more permissive policies. We therefore recommend implementing a permissions boundary in the CDK execution role. To do this, bootstrap with the --template option, using a customized template that contains a permissions boundary.

Migrating from old bootstrap stack

The bootstrap stack is a CloudFormation stack in your account named CDKToolkit that provisions a set of resources required for the CDK to deploy into that environment.

The "new" bootstrap stack (obtained by running cdk bootstrap with CDK_NEW_BOOTSTRAP=1) is slightly more elaborate than the "old" stack. It contains:

It is possible and safe to migrate from the old bootstrap stack to the new bootstrap stack. This will create a new S3 file asset bucket in your account and orphan the old bucket. You should manually delete the orphaned bucket after you are sure you have redeployed all CDK applications and there are no more references to the old asset bucket.
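
Migration amounts to re-running cdk bootstrap with the new-style flag against the environment that is already bootstrapped; the account and Region below are placeholders:

 $ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
     --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
     aws://111111111111/us-east-1
 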

Context Lookups

You might be using CDK constructs that need to look up runtime context, which is information from the target AWS Account and Region that the CDK needs to synthesize CloudFormation templates appropriate for that environment. Examples of this kind of context lookup are the number of Availability Zones available to you, a Route53 Hosted Zone ID, or the ID of an AMI in a given region. This information is automatically looked up when you run cdk synth.
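
For instance, importing an existing VPC triggers such a lookup. The following is a sketch; the VPC name is illustrative:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Object vpc = ec2.Vpc.fromLookup(this, "Vpc", VpcLookupOptions.builder()
         .vpcName("my-shared-vpc")
         .build());
 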

By default, a cdk synth performed in a pipeline will not have permissions to perform these lookups, and the lookups will fail. This is by design.

Our recommended way of using lookups is by running cdk synth on the developer workstation and checking in the cdk.context.json file, which contains the results of the context lookups. This will make sure your synthesized infrastructure is consistent and repeatable. If you do not commit cdk.context.json, the results of the lookups may suddenly be different in unexpected ways, and even produce results that cannot be deployed or will cause data loss. To give an account permissions to perform lookups against an environment, without being able to deploy to it and make changes, run cdk bootstrap --trust-for-lookup=<account>.
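
The recommended workflow can be sketched as follows, run on the developer workstation (the commit message is illustrative):

 $ npx cdk synth
 $ git add cdk.context.json
 $ git commit -m "Cache context lookup results"
 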

If you want to use lookups directly from the pipeline, you either need to accept the risk of nondeterminism, or make sure you save and load the cdk.context.json file somewhere between synth runs. Finally, you should give the synth CodeBuild execution role permissions to assume the bootstrapped lookup roles. As an example, doing so would look like this:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 CodePipeline.Builder.create(this, "Pipeline")
         .synth(CodeBuildStep.Builder.create("Synth")
                 .input(...)
                 .commands(asList("npm ci", "npm run build", "npx cdk synth"))
                 .rolePolicyStatements(asList(
                     new PolicyStatement(new PolicyStatementProps()
                             .actions(asList("sts:AssumeRole"))
                             .resources(asList("*"))
                             .conditions(Map.of(
                                     "StringEquals", Map.of(
                                             "iam:ResourceTag/aws-cdk:bootstrap-role", "lookup"))))))
                 .build())
         .build();
 

The above example requires that the target environments have all been bootstrapped with bootstrap stack version 8, released with CDK CLI 1.114.0.

Security Considerations

It's important to stay safe while employing Continuous Delivery. The CDK Pipelines library comes with secure defaults to the best of our ability, but by its very nature the library cannot take care of everything.

We therefore expect you to mind the following:

Confirm permissions broadening

To keep tabs on the security impact of changes going out through your pipeline, you can insert a security check before any stage deployment. This security check will check if the upcoming deployment would add any new IAM permissions or security group rules, and if so pause the pipeline and require you to confirm the changes.

The security check will appear as two distinct actions in your pipeline: first a CodeBuild project that runs cdk diff on the stage that's about to be deployed, followed by a Manual Approval action that pauses the pipeline. If no new IAM permissions or security group rules would be added by the deployment, the manual approval step is automatically satisfied. The pipeline will look like this:

 Pipeline
 ├── ...
 ├── MyApplicationStage
 │    ├── MyApplicationSecurityCheck       // Security Diff Action
 │    ├── MyApplicationManualApproval      // Manual Approval Action
 │    ├── Stack.Prepare
 │    └── Stack.Deploy
 └── ...
 

You can insert the security check by using a ConfirmPermissionsBroadening step:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Object stage = new MyApplicationStage(this, "MyApplication");
 pipeline.addStage(stage, Map.of(
         "pre", asList(
             ConfirmPermissionsBroadening.Builder.create("Check").stage(stage).build())));
 

To get notified when there is a change that needs your manual approval, create an SNS Topic, subscribe your own email address, and pass it in as the notificationTopic property:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 import software.amazon.awscdk.services.sns.*;
 import software.amazon.awscdk.services.sns.subscriptions.*;
 import software.amazon.awscdk.pipelines.*;
 
 Topic topic = new Topic(this, "SecurityChangesTopic");
 topic.addSubscription(new EmailSubscription("test@email.com"));
 
 Object stage = new MyApplicationStage(this, "MyApplication");
 pipeline.addStage(stage, Map.of(
         "pre", asList(
             ConfirmPermissionsBroadening.Builder.create("Check")
                     .stage(stage)
                     .notificationTopic(topic)
                     .build())));
 

Note: Manual Approval notifications only apply when an application has the security check enabled.

Troubleshooting

Here are some common errors you may encounter while using this library.

Pipeline: Internal Failure

If you see the following error during deployment of your pipeline:

 CREATE_FAILED  | AWS::CodePipeline::Pipeline | Pipeline/Pipeline
 Internal Failure
 

There's something wrong with your GitHub access token. It might be missing, or it might not have the right permissions for the repository you're trying to access.

Key: Policy contains a statement with one or more invalid principals

If you see the following error during deployment of your pipeline:

 CREATE_FAILED | AWS::KMS::Key | Pipeline/Pipeline/ArtifactsBucketEncryptionKey
 Policy contains a statement with one or more invalid principals.
 

One of the target (account, region) environments has not been bootstrapped with the new bootstrap stack. Check your target environments and make sure they are all bootstrapped.

Message: no matching base directory path found for cdk.out

If you see this error during the Synth step, it means that CodeBuild is expecting to find a cdk.out directory in the root of your CodeBuild project, but the directory wasn't there. There are two common causes for this:

is in ROLLBACK_COMPLETE state and can not be updated

If you see the following error during execution of your pipeline:

 Stack ... is in ROLLBACK_COMPLETE state and can not be updated. (Service:
 AmazonCloudFormation; Status Code: 400; Error Code: ValidationError; Request
 ID: ...)
 

The stack failed its previous deployment, and is in a non-retryable state. Go into the CloudFormation console, delete the stack, and retry the deployment.

Cannot find module 'xxxx' or its corresponding type declarations

You may see this if you are using TypeScript or other NPM-based languages, when using NPM 7 on your workstation (where you generate package-lock.json) and NPM 6 on the CodeBuild image used for synthesizing.

It looks like NPM 7 has started writing less information to package-lock.json, leading NPM 6, when reading that same file, to no longer install all required packages.

Make sure you are using the same NPM version everywhere: either downgrade your workstation's version or upgrade the version on the CodeBuild image.
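
One way to align the versions is to upgrade NPM in the synth step's install phase. The following is a sketch using the modern API's installCommands property; the NPM version is illustrative:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Object synth = ShellStep.Builder.create("Synth")
         .input(...)
         .installCommands(asList("npm install -g npm@7"))
         .commands(asList("npm ci", "npm run build", "npx cdk synth"))
         .build();
 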

Cannot find module '.../check-node-version.js' (MODULE_NOT_FOUND)

The above error may be produced by npx when executing the CDK CLI, or any project that uses the AWS SDK for JavaScript, without the target application having been installed yet. For example, it can be triggered by npx cdk synth if aws-cdk is not in your package.json.

Work around this either by installing the target application using NPM before running npx, or by setting the environment variable NPM_CONFIG_UNSAFE_PERM=true.

Cannot connect to the Docker daemon at unix:///var/run/docker.sock

If, in the 'Synth' action (inside the 'Build' stage) of your pipeline, you get an error like this:

 stderr: docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
 See 'docker run --help'.
 

It means that the AWS CodeBuild project for 'Synth' is not configured to run in privileged mode, which prevents Docker builds from happening. This typically happens if you use a CDK construct that bundles assets using tools that run via Docker, like aws-lambda-nodejs, aws-lambda-python, aws-lambda-go, and others.

Make sure you set the privileged environment variable to true in the synth definition:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Object pipeline = CdkPipeline.Builder.create(this, "MyPipeline")
         // ...
         .synthAction(SimpleSynthAction.standardNpmSynth(Map.of(
                 "sourceArtifact", ...,
                 "cloudAssemblyArtifact", ...,
                 "environment", Map.of(
                         "privileged", true))))
         .build();
 

After turning on privilegedMode: true, you will need to do a one-time manual cdk deploy of your pipeline to get it going again (as with a broken 'synth' the pipeline will not be able to self update to the right state).
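
In the modern API, the equivalent switch is the buildEnvironment property of a CodeBuildStep. The following is a sketch under that assumption; the input and commands are placeholders:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Object synth = CodeBuildStep.Builder.create("Synth")
         .input(...)
         .commands(asList("npm ci", "npm run build", "npx cdk synth"))
         .buildEnvironment(BuildEnvironment.builder()
                 .privileged(true)
                 .build())
         .build();
 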

S3 error: Access Denied

An "S3 Access Denied" error can have two causes:

Self-mutation step has been removed

Some constructs, such as EKS clusters, generate nested stacks. When CloudFormation tries to deploy those stacks, it may fail with this error:

 S3 error: Access Denied For more information check http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
 

This happens because the pipeline is not self-mutating and, as a consequence, the FileAssetX build projects get out-of-sync with the generated templates. To fix this, make sure the selfMutating property is set to true:

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 Object pipeline = CdkPipeline.Builder.create(this, "MyPipeline")
         .selfMutating(true)
         // ...
         .build();
 

Bootstrap roles have been renamed or recreated

While attempting to deploy an application stage, the "Prepare" or "Deploy" stage may fail with a cryptic error like:

Action execution failed Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 0123456ABCDEFGH; S3 Extended Request ID: 3hWcrVkhFGxfiMb/rTJO0Bk7Qn95x5ll4gyHiFsX6Pmk/NT+uX9+Z1moEcfkL7H3cjH7sWZfeD0=; Proxy: null)

This generally indicates that the roles necessary to deploy have been deleted (or deleted and re-created); for example, if the bootstrap stack has been deleted and re-created, this scenario will happen. Under the hood, the resources that rely on these roles (e.g., cdk-$qualifier-deploy-role-$account-$region) point to different canonical IDs than the recreated versions of these roles, which causes the errors. There are no simple solutions to this issue, and for that reason we strongly recommend that bootstrap stacks not be deleted and re-created once created.

The most automated way to solve the issue is to introduce a secondary bootstrap stack. By changing the qualifier that the pipeline stack looks for, a change will be detected and the impacted policies and resources will be updated. A hypothetical recovery workflow would look something like this:

 $ env CDK_NEW_BOOTSTRAP=1 npx cdk bootstrap \
     --qualifier randchars1234 \
     --toolkit-stack-name CDKToolkitTemp \
     aws://111111111111/us-east-1
 

 // Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
 new MyStack(this, "MyStack", new StackProps()
         // Update this qualifier to match the one used above.
         .synthesizer(DefaultStackSynthesizer.Builder.create()
                 .qualifier("randchars1234")
                 .build()));
 

Manual Alternative

Alternatively, the errors can be resolved by finding each impacted resource and policy, and correcting the policies by replacing the canonical IDs (e.g., AROAYBRETNYCYV6ZF2R93) with the appropriate ARNs. As an example, the KMS encryption key policy for the artifacts bucket may have a statement that looks like the following:

 {
   "Effect" : "Allow",
   "Principal" : {
     // "AWS" : "AROAYBRETNYCYV6ZF2R93"  // Indicates this issue; replace this value
    "AWS": "arn:aws:iam::0123456789012:role/cdk-hnb659fds-deploy-role-0123456789012-eu-west-1" // Correct value
   },
   "Action" : [ "kms:Decrypt", "kms:DescribeKey" ],
   "Resource" : "*"
 }
 

Any resource or policy that references the qualifier (hnb659fds by default) will need to be updated.

Known Issues

There are some usability issues that are caused by the underlying technology and cannot be remedied by CDK at this point. They are reproduced here for completeness.
