User Guide (API Version 2015-07-09)

Troubleshooting CodePipeline

The following information might help you troubleshoot common issues in AWS CodePipeline.


Pipeline Error: A pipeline configured with AWS Elastic Beanstalk returns an error message: "Deployment failed. The provided role does not have sufficient permissions: Service:AmazonElasticLoadBalancing"

Problem: The service role for CodePipeline does not have sufficient permissions for AWS Elastic Beanstalk, including, but not limited to, some operations in Elastic Load Balancing. The service role for CodePipeline was updated on August 6, 2015 to address this issue. Customers who created their service role before this date must modify the policy statement for their service role to add the required permissions.

Possible fixes: The easiest solution is to copy the updated policy statement for service roles from Review the Default CodePipeline Service Role Policy, edit your service role, and overwrite the old policy statement with the current policy. This portion of the policy statement applies to Elastic Beanstalk:

    {
        "Action": [
            "elasticbeanstalk:*",
            "ec2:*",
            "elasticloadbalancing:*",
            "autoscaling:*",
            "cloudwatch:*",
            "s3:*",
            "sns:*",
            "cloudformation:*",
            "rds:*",
            "sqs:*",
            "ecs:*",
            "iam:PassRole"
        ],
        "Resource": "*",
        "Effect": "Allow"
    },

After you apply the edited policy, follow the steps in Start a Pipeline Manually in AWS CodePipeline to manually rerun any pipelines that use Elastic Beanstalk.

Depending on your security needs, you can modify the permissions in other ways, too.

Deployment Error: A pipeline configured with an AWS Elastic Beanstalk deploy action hangs instead of failing if the "DescribeEvents" permission is missing

Problem: The service role for CodePipeline must include the "elasticbeanstalk:DescribeEvents" action for any pipelines that use AWS Elastic Beanstalk. Without this permission, AWS Elastic Beanstalk deploy actions hang without failing or indicating an error. If this action is missing from your service role, then CodePipeline does not have permissions to run the pipeline deployment stage in AWS Elastic Beanstalk on your behalf.

Possible fixes: Review your CodePipeline service role. If the "elasticbeanstalk:DescribeEvents" action is missing, use the steps in Add Permissions for Other AWS Services to add it using the Edit Policy feature in the IAM console.
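As a quick offline check, the following Python sketch scans a policy document for the required action, honoring wildcards such as "elasticbeanstalk:*". The policy shown here is a hypothetical example; in practice you would retrieve your service role's policy from IAM first.

```python
import fnmatch
import json

def allows_action(policy, action):
    """Return True if any Allow statement matches the action (wildcards included)."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(fnmatch.fnmatch(action.lower(), pattern.lower()) for pattern in actions):
            return True
    return False

# Hypothetical service role policy that is missing DescribeEvents.
policy = json.loads("""
{
    "Statement": [
        {
            "Action": ["elasticbeanstalk:CreateApplicationVersion",
                       "elasticbeanstalk:DescribeEnvironments"],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
""")

print(allows_action(policy, "elasticbeanstalk:DescribeEvents"))  # prints False
```

A policy that grants "elasticbeanstalk:*" would pass this check, because the wildcard pattern matches the DescribeEvents action.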

After you apply the edited policy, follow the steps in Start a Pipeline Manually in AWS CodePipeline to manually rerun any pipelines that use Elastic Beanstalk.

For more information about the default service role, see Review the Default CodePipeline Service Role Policy.

Pipeline Error: A source action returns the insufficient permissions message: "Could not access the CodeCommit repository repository-name. Make sure that the pipeline IAM role has sufficient permissions to access the repository."

Problem: The service role for CodePipeline does not have sufficient permissions for CodeCommit and likely was created before support for using CodeCommit repositories was added on April 18, 2016. Customers who created their service role before this date must modify the policy statement for their service role to add the required permissions.

Possible fixes: Add the required permissions for CodeCommit to your CodePipeline service role's policy. For more information, see Add Permissions for Other AWS Services.

Pipeline Error: A Jenkins build or test action runs for a long time and then fails due to lack of credentials or permissions

Problem: If the Jenkins server is installed on an Amazon EC2 instance, the instance might not have been created with an instance role that has the permissions required for CodePipeline. If you are using an IAM user on a Jenkins server, an on-premises instance, or an Amazon EC2 instance created without the required IAM role, the IAM user either does not have the required permissions, or the Jenkins server cannot access those credentials through the profile configured on the server.

Possible fixes: Make sure that the Amazon EC2 instance role or IAM user is configured with the AWSCodePipelineCustomActionAccess managed policy or with equivalent permissions. For more information, see AWS Managed (Predefined) Policies for CodePipeline.

If you are using an IAM user, make sure the AWS profile configured on the instance uses the IAM user configured with the correct permissions. You might have to provide the IAM user credentials you configured for integration between Jenkins and CodePipeline directly into the Jenkins UI. This is not a recommended best practice. If you must do so, be sure the Jenkins server is secured and uses HTTPS instead of HTTP.

Pipeline Error: My GitHub source stage contains Git submodules, but CodePipeline doesn't initialize them

Problem: CodePipeline does not support Git submodules. CodePipeline relies on the archive link API from GitHub, which does not support submodules.

Possible fixes: Consider cloning the GitHub repository directly as part of a separate script. For example, you could include a clone action in a Jenkins script.

Pipeline Error: I receive a pipeline error that includes the following message: "PermissionError: Could not access the GitHub repository"

Problem: CodePipeline uses OAuth tokens to integrate with GitHub. You might have revoked the permissions of the OAuth token for CodePipeline. Additionally, the number of tokens is limited (see the GitHub documentation for details). If CodePipeline reaches that limit, older tokens will stop working, and actions in pipelines that rely upon that token will fail.

Possible fixes: Check to see if the permission for CodePipeline has been revoked. Sign in to GitHub, go to Applications, and choose Authorized OAuth Apps. If you do not see CodePipeline in the list, open the CodePipeline console, edit the pipeline, and choose Connect to GitHub to restore the authorization.

If CodePipeline is present in the list of authorized applications for GitHub, you might have exceeded the allowed number of tokens. To fix this issue, you can manually configure one OAuth token as a personal access token, and then configure all pipelines in your AWS account to use that token.

It is a security best practice to rotate your personal access token on a regular basis. For more information, see Use GitHub and the CodePipeline CLI to Create and Rotate Your GitHub Personal Access Token on a Regular Basis.

To configure a pipeline to use a personal access token from GitHub

  1. Sign in to your GitHub account and follow the instructions in the GitHub documentation for creating a personal access token. Make sure the token is configured to have the following GitHub scopes: admin:repo_hook and repo. Copy the token.

  2. At a terminal (Linux, macOS, or Unix) or command prompt (Windows), run the get-pipeline command on the pipeline where you want to change the OAuth token, and then copy the output of the command to a JSON file. For example, for a pipeline named MyFirstPipeline, you would type something similar to the following:

    aws codepipeline get-pipeline --name MyFirstPipeline >pipeline.json

    The output of the command is sent to the pipeline.json file.

  3. Open the file in a plain-text editor and edit the value of the OAuthToken field in the configuration of your GitHub action. Replace the asterisks (****) with the token you copied from GitHub. For example:

    "configuration": {
        "Owner": "MyGitHubUserName",
        "Repo": "test-repo",
        "Branch": "master",
        "OAuthToken": "Replace the **** with your personal access token"
    },
  4. If you are working with the pipeline structure retrieved using the get-pipeline command, you must remove the metadata section from the JSON file, or the update-pipeline command will not be able to use it. Remove the "metadata": { } section from the pipeline structure, including the "pipelineArn", "created", and "updated" fields within it.

    For example, remove the following lines from the structure:

    "metadata": {
        "pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
        "created": "date",
        "updated": "date"
    }
  5. Save the file, and then run the update-pipeline command with the --cli-input-json parameter to specify the JSON file you just edited. For example, to update a pipeline named MyFirstPipeline, you would type something similar to the following:


    Be sure to include file:// before the file name. It is required in this command.

    aws codepipeline update-pipeline --cli-input-json file://pipeline.json
  6. Repeat steps 2 through 5 for every pipeline that contains a GitHub action.

  7. When you have finished updating your pipelines, delete the .json files used to update those pipelines.
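The JSON edits in steps 3 and 4 can also be scripted if you have many pipelines to update. The following Python sketch uses a hypothetical, simplified pipeline structure and a placeholder token value:

```python
import json

# Hypothetical, simplified structure similar to `aws codepipeline get-pipeline` output.
doc = {
    "pipeline": {
        "name": "MyFirstPipeline",
        "stages": [
            {
                "name": "Source",
                "actions": [
                    {
                        "name": "Source",
                        "actionTypeId": {"category": "Source", "owner": "ThirdParty",
                                         "provider": "GitHub", "version": "1"},
                        "configuration": {"Owner": "MyGitHubUserName", "Repo": "test-repo",
                                          "Branch": "master", "OAuthToken": "****"},
                    }
                ],
            }
        ],
    },
    "metadata": {"pipelineArn": "arn:aws:codepipeline:region:account-ID:pipeline-name",
                 "created": "date", "updated": "date"},
}

new_token = "ghp_examplePlaceholder"  # placeholder, not a real token

# Step 3: replace the masked OAuthToken in every GitHub source action.
for stage in doc["pipeline"]["stages"]:
    for action in stage.get("actions", []):
        if action.get("actionTypeId", {}).get("provider") == "GitHub":
            action["configuration"]["OAuthToken"] = new_token

# Step 4: drop the metadata section so update-pipeline accepts the file.
doc.pop("metadata", None)

print(json.dumps(doc, indent=4))
```

You would then write the result back to pipeline.json and run the update-pipeline command as in step 5.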

Pipeline Error: A pipeline created in one AWS region using a bucket created in another AWS region returns an "InternalError" with the code "JobFailed"

Problem: The download of an artifact stored in an Amazon S3 bucket will fail if the pipeline and bucket are created in different AWS regions.

Possible fixes: Make sure the Amazon S3 bucket where your artifact is stored is in the same AWS region as the pipeline you have created.

Deployment Error: A ZIP file that contains a WAR file is deployed successfully to AWS Elastic Beanstalk, but the application URL reports a 404 Not Found error

Problem: A WAR file is deployed successfully to an AWS Elastic Beanstalk environment, but the application URL returns a 404 Not Found error.

Possible fixes: AWS Elastic Beanstalk can unpack a ZIP file, but not a WAR file contained in a ZIP file. Instead of specifying a WAR file in your buildspec.yml file, specify a folder that contains the content to be deployed. For example:

    version: 0.1
    phases:
      post_build:
        commands:
          - mvn package
          - mv target/my-web-app ./
    artifacts:
      files:
        - my-web-app/**/*
      discard-paths: yes

For an example, see AWS Elastic Beanstalk Sample for CodeBuild.

File permissions are not retained on source files from GitHub when ZIP does not preserve external attributes

Problem: Users with source files from GitHub may experience an issue where the input artifact loses file permissions when stored in the Amazon S3 artifact bucket for the pipeline. The CodePipeline source action that zips the files from GitHub uses a ZIP file process that does not preserve external attributes, so the files do not retain file permissions.
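You can reproduce this behavior locally. The following Python sketch writes a ZIP entry without external attributes (as an archive process that ignores them would) and shows that the execute bit is lost when the file is extracted:

```python
import os
import stat
import tempfile
import zipfile

workdir = tempfile.mkdtemp()

# Create a sample script with the execute bit set.
src = os.path.join(workdir, "deploy.sh")
with open(src, "w") as f:
    f.write("#!/bin/sh\necho ok\n")
os.chmod(src, 0o755)

# Zip it the way an archive process that ignores external attributes would:
# a bare ZipInfo carries no permission bits (external_attr is 0).
archive = os.path.join(workdir, "source.zip")
with zipfile.ZipFile(archive, "w") as zf:
    info = zipfile.ZipInfo("deploy.sh")
    with open(src) as f:
        zf.writestr(info, f.read())

# Extract and inspect: the execute bit is gone.
outdir = os.path.join(workdir, "out")
with zipfile.ZipFile(archive) as zf:
    zf.extract("deploy.sh", outdir)

mode = stat.S_IMODE(os.stat(os.path.join(outdir, "deploy.sh")).st_mode)
print(f"execute bit preserved: {bool(mode & 0o111)}")  # prints "execute bit preserved: False"
```

Running a chmod in the build stage, as shown in the build spec example that follows, restores the bits.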

Possible fixes: Restore the file permissions with a chmod command in the environment where the artifacts are consumed. For example, update the build spec file in the build provider, such as CodeBuild, to restore file permissions each time the build stage runs. The following example shows a build spec for CodeBuild with a chmod command under the build: section:

    version: 0.2
    phases:
      build:
        commands:
          - dotnet restore
          - dotnet build
          - chmod a+x bin/Debug/myTests
          - bin/Debug/myTests
    artifacts:
      files:
        - bin/Debug/myApplication


To use the CodePipeline console to confirm the name of the build input artifact, display the pipeline, and then, in the Build action, pause your mouse pointer over the information icon to view the tooltip. Make a note of the value for Input artifact (for example, MyApp). To use the AWS CLI to get the name of the S3 artifact bucket, run the get-pipeline command. The output contains an artifactStore object with a location field that displays the name of the bucket.
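If you save the get-pipeline output to a file, you can read the bucket name from it programmatically. A minimal sketch, using a hypothetical output fragment:

```python
import json

# Hypothetical fragment of `aws codepipeline get-pipeline` output; real output has more fields.
output = json.loads("""
{
    "pipeline": {
        "name": "MyFirstPipeline",
        "artifactStore": {
            "type": "S3",
            "location": "codepipeline-us-east-1-0123456789"
        }
    }
}
""")

bucket = output["pipeline"]["artifactStore"]["location"]
print(bucket)  # prints the artifact bucket name
```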

The console I'm viewing doesn't seem to match the procedures in this guide

Problem: When you view the content in this user guide, the procedures and images show a console that does not look like the one you are using.

Possible fixes: The new console design is the default experience, but if you chose to use the prior version of the console at some point, that choice will persist in subsequent sessions. The procedures in this guide support the new console design. If you choose to use the older version of the console, you will find minor discrepancies, but the concepts and basic procedures in this guide still apply.

Pipeline artifact folder names appear to be truncated

Problem: When you view pipeline artifact names in CodePipeline, the names appear to be truncated. This can make different artifact names look similar, or make them appear to no longer contain the entire pipeline name.

Explanation: CodePipeline truncates artifact names to ensure that the full Amazon S3 path does not exceed policy size limits when CodePipeline generates temporary credentials for job workers.

Even though the artifact name appears to be truncated, CodePipeline maps to the artifact bucket in a way that is not affected by artifacts with truncated names. The pipeline can function normally. This is not an issue with the folder or artifacts. There is a 100-character limit to pipeline names. Although the artifact folder name might appear to be shortened, it is still unique for your pipeline.
