Troubleshooting - Amazon S3 Glacier Re:Freezer


Use the following scenarios to troubleshoot the root cause when the value of the CloudWatch dashboard counter for objects Copied to Destination does not match the number of archives in your source S3 Glacier vault.

Scenario 1: File name already exists in destination S3 bucket

If an S3 object file name already exists in the destination S3 bucket, the solution won’t copy that archive to the destination bucket. The solution stores the archive file in question in the staging bucket for further investigation.

Use the following process to access your staging S3 bucket and investigate your uncopied data.

  1. Sign in to the AWS Management Console and navigate to your Amazon S3 bucket.

    You can find the name of your staging S3 bucket in the Outputs tab of your CloudFormation stack.

    Figure 10: Sample S3 staging bucket

  2. Verify that there are only 4 folders inside the S3 bucket: glue, inventory, partitioned, and results (as shown in Figure 11).

    Figure 11: Sample glue, inventory, partitioned, and results folders

    If you have an additional folder named stagingdata, then you have data that was not successfully copied to the destination S3 bucket.

    Figure 12: Sample stagingdata folder

  3. Open the stagingdata folder.

  4. Investigate the data that is in this folder. Manually copy this data (using the S3 console) to your destination S3 bucket or delete the stagingdata folder.

  5. Proceed with deployment from Step 3. Post archive copy verification tasks.
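The folder check in step 2 can also be scripted. The sketch below is a minimal example that assumes you have already retrieved the bucket's top-level prefixes (for example, from the CommonPrefixes of a boto3 list_objects_v2 call with Delimiter='/', not shown here) and flags anything beyond the four expected folders:

```python
# Expected top-level folders in the staging bucket (from step 2 above).
EXPECTED_FOLDERS = {"glue", "inventory", "partitioned", "results"}

def unexpected_folders(prefixes):
    """Return top-level prefixes that should not be in the staging bucket.

    `prefixes` is a list of prefix strings, such as the CommonPrefixes
    values returned by an S3 list call with Delimiter='/'. A result
    containing 'stagingdata' means some archives were not copied to the
    destination bucket and need manual attention.
    """
    return sorted(p.rstrip("/") for p in prefixes
                  if p.rstrip("/") not in EXPECTED_FOLDERS)

# Example: a staging bucket with leftover uncopied data.
prefixes = ["glue/", "inventory/", "partitioned/", "results/", "stagingdata/"]
print(unexpected_folders(prefixes))  # ['stagingdata']
```

A non-empty result tells you to continue with steps 3 and 4 above.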

Scenario 2: Troubleshooting the solution with CloudWatch Logs

Use the following process to troubleshoot using the solution’s CloudWatch Logs.

  1. Sign in to the Amazon CloudWatch console.

  2. Select Log groups.

  3. In the search field, enter your CloudFormation stack’s name.

The search returns the solution’s CloudWatch Logs.

  4. Choose a log.

  5. In the next window, in the Log streams section, choose Search All.

  6. In the search field, enter the name of an archive or item you wish to troubleshoot (for example, 100MBfile001). The search returns log entry details and any errors related to the searched item.

Figure 13: Log events for a sample file

  7. Repeat steps 4–6 to troubleshoot with different CloudWatch Logs.
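The console search above can also be reproduced in a script. This sketch only shows the local filtering logic over already-retrieved log events; fetching the events themselves would use the CloudWatch Logs API (for example, boto3's filter_log_events), which is not shown here, and the sample messages are hypothetical:

```python
def events_mentioning(events, archive_name):
    """Return log events whose message mentions the given archive name.

    `events` is a list of dicts shaped like CloudWatch Logs events,
    each with at least a 'message' key.
    """
    return [e for e in events if archive_name in e.get("message", "")]

# Hypothetical sample events for illustration only.
events = [
    {"timestamp": 1, "message": "Treehash validated for 100MBfile001"},
    {"timestamp": 2, "message": "Copy started for 100MBfile002"},
    {"timestamp": 3, "message": "ERROR: copy failed for 100MBfile001"},
]
matches = events_mentioning(events, "100MBfile001")
print(len(matches))  # 2
```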

Scenario 3: Value for Total Archives not updated after 8 hours

Use the following procedure if your CloudFormation stack has been deployed for more than 8 hours, but the dashboard counter value for Total Archives has not changed.

  1. Sign in to the AWS Step Functions console.

  2. From the Step Functions console window, select your state machine name. Your state machine name will be in the following format: <your-stack-name>-stageTwoOrchestrator.

  3. Note any failures. If there are no records (pass or fail), there might be an issue with Glacier generating the inventory file. Contact Amazon S3 Glacier support.

Figure 14: Sample failure

  4. Select the step to view additional information about the failure. The Graph Inspector highlights the failing step in red.

Figure 15: Failure highlighted in Graph Inspector

Scenario 4: CloudFormation stack deletion failed, staging bucket is not empty

Use the following process if your CloudFormation stack returns with a DELETE_FAILED error:

  1. Navigate to your CloudFormation stack’s Events tab.

If the status reason is The following resource(s) failed to delete: [stagingbucket], continue with the following steps.

  2. The CloudFormation stack deletion process cleans up all deployed solution components EXCEPT the stagingdata folder in the staging S3 bucket. The staging S3 bucket temporarily stores the copied archives so that the SHA256 tree hash can be calculated. After the SHA256 tree hash is validated, the archives are moved to the final destination S3 bucket.

  3. If the stagingdata folder is not empty, the final move process failed, and the downloaded and validated archives are retained in the stagingdata/ folder.

Possible scenarios where this can occur:

  • The selected destination S3 bucket is not empty and a name clash occurs, so the move fails because the file already exists in the destination S3 bucket. For remediation steps, refer to Scenario 1: File name already exists in destination S3 bucket.

  • The first copy operation from the staging folder to the destination S3 bucket appeared to fail, but Amazon S3 actually completed it. In this case, the retried copy fails because the archive already exists in the destination bucket. To troubleshoot, use the CloudWatch Logs generated by the solution, specifically /aws/lambda/<your-stack-name>-calculateTreehash. Refer to Scenario 2: Troubleshooting the solution with CloudWatch Logs for instructions on accessing and navigating CloudWatch Logs.
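If you need to manually verify a retained archive before copying it yourself, the SHA256 tree hash the solution validates can be reproduced locally. This is a minimal sketch of the published Glacier tree-hash algorithm (hash 1 MiB chunks with SHA256, then combine adjacent digests pairwise until one root remains); it is illustrative, not the solution's own code:

```python
import hashlib

MIB = 1024 * 1024  # Glacier tree hashing uses 1 MiB chunks

def glacier_tree_hash(data: bytes) -> str:
    """Compute the SHA256 tree hash that Glacier uses to validate archives."""
    # Hash each 1 MiB chunk of the payload.
    chunks = [data[i:i + MIB] for i in range(0, len(data), MIB)] or [b""]
    level = [hashlib.sha256(c).digest() for c in chunks]
    # Combine adjacent digests pairwise; an odd digest is carried up as-is.
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            nxt.append(hashlib.sha256(b"".join(pair)).digest()
                       if len(pair) == 2 else pair[0])
        level = nxt
    return level[0].hex()

# For payloads under 1 MiB, the tree hash equals the plain SHA256.
payload = b"hello"
assert glacier_tree_hash(payload) == hashlib.sha256(payload).hexdigest()
```

Comparing this hash against the one recorded in the calculateTreehash log stream confirms whether a retained archive is intact.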

Scenario 5: Discovering and deleting incomplete Multipart Uploads

Use the following procedure if a multipart upload to the staging or destination bucket is incomplete.

  1. Sign in to the AWS Management Console and open the Amazon S3 console.

  2. In the navigation pane, choose Storage Lens, Dashboards.

  3. In the Dashboards list, choose the dashboard that you want to view.


    If you are setting up a new S3 Storage Lens dashboard or accessing your default dashboard for the first time, be mindful that it can take up to 48 hours to generate your initial metrics.

  4. With S3 Storage Lens, you can identify whether your staging or destination bucket contains incomplete multipart upload (MPU) parts.

Figure 16: Top buckets containing incomplete multipart upload bytes

To delete incomplete multipart uploads using S3 Lifecycle, refer to Discovering and Deleting Incomplete Multipart Uploads to Lower Amazon S3 Costs.
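As a sketch of what such a lifecycle rule looks like, the JSON fragment below aborts multipart uploads left incomplete for 7 days; the rule ID and the 7-day threshold are illustrative choices, and the rule applies bucket-wide via the empty filter:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-mpu",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
```

You can apply a configuration like this with the S3 console or the put-bucket-lifecycle-configuration API on the affected staging or destination bucket.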