AWS IoT Greengrass
Developer Guide

Troubleshooting AWS IoT Greengrass

This section provides troubleshooting information and possible solutions to help resolve issues with AWS IoT Greengrass.

For information about AWS IoT Greengrass limits, see AWS IoT Greengrass Limits in the Amazon Web Services General Reference.

AWS IoT Greengrass Core Issues

If the AWS IoT Greengrass Core software does not start, or you encounter other problems with an AWS IoT Greengrass core, search the following symptoms and errors for information to help troubleshoot the issue.

Error: The configuration file is missing the CaPath, CertPath or KeyPath. The Greengrass daemon process with [pid = <pid>] died.

Solution: You might see this error in crash.log when the AWS IoT Greengrass Core software does not start. This can occur if you're running v1.6 or earlier. Do one of the following:

  • Upgrade to v1.7 or later. We recommend that you always run the latest version of the AWS IoT Greengrass Core software. For download information, see AWS IoT Greengrass Core Software.

  • Use the correct config.json format for your AWS IoT Greengrass Core software version. For more information, see AWS IoT Greengrass Core Configuration File.

    Note

    To find which version of the AWS IoT Greengrass Core software is installed on the core device, run the following commands in your device terminal.

    cd /greengrass-root/ggc/core/
    sudo ./greengrassd --version

 

Error: Failed to parse /<greengrass-root>/config/config.json.

Solution: You might see this error when the AWS IoT Greengrass Core software does not start. Make sure the Greengrass configuration file is using valid JSON format.

Open config.json (located in /greengrass-root/config) and validate the JSON format. For example, make sure that commas are used correctly.
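As a quick check, the file can be parsed with the Python standard library. The validate_config helper below is illustrative (it is not part of AWS IoT Greengrass); it reports the line and column of the first syntax error, such as a trailing comma:

```python
import json

def validate_config(text):
    """Return (True, None) if text is valid JSON, otherwise
    (False, "line L, column C: message")."""
    try:
        json.loads(text)
        return True, None
    except json.JSONDecodeError as e:
        return False, f"line {e.lineno}, column {e.colno}: {e.msg}"

# A typical mistake: a trailing comma after the last member.
print(validate_config('{ "coreThing": { "caPath": "root-ca.pem", } }'))
```

The same check can be run directly against the file on the core device, for example with python3 -m json.tool, which prints a similar line and column diagnostic on failure.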

 

Error: Runtime failed to start: unable to start workers: container test timed out.

Solution: You might see this error when the AWS IoT Greengrass Core software does not start. Set the postStartHealthCheckTimeout property in the Greengrass configuration file. This optional property configures the amount of time (in milliseconds) that the Greengrass daemon waits for the post-start health check to finish. The default value is 30 seconds (30000 ms).

Open config.json (located in /greengrass-root/config). In the runtime object, add the postStartHealthCheckTimeout property and set the value to a number greater than 30000. Add a comma where needed to create a valid JSON document. For example:

... "runtime" : { "cgroup" : { "useSystemd" : "yes" }, "postStartHealthCheckTimeout" : 40000 }, ...

 

Error: Failed to invoke PutLogEvents on local Cloudwatch, logGroup: /GreengrassSystem/connection_manager, error: RequestError: send request failed caused by: Post http://<path>/cloudwatch/logs/: dial tcp <address>: getsockopt: connection refused, response: { }.

Solution: You might see this error when the AWS IoT Greengrass Core software does not start. This can occur if you're running AWS IoT Greengrass on a Raspberry Pi with the Raspbian Stretch OS, and the required memory setup has not been completed. For more information, see this step.

 

Error: Unable to create server due to: failed to load group: chmod /<greengrass-root>/ggc/deployment/lambda/arn:aws:lambda:<region>:<account-id>:function:<function-name>:<version>/<file-name>: no such file or directory.

Solution: You might see this error when the AWS IoT Greengrass Core software does not start. If you deployed a Lambda executable to the core, check the function's Handler property in the group.json file (located in /greengrass-root/ggc/deployment/group). If the handler is not the exact name of your compiled executable, replace the contents of the group.json file with an empty JSON object ({}), and run the following commands to start AWS IoT Greengrass:

cd /greengrass/ggc/core/
sudo ./greengrassd start

Then, use the AWS Lambda API to update the function configuration's handler parameter, publish a new function version, and update the alias. For more information, see AWS Lambda Function Versioning and Aliases.

Assuming that you added the function to your Greengrass group by alias (recommended), you can now redeploy your group. (If not, you must point to the new function version or alias in your group definition and subscriptions before you deploy the group.)

 

The AWS IoT Greengrass Core software doesn't start after you changed from running with no containerization to running in a Greengrass container.

Solution: Check that you are not missing any container dependencies.

 

Error: Spool size should be at least 262144 bytes.

Solution: You might see this error when the AWS IoT Greengrass Core software does not start. Open the group.json file (located in /greengrass-root/ggc/deployment/group), replace the contents of the file with an empty JSON object ({}), and run the following commands to start AWS IoT Greengrass:

cd /greengrass/ggc/core/
sudo ./greengrassd start

Then follow the steps in the To Cache Messages in Local Storage procedure. For the GGCloudSpooler function, make sure to specify a GG_CONFIG_MAX_SIZE_BYTES value that's greater than or equal to 262144.
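Assuming you follow that procedure with the AWS CLI, the GGCloudSpooler entry in the function definition version might look like the following sketch. The Id value is arbitrary, and 2621440 (2.5 MB) is just an example value above the 262144-byte minimum:

```json
{
    "FunctionArn": "arn:aws:lambda:::function:GGCloudSpooler:1",
    "FunctionConfiguration": {
        "Environment": {
            "Variables": {
                "GG_CONFIG_MAX_SIZE_BYTES": "2621440",
                "GG_CONFIG_STORAGE_TYPE": "FileSystem"
            }
        },
        "Executable": "spooler",
        "MemorySize": 32768,
        "Pinned": true,
        "Timeout": 3
    },
    "Id": "spooler-function-id"
}
```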

 

Error: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:64: mounting \\\"/greengrass/ggc/socket/greengrass_ipc.sock\\\" to rootfs \\\"/greengrass/ggc/packages/<version>/rootfs/merged\\\" at \\\"/greengrass_ipc.sock\\\" caused \\\"stat /greengrass/ggc/socket/greengrass_ipc.sock: permission denied\\\"\"".

Solution: You might see this error in runtime.log when the AWS IoT Greengrass Core software does not start. This occurs if your umask is higher than 0022. To resolve this issue, you must set the umask to 0022 or lower. A value of 0022 grants everyone read permission to new files by default.
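A sketch of checking and adjusting the mask in the shell that launches the daemon (where to persist the setting depends on your init system and distribution):

```shell
# Show the current mask; a value more restrictive than 0022 (for
# example 0027 or 0077) blocks read access to the Greengrass IPC socket.
umask

# Set the mask for this shell before starting the daemon. To make the
# change persistent, set it in the profile or init script that launches
# greengrassd (the right file varies by distribution).
umask 0022
```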

 

Error: Greengrass daemon running with PID: <process-id>. Some system components failed to start. Check 'runtime.log' for errors.

Solution: You might see this error when the AWS IoT Greengrass Core software does not start. Check runtime.log and crash.log for specific error information. For more information, see Troubleshooting with Logs.

 

Device shadow does not sync with the cloud.

Solution: Make sure that AWS IoT Greengrass has permissions for iot:UpdateThingShadow and iot:GetThingShadow actions in the Greengrass service role. If the service role uses the AWSGreengrassResourceAccessRolePolicy managed policy, these permissions are included by default.

See Troubleshooting Shadow Synchronization Timeout Issues.

 

The AWS IoT Greengrass Core software does not run on Raspberry Pi because user namespace is not enabled.

Solution: This solution applies to Jessie Raspbian distributions only. Do not run this command on Stretch distributions. User namespaces are enabled in Stretch distributions by default.

On the Jessie device, run BRANCH=stable rpi-update to update to the latest stable versions of the firmware and kernel.

Note

User namespace support is required to run AWS IoT Greengrass in the default Greengrass container mode. It isn't required to run in No container mode. For more information, see Considerations When Choosing Lambda Function Containerization.

 

ERROR: unable to accept TCP connection. accept tcp [::]:8000: accept4: too many open files.

Solution: You might see this error in the greengrassd script output. This can occur if the file descriptor limit for the AWS IoT Greengrass Core software has reached the threshold and must be increased.

Use the following command and then restart the AWS IoT Greengrass Core software.

ulimit -n 2048

Note

In this example, the limit is increased to 2048. Choose a value appropriate for your use case.
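If the daemon runs as root under a PAM-managed session, one common way to persist a higher limit is /etc/security/limits.conf. The entries below are an example only and may not apply to every init system; adjust the user and value for your setup:

```
# /etc/security/limits.conf — raise the open-file limit for the user
# that runs greengrassd (here, root)
root soft nofile 2048
root hard nofile 2048
```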

 

Error: Runtime execution error: unable to start lambda container. container_linux.go:259: starting container process caused "process_linux.go:345: container init caused \"rootfs_linux.go:50: preparing rootfs caused \\\"permission denied\\\"\"".

Solution: Either install AWS IoT Greengrass directly under the root directory, or make sure that the directory where the AWS IoT Greengrass Core software is installed and its parent directories have execute permissions for everyone.

 

Warning: [WARN]-[5]GK Remote: Error retrieving public key data: ErrPrincipalNotConfigured: private key for MqttCertificate is not set.

Solution: AWS IoT Greengrass uses a common handler to validate the properties of all security principals. This warning in runtime.log is expected unless you specified a custom private key for the local MQTT server. For more information, see AWS IoT Greengrass Core Security Principals.

 

Error: Permission denied when attempting to use role arn:aws:iam::<account-id>:role/<role-name> to access s3 url https://<region>-greengrass-updates.s3.<region>.amazonaws.com/core/<architecture>/greengrass-core-<distribution-version>.tar.gz.

Solution: You might see this error when an over-the-air (OTA) update fails. In the signer role policy, add the target AWS Region as a Resource. This signer role is used to presign the S3 URL for the AWS IoT Greengrass software update. For more information, see S3 URL signer role.

 

The AWS IoT Greengrass core is configured to use a network proxy and your Lambda function can't make outgoing connections.

Solution: Depending on your runtime and the executables used by the Lambda function to create connections, you might also receive connection timeout errors. Make sure your Lambda functions use the appropriate proxy configuration to connect through the network proxy. AWS IoT Greengrass passes the proxy configuration to user-defined Lambda functions through the http_proxy, https_proxy, and no_proxy environment variables. They can be accessed as shown in the following Python snippet.

import os
print(os.environ['http_proxy'])

Note

Most common libraries used to make connections (such as boto3, cURL, and the Python requests package) use these environment variables by default.
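For libraries that do not read the environment automatically, a small helper can gather whichever variables are set and hand them to a library that takes explicit proxy settings. The proxy_settings function below is an illustrative sketch, not part of AWS IoT Greengrass:

```python
import os

def proxy_settings():
    """Collect the proxy configuration that Greengrass passes to
    user-defined Lambda functions. Both lower- and upper-case variants
    are checked because libraries differ in which spelling they read."""
    settings = {}
    for name in ("http_proxy", "https_proxy", "no_proxy"):
        value = os.environ.get(name) or os.environ.get(name.upper())
        if value:
            settings[name] = value
    return settings

# A library that accepts explicit proxies can consume the result, for
# example: requests.get(url, proxies=proxy_settings())
print(proxy_settings())
```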

 

The core is in an infinite connect-disconnect loop. The runtime.log file contains a continuous series of connect and disconnect entries.

Solution: This can happen when another device is hard-coded to use the core thing name as the client ID for MQTT connections to AWS IoT. Simultaneous connections in the same AWS Region and AWS account must use unique client IDs. By default, the core uses the core thing name as the client ID for these connections.

To resolve this issue, you can change the client ID used by the other device for the connection (recommended) or override the default value for the core.

To override the default client ID for the core device

  1. Run the following command to stop the AWS IoT Greengrass daemon:

    cd /greengrass-root/ggc/core/
    sudo ./greengrassd stop
  2. Open greengrass-root/config/config.json for editing as the su user.

  3. In the coreThing object, add the coreClientId property, and set the value to your custom client ID. The value must be between 1 and 128 characters. It must be unique in the current AWS Region for the AWS account.

    "coreClientId": "MyCustomClientId"
  4. Start the daemon.

    cd /greengrass-root/ggc/core/
    sudo ./greengrassd start

 

Error: unable to start lambda container. container_linux.go:259: starting container process caused "process_linux.go:345: container init caused \"rootfs_linux.go:62: mounting \\\"proc\\\" to rootfs \\\"

Solution: On some platforms, you might see this error in runtime.log when AWS IoT Greengrass tries to mount the /proc file system to create a Lambda container. Or, you might see similar errors, such as operation not permitted or EPERM. These errors can occur even if tests run on the platform by the dependency checker script pass.

Try one of the following possible solutions:

  • Enable the CONFIG_DEVPTS_MULTIPLE_INSTANCES option in the Linux kernel.

  • Set the /proc mount options on the host to rw,relatime only.

  • Upgrade the Linux kernel to 4.9 or later.

Note

This issue is not related to mounting /proc for local resource access.

 

Deployment Issues

Use the following information to help troubleshoot deployment issues.

 

Your current deployment does not work and you want to revert to a previous working deployment.

Solution: Use the AWS IoT console or AWS IoT Greengrass API to redeploy a previous working deployment. This deploys the corresponding group version to your core device.

To redeploy a deployment (console)

  1. On the group configuration page, choose Deployments. This page displays the deployment history for the group, including the date and time, group version, and status of each deployment attempt.

  2. Find the row that contains the deployment you want to redeploy. In the Status column, choose the ellipsis (…), and then choose Re-deploy.

To redeploy a deployment (CLI)

  1. Use ListDeployments to find the ID of the deployment you want to redeploy. For example:

    aws greengrass list-deployments --group-id 74d0b623-c2f2-4cad-9acc-ef92f61fcaf7

    The command returns the list of deployments for the group.

    { "Deployments": [ { "DeploymentId": "8d179428-f617-4a77-8a0c-3d61fb8446a6", "DeploymentType": "NewDeployment", "GroupArn": "arn:aws:greengrass:us-west-2:123456789012:/greengrass/groups/74d0b623-c2f2-4cad-9acc-ef92f61fcaf7/versions/8dd1d899-4ac9-4f5d-afe4-22de086efc62", "CreatedAt": "2019-07-01T20:56:49.641Z" }, { "DeploymentId": "f8e4c455-8ac4-453a-8252-512dc3e9c596", "DeploymentType": "NewDeployment", "GroupArn": "arn:aws:greengrass:us-west-2::123456789012:/greengrass/groups/74d0b623-c2f2-4cad-9acc-ef92f61fcaf7/versions/4ad66e5d-3808-446b-940a-b1a788898382", "CreatedAt": "2019-07-01T20:41:47.048Z" }, { "DeploymentId": "e4aca044-bbd8-41b4-b697-930ca7c40f3e", "DeploymentType": "NewDeployment", "GroupArn": "arn:aws:greengrass:us-west-2::123456789012:/greengrass/groups/74d0b623-c2f2-4cad-9acc-ef92f61fcaf7/versions/1f3870b6-850e-4c97-8018-c872e17b235b", "CreatedAt": "2019-06-18T15:16:02.965Z" } ] }

    Note

    These AWS CLI commands use example values for the group and deployment ID. When you run the commands, make sure to replace the example values.

  2. Use CreateDeployment to redeploy the target deployment. Set the deployment type to Redeployment. For example:

    aws greengrass create-deployment --deployment-type Redeployment \
        --group-id 74d0b623-c2f2-4cad-9acc-ef92f61fcaf7 \
        --deployment-id f8e4c455-8ac4-453a-8252-512dc3e9c596

    The command returns the ARN and ID of the new deployment.

    { "DeploymentId": "f9ed02b7-c28e-4df6-83b1-e9553ddd0fc2", "DeploymentArn": "arn:aws:greengrass:us-west-2::123456789012:/greengrass/groups/74d0b623-c2f2-4cad-9acc-ef92f61fcaf7/deployments/f9ed02b7-c28e-4df6-83b1-e9553ddd0fc2" }
  3. Use GetDeploymentStatus to get the status of the deployment.

 

You see a 403 Forbidden error on deployment in the logs.

Solution: Make sure the policy of the AWS IoT Greengrass core in the cloud includes "greengrass:*" as an allowed action.

 

A ConcurrentDeployment error occurs when you run the create-deployment command for the first time.

Solution: A deployment might be in progress. You can run get-deployment-status to see if a deployment was created. If not, try creating the deployment again.

 

Error: Greengrass is not authorized to assume the Service Role associated with this account, or the error: Failed: TES service role is not associated with this account.

Solution: You might see this error when the deployment fails. Use the get-service-role-for-account command in the AWS CLI to check that an appropriate service role is associated with your AWS account in the current AWS Region. To associate a Greengrass service role with your AWS account in the target AWS Region, use associate-service-role-to-account. For more information, see Greengrass Service Role.

 

The deployment doesn't finish.

Solution: Do the following:

  • Make sure that the AWS IoT Greengrass daemon is running on your core device. Run the following commands in your core device terminal to check whether the daemon is running and start it, if needed.

    1. To check whether the daemon is running:

      ps aux | grep -E 'greengrass.*daemon'

      If the output contains a root entry for /greengrass/ggc/packages/1.9.3/bin/daemon, then the daemon is running.

      The version in the path depends on the AWS IoT Greengrass Core software version that's installed on your core device.

    2. To start the daemon:

      cd /greengrass/ggc/core/
      sudo ./greengrassd start
  • Make sure that your core device is connected and the core connection endpoints are configured properly.

 

The deployment doesn't finish, and runtime.log contains multiple "wait 1s for container to stop" entries.

Solution: Run the following commands in your core device terminal to restart the AWS IoT Greengrass daemon.

cd /greengrass/ggc/core/
sudo ./greengrassd stop
sudo ./greengrassd start

 

Error: Deployment <guid> of type NewDeployment for group <guid> failed error: Error while processing. group config is invalid: 112 or [119 0] don't have rw permission on the file: <path>.

Solution: Make sure that the owner group of the <path> directory has read and write permissions to the directory.

 

Error: <list-of-function-arns> are configured to run as root but Greengrass is not configured to run Lambda functions with root permissions.

Solution: You might see this error in runtime.log when the deployment fails. Make sure that you have configured AWS IoT Greengrass to allow Lambda functions to run with root permissions. Either change the value of allowFunctionsToRunAsRoot in greengrass-root/config/config.json to yes, or change the Lambda function to run as another user and group. For more information, see Running a Lambda Function as Root.

 

Error: Deployment <deployment-id> of type NewDeployment for group <group-id> failed error: process start failed: container_linux.go:259: starting container process caused "process_linux.go:250: running exec setns process for init caused \"wait: no child processes\"".

Solution: You might see this error when the deployment fails. Retry the deployment.

 

Error: [WARN]-MQTT[client] dial tcp: lookup <host-prefix>-ats.iot.<region>.amazonaws.com: no such host ... [ERROR]-Greengrass deployment error: failed to report deployment status back to cloud ... net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Solution: You might see this error if you're using systemd-resolved, which enables the DNSSEC setting by default. As a result, many public domains are not recognized. Attempts to reach the AWS IoT Greengrass endpoint fail to find the host, so your deployments remain in the In Progress state.

You can use the following commands and output to test for this issue. Replace the region placeholder in the endpoints with your AWS Region.

$ ping greengrass-ats.iot.region.amazonaws.com
ping: greengrass-ats.iot.region.amazonaws.com: Name or service not known

$ systemd-resolve greengrass-ats.iot.region.amazonaws.com
greengrass-ats.iot.region.amazonaws.com: resolve call failed: DNSSEC validation failed: failed-auxiliary

One possible solution is to disable DNSSEC. When DNSSEC is false, DNS lookups are not DNSSEC validated. For more information, see this known issue for systemd.

  1. Add DNSSEC=false to /etc/systemd/resolved.conf.

  2. Restart systemd-resolved.

For information about resolved.conf and DNSSEC, run man resolved.conf in your terminal.
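With the change applied, the relevant portion of /etc/systemd/resolved.conf looks like this:

```ini
[Resolve]
DNSSEC=false
```

After editing the file, restart the service (for example, sudo systemctl restart systemd-resolved) and repeat the name resolution test to confirm that the endpoint resolves.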

 

Create Group/Create Function Issues

Use the following information to help troubleshoot issues with creating an AWS IoT Greengrass group or Greengrass Lambda function.

 

Error: Your 'IsolationMode' configuration for the group is invalid.

Solution: This error occurs when the IsolationMode value in the DefaultConfig of the function definition version is not supported. Supported values are GreengrassContainer and NoContainer.
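For reference, a valid DefaultConfig fragment in a create-function-definition-version payload might look like the following sketch (surrounding fields are omitted):

```json
"DefaultConfig": {
    "Execution": {
        "IsolationMode": "GreengrassContainer"
    }
}
```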

 

Error: Your 'IsolationMode' configuration for function with arn <function-arn> is invalid.

Solution: This error occurs when the IsolationMode value for the function <function-arn> in the function definition version is not supported. Supported values are GreengrassContainer and NoContainer.

 

Error: MemorySize configuration for function with arn <function-arn> is not allowed in IsolationMode=NoContainer.

Solution: This error occurs when you specify a MemorySize value and you choose to run without containerization. Lambda functions that are run without containerization cannot have memory limits. You can either remove the limit or you can change the Lambda function to run in an AWS IoT Greengrass container.

 

Error: Access Sysfs configuration for function with arn <function-arn> is not allowed in IsolationMode=NoContainer.

Solution: This error occurs when you specify true for AccessSysfs and you choose to run without containerization. Lambda functions that run without containerization must access the file system directly in their code and cannot use AccessSysfs. You can either specify a value of false for AccessSysfs or change the Lambda function to run in an AWS IoT Greengrass container.

 

Error: MemorySize configuration for function with arn <function-arn> is required in IsolationMode=GreengrassContainer.

Solution: This error occurs because you did not specify a MemorySize limit for a Lambda function that you are running in an AWS IoT Greengrass container. Specify a MemorySize value to resolve the error.

 

Error: Function <function-arn> refers to resource of type <resource-type> that is not allowed in IsolationMode=NoContainer.

Solution: You cannot access Local.Device, Local.Volume, ML_Model.SageMaker.Job, ML_Model.S3_Object, or S3_Object.Generic_Archive resource types when you run a Lambda function without containerization. If you need those resource types, you must run in an AWS IoT Greengrass container. You can also access local devices directly when running without containerization by changing the code in your Lambda function.

 

Error: Execution configuration for function with arn <function-arn> is not allowed.

Solution: This error occurs when you create a system Lambda function (GGIPDetector or GGCloudSpooler) and specify an IsolationMode or RunAs configuration. You must omit the Execution parameters for system Lambda functions.

 

AWS IoT Greengrass Core in Docker Issues

Use the following information to help troubleshoot issues with running an AWS IoT Greengrass core in a Docker container.

 

Error: Unknown options: -no-include-email

Solution: This error can occur when you run the aws ecr get-login command. Make sure that you have the latest AWS CLI version installed (for example, run: pip install awscli --upgrade --user). If you're using Windows and you installed the CLI using the MSI installer, you must repeat the installation process. For more information, see Installing the AWS Command Line Interface on Microsoft Windows in the AWS Command Line Interface User Guide.

 

Warning: IPv4 is disabled. Networking will not work.

Solution: You might receive this warning or a similar message when running AWS IoT Greengrass on a Linux computer. Enable IPv4 network forwarding as described in this step. AWS IoT Greengrass cloud deployment and MQTT communications don't work when IPv4 forwarding isn't enabled. For more information, see Configure namespaced kernel parameters (sysctls) at runtime in the Docker documentation.

 

Error: A firewall is blocking file sharing between Windows and the containers.

Solution: You might receive this error or a Firewall Detected message when running Docker on a Windows computer. See the Error: A firewall is blocking file sharing between Windows and the containers Docker support issue. This can also occur if you are signed in on a virtual private network (VPN) and your network settings are preventing the shared drive from being mounted. In that situation, turn off VPN and re-run the Docker container.

 

Error: Cannot create container for the service greengrass: Conflict. The container name "/aws-iot-greengrass" is already in use.

Solution: This can occur when the container name is used by an older container. To resolve this issue, run the following command to remove the old Docker container:

docker rm -f $(docker ps -a -q -f "name=aws-iot-greengrass")

 

Error: [FATAL]-Failed to reset thread's mount namespace due to an unexpected error: "operation not permitted". To maintain consistency, GGC will crash and need to be manually restarted.

Solution: This error in runtime.log can occur when you try to deploy a GreengrassContainer Lambda function to an AWS IoT Greengrass core running in a Docker container. Currently, only NoContainer Lambda functions can be deployed to a Greengrass Docker container.

To resolve this issue, make sure that all Lambda functions are in NoContainer mode and start a new deployment. Then, when starting the container, don't bind-mount the existing deployment directory onto the AWS IoT Greengrass core Docker container. Instead, create an empty deployment directory in its place and bind-mount that in the Docker container. This allows the new Docker container to receive the latest deployment with Lambda functions running in NoContainer mode.
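A sketch of that approach follows; the path is an example, and the docker run invocation is illustrative rather than a complete command:

```shell
# Create an empty directory on the host to stand in for the existing
# deployment directory (the path here is an example).
mkdir -p /tmp/greengrass-deployment

# Then bind-mount it when starting the container so the core receives a
# fresh deployment. The image name and remaining flags are placeholders:
#   docker run --name aws-iot-greengrass \
#     -v /tmp/greengrass-deployment:/greengrass/ggc/deployment \
#     <your-greengrass-image>
```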

For more information, see Running AWS IoT Greengrass in a Docker Container.

Troubleshooting with Logs

If logs are configured to be stored on the local file system, start looking in the following locations. Reading the logs on the file system requires root permissions.

greengrass-root/ggc/var/log/crash.log

Shows messages generated when an AWS IoT Greengrass core crashes.

greengrass-root/ggc/var/log/system/runtime.log

Shows messages about which component failed.

greengrass-root/ggc/var/log/system/

Contains all logs from AWS IoT Greengrass system components, such as the certificate manager and the connection manager. By using the messages in ggc/var/log/system/ and ggc/var/log/system/runtime.log, you should be able to find out which error occurred in AWS IoT Greengrass system components.

greengrass-root/ggc/var/log/user/

Contains all logs from user-defined Lambda functions. Check this folder to find error messages from your local Lambda functions.

Note

By default, greengrass-root is the /greengrass directory. If a write directory is configured, then the logs are under that directory.

If the logs are configured to be stored on the cloud, use CloudWatch Logs to view log messages. crash.log is found only in file system logs on the AWS IoT Greengrass core device.

If AWS IoT is configured to write logs to CloudWatch, check those logs if connection errors occur when system components attempt to connect to AWS IoT.

For more information about AWS IoT Greengrass logging, see Monitoring with AWS IoT Greengrass Logs.

Note

Logs for AWS IoT Greengrass Core software v1.0 are stored under the greengrass-root/var/log directory.

Troubleshooting Storage Issues

When the local file storage is full, some components might start failing:

  • Local shadow updates do not occur.

  • New AWS IoT Greengrass core MQTT server certificates cannot be downloaded locally.

  • Deployments fail.

You should always be aware of the amount of free space available locally. You can calculate free space based on the sizes of deployed Lambda functions, the logging configuration (see Troubleshooting with Logs), and the number of shadows stored locally.

Troubleshooting Messages

All messages sent in AWS IoT Greengrass are sent with QoS 0. By default, AWS IoT Greengrass stores messages in an in-memory queue. Therefore, unprocessed messages are lost when the AWS IoT Greengrass core restarts (for example, after a group deployment or device reboot). However, you can configure AWS IoT Greengrass (v1.6 or later) to cache messages to the file system so they persist across core restarts. You can also configure the queue size. For more information, see MQTT Message Queue.

Note

When using the default in-memory queue, we recommend that you deploy groups or restart the device at times when a brief service disruption has the least impact.

If you configure a queue size, make sure that it's greater than or equal to 262144 bytes (256 KB). Otherwise, AWS IoT Greengrass might not start properly.

Troubleshooting Shadow Synchronization Timeout Issues

Significant delays in communication between a Greengrass core device and the cloud might cause shadow synchronization to fail because of a timeout. In this case, you should see log entries similar to the following:

[2017-07-20T10:01:58.006Z][ERROR]-cloud_shadow_client.go:57,Cloud shadow client error: unable to get cloud shadow what_the_thing_is_named for synchronization. Get https://1234567890abcd.iot.us-west-2.amazonaws.com:8443/things/what_the_thing_is_named/shadow: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
[2017-07-20T10:01:58.006Z][WARN]-sync_manager.go:263,Failed to get cloud copy: Get https://1234567890abcd.iot.us-west-2.amazonaws.com:8443/things/what_the_thing_is_named/shadow: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
[2017-07-20T10:01:58.006Z][ERROR]-sync_manager.go:375,Failed to execute sync operation {what_the_thing_is_named VersionDiscontinued []}

A possible fix is to configure the amount of time that the core device waits for a host response. Open the config.json file in greengrass-root/config and add a system.shadowSyncTimeout field with a timeout value in seconds. For example:

{ "system": { "shadowSyncTimeout": 10 }, "coreThing": { "caPath": "root-ca.pem", "certPath": "cloud.pem.crt", "keyPath": "cloud.pem.key", ... }, ... }

If no shadowSyncTimeout value is specified in config.json, the default is 5 seconds.

Note

For AWS IoT Greengrass Core software v1.6 and earlier, the default shadowSyncTimeout is 1 second.

Check the AWS IoT Greengrass Forum

If you're unable to resolve your issue using the troubleshooting information in this topic, you can search the AWS IoT Greengrass Forum for related issues or post a new forum thread. Members of the AWS IoT Greengrass team actively monitor the forum.