
Security best practices for Deadline Cloud

AWS Deadline Cloud (Deadline Cloud) provides a number of security features to consider as you develop and implement your own security policies. The following best practices are general guidelines and don’t represent a complete security solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful considerations rather than prescriptions.

Note

For more information about many of these security topics, see the AWS Shared Responsibility Model.

Data protection

For data protection purposes, we recommend that you protect AWS account credentials and set up individual accounts with AWS Identity and Access Management (IAM). That way, each user is given only the permissions necessary to fulfill their job duties. We also recommend that you secure your data in the following ways:

  • Use multi-factor authentication (MFA) with each account.

  • Use SSL/TLS to communicate with AWS resources. We require TLS 1.2 and recommend TLS 1.3.

  • Set up API and user activity logging with AWS CloudTrail.

  • Use AWS encryption solutions, along with all default security controls within AWS services.

  • Use advanced managed security services such as Amazon Macie, which assists in discovering and securing personal data that is stored in Amazon Simple Storage Service (Amazon S3).

  • If you require FIPS 140-2 validated cryptographic modules when accessing AWS through a command line interface or an API, use a FIPS endpoint. For more information about the available FIPS endpoints, see Federal Information Processing Standard (FIPS) 140-2.
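
As an illustration of the last point, the AWS CLI and SDKs can opt into FIPS endpoints (where a service provides them) through a shared-config setting, rather than per-command endpoint flags. A sketch of the relevant lines in ~/.aws/config, shown for the default profile:

```
# ~/.aws/config — route API calls to FIPS endpoints where available
[default]
use_fips_endpoint = true
```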

We strongly recommend that you never put sensitive identifying information, such as your customers' account numbers, into free-form fields such as a Name field. This includes when you work with AWS Deadline Cloud or other AWS services using the console, API, AWS CLI, or AWS SDKs. Any data that you enter into Deadline Cloud or other services might get picked up for inclusion in diagnostic logs. When you provide a URL to an external server, don't include credential information in the URL to validate your request to that server.

AWS Identity and Access Management permissions

Manage access to AWS resources using users, AWS Identity and Access Management (IAM) roles, and by granting the least privilege to users. Establish credential management policies and procedures for creating, distributing, rotating, and revoking AWS access credentials. For more information, see IAM Best Practices in the IAM User Guide.

Run jobs as users and groups

When using queue functionality in Deadline Cloud, it’s a best practice to specify an operating system (OS) user and its primary group so that the OS user has least-privilege permissions for the queue’s jobs.

When you specify a “Run as user” (and group), any processes for jobs submitted to the queue will be run using that OS user and will inherit that user’s associated OS permissions.
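
As an illustration, the run-as user is expressed in the CreateQueue API's jobRunAsUser parameter. A minimal sketch of that structure, where jobuser and jobgroup are placeholder names for a dedicated, least-privileged OS account on the worker hosts (not defaults):

```python
# Sketch of the jobRunAsUser setting passed to Deadline Cloud's CreateQueue API.
# "jobuser" and "jobgroup" are assumed placeholder names.
job_run_as_user = {
    "runAs": "QUEUE_CONFIGURED_USER",  # run jobs as the user below, not the agent user
    "posix": {
        "user": "jobuser",
        "group": "jobgroup",
    },
}

# This dict would be supplied as the jobRunAsUser parameter, for example with
# boto3: client("deadline").create_queue(..., jobRunAsUser=job_run_as_user)
```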

The fleet and queue configurations combine to establish a security posture. On the queue side, the "Job run as user" and IAM role can be specified to scope the OS and AWS permissions for the queue's jobs. The fleet defines the infrastructure (worker hosts, networks, mounted shared storage) that, when associated with a particular queue, runs jobs within that queue. Jobs from one or more associated queues need access to the data available on the worker hosts. Specifying a user or group helps protect the data in jobs from other queues, other installed software, or other users with access to the worker hosts. When a queue has no user specified, it runs as the agent user, which can impersonate (sudo) any queue user. In this way, a queue without a user can escalate its privileges to those of another queue.

Networking

To prevent traffic from being intercepted or redirected, it's essential to secure how and where your network traffic is routed.

We recommend that you secure your networking environment in the following ways:

  • Secure Amazon Virtual Private Cloud (Amazon VPC) subnet route tables to control how IP layer traffic is routed.

  • If you are using Amazon Route 53 (Route 53) as a DNS provider in your farm or workstation setup, secure access to the Route 53 API.

  • If you connect to Deadline Cloud outside of AWS such as by using on-premises workstations or other data centers, secure any on-premises networking infrastructure. This includes DNS servers and route tables on routers, switches, and other networking devices.

Jobs and job data

Deadline Cloud jobs run within sessions on worker hosts. Each session runs one or more processes on the worker host, which generally read input data to produce output.

To secure this data, you can configure operating system users with queues. The worker agent uses the queue OS user to run session sub-processes. These sub-processes inherit the queue OS user's permissions.

We recommend that you follow best practices to secure access to the data these sub-processes access. For more information, see Shared responsibility model.

Farm structure

You can arrange Deadline Cloud fleets and queues many ways. However, there are security implications with certain arrangements.

A farm is one of the most secure boundaries because it can't share Deadline Cloud resources with other farms, including fleets, queues, and storage profiles. However, external AWS resources can be shared between farms, which compromises the security boundary.

You can also establish security boundaries between queues within the same farm using the appropriate configuration.

Follow these best practices to create secure queues in the same farm:

  • Associate a fleet only with queues within the same security boundary. Note the following:

    • After a job runs on the worker host, data may remain behind, such as in a temporary directory or the queue user's home directory.

    • The same OS user runs all the jobs on a service-owned fleet worker host, regardless of which queue you submit the job to.

    • A job might leave processes running on a worker host, making it possible for jobs from other queues to observe other running processes.

  • Ensure that only queues within the same security boundary share an Amazon S3 bucket for job attachments.

  • Ensure that only queues within the same security boundary share an OS user.

  • Secure any other AWS resources that are integrated into the farm to the boundary.

Job attachment queues

Job attachments are associated with a queue, which uses your Amazon S3 bucket.

  • Job attachments write to and read from a root prefix in the Amazon S3 bucket. You specify this root prefix in the CreateQueue API call.

  • The bucket has a corresponding Queue Role, which specifies the role that grants queue users access to the bucket and root prefix. When creating a queue, you specify the Queue Role Amazon Resource Name (ARN) alongside the job attachments bucket and root prefix.

  • Authorized calls to the AssumeQueueRoleForRead, AssumeQueueRoleForUser, and AssumeQueueRoleForWorker API operations return a set of temporary security credentials for the Queue Role.

If you create a queue and reuse an Amazon S3 bucket and root prefix, there is a risk of information being disclosed to unauthorized parties. For example, QueueA and QueueB share the same bucket and root prefix. In a secure workflow, ArtistA has access to QueueA but not QueueB. However, when multiple queues share a bucket, ArtistA can access the data in QueueB because it uses the same bucket and root prefix as QueueA.

The console sets up queues that are secure by default. Ensure that the queues have a distinct combination of Amazon S3 bucket and root prefix unless they're part of a common security boundary.
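
As a sketch of how you might audit this, the following helper flags queues that share a (bucket, root prefix) pair. The queue dicts mirror the shape of Deadline Cloud's jobAttachmentSettings, but the data here is illustrative, not read from the API:

```python
# Sketch: flag queues that share a job attachments bucket and root prefix.
# Queue records below are illustrative; in practice you might build them
# from ListQueues/GetQueue responses.
from collections import defaultdict

def find_shared_roots(queues):
    """Return {(bucket, root_prefix): [queue ids]} for roots used by more than one queue."""
    by_root = defaultdict(list)
    for q in queues:
        s3 = q["jobAttachmentSettings"]
        by_root[(s3["s3BucketName"], s3["rootPrefix"])].append(q["queueId"])
    return {root: ids for root, ids in by_root.items() if len(ids) > 1}

queues = [
    {"queueId": "queue-A", "jobAttachmentSettings": {"s3BucketName": "bucket-1", "rootPrefix": "DeadlineCloud"}},
    {"queueId": "queue-B", "jobAttachmentSettings": {"s3BucketName": "bucket-1", "rootPrefix": "DeadlineCloud"}},
    {"queueId": "queue-C", "jobAttachmentSettings": {"s3BucketName": "bucket-2", "rootPrefix": "DeadlineCloud"}},
]
print(find_shared_roots(queues))  # flags queue-A and queue-B
```

Any queue pair this reports should either be moved to distinct buckets/prefixes or confirmed to sit inside one common security boundary.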

To isolate your queues, you must configure the Queue Role to only allow queue access to the bucket and root prefix. In the following example, replace each placeholder with your resource-specific information.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::JOB_ATTACHMENTS_BUCKET_NAME",
        "arn:aws:s3:::JOB_ATTACHMENTS_BUCKET_NAME/JOB_ATTACHMENTS_ROOT_PREFIX/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:ResourceAccount": "ACCOUNT_ID"
        }
      }
    },
    {
      "Action": ["logs:GetLogEvents"],
      "Effect": "Allow",
      "Resource": "arn:aws:logs:REGION:ACCOUNT_ID:log-group:/aws/deadline/FARM_ID/*"
    }
  ]
}

You must also set a trust policy on the role. In the following example, replace the placeholder text with your resource-specific information.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["sts:AssumeRole"],
      "Effect": "Allow",
      "Principal": {
        "Service": "deadline.amazonaws.com"
      },
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "ACCOUNT_ID"
        },
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:deadline:REGION:ACCOUNT_ID:farm/FARM_ID"
        }
      }
    },
    {
      "Action": ["sts:AssumeRole"],
      "Effect": "Allow",
      "Principal": {
        "Service": "credentials.deadline.amazonaws.com"
      },
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "ACCOUNT_ID"
        },
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:deadline:REGION:ACCOUNT_ID:farm/FARM_ID"
        }
      }
    }
  ]
}

Custom software Amazon S3 buckets

You can add the following statement to your Queue Role to access custom software in your Amazon S3 bucket. In the following example, replace SOFTWARE_BUCKET_NAME with the name of your S3 bucket.

"Statement": [
  {
    "Action": [
      "s3:GetObject",
      "s3:ListBucket"
    ],
    "Effect": "Allow",
    "Resource": [
      "arn:aws:s3:::SOFTWARE_BUCKET_NAME",
      "arn:aws:s3:::SOFTWARE_BUCKET_NAME/*"
    ]
  }
]

For more information about Amazon S3 security best practices, see Security best practices for Amazon S3 in the Amazon Simple Storage Service User Guide.

Worker hosts

Secure worker hosts to help ensure that each user can only perform operations for their assigned role.

We recommend the following best practices to secure worker hosts:

  • Don’t use the same jobRunAsUser value with multiple queues unless jobs submitted to those queues are within the same security boundary.

  • Don’t set the queue jobRunAsUser to the name of the OS user that the worker agent runs as.

  • Grant queue users the least-privileged OS permissions required for the intended queue workloads. Ensure that they don't have filesystem write permissions to worker agent program files or other shared software.

  • Ensure that only the root user on Linux, or the Administrator account on Windows, owns and can modify the worker agent program files.

  • On Linux worker hosts, consider configuring a umask override in /etc/sudoers for the processes that the worker agent launches as queue users. This configuration helps ensure that other users can't access files written by the queue's jobs.
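
One possible shape for that override is sketched below. The agent user name (deadline-worker-agent) and the umask value are assumptions for illustration; always validate sudoers changes with visudo before installing them.

```
# Hypothetical /etc/sudoers.d/deadline-worker-agent fragment.
# Apply a restrictive umask to commands the agent user runs through sudo,
# instead of inheriting the invoking user's umask.
Defaults:deadline-worker-agent umask_override
Defaults:deadline-worker-agent umask=0077
```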

  • Grant trusted individuals least-privileged access to worker hosts.

  • Restrict permissions to local DNS override configuration files (/etc/hosts on Linux and C:\Windows\System32\drivers\etc\hosts on Windows), and to route tables on workstations and worker host operating systems.

  • Restrict permissions to DNS configuration on workstations and worker host operating systems.

  • Regularly patch the operating system and all installed software. This approach includes software specifically used with Deadline Cloud such as submitters, adaptors, worker agents, OpenJD packages, and others.

  • Use strong passwords for the Windows queue jobRunAsUser.

  • Regularly rotate the passwords for your queue jobRunAsUser.

  • Ensure least-privileged access to the Windows password secrets and delete unused secrets.

  • Don't give the queue jobRunAsUser permission to schedule commands to run in the future:

    • On Linux, deny these accounts access to cron and at.

    • On Windows, deny these accounts access to the Windows task scheduler.
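
On Linux, one way to enforce the cron and at restriction is through their deny files, which both daemons consult. A sketch, assuming the queue's jobRunAsUser is named jobuser (a placeholder):

```
# Contents of /etc/cron.deny and /etc/at.deny — listed users can't schedule jobs
jobuser
```

Note that if an allow file (/etc/cron.allow or /etc/at.allow) exists, it takes precedence over the deny file, so keep queue users out of the allow files as well.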

Note

For more information about the importance of regularly patching the operating system and installed software, see the Shared Responsibility Model.

Workstations

It's important to secure workstations with access to Deadline Cloud. This approach helps ensure that any jobs you submit to Deadline Cloud can't run arbitrary workloads billed to your AWS account.

We recommend the following best practices to secure artist workstations. For more information, see the Shared Responsibility Model.

  • Secure any persisted credentials that provide access to AWS, including Deadline Cloud. For more information, see Managing access keys for IAM users in the IAM User Guide.

  • Only install trusted, secure software.

  • Require that users federate with an identity provider to access AWS with temporary credentials.

  • Use secure permissions on Deadline Cloud submitter program files to prevent tampering.

  • Grant trusted individuals least-privileged access to artist workstations.

  • Only use submitters and adaptors that you obtain through the Deadline Cloud Monitor.

  • Restrict permissions to /etc/hosts and route tables on workstations and worker host operating systems.

  • Restrict permissions to /etc/resolv.conf on workstations and worker host operating systems.

  • Regularly patch the operating system and all installed software. This approach includes software specifically used with Deadline Cloud such as submitters, adaptors, worker agents, OpenJD packages, and others.