Integrate Deadline Cloud into your pipeline
You can integrate your existing rendering pipelines with AWS Deadline Cloud to streamline your workflow management and job submission processes.
What is pipeline integration?
A pipeline integration of Deadline Cloud refers to how a Deadline Cloud farm provides batch processing for your interactive and automated workflows. This example uses a visual effects pipeline that you can adapt to the applications and processes your operators use in their workflows.
A visual effects pipeline consists of the stages of post-production that process input footage, 3D models, animation, textures, lighting, rendered images, and more. It prescribes how different departments exchange assets to perform the tasks they are responsible for. A well-designed pipeline facilitates the efficient creation of final images for a television show or similar production.
By integrating a Deadline Cloud farm into your pipeline, you can offload long-running jobs onto a queue, and prioritize how Deadline Cloud schedules them on fleets of worker hosts. You can use fleets managed by the service, and you can create your own fleets on-premises or on AWS.
When creating your pipeline integration, consider the following factors:

- Where is your asset data stored, and how will you provide it to worker hosts in the farm?
- Which applications and plugins do your jobs need, and how will you provision them onto worker hosts in the farm?
- When artists or other operators have jobs to run, how will they submit them to the farm?
- Who will monitor the progress and status of jobs, and how will you control costs and optimize the utilization of worker hosts?
Example of an on-premises studio with a farm on AWS
This example focuses on a pipeline where artists work together on-premises and submit jobs to a farm on AWS for rendering. The approach presented here is quick to onboard onto Deadline Cloud and provides a flexible starting point for customization.
Here are the pipeline integration factors for this example studio:
- Asset data is stored on a NAS shared file system in their on-premises office.
  - On Windows, projects are mounted to the P: drive and utilities are mounted to X:.
  - On macOS, projects are mounted to /Volumes/Projects and utilities are mounted to /Volumes/Utilities.
- They use Maya for 3D modeling, Arnold for rendering, and Nuke for compositing. No custom plugins are installed in these applications.
- They want to use the default submission experience.
- Artists will monitor their own jobs, and producers will monitor costs and adjust priorities when needed.
The pipeline integration for this studio uses job attachments to transfer data between the studio premises and AWS, because it is easy to get started with and scales to large fleet sizes. The job attachments S3 bucket configured on the queue acts as a cache tier between the on-premises NAS and worker hosts on AWS.
When artists submit jobs from Maya or Nuke, the Deadline Cloud integrated submitter scans the scene to identify the files the job needs to run, and then attaches them to the job by uploading them to S3. A high-performance hash identifies files that were previously uploaded by any artist in the studio. This way, when an artist iteratively submits new versions of the same shot, or one artist hands a shot off to another, only new or modified files are uploaded as part of submitting the job.
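As a simplified sketch of this caching behavior (not the actual job attachments implementation), the following assumes a hypothetical bucket name and key prefix, and uses SHA-256 in place of the service's own hashing and manifest format:

```python
import hashlib
import boto3
from botocore.exceptions import ClientError

# Illustrative values only -- not the real job attachments layout.
BUCKET = "example-job-attachments-bucket"  # hypothetical bucket name
PREFIX = "Data"                            # hypothetical content-addressed prefix

s3 = boto3.client("s3")

def file_hash(path: str) -> str:
    """Hash a file's contents so identical files map to the same S3 key."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload_if_missing(path: str) -> str:
    """Upload a file only if no object with its content hash already exists."""
    key = f"{PREFIX}/{file_hash(path)}"
    try:
        # Any artist's earlier submission of the same bytes populates the cache.
        s3.head_object(Bucket=BUCKET, Key=key)
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            s3.upload_file(path, BUCKET, key)  # new or modified file: upload once
        else:
            raise
    return key
```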
The studio uses both Windows and macOS workstations, so they configure storage profiles with file system locations of local type for both their projects and utilities drives. See the Storage profiles for job attachments topic for more details about how this supports the path mapping necessary when jobs run on a different operating system than the one they were submitted from. They also configure a Linux host on their network to automatically download the output of all tasks of jobs in the queue when they complete. To learn how to set this up, see Automatic downloads for job attachments.
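These storage profiles could be created with the AWS SDK for Python roughly as follows; the farm ID and display names are placeholder assumptions. Giving the matching locations the same name in each profile is what lets Deadline Cloud map paths between operating systems:

```python
import boto3

deadline = boto3.client("deadline")
FARM_ID = "farm-0123456789abcdef0123456789abcdef"  # hypothetical farm ID

# One profile per workstation OS, each naming the same shared locations.
deadline.create_storage_profile(
    farmId=FARM_ID,
    displayName="Windows Workstation",
    osFamily="WINDOWS",
    fileSystemLocations=[
        {"name": "Projects", "path": "P:\\", "type": "LOCAL"},
        {"name": "Utilities", "path": "X:\\", "type": "LOCAL"},
    ],
)
deadline.create_storage_profile(
    farmId=FARM_ID,
    displayName="macOS Workstation",
    osFamily="MACOS",
    fileSystemLocations=[
        {"name": "Projects", "path": "/Volumes/Projects", "type": "LOCAL"},
        {"name": "Utilities", "path": "/Volumes/Utilities", "type": "LOCAL"},
    ],
)
```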
The farm contains two Linux service-managed fleets with vCPU and RAM requirements set to ranges starting from the minimum specification the studio needs for its jobs. One fleet is configured with a small number of spot instances that provide consistent render capacity during work hours, and the other fleet is configured as wait and save to render more jobs during off-peak hours at a lower cost. Maya, the Arnold for Maya plugin, and Nuke are all provided for Linux service-managed fleets from the deadline-cloud conda channel, alongside usage-based licensing.
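One of these fleets might be defined with the AWS SDK roughly as follows; the farm ID, role ARN, worker counts, and capability minimums are placeholder assumptions:

```python
import boto3

deadline = boto3.client("deadline")

# Hypothetical identifiers for this example.
FARM_ID = "farm-0123456789abcdef0123456789abcdef"
FLEET_ROLE_ARN = "arn:aws:iam::111122223333:role/DeadlineWorkerRole"

# A service-managed Linux fleet using spot capacity, with minimum vCPU
# and memory requirements and room to scale above them.
deadline.create_fleet(
    farmId=FARM_ID,
    displayName="Work-hours render fleet",
    roleArn=FLEET_ROLE_ARN,
    minWorkerCount=0,
    maxWorkerCount=10,
    configuration={
        "serviceManagedEc2": {
            "instanceCapabilities": {
                "vCpuCount": {"min": 16},
                "memoryMiB": {"min": 65536},
                "osFamily": "LINUX",
                "cpuArchitectureType": "x86_64",
            },
            "instanceMarketOptions": {"type": "spot"},
        }
    },
)
```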
To save the overhead of application installation, they replace the default conda queue environment configured for the queue in the Deadline Cloud console with the GitHub sample conda queue environment with improved caching.
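That replacement can be scripted; the following sketch assumes the sample template has already been downloaded to a local file (the file name, farm ID, and queue ID are placeholders):

```python
import boto3

deadline = boto3.client("deadline")

# Hypothetical identifiers for this example.
FARM_ID = "farm-0123456789abcdef0123456789abcdef"
QUEUE_ID = "queue-0123456789abcdef0123456789abcdef"

# Remove the default conda queue environment created by the console.
envs = deadline.list_queue_environments(farmId=FARM_ID, queueId=QUEUE_ID)
for env in envs["environments"]:
    deadline.delete_queue_environment(
        farmId=FARM_ID,
        queueId=QUEUE_ID,
        queueEnvironmentId=env["queueEnvironmentId"],
    )

# Create the replacement from the downloaded sample template.
with open("conda_queue_env_improved_caching.yaml") as f:  # placeholder file name
    deadline.create_queue_environment(
        farmId=FARM_ID,
        queueId=QUEUE_ID,
        priority=1,
        templateType="YAML",
        template=f.read(),
    )
```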
To support job submission, they install the Deadline Cloud submitters on each workstation, selecting the Maya and Nuke integrations. Both submitters provide integrated dialogs for submitting jobs from within the application interface. With the Deadline Cloud monitor, they can log in to the farm, monitor the progress of jobs, and view log output to diagnose issues.
When configuring user access levels in the farm, they give Contributor access to artists so they can submit jobs, view all jobs, and modify the properties of their own jobs. They give Manager access to render wranglers so they can modify the properties of all jobs. They give Owner access to producers so they can track spending and usage by creating budgets and exploring usage costs.
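With IAM Identity Center users, these access levels might be granted roughly as follows; the identity store ID and user IDs are placeholder assumptions:

```python
import boto3

deadline = boto3.client("deadline")

# Hypothetical identifiers for this example.
FARM_ID = "farm-0123456789abcdef0123456789abcdef"
IDENTITY_STORE_ID = "d-0123456789"

def grant_farm_access(user_id: str, level: str) -> None:
    """Associate an IAM Identity Center user with the farm at the given level."""
    deadline.associate_member_to_farm(
        farmId=FARM_ID,
        principalId=user_id,
        principalType="USER",
        identityStoreId=IDENTITY_STORE_ID,
        membershipLevel=level,
    )

grant_farm_access("artist-user-id", "CONTRIBUTOR")  # submit and view jobs
grant_farm_access("wrangler-user-id", "MANAGER")    # modify any job's properties
grant_farm_access("producer-user-id", "OWNER")      # create budgets, view costs
```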