Prerequisites - Research and Engineering Studio

Prerequisites

Create an AWS account with an administrative user

You must have an AWS account with an administrative user:

  1. Open https://portal.aws.amazon.com/billing/signup.

  2. Follow the online instructions.

    Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad.

    When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.

Create an Amazon EC2 SSH key pair

If you do not have an Amazon EC2 SSH key pair, you need to create one. For more information, see Create a key pair using Amazon EC2 in the Amazon EC2 User Guide.
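If you prefer the command line, a key pair can also be created with the AWS CLI. This is a hedged sketch: the key name is a placeholder, and the snippet prints the commands for review rather than executing them, since they require configured AWS credentials.

```shell
# Sketch: create an EC2 SSH key pair with the AWS CLI (assumes AWS CLI v2
# and configured credentials). KEY_NAME is a placeholder.
KEY_NAME="res-demo-keypair"
CREATE_CMD="aws ec2 create-key-pair --key-name ${KEY_NAME} --key-type rsa --query KeyMaterial --output text"

# Print the commands for review instead of running them here:
echo "${CREATE_CMD} > ${KEY_NAME}.pem"
echo "chmod 400 ${KEY_NAME}.pem"
```

Redirecting the KeyMaterial output to a .pem file and restricting its permissions lets you use the key with ssh -i later.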

Increase service quotas

We recommend increasing the service quotas for:

  • Amazon VPC

    • Increase the Elastic IP address quota per NAT gateway from five to eight.

    • Increase the NAT gateways per Availability Zone from five to ten.

  • Amazon EC2

    • Increase the EC2-VPC Elastic IPs quota from five to ten.

Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Unless otherwise noted, each quota is Region-specific. You can request increases for some quotas, and other quotas cannot be increased. For more information, see Quotas for AWS services in this product.
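The same quota increases can be requested programmatically through Service Quotas. The sketch below prints the AWS CLI command rather than running it; the quota code is a placeholder you would first look up with `aws service-quotas list-service-quotas --service-code ec2`.

```shell
# Sketch: request a quota increase via the AWS CLI. QUOTA_CODE is a
# placeholder; look up the real code for the quota you want to raise.
QUOTA_CODE="L-XXXXXXXX"
REQUEST_CMD="aws service-quotas request-service-quota-increase --service-code ec2 --quota-code ${QUOTA_CODE} --desired-value 10"

# Printed for review rather than executed here:
echo "${REQUEST_CMD}"
```

Quota increase requests are Region-specific, so submit the request in the Region where you plan to deploy.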

Create a public domain (optional)

We recommend using a custom domain for the product in order to have a user-friendly URL. You will need to register a domain using Amazon Route 53 or another provider and import a certificate for the domain using AWS Certificate Manager. If you already have a public domain and certificate, you may skip this step.

  1. Follow the directions to register a domain with Route 53. You should receive a confirmation email.

  2. Retrieve the hosted zone for your domain. This is created automatically by Route 53.

    1. Open the Route53 console.

    2. Choose Hosted zones from the left navigation.

    3. Open the hosted zone created for your domain name and copy the Hosted zone ID.

  3. Open AWS Certificate Manager and follow these steps to request a domain certificate. Ensure you are in the Region where you plan to deploy the solution.

  4. Choose List certificates from the navigation, and find your certificate request. The request should be pending.

  5. Choose your Certificate ID to open the request.

  6. From the Domains section, choose Create records in Route 53. It will take approximately ten minutes for the request to process.

  7. Once the certificate is issued, copy the ARN from the Certificate status section.
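If you prefer to watch the validation from the command line, the certificate status and ARN can also be read with the AWS CLI. The ARN below is a placeholder; the snippet prints the commands for review rather than executing them.

```shell
# Sketch: check ACM certificate status from the CLI. CERT_ARN is a placeholder.
CERT_ARN="arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"

# Printed for review rather than executed here:
echo "aws acm list-certificates --output table"
echo "aws acm describe-certificate --certificate-arn ${CERT_ARN} --query Certificate.Status --output text"
```

When the status returned is ISSUED, copy the ARN for use in the deployment parameters.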

Create domain (GovCloud only)

If you are deploying in the AWS GovCloud (US-West) Region and you are using a custom domain for Research and Engineering Studio, you will need to complete these prerequisite steps.

  1. Deploy the Certificate AWS CloudFormation stack in the commercial partition AWS account where the public domain was created.

  2. From the Certificate CloudFormation Outputs, find and note the CertificateARN and PrivateKeySecretARN.

  3. In the GovCloud partition account, create a secret with the value of the CertificateARN output. Note the new secret ARN and add two tags to the secret so vdc-gateway can access the secret value:

    1. res:ModuleName = virtual-desktop-controller

    2. res:EnvironmentName = [environment name] (This could be res-demo.)

  4. In the GovCloud partition account, create a secret with the value of the PrivateKeySecretArn output. Note the new secret ARN and add two tags to the secret so vdc-gateway can access the secret value:

    1. res:ModuleName = virtual-desktop-controller

    2. res:EnvironmentName = [environment name] (This could be res-demo.)
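Steps 3 and 4 can also be done with the AWS CLI. This is a hedged sketch with placeholder names and values; it prints the command (to be run in the GovCloud partition account) rather than executing it.

```shell
# Sketch: create a tagged secret holding the CertificateARN output so
# vdc-gateway can read it. ENV_NAME and CERT_ARN_VALUE are placeholders;
# repeat the same command with the PrivateKeySecretARN output for step 4.
ENV_NAME="res-demo"
CERT_ARN_VALUE="arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"

# Printed for review rather than executed here:
echo "aws secretsmanager create-secret \
  --name ${ENV_NAME}-certificate-arn \
  --secret-string ${CERT_ARN_VALUE} \
  --tags Key=res:ModuleName,Value=virtual-desktop-controller Key=res:EnvironmentName,Value=${ENV_NAME}"
```

Note the ARN returned by each create-secret call; both secret ARNs are needed later in the deployment.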

Provide external resources

Research and Engineering Studio on AWS expects the following external resources to exist when it is deployed.

  • Networking (VPC, Public Subnets, and Private Subnets)

    This is where you will run the EC2 instances used to host the RES environment, the Active Directory (AD), and shared storage.

  • Storage (Amazon EFS)

    The storage volumes contain files and data needed for the virtual desktop infrastructure (VDI).

  • Directory service (AWS Directory Service for Microsoft Active Directory)

    The directory service authenticates users to the RES environment.

  • A secret that contains the service account password

    Research and Engineering Studio accesses secrets that you provide, including the service account password, using AWS Secrets Manager.

Tip

If you are deploying a demo environment and do not have these external resources available, you can use AWS High Performance Compute recipes to generate the external resources. See the following section, Create external resources, to deploy resources in your account.

For demo deployments in the AWS GovCloud (US-West) Region, you will need to complete the prerequisite steps in Create domain (GovCloud only).

Configure LDAPS in your environment (optional)

If you plan to use LDAPS communication in your environment, you must complete these steps to create and attach certificates to the AWS Managed Microsoft AD (AD) domain controller to provide communication between AD and RES.

  1. Follow the steps provided in How to enable server-side LDAPS for your AWS Managed Microsoft AD. You can skip this step if you have already enabled LDAPS.

  2. After confirming that LDAPS is configured on the AD, export the AD certificate:

    1. Go to your Active Directory server.

    2. Open PowerShell as an administrator.

    3. Run certmgr.msc to open the certificate list.

    4. Expand Trusted Root Certification Authorities, and then open Certificates to view the certificate list.

    5. Select and hold (or right-click) the certificate with the same name as your AD server and choose All tasks and then Export.

    6. Select Base-64 encoded X.509 (.CER) and choose Next.

    7. Select a directory and then choose Next.

  3. Create a secret in AWS Secrets Manager:

    When creating your secret in Secrets Manager, choose Other type of secret under Secret type, and paste your PEM-encoded certificate into the Plaintext field.

  4. Note the ARN created and input it as the DomainTLSCertificateSecretARN parameter in Step 1: Launch the product.
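As an alternative to pasting the certificate into the console, the secret can be created directly from the exported file with the AWS CLI. The file path and secret name below are placeholders; the snippet prints the command for review rather than executing it.

```shell
# Sketch: store the exported Base-64 X.509 certificate as a Secrets Manager
# secret. CERT_FILE and SECRET_NAME are placeholders.
CERT_FILE="./ad-ldaps-certificate.cer"
SECRET_NAME="res-demo-domain-tls-certificate"

# Printed for review rather than executed here; --query ARN prints the
# new secret's ARN, which becomes DomainTLSCertificateSecretARN.
echo "aws secretsmanager create-secret \
  --name ${SECRET_NAME} \
  --secret-string file://${CERT_FILE} \
  --query ARN --output text"
```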

Configure a private VPC (optional)

Deploying Research and Engineering Studio in an isolated VPC offers enhanced security to meet your organization's compliance and governance requirements. However, the standard RES deployment relies on internet access for installing dependencies. To install RES in a private VPC, you will need to satisfy the following prerequisites:

Prepare Amazon Machine Images (AMIs)

  1. Download dependencies. To deploy into an isolated VPC, the RES infrastructure needs its dependencies to be available without public internet access.

  2. Create an IAM role with Amazon S3 read-only access and trusted identity as Amazon EC2.

    1. Open the IAM console at https://console.aws.amazon.com/iam/.

    2. From Roles, choose Create role.

    3. On the Select trusted entity page:

      • Under Trusted entity type, choose AWS service.

      • For Use case under Service or use case, choose EC2 and choose Next.

    4. On Add permissions, select the following permission policies and then choose Next:

      • AmazonS3ReadOnlyAccess

      • AmazonSSMManagedInstanceCore

      • EC2InstanceProfileForImageBuilder

    5. Add a Role name and Description, and then choose Create role.

  3. Create the EC2 image builder component:

    1. Open the EC2 Image Builder console at https://console.aws.amazon.com/imagebuilder.

    2. Under Saved resources, choose Components and choose Create component.

    3. On the Create component page, enter the following details:

      • For Component type, choose Build.

      • For Component details choose:

        Parameter                    User entry
        Image operating system (OS)  Linux
        Compatible OS versions       Amazon Linux 2
        Component name               Enter a name such as research-and-engineering-studio-infrastructure.
        Component version            We recommend starting with 1.0.0.
        Description                  Optional user entry.
    4. On the Create component page, choose Define document content.

      1. Before entering the definition document content, you will need a file URI for the tar.gz file. Upload the tar.gz file provided by RES to an Amazon S3 bucket and copy the file's URI from the bucket properties.

      2. Enter the following:

        Note

        AddEnvironmentVariables is optional, and you may remove it if you do not require custom environment variables in your infrastructure hosts.

        If you are setting up http_proxy and https_proxy environment variables, the no_proxy parameters are required to prevent the instance from using proxy to query localhost, instance metadata IP addresses, and the services that support VPC endpoints.

        # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
        #
        # Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance
        # with the License. A copy of the License is located at
        #
        #     http://www.apache.org/licenses/LICENSE-2.0
        #
        # or in the 'license' file accompanying this file. This file is distributed on an 'AS IS' BASIS, WITHOUT WARRANTIES
        # OR CONDITIONS OF ANY KIND, express or implied. See the License for the specific language governing permissions
        # and limitations under the License.
        name: research-and-engineering-studio-infrastructure
        description: An RES EC2 Image Builder component to install required RES software dependencies for infrastructure hosts.
        schemaVersion: 1.0
        parameters:
          - AWSAccountID:
              type: string
              description: RES Environment AWS Account ID
          - AWSRegion:
              type: string
              description: RES Environment AWS Region
        phases:
          - name: build
            steps:
              - name: DownloadRESInstallScripts
                action: S3Download
                onFailure: Abort
                maxAttempts: 3
                inputs:
                  - source: '<s3 tar.gz file uri>'
                    destination: '/root/bootstrap/res_dependencies/res_dependencies.tar.gz'
                    expectedBucketOwner: '{{ AWSAccountID }}'
              - name: RunInstallScript
                action: ExecuteBash
                onFailure: Abort
                maxAttempts: 3
                inputs:
                  commands:
                    - 'cd /root/bootstrap/res_dependencies'
                    - 'tar -xf res_dependencies.tar.gz'
                    - 'cd all_dependencies'
                    - '/bin/bash install.sh'
              - name: AddEnvironmentVariables
                action: ExecuteBash
                onFailure: Abort
                maxAttempts: 3
                inputs:
                  commands:
                    - |
                      echo -e "
                      http_proxy=http://<ip>:<port>
                      https_proxy=http://<ip>:<port>
                      no_proxy=127.0.0.1,169.254.169.254,169.254.170.2,localhost,{{ AWSRegion }}.res,{{ AWSRegion }}.vpce.amazonaws.com,{{ AWSRegion }}.elb.amazonaws.com,s3.{{ AWSRegion }}.amazonaws.com,s3.dualstack.{{ AWSRegion }}.amazonaws.com,ec2.{{ AWSRegion }}.amazonaws.com,ec2.{{ AWSRegion }}.api.aws,ec2messages.{{ AWSRegion }}.amazonaws.com,ssm.{{ AWSRegion }}.amazonaws.com,ssmmessages.{{ AWSRegion }}.amazonaws.com,kms.{{ AWSRegion }}.amazonaws.com,secretsmanager.{{ AWSRegion }}.amazonaws.com,sqs.{{ AWSRegion }}.amazonaws.com,elasticloadbalancing.{{ AWSRegion }}.amazonaws.com,sns.{{ AWSRegion }}.amazonaws.com,logs.{{ AWSRegion }}.amazonaws.com,logs.{{ AWSRegion }}.api.aws,elasticfilesystem.{{ AWSRegion }}.amazonaws.com,fsx.{{ AWSRegion }}.amazonaws.com,dynamodb.{{ AWSRegion }}.amazonaws.com,api.ecr.{{ AWSRegion }}.amazonaws.com,.dkr.ecr.{{ AWSRegion }}.amazonaws.com,kinesis.{{ AWSRegion }}.amazonaws.com,.data-kinesis.{{ AWSRegion }}.amazonaws.com,.control-kinesis.{{ AWSRegion }}.amazonaws.com,events.{{ AWSRegion }}.amazonaws.com,cloudformation.{{ AWSRegion }}.amazonaws.com,sts.{{ AWSRegion }}.amazonaws.com,application-autoscaling.{{ AWSRegion }}.amazonaws.com,monitoring.{{ AWSRegion }}.amazonaws.com
                      " > /etc/environment
    5. Choose Create component.

  4. Create an Image Builder image recipe.

    1. On the Create recipe page, enter the following:

      Section                 Parameter                User entry
      Recipe details          Name                     Enter an appropriate name such as res-recipe-linux-x86.
                              Version                  Enter a version, typically starting with 1.0.0.
                              Description              Add an optional description.
      Base image              Select image             Select managed images.
                              OS                       Amazon Linux
                              Image origin             Quick start (Amazon-managed)
                              Image name               Amazon Linux 2 x86
                              Auto-versioning options  Use latest available OS version.
      Instance configuration                           Keep the default settings, and make sure Remove SSM agent after pipeline execution is not selected.
      Working directory       Working directory path   /root/bootstrap/res_dependencies
      Components              Build components         (see the list below)

      Search for and select the following:

      • Amazon-managed: aws-cli-version-2-linux

      • Amazon-managed: amazon-cloudwatch-agent-linux

      • Owned by you: Amazon EC2 component created previously. Put your AWS account ID and current AWS Region in the fields.

      Test components

      Search for and select:

      • Amazon-managed: simple-boot-test-linux

    2. Choose Create recipe.

  5. Create Image Builder infrastructure configuration.

    1. Under Saved resources, choose Infrastructure configurations.

    2. Choose Create infrastructure configuration.

    3. On the Create infrastructure configuration page, enter the following:

      Section             Parameter                         User entry
      General             Name                              Enter an appropriate name such as res-infra-linux-x86.
                          Description                       Add an optional description.
                          IAM role                          Select the IAM role created previously.
      AWS infrastructure  Instance type                     Choose t3.medium.
                          VPC, subnet, and security groups  (see below)

      Select an option that permits internet access and access to the Amazon S3 bucket. If you need to create a security group, you can create one from the Amazon EC2 console with the following inputs:

      • VPC: Select the same VPC being used for the infrastructure configuration. This VPC must have internet access.

      • Inbound rule:

        • Type: SSH

        • Source: Custom

        • CIDR block: 0.0.0.0/0

    4. Choose Create infrastructure configuration.

  6. Create a new EC2 Image Builder pipeline:

    1. Go to Image pipelines, and choose Create image pipeline.

    2. On the Specify pipeline details page, enter the following and choose Next:

      • Pipeline name and optional description

      • For Build schedule, set a schedule or choose Manual if you want to start the AMI baking process manually.

    3. On the Choose recipe page, choose Use existing recipe and enter the Recipe name created previously. Choose Next.

    4. On the Define image process page, select the default workflows and choose Next.

    5. On the Define infrastructure configuration page, choose Use existing infrastructure configuration and enter the name of the previously created infrastructure configuration. Choose Next.

    6. On the Define distribution settings page, consider the following for your selections:

      • The output image must reside in the same Region as the deployed RES environment so that RES can launch infrastructure host instances from it. With service defaults, the output image is created in the Region where the EC2 Image Builder service is being used.

      • If you want to deploy RES in multiple regions, you can choose Create a new distribution settings and add more regions there.

    7. Review your selections and choose Create pipeline.

  7. Run the EC2 Image Builder pipeline:

    1. From Image pipelines, find and select the pipeline you created.

    2. Choose Actions, and select Run pipeline.

      The pipeline may take approximately 45 minutes to an hour to create an AMI.

  8. Note the AMI ID for the generated AMI and use it as the input for the InfrastructureHostAMI parameter in Step 1: Launch the product.
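The no_proxy value in the component document above is long and easy to mistype. As a sanity check, a small helper can generate the per-Region service entries. The service list here is an assumed subset of the one in the component document, not an authoritative set; extend it to match your deployment.

```shell
# Helper sketch: build a no_proxy value for one Region. The service list
# mirrors (a subset of) the component document above; extend as needed.
build_no_proxy() {
  region="$1"
  services="ec2 ec2messages ssm ssmmessages kms secretsmanager sqs \
elasticloadbalancing sns logs elasticfilesystem fsx dynamodb kinesis \
events cloudformation sts application-autoscaling monitoring"
  # Always exempt loopback, instance metadata, and container credential IPs.
  out="127.0.0.1,169.254.169.254,169.254.170.2,localhost"
  for s in ${services}; do
    out="${out},${s}.${region}.amazonaws.com"
  done
  echo "${out}"
}

NO_PROXY_VALUE="$(build_no_proxy us-gov-west-1)"
echo "no_proxy=${NO_PROXY_VALUE}"
```

Comparing the generated string against the value baked into /etc/environment is a quick way to catch a missing service before VDI sessions start failing behind the proxy.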

Set up VPC endpoints

To deploy RES and launch virtual desktops, the AWS services RES depends on must be reachable from your private subnets. You must set up VPC endpoints to provide the required access, repeating these steps for each endpoint.

  1. If endpoints have not previously been configured, follow the instructions provided in Access an AWS service using an interface VPC endpoint.

  2. Select one private subnet in each of two Availability Zones.

AWS service                  Service name
Application Auto Scaling     com.amazonaws.region.application-autoscaling
AWS CloudFormation           com.amazonaws.region.cloudformation
Amazon CloudWatch            com.amazonaws.region.monitoring
Amazon CloudWatch Logs       com.amazonaws.region.logs
Amazon DynamoDB              com.amazonaws.region.dynamodb (requires a gateway endpoint)
Amazon EC2                   com.amazonaws.region.ec2
Amazon ECR                   com.amazonaws.region.ecr.api
                             com.amazonaws.region.ecr.dkr
Amazon Elastic File System   com.amazonaws.region.elasticfilesystem
Elastic Load Balancing       com.amazonaws.region.elasticloadbalancing
Amazon EventBridge           com.amazonaws.region.events
Amazon FSx                   com.amazonaws.region.fsx
AWS Key Management Service   com.amazonaws.region.kms
Amazon Kinesis Data Streams  com.amazonaws.region.kinesis-streams
AWS Lambda                   com.amazonaws.region.lambda
Amazon S3                    com.amazonaws.region.s3 (requires a gateway endpoint, which RES creates by default; additional Amazon S3 interface endpoints are required for cross-mounting buckets in an isolated environment, see Accessing Amazon Simple Storage Service interface endpoints)
AWS Secrets Manager          com.amazonaws.region.secretsmanager
Amazon SES                   com.amazonaws.region.email-smtp (not supported in Availability Zones use1-az2, use1-az3, use1-az5, usw1-az2, usw2-az4, apne2-az4, cac1-az3, and cac1-az4)
AWS Security Token Service   com.amazonaws.region.sts
Amazon SNS                   com.amazonaws.region.sns
Amazon SQS                   com.amazonaws.region.sqs
AWS Systems Manager          com.amazonaws.region.ec2messages
                             com.amazonaws.region.ssm
                             com.amazonaws.region.ssmmessages
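Each interface endpoint in the table can be created from the CLI as well as the console. This is a hedged sketch with placeholder IDs; it prints the command for review rather than executing it, and should be repeated for each service name in the table.

```shell
# Sketch: create one interface endpoint; repeat per service in the table.
# VPC_ID and SUBNET_IDS are placeholders (one private subnet in each of two AZs).
REGION="us-east-1"
VPC_ID="vpc-0abc123example"
SUBNET_IDS="subnet-0aaa111example subnet-0bbb222example"
SERVICE="com.amazonaws.${REGION}.ecr.api"

# Printed for review rather than executed here:
echo "aws ec2 create-vpc-endpoint \
  --vpc-id ${VPC_ID} \
  --vpc-endpoint-type Interface \
  --service-name ${SERVICE} \
  --subnet-ids ${SUBNET_IDS} \
  --private-dns-enabled"
```

Amazon DynamoDB and Amazon S3 use gateway endpoints instead, which take --vpc-endpoint-type Gateway and route table IDs rather than subnet IDs.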

Connect to services without VPC endpoints

To integrate with services that do not support VPC endpoints, you can set up a proxy server in a public subnet of your VPC. Follow these steps to create a proxy server with the minimum necessary access for a Research and Engineering Studio deployment that uses AWS IAM Identity Center as its identity provider.

  1. Launch a Linux instance in the public subnet of the VPC you will use for your RES deployment.

    • Linux family – Amazon Linux 2 or Amazon Linux 2023

    • Architecture – x86

    • Instance type – t2.micro or higher

    • Security group – TCP on port 3128 from 0.0.0.0/0

  2. Connect to the instance to set up a proxy server.

    1. Allow HTTP connections through the proxy.

    2. Allow connection to the following domains from all relevant subnets:

      • .amazonaws.com (for generic AWS services)

      • .amazoncognito.com (for Amazon Cognito)

      • .awsapps.com (for Identity Center)

      • .signin.aws (for Identity Center)

      • .amazonaws-us-gov.com (for AWS GovCloud (US))

    3. Deny all other connections.

    4. Activate and start the proxy server.

    5. Note the PORT on which the proxy server listens.

  3. Configure your route table to allow access to the proxy server.

    1. Go to your VPC console and identify the route tables for the subnets you will be using for Infrastructure Hosts and VDI hosts.

    2. Edit each route table to send outbound traffic to the proxy server instance created in the previous steps.

    3. Repeat this for the route tables of every subnet (without internet access) that you will use for infrastructure hosts and VDIs.

  4. Modify the security group of the proxy server EC2 instance and make sure it allows inbound TCP connections on the PORT on which the proxy server is listening.
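The allow/deny rule from step 2 can be expressed as a small function, which is useful for spot-checking whether a given host would pass the proxy. This is a sketch of the matching logic only, not a proxy configuration.

```shell
# Helper sketch: return success (0) only for hosts ending in an allowed domain.
is_allowed() {
  host="$1"
  for domain in .amazonaws.com .amazoncognito.com .awsapps.com .signin.aws .amazonaws-us-gov.com; do
    case "${host}" in
      *"${domain}") return 0 ;;   # suffix match against the allowlist
    esac
  done
  return 1   # deny all other connections
}

is_allowed "sts.us-gov-west-1.amazonaws.com" && echo "allowed"
is_allowed "example.org" || echo "denied"
```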

Set private VPC deployment parameters

In Step 1: Launch the product, you are expected to input certain parameters in the AWS CloudFormation template. Be sure to set the following parameters as noted to successfully deploy into the private VPC you just configured.

Parameter                     Input
InfrastructureHostAMI         Use the infrastructure AMI ID created in Prepare Amazon Machine Images (AMIs).
IsLoadBalancerInternetFacing  Set to false.
LoadBalancerSubnets           Choose private subnets without internet access.
InfrastructureHostSubnets     Choose private subnets without internet access.
VdiSubnets                    Choose private subnets without internet access.
ClientIP                      You can choose your VPC CIDR to allow access for all VPC IP addresses.