Creating custom configurations

Administrators can use custom components to configure instances, which lets them set up and control additional properties of their streaming workstations. Custom components use PowerShell scripts for Windows instances and shell scripts for Linux instances, and the resulting configurations can be attached to launch profiles. After you create custom configurations, you can add resources to your workstations and run custom scripts on your instances during system and user initialization.
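
For example, a Linux initialization script is an ordinary shell script. The following minimal sketch (the log file path is only illustrative and not part of Nimble Studio) is the kind of script you could attach as a system initialization script to confirm that a custom configuration ran:

#!/bin/bash
# Hypothetical system initialization script: record when the instance was
# provisioned so that you can confirm the custom configuration ran.
echo "Custom configuration ran at $(date -u)" >> /var/log/studio-custom-config.log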

This tutorial explains how to create and attach custom configurations. Example custom configurations and their uses are also provided.

Prerequisites

  • To complete this tutorial, you need an active Nimble Studio cloud studio deployed in your AWS account. If you don’t have a cloud studio already deployed, see the Deploying a new studio with StudioBuilder tutorial.

Step 1: Create the custom configuration

First, create the custom configuration. This section provides several examples of custom configurations.

  1. Sign in to the AWS Management Console and open the Nimble Studio console.

  2. Choose Studio resources in the left navigation pane.

  3. Choose Add under the studio resource type called Custom configuration.

    
    (Image: The Studio resource types list section on the Nimble Studio console's Studio resources page.)
  4. For each section, enter the following information.

    1. Region: Select the Region that your studio is deployed in. This field is prefilled with the correct value.

    2. Custom configuration name: Enter the name associated with this configuration. You can reference the custom configuration name in launch profiles later.

    3. Custom configuration description: Enter an optional description for this custom configuration.

    4. Parameter name: Enter the name for the parameter. You can use the parameter name as a key variable in the initialization scripts later.

    5. Parameter value: Enter the value of the parameter. At runtime, the parameter value replaces the parameter name (key) in your scripts.

      1. Parameters give you a way to add variables to your scripts and make it easier to replace values.

      2. You can add multiple parameters to each custom configuration.

      3. For some examples of parameters, see the Custom configuration examples.

    6. (Optional) IAM Roles: Choose the IAM role that you want to associate with this custom component. Users with this IAM role can access different AWS services through their launch profiles.

      1. During IAM role creation, select the Choose a service to view use case dropdown. Enter Nimble Studio and choose Nimble Studio.

      2. Select the radio button next to Nimble Studio - Allows Nimble Studio resources access to AWS resources. Then choose Next.

      3. Add permissions to the newly created role to grant it access to the AWS resources you need.

      4. Choose an Initialization role. This role provides temporary access to AWS resources from system initialization scripts. For initialization role examples, see the Custom configuration examples section.

      5. Choose a Runtime role. This role provides runtime access to AWS resources anytime that the instance is running.

    7. Initialization scripts: Define the Windows PowerShell or Linux shell scripts that run during system initialization time, during user initialization time, or during both.

      1. For some examples of initialization scripts, see the Custom configuration examples.

    8. Security groups: Choose the security group that you want to be associated with this custom configuration.

      1. Security groups allow administrators to open new ports on instances so the instances can perform certain operations. This could include opening ports for custom file storage support, or ports for communicating with license servers (an AWS CLI sketch for creating such a security group follows this procedure).

  5. (Optional) Add tags if you're using tags to track your AWS resources.

  6. Choose Save custom configuration.
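
If you don't already have a security group to choose in the Security groups step, the following AWS CLI sketch shows one way to create a group that opens a license-server port. The group name, VPC ID, security group ID, port, and CIDR range are placeholders; replace them with values that match your studio's VPC and license server.

# Create a security group in your studio's VPC (placeholder VPC ID).
aws ec2 create-security-group \
    --group-name LicenseServerAccess \
    --description "Allow workstations to reach a license server" \
    --vpc-id <YOUR_VPC_ID>

# Open the inbound TCP port that the license server listens on
# (27000 is only an example port).
aws ec2 authorize-security-group-ingress \
    --group-id <NEW_SECURITY_GROUP_ID> \
    --protocol tcp \
    --port 27000 \
    --cidr 10.0.0.0/16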

Step 2: Attach custom configuration to a launch profile

Next, attach your custom configuration to a launch profile. The launch profile uses this custom configuration to run the attached scripts.

  1. Choose Launch profiles in the left navigation pane.

  2. Select the launch profile that you want to add the custom configuration to.

  3. Choose Action. Then choose Edit.

  4. Scroll down to Launch profile components.

  5. Choose the check box next to the custom component that you created in Creating custom configurations.

  6. Scroll to the bottom of the page and choose Update launch profile.

You have now created and attached a custom configuration to a launch profile. To see an example of when you can use a custom configuration, see the Provide Superuser access for Linux users tutorial.
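
If you want to verify the attachment from the AWS CLI, the following sketch lists your studio components and shows the launch profile details. The studio ID and launch profile ID are placeholders for your own values.

# List the studio components (custom configurations) in your studio.
aws nimble list-studio-components --studio-id <YOUR_STUDIO_ID>

# Show the launch profile details to confirm that the custom
# configuration is attached.
aws nimble get-launch-profile \
    --studio-id <YOUR_STUDIO_ID> \
    --launch-profile-id <YOUR_LAUNCH_PROFILE_ID>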

Custom configuration examples

The following examples provide parameter values and initialization scripts for custom configurations. They also summarize when you might use each custom configuration.

Custom configuration to hide the Windows Server network wizard after user login

This example creates a custom configuration component that sets up a Group Policy Object (GPO) for the AWS Managed Microsoft AD. This GPO hides the network wizard, which would otherwise show when a user logs into a Windows Server instance.

Script parameters

  • Parameter name: GPOName
    Parameter value: "HideNetworkWizard"

  • Parameter name: GPOComment
    Parameter value: "Hides the Network Wizard at Login"

  • Parameter name: RegKey
    Parameter value: "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Network\NwCategoryWizard"

Windows system initialization scripts

The following Windows system initialization script uses the script parameters from the previous section to set up a GPO for the AWS Managed Microsoft AD.

# Determine the OU target from the instance's Active Directory domain.
$domain = (Get-WmiObject Win32_ComputerSystem).Domain
$split = $domain.Split(".")
$target = 'ou=' + $split[0] + ',dc=' + $split[0] + ',dc=' + $split[1] + ',dc=' + $split[2] + ',dc=' + $split[3]

# Create the GPO, link it to the target OU, and set the registry value that
# hides the network wizard at login.
New-GPO -Name "HideNetworkWizard" -Comment "Hides the Network Wizard at Login"
New-GPLink -Name "HideNetworkWizard" -Target $target
Set-GPPrefRegistryValue -Name "HideNetworkWizard" -Context User -Key "HKCU\Software\Microsoft\Windows NT\CurrentVersion\Network\NwCategoryWizard" -ValueName "Show" -Type DWORD -Value 0 -Action Update

The following image shows how the Parameter name, Parameter value, and Windows system initialization script are entered in the Nimble Studio console.


(Image: The Script parameters and Initialization scripts sections in the Nimble Studio console.)

Set the NICE DCV server priority

This example creates a custom configuration component that can increase or decrease the process priority for the NICE DCV streaming server. This is helpful when users run CPU-heavy applications because it prioritizes the streaming server.

Script parameters

  • Parameter name: WinExecutable
    Parameter value: "dcvserver.exe"

  • Parameter name: WinPriority
    Parameter value: "256"

  • Parameter name: LinuxExecutable
    Parameter value: "/usr/bin/dcvserver"

  • Parameter name: LinuxPriority
    Parameter value: "19"

  • Parameter name: LinuxConfigFile
    Parameter value: "/etc/dcv/dcv.conf"

    The following image shows how the Parameter name and Parameter value are entered in the Nimble Studio console.

    (Image: The Script parameters section.)

Windows system initialization scripts

# Set the NICE DCV server process priority to High (256).
Get-WmiObject Win32_Process -Filter 'name = "dcvserver.exe"' | ForEach-Object { $_.SetPriority(256) }

Linux system initialization scripts

# Find the PID of the NICE DCV server process and lower its priority (nice value 19).
PSID=$(ps aux | grep -i /usr/bin/dcvserver | tr -s ' ' | cut -d ' ' -f 2 | head -n 1)
renice -n 19 -p $PSID
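
To confirm the change after the script runs, you can check the process's nice value from a terminal. This is a quick sketch that assumes the server process is named dcvserver:

# Show the PID, nice value, and command name of the NICE DCV server process.
ps -C dcvserver -o pid,ni,comm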

The following image shows how the Windows and Linux system initialization scripts are entered in the Nimble Studio console.

(Image: The Script parameters and Initialization scripts sections in the Nimble Studio console.)

Use a studio component initialization role

This example shows how to retrieve a secret from AWS Secrets Manager by using the initialization role. Credentials for the initialization role aren’t ever made accessible to the workstation user. This role gives administrators the required AWS access to provision a machine before the user gains access to the workstation.

To use this initialization script, first create a Secrets Manager secret by following the Create a secret tutorial in the AWS Secrets Manager User Guide.

  • For Key, enter example-secret-key and enter example-secret-value as the Value.

    • In production, example-secret-key could be an API key that's required to perform some one-time setup to provision the workstation.

  • For Name, enter a name. For example, example-secret.

  • Note the secret's Amazon Resource Name (ARN). You will reference this ARN when creating the initialization role.

  • Note the secret's AWS Region. You will need it to retrieve the secret from the AWS CLI in the system initialization script.
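
If you prefer to create the secret from the AWS CLI instead of the console, the following sketch creates an equivalent secret. The Region us-west-2 is only an example; use the Region where your studio runs.

# Create the example secret with a single key-value pair.
aws secretsmanager create-secret \
    --name example-secret \
    --description "Example secret for a Nimble Studio custom configuration" \
    --secret-string '{"example-secret-key":"example-secret-value"}' \
    --region us-west-2

# The command output includes the secret's ARN, which you reference in the
# GetExampleSecretValue policy later in this procedure.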

Next, create an initialization IAM role that can access the secret by following the Creating a role for an AWS service (console) tutorial in the IAM User Guide.

  • Choose Nimble Studio as a trusted entity.

  • During role creation, select Custom trust policy on the Select trusted entity page. Then enter the following policy.

    { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "identity.nimble.amazonaws.com" }, "Action": [ "sts:AssumeRole", "sts:TagSession" ] } ] }
  • Choose Create policy and enter the following JSON text into the JSON editor. Replace <SECRET_ARN> with the ARN of the secret that you created earlier.

    { "Version": "2012-10-17", "Statement": [ { "Sid": "GetExampleSecretValue", "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": "<SECRET_ARN>" } ] }
  • Name this policy GetExampleSecretValue and add this policy to the role that you’re creating.

  • For Role name, enter ExampleStudioComponentInitRole.

Choose this role when you create a studio component. Because Nimble Studio is a trusted entity of the role, the role appears in the dropdown in the IAM roles section of the Create Studio Component page.
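
If you prefer the AWS CLI to the console for this step, the following sketch creates the same role. It assumes that you saved the trust policy and the GetExampleSecretValue policy shown above as trust-policy.json and get-example-secret-value.json, and it adds the permissions as an inline policy instead of the customer managed policy that the console flow creates.

# Create the initialization role with the Nimble Studio trust policy.
aws iam create-role \
    --role-name ExampleStudioComponentInitRole \
    --assume-role-policy-document file://trust-policy.json

# Grant the role permission to read the example secret.
aws iam put-role-policy \
    --role-name ExampleStudioComponentInitRole \
    --policy-name GetExampleSecretValue \
    --policy-document file://get-example-secret-value.json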

Linux system initialization scripts

Now that you've attached the ExampleStudioComponentInitRole to the studio component, the credentials corresponding to that role will automatically be available from your system initialization scripts. You can access these credentials through the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables. This means that you can use the AWS CLI or SDK without any manual credential management. If your studio component requires an API key to provision a machine with the functionality that it provides, use the following script to retrieve that key.

# The AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN
# environment variables now contain credentials corresponding to
# ExampleStudioComponentInitRole, so we can go ahead and
# fetch our secret.

# Install jq to parse the output of Secrets Manager (if it isn't included in the AMI).
sudo yum install -y jq

# Retrieve the example secret from Secrets Manager.
EXAMPLE_SECRET_KEY=$(
    aws secretsmanager get-secret-value \
        --region <REGION_OF_EXAMPLE_SECRET> \
        --secret-id example-secret \
        | jq '.SecretString | fromjson | .["example-secret-key"]'
)

# Do stuff with $EXAMPLE_SECRET_KEY to provision the instance before the
# user has ever logged in. The user will only gain access to the secret
# if this script persists it to disk.

Windows system initialization scripts

Now that you've attached the ExampleStudioComponentInitRole to the studio component, the credentials corresponding to that role will automatically be available from your system initialization scripts. You can access these credentials through the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables. This means that you can use the AWS CLI or SDK without any manual credential management. If your studio component requires an API key to provision a machine with the functionality that it provides, use the following script to retrieve that key.

# The $Env:AWS_ACCESS_KEY_ID, $Env:AWS_SECRET_ACCESS_KEY, and $Env:AWS_SESSION_TOKEN
# environment variables now contain credentials corresponding to
# ExampleStudioComponentInitRole, so we can go ahead and
# fetch our secret.

# Retrieve the example secret from Secrets Manager.
$exampleSecretKey = (Get-SECSecretValue -Region us-east-1 -SecretId example-secret).SecretString `
    | ConvertFrom-Json `
    | Select-Object -ExpandProperty "example-secret-key"

# Do stuff with $exampleSecretKey to provision the instance before the
# user has ever logged in. The user will only gain access to the secret
# if this script persists it to disk.

Use a studio component runtime role

This example shows how to interact with Amazon S3 from the user initialization script, and while you're logged in to the instance, by using the studio component runtime role. The runtime role is intended to give studio users access to tools that are configured by studio components. Credentials for the runtime role are made available on workstations through a studio component-specific AWS profile. An environment variable provides the profile name to the system and user initialization scripts.

To use this initialization script, first create a runtime IAM role by following the Creating a role for an AWS service (console) tutorial in the IAM User Guide. This role will be able to access Amazon S3.

  • Choose Nimble Studio as a trusted entity.

  • On the Select trusted entity page, select Custom trust policy. Then enter the following policy.

    { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "identity.nimble.amazonaws.com" }, "Action": [ "sts:AssumeRole", "sts:TagSession" ] } ] }
  • Choose the AmazonS3ReadOnlyAccess AWS Managed policy.

  • For Role name, enter ExampleStudioComponentRuntimeRole.

Choose this role when creating a studio component. Because Nimble Studio is a trusted entity of the role, the role appears in the dropdown in the IAM roles section of the Create Studio Component page.
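
As with the initialization role, you can create this role from the AWS CLI. The following sketch assumes that the trust policy shown above is saved as trust-policy.json.

# Create the runtime role with the Nimble Studio trust policy.
aws iam create-role \
    --role-name ExampleStudioComponentRuntimeRole \
    --assume-role-policy-document file://trust-policy.json

# Attach the AmazonS3ReadOnlyAccess AWS managed policy.
aws iam attach-role-policy \
    --role-name ExampleStudioComponentRuntimeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess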

Linux user initialization scripts

The main intention of the runtime role is to enable tools configured by studio components to get the AWS access they need. However, it's possible to use the credentials directly. This is useful while developing a studio component, or for ad hoc AWS access needs without manual credential management.

In the script, the AWS_PROFILE environment variable contains the name of an AWS profile that has been configured in the shared credentials file. You can either configure your tools with the profile name that they should use, or you can interact directly with the AWS CLI. The AWS_PROFILE environment variable tells the AWS CLI which profile to use.

# The AWS_PROFILE environment variable now contains
# the name of an AWS profile that has been configured in the shared
# credentials file. From here, we can either configure our tools
# with the profile name they should use, or we can directly interact
# with the AWS CLI (since the AWS_PROFILE environment variable tells
# the AWS CLI which profile to use).

# Optionally configure the default region for the profile. If you
# do this, then you don't need to specify the region on a per-command
# basis using the `--region` option as is done in the rest of this
# example.
#
# aws configure --profile $AWS_PROFILE set region us-west-2

# Example of configuring a tool.
echo $AWS_PROFILE > $HOME/my-tools-config.txt

# Example of using the AWS CLI directly. Write a list of S3 buckets
# to a text file in the user's home directory.
aws s3 ls --region us-west-2 > $HOME/buckets.txt

# Example of configuring your shell (bash in this case) to set the
# AWS_PROFILE environment variable on start.
# Take caution when doing this. If you do this in multiple studio
# components that are added to the same launch profile, then only
# one will end up taking effect.
if ! grep "export AWS_PROFILE" $HOME/.bashrc >> /dev/null 2>&1 ; then
    echo "export AWS_PROFILE=$AWS_PROFILE" >> $HOME/.bashrc
fi

To use these runtime role credentials on a Linux machine, follow these instructions.

To use runtime role credentials on a Linux machine

  1. Open a terminal.

  2. Run the following command to list the available Amazon S3 buckets in us-west-2:

     aws s3 ls --region us-west-2

    1. This command works without explicitly specifying the --profile option because the user init script configured the shell to export the AWS_PROFILE environment variable with the name of the studio component's AWS profile.

Windows user initialization scripts

The main intention of the runtime role is to enable tools configured by studio components to get the AWS access they need. However, it's possible to use the credentials directly. This is useful while developing a studio component, or for ad hoc AWS access needs without manual credential management.

In the script, the $Env:AWS_PROFILE environment variable contains the name of an AWS profile that has been configured in the shared credentials file. You can either configure your tools with the profile name that they should use, or you can interact directly with the AWS Tools for PowerShell cmdlets. AWS Tools for PowerShell is already configured to use this profile for the duration of this script.

# The $Env:AWS_PROFILE environment variable now contains
# the name of an AWS profile that has been configured in the shared
# credentials file. From here, we can either configure our tools
# with the profile name they should use, or we can directly interact
# with the AWS Tools for PowerShell cmdlets, because the AWS Tools for
# PowerShell have already been configured to use this profile for the
# duration of this script.

# Optionally configure the default region for the profile. If you
# do this, then you don't need to specify the region on a per-command
# basis using the `-Region` option as is done in the rest of this
# example. There is no AWS Tools for PowerShell equivalent
# of this command, so you need to ensure that you have installed
# the AWS CLI on your AMI for this to work.
#
# aws configure --profile $Env:AWS_PROFILE set region us-west-2

# Example of configuring a tool.
$Env:AWS_PROFILE | Out-File "$Env:USERPROFILE\my-tools-config.txt"

# Example of using the AWS Tools for PowerShell directly. Write a list
# of S3 buckets to a text file in the user's home directory.
Get-S3Bucket -Region us-west-2 | Out-File "$Env:USERPROFILE\buckets.txt"

# Example of configuring your PowerShell profile to use the AWS profile on start.
# Take caution when doing this. If you do this in
# multiple studio components that are added to the same launch profile,
# then only one will end up taking effect. This only
# configures PowerShell (and not Command Prompt).
if (!(Test-Path $profile)) {
    New-Item -Path $profile -ItemType file -Force
}
if (!(Select-String -Path $profile -Pattern "^\s*Set-AWSCredential")) {
    echo "Set-AWSCredential -ProfileName $Env:AWS_PROFILE" `
        | Add-Content -Path "$profile"
}

To use these runtime role credentials on a Windows machine, follow these instructions.

To use runtime role credentials on a Windows machine

  1. Open PowerShell.

    Note

    Windows takes some time to initialize after launch. If you aren't in your expected user directory when you first open PowerShell, close PowerShell, wait a few minutes, and try again.

  2. Run the following command to list the available Amazon S3 buckets in us-west-2:

     Get-S3Bucket -Region us-west-2

    1. This command works without explicitly specifying the -ProfileName option because the user init script configured your PowerShell profile to call Set-AWSCredential with the studio component's AWS profile.

You now know how to create a custom configuration and attach it to a launch profile. You've also seen several examples of what custom configurations can do. To see another example of when you can use a custom configuration, see the Provide Superuser access for Linux users tutorial.