PowerShell DSC on the AWS Cloud
Quick Start Deployment Reference Guide

Deployment with a Pull Server Infrastructure

In this section, we will cover the process of using a single PowerShell DSC configuration script along with an AWS CloudFormation template to deploy our sample architecture.

DSC Configuration Script Overview

Our pull server infrastructure uses a single DSC configuration script that applies to all of the servers in the deployment. The configuration script ensures the systems implement the following changes:

  • Create the Active Directory Forest and Domain and build a Domain Controller in the first Availability Zone

  • Configure the Active Directory Site Topology

  • Join each node to the domain

  • Promote another server to a Domain Controller in the second Availability Zone

  • Install Remote Desktop Gateway Services on the Remote Desktop Gateways in public subnets

  • Deploy IIS and our sample web page on servers in each Availability Zone

Many of the tasks involved in creating this infrastructure use DSC resources that are not native to the operating system. Microsoft has made a number of additional and experimental DSC resources available for download in "DSC Resource Kit Waves." We've used several of these to configure the state of the systems in this architecture. Additionally, we've created custom DSC resources in order to configure certain aspects of the environment that are not currently supported by Microsoft's provided DSC resources.

To help ensure that these DSC resources will always be available, we've saved copies of them in an Amazon S3 bucket. This prevents our automated deployment from breaking if the original download links change. It also provides the ability to roll back to prior versions of the code, since the Amazon S3 bucket has versioning enabled.

Bootstrapping the PowerShell DSC Pull Server

The bootstrapping sequence for the pull server lays the foundation for building the rest of the environment. As depicted in Figure 1, each client node accesses the load-balanced pull servers through Elastic Load Balancing. The bootstrapping process for the pull server includes the following:

  • IAM Role – The pull server launches with an IAM role, allowing the instance to call the DescribeLoadBalancers and DescribeInstances actions. This process allows the pull server to determine the DNS name for the Elastic Load Balancer and to query tags set on each Amazon EC2 instance in the stack.

  • Setup – Downloads all of the required components, such as the pull server configuration script, the master configuration script, the DSC resource modules, and other helper scripts. These file downloads are completed by the AWS CloudFormation files resource.

  • Self-Signed Certificate – The pull server creates a self-signed certificate using a helper script. The DNS name of the internal Elastic Load Balancer, obtained with the Get-ELBLoadBalancer cmdlet, is used as the common name on the self-signed certificate. This allows client nodes to pull their configurations through the load-balanced endpoint using secure HTTPS connections (a sketch of this step follows the list).

  • Bootstrapping the DSC Web Service – The pull server runs the CreatePullServer.ps1 configuration script, which outputs a MOF file for the pull server. The settings are then applied locally to create the DSC web service, which listens on TCP port 8080 and is secured with the thumbprint of the self-signed certificate created previously.

  • Generating Configurations – The pull server executes the master configuration script, which produces a MOF file for each server in the environment. Each file must be renamed to the associated node's ConfigurationID, which is represented as a globally unique identifier (GUID). Each instance is tagged in Amazon EC2 with a GUID using the AWS CloudFormation template. The pull server is able to obtain these ConfigurationIDs, match them with each node in the topology, and rename and checksum the files.
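
To make the endpoint discovery and certificate steps concrete, here is a minimal sketch of what a bootstrapping helper might do. It assumes the AWS Tools for PowerShell are installed; the load balancer name and region are hypothetical values that would be passed in from the AWS CloudFormation template.

    $Region = 'us-west-2'   # hypothetical; passed in from AWS CloudFormation in practice
    $elbDns = (Get-ELBLoadBalancer -LoadBalancerName 'PullServerELB' -Region $Region).DNSName

    # Create a self-signed certificate whose common name is the internal
    # load balancer's DNS name, and keep its thumbprint for later use.
    $cert = New-SelfSignedCertificate -DnsName $elbDns -CertStoreLocation 'Cert:\LocalMachine\My'
    $thumbprint = $cert.Thumbprint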

Pull Server Configuration Script

Figure 5 shows the code for CreatePullServer.ps1, which is the configuration script used to create the pull server.



Figure 5: Pull Server Configuration Script

The CreatePullServer.ps1 configuration script depends on the xPSDesiredStateConfiguration resource module. This is a non-native module that can be obtained from Microsoft. The AWS CloudFormation template is configured to download this module from Amazon S3 as a .zip file, which it then unpacks into $env:ProgramFiles\WindowsPowerShell\DscService\Modules on the pull server.

A few points of interest about this configuration script (a sketch of a comparable script follows the list):

  • Line 9 – We declare the DSCServiceFeature, ensuring that the DSC-Service WindowsFeature is present and installed on the server.

  • Line 19 – Notice that the value of the certificate thumbprint is retrieved from the machine's local certificate store. This is because we previously generated and installed the self-signed certificate during the bootstrapping process.

  • Line 23 – The PSDSCPullServer resource uses the DependsOn attribute to ensure that the DSC-Service is first installed before attempting to configure the DSC Web Service.

  • Line 29 – The call to Start-DscConfiguration tells us that this is a DSC "push" operation. The pull server is pushing this configuration to itself. Other nodes in the environment will be configured in pull mode.
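
To make these points concrete, here is a minimal sketch of a pull server configuration of this kind. It is modeled on Microsoft's published xDscWebService example rather than our exact script, so the line numbers called out above won't match, and the paths and endpoint name are assumptions.

    Configuration CreatePullServer
    {
        param ([string[]]$ComputerName = 'localhost')

        Import-DscResource -ModuleName xPSDesiredStateConfiguration

        Node $ComputerName
        {
            WindowsFeature DSCServiceFeature
            {
                Ensure = 'Present'
                Name   = 'DSC-Service'
            }

            # Thumbprint of the self-signed certificate installed during
            # bootstrapping (assumes it is the only one in the machine store).
            $thumbprint = (Get-ChildItem 'Cert:\LocalMachine\My')[0].Thumbprint

            xDscWebService PSDSCPullServer
            {
                Ensure                = 'Present'
                EndpointName          = 'PSDSCPullServer'
                Port                  = 8080
                PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
                CertificateThumbPrint = $thumbprint
                ModulePath            = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"
                ConfigurationPath     = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
                State                 = 'Started'
                DependsOn             = '[WindowsFeature]DSCServiceFeature'
            }
        }
    }

    # Compile the MOF and push the configuration to the local machine.
    CreatePullServer -ComputerName 'localhost'
    Start-DscConfiguration -Path .\CreatePullServer -Wait -Verbose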

The CreatePullServer.ps1 script is called on the pull server using AWS CloudFormation.



Figure 6: Running the CreatePullServer.ps1 Configuration Script on Pull1 using AWS CloudFormation

Remember, there are two pull servers in this environment to provide high availability. Each pull server instance is bootstrapped using the steps outlined here.

After you've deployed your environment, you'll need to make sure that downloadable content (MOF files, resource modules, and checksums) is kept up to date on both pull servers. This can be done as part of your deployment process or by using a file synchronization service to keep the modules and configuration directories on the pull servers in sync.

Bootstrapping Client Instances

The bootstrapping process for each client instance includes the following:

  • IAM Role – Each server launches with an IAM role, allowing the instance to call the DescribeLoadBalancers and DescribeInstances actions. This process allows the server to determine the DNS name for the Elastic Load Balancer and to query tags set on each Amazon EC2 instance in the stack.

  • Setup – Downloads helper scripts from Amazon S3. These file downloads are completed by the AWS CloudFormation files resource.

  • Certificates – In addition to connecting to the correct DNS name, clients must also trust the certificate installed on the pull server. The self-signed certificate is downloaded from the pull server and installed locally. Keep in mind that this is for demonstration purposes, and an enterprise PKI solution is likely the best method for doing this in production.

  • Configuring the LCM – The Local Configuration Manager is then configured with the HTTPS endpoint that should be used as the pull server. In this case, the endpoint will be the Elastic Load Balancer. Additionally, the ConfigurationID for the node is set. Again, each Amazon EC2 instance is tagged with a unique ConfigurationID, which can also be obtained from Amazon EC2 (a sketch of that lookup follows the list).
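
As a rough sketch (not our actual helper script), a client instance could discover its own ConfigurationID like this, assuming the AWS Tools for PowerShell are available:

    # Discover this instance's ID from the EC2 instance metadata service.
    $instanceId = Invoke-RestMethod -Uri 'http://169.254.169.254/latest/meta-data/instance-id'

    # Read the 'guid' tag that the AWS CloudFormation template set on the instance.
    $Region = 'us-west-2'   # hypothetical; passed in as a parameter in practice
    $guid = (Get-EC2Tag -Region $Region -Filter @(
                @{ Name = 'resource-id'; Values = $instanceId },
                @{ Name = 'key';         Values = 'guid' }
            )).Value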

Client Bootstrap Configuration Script

Figure 7 shows the code for SetPullMode.ps1, which is the configuration script used to configure the client.



Figure 7: DSC Client Configuration Script

A few points of interest about this configuration script (a sketch of a comparable meta-configuration follows the list):

  • Line 7 – The GUID assigned to the instance is retrieved from the Amazon EC2 guid tag and stored in a variable.

  • Line 8 – We store the DNS name of the Elastic Load Balancer in a variable called $PullServer.

  • Line 13 – The Local Configuration Manager ConfigurationMode is set to ApplyAndAutoCorrect. This setting helps ensure that modifications and configuration drift issues are corrected and that the system remains in the desired state.

  • Line 14 – The ConfigurationID for the client node is set using the value of the guid tag on the Amazon EC2 instance.

  • Line 15 – The CertificateID is set to the thumbprint of the self-signed certificate that was obtained from the pull server. In addition to using the self-signed certificate on the pull server to secure HTTPS connections, it's also used for encryption. Defining CertificateID allows the client node to decrypt credentials in the MOF documents.

  • Line 22 – The ServerUrl is configured to use the $PullServer variable, which is set to the DNS name of the Elastic Load Balancer.
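
To make these points concrete, here is a minimal sketch of a WMF 4.0-style meta-configuration along these lines. Line numbers will differ, and the port and endpoint path are assumptions carried over from the pull server setup.

    Configuration SetPullMode
    {
        param ([string]$Guid, [string]$PullServer, [string]$Thumbprint)

        Node 'localhost'
        {
            LocalConfigurationManager
            {
                RefreshMode               = 'Pull'
                ConfigurationMode         = 'ApplyAndAutoCorrect'
                ConfigurationID           = $Guid
                CertificateID             = $Thumbprint
                DownloadManagerName       = 'WebDownloadManager'
                DownloadManagerCustomData = @{
                    ServerUrl = "https://${PullServer}:8080/PSDSCPullServer.svc"
                }
            }
        }
    }

    # Compile the meta-configuration and apply it to the Local Configuration Manager.
    SetPullMode -Guid $guid -PullServer $pullServer -Thumbprint $thumbprint
    Set-DscLocalConfigurationManager -Path .\SetPullMode -Verbose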

The SetPullMode.ps1 script is called on each instance from AWS CloudFormation. The Instance and Region parameter values are passed in at runtime.



Figure 8: Running the SetPullMode.ps1 Configuration Script using AWS CloudFormation

After pull mode has been set on the instance, a pull operation is invoked manually by calling the Update-DscConfiguration cmdlet, which was released with the November 2014 update rollup for Windows Server 2012 R2 (KB3000850).

To view the Local Configuration Manager settings (meta-configuration) for a node, you can use the Get-DscLocalConfigurationManager cmdlet.
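
Both operations are one-liners:

    # Trigger an immediate pull and apply the configuration (requires KB3000850).
    Update-DscConfiguration -Wait -Verbose

    # Display the LCM settings, including RefreshMode, ConfigurationID, and ConfigurationMode.
    Get-DscLocalConfigurationManager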

The Configuration Script

Now that we understand how the pull servers and client instances are bootstrapped, let's take a closer look at the configuration script that is responsible for defining the state of each server in the environment.

It's important to keep in mind that in this Quick Start Reference Deployment, the pull server is used as a "build server," meaning that it downloads and runs the configuration script, generating MOFs for all the servers in the environment. This means that any additional resources need to be downloaded and extracted into $env:ProgramFiles\WindowsPowerShell\Modules on the pull server during the bootstrapping process.

Let's take a look at the structure of the configuration script pictured in the following figure. Several code blocks are collapsed and will be covered in greater detail in the following sections of this guide.



Figure 9: The Structure of the Master Configuration Script

  • Lines 1 through 13 – The param block includes a number of parameters that are used to define the settings in our environment. All of these parameters map to the parameters of the AWS CloudFormation template, so when you launch the stack you can customize the environment: subnet CIDR ranges, IP addresses, the DNS name of the Active Directory Domain, and more. These parameter values are passed in from AWS CloudFormation to the pull server that builds the configurations using the settings you've provided.

  • Line 16 – A single call to Get-ELBLoadBalancer retrieves the DNS name of the Elastic Load Balancer. This will be the endpoint that client nodes use to pull configurations from the pull server.

  • Lines 19 and 20 – Import helper functions used to retrieve node GUIDs and aid in formatting IP information used by the xNetworking DSC resource.

  • Line 23 – The Configuration Data for the environment, covered in more detail in the following sections of this guide.

  • Lines 62 and 63 – The credentials used by member servers to join the Active Directory Domain.

  • Line 66 – The DSC configuration called "ServerBase." Within the configuration, we import the xNetworking, xActiveDirectory, and xComputerManagement DSC resources since they are not native to Windows Server 2012 R2.

  • Line 294 and beyond – The code used to create the MOF files, rename them, and then move them to the appropriate directory on the pull server (a condensed skeleton of the script follows this list).
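
As a condensed skeleton (parameter names, default values, the load balancer name, and helper script names below are placeholders rather than our exact values), the overall shape of the script is roughly:

    param (
        # Placeholders; the real script maps these to AWS CloudFormation parameters.
        [string]$DomainDNSName      = 'example.com',
        [string]$PrivateSubnet1CIDR = '10.0.0.0/19',
        [string]$PrivateSubnet2CIDR = '10.0.64.0/19'
    )

    # DNS name of the internal load balancer; becomes the clients' pull endpoint.
    $PullServer = (Get-ELBLoadBalancer -LoadBalancerName 'PullServerELB').DNSName

    # Dot-source helper functions (hypothetical file names) for GUID lookups and IP math.
    . .\Get-NodeGuids.ps1
    . .\Get-IPHelpers.ps1

    # Configuration data: one hash table per node (see the next section).
    $ConfigurationData = @{ AllNodes = @( <# node entries #> ) }

    # Credentials used by member servers to join the Active Directory Domain.
    $DomainCred = Get-Credential

    Configuration ServerBase
    {
        Import-DscResource -ModuleName xNetworking, xActiveDirectory, xComputerManagement

        # ... node blocks for DC1, DC2, RDGW1/RDGW2, and WEB1/WEB2 (covered below) ...
    }

    # Generate, rename, and checksum the MOF files (see the final section).
    $mofFiles = ServerBase -ConfigurationData $ConfigurationData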

Now let's take a closer look at each aspect of the configuration script.

DSC Resources

The only DSC resource used in our configuration script that is native to Windows Management Framework 4.0 is the WindowsFeature resource. The other resources, which include resources from the xNetworking, xActiveDirectory, and xComputerManagement resource modules, were made available by Microsoft in an out-of-band release. These resource modules have been stored in Amazon S3 and are downloaded by the pull servers in .zip format. The pull servers are then configured to host the zipped resource modules so they can be downloaded by DSC client nodes.

Configuration Data

Configuration data provides a way to define additional environmental settings for the nodes in a configuration. The configuration data is a hash table of settings that can be passed into a configuration when generating MOF documents. Figure 10 shows an example of the configuration data used for the master DSC configuration for our environment.



Figure 10: DSC Configuration Data

In the example in Figure 10 (which is condensed for the sake of brevity), the configuration data contains a hash table for each node in our deployment. Each property is described below, followed by a condensed sketch:

  • NodeName – The hostname of the instance, which corresponds to the node name in the configuration script.

  • Guid – The ConfigurationID (in the form of a GUID) that the pull server and DSC client both use to determine which configuration should be pulled from the server.

  • AvailabilityZone – This is a custom property to indicate which AWS Availability Zone the instance is located in. As we'll see, this custom property is used within the configuration script to apply settings specific to the location of the instance.

  • CertificateFile – The physical path on the "build server" (in this case, the pull server) of the certificate that will be used to encrypt embedded credentials in the resulting MOF file. Remember, client nodes must have the private key to decrypt the credentials.

  • Thumbprint – The certificate thumbprint on the client node to indicate which installed certificate should be used to decrypt data.
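
A sketch of two such node entries (the GUIDs, certificate path, and thumbprint shown are placeholders):

    $ConfigurationData = @{
        AllNodes = @(
            @{
                NodeName         = 'DC1'
                Guid             = '11111111-1111-1111-1111-111111111111'      # placeholder
                AvailabilityZone = 'AZ1'
                CertificateFile  = 'C:\dsc\DscPublicKey.cer'                   # hypothetical path
                Thumbprint       = 'A146B3CE0F441B6987E3EFB6A355AEA19E63EA47'  # placeholder
            },
            @{
                NodeName         = 'WEB2'
                Guid             = '22222222-2222-2222-2222-222222222222'      # placeholder
                AvailabilityZone = 'AZ2'
                CertificateFile  = 'C:\dsc\DscPublicKey.cer'
                Thumbprint       = 'A146B3CE0F441B6987E3EFB6A355AEA19E63EA47'
            }
        )
    }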

DNS Client Configuration

Since our architecture will be distributed across two Availability Zones, it makes sense for domain-joined servers to use the Active Directory DNS server in the local Availability Zone. We account for this in our configuration using some additional logic and the xDnsServerAddress resource, which is made available as part of the xNetworking resource module.



Figure 11: DNS Settings

As you can see in Figure 11, we filter $AllNodes (which will be defined by our configuration data) so that instances in AZ1 point to the Domain Controller in the first Availability Zone for their primary DNS server, and vice versa.
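
A sketch of that filtering logic inside the configuration (the DC IP parameters and the interface alias are assumptions):

    # Nodes in AZ1 prefer the Domain Controller in the first Availability Zone.
    Node $AllNodes.Where{ $_.AvailabilityZone -eq 'AZ1' }.NodeName
    {
        xDnsServerAddress DnsServers
        {
            Address        = @($DC1PrivateIp, $DC2PrivateIp)   # assumed parameters
            InterfaceAlias = 'Ethernet'                        # assumed adapter name
            AddressFamily  = 'IPv4'
        }
    }

    # Nodes in AZ2 reverse the order, preferring their local Domain Controller.
    Node $AllNodes.Where{ $_.AvailabilityZone -eq 'AZ2' }.NodeName
    {
        xDnsServerAddress DnsServers
        {
            Address        = @($DC2PrivateIp, $DC1PrivateIp)
            InterfaceAlias = 'Ethernet'
            AddressFamily  = 'IPv4'
        }
    }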

Domain Controller Configuration

The Domain Controller configuration is mostly completed by resources from the xActiveDirectory resource module. In order to fully implement a distributed Active Directory topology, we add a number of additional resources to the module.



Figure 12: DC1 Configuration

A few points to note about the configuration for DC1 as shown in Figure 12:

  • The cIPAddress resource sets the IP address for the DC to the value supplied by the AWS CloudFormation template parameter. The default gateway and subnet mask are derived from the IP address and subnet CIDR using a couple of helper functions.

  • The cADSubnet resource is used to create subnet definitions for each subnet in the Amazon VPC.

  • The cADSite resource is used to create AD Site objects for each Availability Zone that will host a Domain Controller.

  • The cADSiteLinkUpdate resource is used to link the two AD sites so the Domain Controllers will replicate data to each other.

cADSubnet, cADSite, and cADSiteLinkUpdate are custom resources that were not originally included in the xActiveDirectory resource module. We added them to aid in configuring the Active Directory (AD) Site Topology for this Quick Start Reference Deployment. A sketch of the DC1 node block follows.
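
Here is a minimal sketch of how the DC1 node block might be structured. The xActiveDirectory resource names are standard, but the custom resource's parameter names are illustrative guesses.

    Node $AllNodes.Where{ $_.NodeName -eq 'DC1' }.NodeName
    {
        WindowsFeature ADDSInstall
        {
            Ensure = 'Present'
            Name   = 'AD-Domain-Services'
        }

        xADDomain FirstDC
        {
            DomainName                    = $DomainDNSName
            DomainAdministratorCredential = $DomainCred
            SafemodeAdministratorPassword = $DomainCred
            DependsOn                     = '[WindowsFeature]ADDSInstall'
        }

        # Custom resource; these parameter names are guesses for illustration.
        cADSubnet AZ1Subnet
        {
            Name       = $PrivateSubnet1CIDR
            Site       = 'AZ1'
            Credential = $DomainCred
            DependsOn  = '[xADDomain]FirstDC'
        }
    }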

After the first Domain Controller is built, DC2 is installed in the second Availability Zone using the following node configuration:



Figure 13: DC2 Configuration

Since the forest and domain were created by DC1, you can see in Figure 13 that DC2 simply needs to be added to the domain and promoted to a Domain Controller.
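
Illustratively, the DC2 node block might wait for the domain and then promote, using the xActiveDirectory resources (a sketch, not our exact code):

    Node $AllNodes.Where{ $_.NodeName -eq 'DC2' }.NodeName
    {
        # Wait until DC1 has finished creating the forest and domain.
        xWaitForADDomain WaitForDomain
        {
            DomainName           = $DomainDNSName
            DomainUserCredential = $DomainCred
            RetryCount           = 20
            RetryIntervalSec     = 60
        }

        # Promote this server to a Domain Controller in the existing domain.
        xADDomainController SecondDC
        {
            DomainName                    = $DomainDNSName
            DomainAdministratorCredential = $DomainCred
            SafemodeAdministratorPassword = $DomainCred
            DependsOn                     = '[xWaitForADDomain]WaitForDomain'
        }
    }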

Remote Desktop Gateway Configuration

The node configuration for the Remote Desktop Gateway servers is fairly straightforward. The RDGateway service and associated Remote Server Administration Tools (RSAT) are installed on the server, and the server is then joined to the Active Directory domain.



Figure 14: RDGW1 Configuration

The configuration for RDGW2 is identical to the one shown in Figure 14 for RDGW1, with the exception of the node name. After the environment has been deployed, you can initiate a Remote Desktop Connection to either gateway using a standard TCP port 3389 connection. To fully configure the RDGateway role with certificates (so that connections can be made securely over HTTPS) you should follow the additional steps in our Quick Start Reference Deployment for the Remote Desktop Gateway.

Web Server Configuration

The web server configuration is the same for both the WEB1 and WEB2 servers. After each server is joined to the domain, IIS, ASP.NET, and sample “hello world” web pages are installed on the systems.



Figure 15: WEB1 Configuration

The configuration in Figure 15 creates a single IIS website listening on TCP port 80 hosting a single web page. You can navigate to either web server after the deployment to confirm that IIS is working properly.
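
A sketch of a matching node block (the staging path for the sample page is hypothetical):

    Node $AllNodes.Where{ $_.NodeName -like 'WEB*' }.NodeName
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        WindowsFeature AspNet45
        {
            Ensure = 'Present'
            Name   = 'Web-Asp-Net45'
        }

        # Copy the sample page into the default site; the source path is a
        # hypothetical location where bootstrapping staged the content.
        File SamplePage
        {
            Ensure          = 'Present'
            SourcePath      = 'C:\cfn\web\index.html'
            DestinationPath = 'C:\inetpub\wwwroot\index.html'
            DependsOn       = '[WindowsFeature]IIS'
        }
    }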

Generating the MOF Documents

As we've discussed, the pull server runs the configuration script, which produces a MOF file for each server in the environment. Each MOF file initially has a basename matching the hostname of the associated server. For example, the first Domain Controller's file will be named DC1.MOF.

In order for client nodes to pull these configurations, the files must be renamed using the node's ConfigurationID, which is the GUID stored in the AWS CloudFormation template and tagged on the Amazon EC2 instance.



Figure 16: Code Snippet

The snippet shown in Figure 16 is the code at the end of the configuration script that runs on the pull server and generates the MOFs, renames the files, and moves them to the appropriate folder. The configuration name is ServerBase, and the result of running the configuration is stored in the $mofFiles variable, which holds the collection of file system objects representing each MOF file. We then simply loop through the list of files, matching the GUID (ConfigurationID) with each node name, and renaming the file. Finally, we move the files to the Configuration folder, where they are checksummed and ready for download.
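
A sketch of that loop (the folder follows the pull server paths mentioned earlier; the matching logic is illustrative):

    # Compile the configuration; this returns one FileInfo object per node's MOF.
    $mofFiles = ServerBase -ConfigurationData $ConfigurationData

    foreach ($mof in $mofFiles)
    {
        # Match the node's hostname (the MOF basename) to its GUID.
        $node = $ConfigurationData.AllNodes | Where-Object { $_.NodeName -eq $mof.BaseName }

        # Rename the MOF to the node's ConfigurationID and move it into the
        # pull server's Configuration folder.
        $target = Join-Path "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration" "$($node.Guid).mof"
        Move-Item -Path $mof.FullName -Destination $target -Force

        # Generate the companion checksum file that the pull protocol requires.
        New-DscChecksum $target -Force
    }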