Step 3: Scale Out IISExample

Important

AWS OpsWorks Stacks is no longer accepting new customers. Existing customers will be able to use the OpsWorks console, API, CLI, and CloudFormation resources as normal until May 26, 2024, at which time they will be discontinued. To prepare for this transition, we recommend you transition your stacks to AWS Systems Manager as soon as possible. For more information, see AWS OpsWorks Stacks End of Life FAQs and Migrating your AWS OpsWorks Stacks applications to AWS Systems Manager Application Manager.

If incoming user requests start to approach the limit of what a single t2.micro instance can handle, you need to increase your server capacity. Moving to a larger instance is one option, but it has limits. A more flexible approach is to add instances to your stack and put them behind a load balancer, which distributes incoming traffic across all of the instances.

Among other advantages, this approach is much more robust than a single large instance.

  • If one of your instances fails, the load balancer will distribute incoming requests to the remaining instances, and your application will continue to function.

  • If you put instances in different Availability Zones (the recommended practice), your application will continue to function even if an Availability Zone encounters problems.
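The failover behavior described above can be sketched conceptually. The following is a minimal, self-contained illustration (not OpsWorks or Elastic Load Balancing code) of how a round-robin load balancer spreads requests and skips unhealthy instances; the instance names are hypothetical.

```python
# Hypothetical pool of app-server instances, e.g. one per Availability Zone.
# Maps instance name -> healthy?
instances = {"iis-app1": True, "iis-app2": True}

def route_request(pool):
    """Return the next healthy instance in round-robin order, or None."""
    healthy = [name for name, ok in pool.items() if ok]
    if not healthy:
        return None
    # Rotate through healthy instances so load is spread evenly.
    name = healthy[route_request.counter % len(healthy)]
    route_request.counter += 1
    return name

route_request.counter = 0

# Both instances healthy: requests alternate between them.
print([route_request(instances) for _ in range(4)])
# One instance fails: the survivor keeps serving requests.
instances["iis-app1"] = False
print([route_request(instances) for _ in range(2)])
```

Real load balancers use health checks and more sophisticated algorithms, but the key property is the same: losing one instance reduces capacity without taking the application down.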

AWS OpsWorks Stacks makes it easy to scale out stacks. This section describes the basics of how to scale out a stack by adding a second 24/7 PHP App Server instance to IISExample and putting both instances behind an Elastic Load Balancing load balancer. You can easily extend the procedure to add an arbitrary number of 24/7 instances, or you can use time-based instances to have AWS OpsWorks Stacks scale your stack automatically. For more information, see Managing load with time-based and load-based instances.
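At the API level, putting 24/7 instances behind an Elastic Load Balancing load balancer comes down to registering their EC2 instance IDs with the load balancer. A minimal boto3 sketch of that step, assuming the AWS SDK for Python is installed, credentials are configured, and a Classic Load Balancer already exists (the load balancer name, region, and instance IDs below are hypothetical):

```python
def register_with_elb(elb_name, instance_ids, region="us-west-2"):
    """Register EC2 instances with a Classic Load Balancer.

    Assumes boto3 is installed and AWS credentials are configured;
    the import is deferred so the sketch can be loaded without it.
    """
    import boto3

    elb = boto3.client("elb", region_name=region)
    # Classic ELB API: instances are passed as {"InstanceId": ...} dicts.
    return elb.register_instances_with_load_balancer(
        LoadBalancerName=elb_name,
        Instances=[{"InstanceId": iid} for iid in instance_ids],
    )

# Example call with hypothetical IDs; the response lists all instances
# currently registered with the load balancer:
# register_with_elb("IISExample-elb", ["i-0123456789abcdef0", "i-0fedcba9876543210"])
```

When you attach a load balancer to a layer in the OpsWorks Stacks console, OpsWorks performs this registration for you as instances come online, so the API call is shown here only to make the underlying mechanism concrete.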