3 9s (99.9%) scenario
The next availability goal is for applications for which it is important to be highly available, but which can tolerate short periods of unavailability. These workloads are typically used for internal operations, so an outage affects employees. They can also be customer-facing, but they are not high revenue for the business and can tolerate a longer recovery time or recovery point. Such workloads include administrative applications for account or information management.
We can improve availability for these workloads by deploying across two Availability Zones and by separating the application into distinct tiers.
Monitor resources
Monitoring will be expanded to alert on the overall availability of the website by checking for an HTTP 200 OK status on the home page. In addition, there will be alerting on every replacement of a web server and whenever the database fails over. We will also monitor the static content on Amazon S3 for availability and alert if it becomes unavailable. Logging will be aggregated for ease of management and to help with root cause analysis.
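As an illustrative sketch, the home-page check could be implemented with an Amazon Route 53 health check plus a CloudWatch alarm that notifies an operations topic when the check fails. The domain name, alarm name, and SNS topic ARN below are placeholders, not values from this scenario.

```python
import time
import boto3

route53 = boto3.client("route53")
# Route 53 health check metrics are published to CloudWatch in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Health check: request the home page and expect an HTTP 200 OK response.
check = route53.create_health_check(
    CallerReference=f"site-home-page-{int(time.time())}",  # must be unique per request
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "www.example.com",  # placeholder domain
        "Port": 80,
        "ResourcePath": "/",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Alarm when the health check reports unhealthy (HealthCheckStatus drops below 1).
cloudwatch.put_metric_alarm(
    AlarmName="website-home-page-down",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",
    Dimensions=[{"Name": "HealthCheckId", "Value": check["HealthCheck"]["Id"]}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```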
Adapt to changes in demand
Automatic scaling is configured to monitor CPU utilization on the EC2 instances, and to add or remove instances to maintain a 70% CPU target, with no fewer than one EC2 instance per Availability Zone. If load patterns on our Amazon RDS instance indicate that scaling up is needed, we will change the instance type during a maintenance window.
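A minimal sketch of this configuration using a target tracking policy follows; the Auto Scaling group name and maximum size are placeholders, and the group is assumed to already span both Availability Zones, so a minimum of two instances keeps one per zone.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep at least one instance per Availability Zone (two zones -> MinSize=2).
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # placeholder group name
    MinSize=2,
    MaxSize=8,
)

# Track average CPU across the group at a 70% target; instances are added
# or removed automatically to hold that target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```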
Implement change
The infrastructure deployment technologies remain the same as in the previous scenario.
Delivery of new software is on a fixed schedule of every two to four weeks. Software updates will be automated, using an in-place replacement rather than canary or blue/green deployment patterns. The decision to roll back will be made using the runbook.
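One way to drive such an in-place update is AWS CodeDeploy, whose default EC2 deployment type replaces the application on the existing instances. The application name, deployment group, and S3 revision below are placeholders for illustration only.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# In-place deployment: CodeDeploy stops the application on each instance,
# installs the new revision, and restarts it (no canary or blue/green fleet).
deployment = codedeploy.create_deployment(
    applicationName="admin-portal",                       # placeholder application
    deploymentGroupName="admin-portal-prod",              # placeholder deployment group
    deploymentConfigName="CodeDeployDefault.OneAtATime",  # update one instance at a time
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-release-artifacts",        # placeholder bucket
            "key": "admin-portal-release.zip",
            "bundleType": "zip",
        },
    },
    description="Scheduled release; rollback decision made per runbook",
)
print(deployment["deploymentId"])
```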
We will have playbooks for establishing the root cause of problems. After the root cause has been identified, the operations and development teams together will identify a correction, which will be deployed after the fix is developed.
Back up data
Backup and restore can be done using Amazon RDS. Backups will be run regularly, using a runbook, to ensure that we can meet recovery requirements.
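For example, RDS automated backups can be enabled by setting a retention period, and the restore step in the runbook can use point-in-time recovery. The instance identifiers, retention period, and backup window below are illustrative placeholders.

```python
import boto3

rds = boto3.client("rds")

# Enable automated backups: keep 7 days of backups, taken in a nightly window.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",        # placeholder instance name
    BackupRetentionPeriod=7,
    PreferredBackupWindow="03:00-04:00",  # UTC
    ApplyImmediately=False,               # apply during the next maintenance window
)

# Restore step from the runbook: create a new instance from the latest
# restorable point, then repoint the application after verification.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="app-db",
    TargetDBInstanceIdentifier="app-db-restored",
    UseLatestRestorableTime=True,
)
```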
Architect for resiliency
We can improve availability for the application by deploying across two Availability Zones and by separating the application into distinct tiers. We will use services that work across multiple Availability Zones, such as Elastic Load Balancing, Amazon EC2 Auto Scaling, and Amazon RDS Multi-AZ with storage encrypted via AWS Key Management Service (AWS KMS). This ensures tolerance of failures at both the resource level and the Availability Zone level.
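A sketch of provisioning such a database instance follows; the engine, instance class, sizing, and credentials are illustrative placeholders, and the KMS key shown is the default RDS key.

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ keeps a synchronous standby in a second Availability Zone;
# StorageEncrypted uses the given AWS KMS key for data at rest.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",      # placeholder name
    Engine="mysql",                     # placeholder engine choice
    DBInstanceClass="db.m5.large",      # placeholder sizing
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",     # placeholder; manage secrets securely in practice
    MultiAZ=True,
    StorageEncrypted=True,
    KmsKeyId="alias/aws/rds",           # default RDS key; substitute a customer managed key if required
)
```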
The load balancer will route traffic only to healthy application instances. The health check needs to be at the data plane/application layer, indicating the capability of the application on the instance; it should not be a check against the control plane. A health check URL for the web application will be present and configured for use by the load balancer and by Auto Scaling, so that instances that fail are removed and replaced. If the instance in the primary Availability Zone fails, Amazon RDS will make the active database engine available in the second Availability Zone, then repair the failed instance to restore the same level of resiliency.
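The load balancer side of this might look like the following sketch, with a dedicated application-layer health check path; the target group name, VPC ID, health check path, and Auto Scaling group name are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

# The health check requests an application URL that exercises the app itself
# (data plane), not merely whether the instance is reachable.
elbv2.create_target_group(
    Name="web-app-targets",           # placeholder name
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",    # placeholder VPC
    TargetType="instance",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/healthcheck",   # placeholder URL served by the application
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
    Matcher={"HttpCode": "200"},
)

# Let Auto Scaling act on the load balancer's verdict, so instances the
# health check marks unhealthy are replaced, not just taken out of rotation.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",   # placeholder group name
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```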
After we have separated the tiers, we can use distributed system resiliency patterns to increase the reliability of the application, so that it remains available even when the database is temporarily unavailable during an Availability Zone failover.
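One such pattern is a retry with exponential backoff and jitter around database calls, so that the brief unavailability during a Multi-AZ failover surfaces as added latency rather than errors. This is a generic sketch, not tied to a particular database driver; the exception type and wrapped call are stand-ins.

```python
import random
import time

class TransientDatabaseError(Exception):
    """Stand-in for whatever the database driver raises on lost connections."""

def with_backoff(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a call that may fail transiently, with exponential backoff and full jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientDatabaseError:
            if attempt == max_attempts:
                raise  # out of retries; surface the failure to the caller
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))  # full jitter avoids synchronized retry storms

# Usage: wrap a database call so a failover window shows up as latency, not errors.
# orders = with_backoff(lambda: fetch_orders(connection, customer_id))
```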
Test resiliency
We do the same functional testing as in the previous scenario. We do not test the self-healing capabilities of ELB, automatic scaling, or RDS failover.
We will have playbooks for common database problems, security-related incidents, and failed deployments.
Plan for disaster recovery (DR)
Runbooks exist for total workload recovery and for common reporting. Recovery uses backups stored in the same AWS Region as the workload.
Availability design goal
We assume that at least some failures will require a manual decision to run recovery. However, with the greater automation in this scenario, we assume that only two events per year will require this decision. We take 30 minutes to decide to run recovery, and assume that recovery is completed within 30 minutes. This implies 60 minutes to recover from a failure. Assuming two incidents per year, our estimated impact time for the year is 120 minutes.
There are 525,600 minutes in a year, so 120 minutes of impact means the upper limit on availability is about 99.98%. The actual availability also depends on the real rate of failure, the duration of each failure, and how quickly each failure actually recovers. For this architecture, we require the application to be briefly offline for updates, but these updates are automated. We estimate 150 minutes per year for this: 15 minutes per change, 10 times per year. This adds up to 270 minutes per year when the service is not available, or roughly 99.95% availability, so we set the availability design goal at 99.9% to leave some margin.
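For reference, the arithmetic behind these figures, using only the numbers stated above:

```python
# Availability arithmetic for this scenario, using the figures in the text.
MINUTES_PER_YEAR = 365 * 24 * 60           # 525,600

unplanned = 2 * (30 + 30)                  # 2 incidents x (30 min decision + 30 min recovery) = 120
planned = 10 * 15                          # 10 updates x 15 minutes each = 150
total_downtime = unplanned + planned       # 270 minutes per year

print(f"{(MINUTES_PER_YEAR - unplanned) / MINUTES_PER_YEAR:.4%}")       # 99.9772% upper limit
print(f"{(MINUTES_PER_YEAR - total_downtime) / MINUTES_PER_YEAR:.4%}")  # 99.9486%, hence the 99.9% goal
```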
Summary
| Topic | Implementation |
|---|---|
| Monitor resources | Site health check only; alerts sent when down. |
| Adapt to changes in demand | ELB for the web tier and automatic scaling for the application tier; resizing the Multi-AZ RDS instance. |
| Implement change | Automated deployment in place, with a runbook for rollback. |
| Back up data | Automated backups via RDS to meet RPO, with a runbook for restoring. |
| Architect for resiliency | Automatic scaling to provide self-healing web and application tiers; RDS is Multi-AZ. |
| Test resiliency | ELB and application are self-healing; RDS is Multi-AZ; no explicit testing. |
| Plan for disaster recovery (DR) | Encrypted backups via RDS to the same AWS Region. |