Micro instances (t1.micro) provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available. They are well suited for lower throughput applications and websites that require additional compute cycles periodically.
This section describes how micro instances generally work so you can understand how to apply them. Our intent is not to specify exact behavior, but to give you visibility into the instance's behavior so you can understand its performance (to a greater degree than you typically get with, for example, a multi-tenant shared web hosting system).
The micro instance is available on both 32-bit and 64-bit platforms, but it's available as an Amazon EBS-backed instance only. For more information about basic specifications for the micro instance type, see Available Instance Types.
The hardware specifications for t1.micro instances are described in the following table.
| Processor | Up to 2 EC2 compute units (for short periodic bursts) |
| Platform  | 32-bit and 64-bit |
Micro instances provide spiky CPU resources for workloads that have a CPU usage profile similar to what is shown in the following figure.
The instance is designed to operate with its CPU usage at essentially only two levels: the normal low background level, and brief spikes much higher than the background level. The instance can burst at up to 2 EC2 compute units (ECUs); for more information, see Compute Resources Measurement. The ratio between the maximum level and the background level is designed to be large. Micro instances are designed to support tens of requests per minute on your application. However, actual performance can vary significantly depending on the amount of CPU resources required for each request on your application.
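To make the "tens of requests per minute" claim concrete, the following back-of-the-envelope sketch computes a sustainable request rate and per-request burst latency. Every constant here is an illustrative assumption (the background allotment and per-request CPU cost are not published figures); only the 2-ECU burst ceiling comes from the text.

```python
# Back-of-the-envelope model of "tens of requests per minute".
# All numbers below are illustrative assumptions, not published limits.

BURST_ECU = 2.0                 # peak burst level (from the text)
BACKGROUND_ECU = 0.1            # hypothetical sustained CPU allotment
CPU_SECONDS_PER_REQUEST = 0.2   # hypothetical CPU cost of one request, in ECU-seconds

def sustainable_requests_per_minute(background_ecu, cost_ecu_seconds):
    """Requests per minute the instance can sustain without throttling,
    given an assumed long-run CPU budget of background_ecu ECUs."""
    budget_per_minute = background_ecu * 60  # ECU-seconds available per minute
    return budget_per_minute / cost_ecu_seconds

def burst_latency_seconds(cost_ecu_seconds, burst_ecu):
    """Wall-clock time to serve one request while bursting at burst_ecu."""
    return cost_ecu_seconds / burst_ecu

print(sustainable_requests_per_minute(BACKGROUND_ECU, CPU_SECONDS_PER_REQUEST))  # 30.0
print(burst_latency_seconds(CPU_SECONDS_PER_REQUEST, BURST_ECU))                 # 0.1
```

With these assumed numbers, the instance sustains about 30 requests per minute; doubling the CPU cost per request halves that rate, which is why actual performance varies so much from application to application.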
Your application might have a different CPU usage profile than that described in the preceding section. The next figure shows the profile for an application that isn't appropriate for a micro instance. The application requires continuous data-crunching CPU resources for each request, resulting in plateaus of CPU usage that the micro instance isn't designed to handle.
The next figure shows another profile that isn't appropriate for a micro instance. Here the spikes in CPU use are brief, but they occur too frequently to be serviced by a micro instance.
The next figure shows another profile that isn't appropriate for a micro instance. Here the spikes aren't too frequent, but the background level between spikes is too high to be serviced by a micro instance.
In each of the preceding cases of workloads not appropriate for a micro instance, we recommend that you consider using a different instance type. For more information about instance types, see Instance Families and Types.
When your instance bursts to accommodate a spike in demand for compute resources, it uses unused resources on the host. The amount available depends on how much contention there is when the spike occurs. The instance is never left with zero CPU resources, whether other instances on the host are spiking or not.
We expect your application to consume only a certain amount of CPU resources in a period of time. If the application consumes more than your instance's allotted CPU resources, we temporarily limit the instance so it operates at a low CPU level. If your instance continues to use all of its allotted resources, its performance will degrade. We will increase the time that we limit its CPU level, thus increasing the time before the instance is allowed to burst again.
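One way to picture the limit-then-burst behavior described above is as a credit (token) bucket: bursting spends an allowance that refills slowly, and when the allowance is exhausted the instance drops to the background level. AWS does not publish the actual mechanism, so this is purely a conceptual sketch; every constant (bucket size, refill rate, background level) is made up for illustration.

```python
# Conceptual token-bucket sketch of the throttle-on-overuse behavior.
# The real EC2 mechanism is not published; all constants are illustrative.

class MicroCpuModel:
    BURST_LEVEL = 2.0        # max ECUs during a burst (from the text)
    BACKGROUND_LEVEL = 0.1   # hypothetical throttled/background level
    MAX_CREDITS = 10.0       # hypothetical burst allowance, in ECU-seconds

    def __init__(self, refill_per_tick=0.1):
        self.credits = self.MAX_CREDITS
        self.refill = refill_per_tick

    def tick(self, demand_ecu):
        """Advance one second; return the CPU level actually granted."""
        self.credits = min(self.MAX_CREDITS, self.credits + self.refill)
        if demand_ecu <= self.BACKGROUND_LEVEL:
            return demand_ecu                 # background use is always allowed
        granted = min(demand_ecu, self.BURST_LEVEL)
        if self.credits >= granted:
            self.credits -= granted           # spend allowance while bursting
            return granted
        return self.BACKGROUND_LEVEL          # allowance exhausted: limited

model = MicroCpuModel()
levels = [model.tick(2.0) for _ in range(7)]  # sustained full-burst demand
print(levels)  # five full bursts, then limited to the background level
```

Under sustained full-burst demand the model grants 2.0 ECUs for a few seconds, then pins the instance at the background level until credits accumulate again, which matches the plateau-then-limit shape described for the data-crunching workload.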
If you enable Amazon CloudWatch monitoring for your micro instance, you can use the "Avg CPU Utilization" graph in the AWS Management Console to determine whether your instance is regularly using all its allotted CPU resources. We recommend that you look at the maximum value reached during each given period. If the maximum value is 100%, we recommend that you use Auto Scaling to scale out (with additional micro instances and a load balancer), or move to a larger instance type. For more information about Auto Scaling, go to the Auto Scaling Developer Guide.
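The check described above can be automated. The sketch below inspects CloudWatch CPUUtilization datapoints collected with Statistics=['Maximum'] and flags an instance that keeps hitting 100%. In practice the datapoints would come from a call such as boto3's cloudwatch.get_metric_statistics; here they are plain dicts so the decision logic stands alone, and the 50% threshold is an illustrative choice, not an AWS recommendation.

```python
# Flag a micro instance that regularly exhausts its allotted CPU, based on
# CloudWatch CPUUtilization "Maximum" datapoints. The datapoints here are
# plain dicts in the shape CloudWatch returns; the threshold is illustrative.

def should_scale_out(datapoints, pegged_fraction=0.5):
    """Return True if at least pegged_fraction of the periods hit 100% max CPU."""
    if not datapoints:
        return False
    pegged = sum(1 for d in datapoints if d["Maximum"] >= 100.0)
    return pegged / len(datapoints) >= pegged_fraction

samples = [
    {"Maximum": 100.0},
    {"Maximum": 100.0},
    {"Maximum": 40.0},
    {"Maximum": 100.0},
]
print(should_scale_out(samples))  # True: 3 of 4 periods maxed out
```

If this returns True for your instance over a representative window, that is the signal to scale out with Auto Scaling and a load balancer, or to move to a larger instance type.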
Micro instances are available as either 32-bit or 64-bit. If you need to move to a larger instance, make sure to move to a compatible instance type.
The following figures show the three suboptimal profiles from the preceding section and what each might look like when the instance consumes its allotted resources and we have to limit its CPU level. If the instance consumes its allotted resources, we restrict it to the low background level.
The next figure shows the situation with the long plateaus of data-crunching CPU usage. The CPU hits the maximum allowed level and stays there until the instance's allotted resources are consumed for the period. At that point, we limit the instance to operate at the low background level, and it operates there until we allow it to burst above that level again. The instance again stays there until the allotted resources are consumed and we limit it again (not seen on the graph).
The next figure shows the situation where the requests are too frequent. The instance uses its allotted resources after only a few requests and so we limit it. After we lift the restriction, the instance maxes out its CPU usage trying to keep up with the requests, and we limit it again.
The next figure shows the situation where the background level is too high. Notice that the instance doesn't have to be operating at the maximum CPU level for us to limit it. We limit the instance when it's operating above the normal background level and has consumed its allotted resources for the given period. In this case (as in the preceding one), the instance can't keep up with the work, and we limit it again.
The micro instance provides different levels of CPU resources at different times (up to 2 ECUs). By comparison, the m1.small instance type provides 1 ECU at all times. The following figure illustrates the difference.
The following figures compare the CPU usage of a micro instance with an m1.small instance for the various scenarios we've discussed in the preceding sections.
The first figure that follows shows an optimal scenario for a micro instance (the left graph) and how it might look for an m1.small instance (the right graph). In this case, we don't need to limit the micro instance. The processing time on the m1.small instance would be longer for each spike in CPU demand compared to the micro instance.
The next figure shows the scenario with the data-crunching requests that used up the allotted resources on the micro instance, and how they might look with the m1.small instance.
The next figure shows the frequent requests that used up the allotted resources on the micro instance, and how they might look on the m1.small instance.
The next figure shows the situation where the background level used up the allotted resources on the micro instance, and how it might look on the m1.small instance.
We recommend that you follow these best practices when optimizing an AMI for the micro instance type:
- Design the AMI to run on 600 MB of RAM
- Limit the number of recurring processes that use CPU time (e.g., cron jobs, daemons)
In Linux, you can optimize performance using swap space and virtual memory (e.g., by setting up swap space in a separate partition from the root file system).
In Windows, when you perform significant AMI or instance configuration changes (e.g., enable server roles or install large applications), you might see limited instance performance, because these changes can be memory intensive and require long-running CPU resources. We recommend that you first use a larger instance type when performing these changes to the AMI, and then run the AMI on a micro instance for normal operations.