Application scaling - Netcracker Active Resource Inventory on AWS

Application scaling

To maintain Netcracker active resource inventory availability when application traffic increases, such as during a large burst of network inventory updates, the application needs to be scaled up. When application traffic decreases, the application needs to be scaled down to avoid unnecessary costs. In short, Netcracker active resource inventory on AWS uses the resources it needs, and no more, limiting costly overprovisioning.

As outlined earlier, Netcracker active resource inventory Kubernetes worker nodes are started and scaled within a separate AWS Auto Scaling Group (ASG). Netcracker active resource inventory leverages the ASG, which monitors the application and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.
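As a minimal sketch of the capacity behavior described above (this is illustrative Python, not the AWS API): an ASG holds a minimum, maximum, and desired capacity, and any requested capacity change is clamped to the configured bounds.

```python
# Illustrative model of AWS Auto Scaling Group capacity bounds.
# An ASG never scales below min_size or above max_size, whatever
# capacity a scaling action requests.
from dataclasses import dataclass

@dataclass
class AutoScalingGroup:
    min_size: int
    max_size: int
    desired_capacity: int

    def set_desired_capacity(self, requested: int) -> int:
        # Clamp the requested capacity to the [min_size, max_size] range.
        self.desired_capacity = max(self.min_size, min(self.max_size, requested))
        return self.desired_capacity

asg = AutoScalingGroup(min_size=2, max_size=10, desired_capacity=3)
asg.set_desired_capacity(15)  # capped at max_size -> 10
asg.set_desired_capacity(1)   # floored at min_size -> 2
```

In the real service the equivalent bounds are the MinSize, MaxSize, and DesiredCapacity attributes of the Auto Scaling group.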

Using AWS Auto Scaling, Netcracker active resource inventory on AWS scales multiple resources across multiple services in minutes, based on predefined rules.
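The "predefined rules" can be pictured in the spirit of an AWS step scaling policy: each step maps a metric range to a capacity adjustment. The sketch below is a hypothetical example in Python; the thresholds and adjustments are illustrative assumptions, not values from the source.

```python
# Hypothetical step-scaling rules: each entry maps a CPU-utilization
# range [low, high) to a change in instance count. The specific
# thresholds here are assumptions for illustration only.
def capacity_adjustment(cpu_percent: float) -> int:
    steps = [
        (0.0, 30.0, -1),    # low load: remove one instance
        (30.0, 70.0, 0),    # normal load: no change
        (70.0, 90.0, 1),    # high load: add one instance
        (90.0, 101.0, 2),   # very high load: add two instances
    ]
    for low, high, delta in steps:
        if low <= cpu_percent < high:
            return delta
    return 0

capacity_adjustment(85.0)  # -> 1 (high load: add one instance)
```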

Scaling of the Kubernetes worker nodes is triggered by the Kubernetes Horizontal Pod Autoscaler (HPA). Based on the CPU and memory consumption metrics of a Kubernetes deployment, the HPA decides to increase or decrease the number of pods in that deployment. If a new pod cannot be scheduled, it remains in the Pending state. This triggers the Cluster Autoscaler, which scales up the number of worker nodes via the assigned AWS Auto Scaling group.
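The HPA's scaling decision follows the replica formula documented by Kubernetes: desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), with a tolerance band (10% by default) that suppresses scaling when the ratio is close to 1. A Python sketch of that calculation:

```python
import math

# Sketch of the Kubernetes HPA replica formula:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# The tolerance mirrors the HPA default (0.1) that skips scaling
# when the metric ratio is within 10% of the target.
def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     tolerance: float = 0.1) -> int:
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: no scaling action
    return math.ceil(current_replicas * ratio)

# 4 pods at 90% average CPU against a 60% target:
desired_replicas(4, 90.0, 60.0)  # -> ceil(4 * 1.5) = 6
```

When the extra pods produced by this calculation cannot be scheduled on the existing worker nodes, the resulting Pending pods are what the Cluster Autoscaler reacts to.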