Understanding 'At Risk' indicators
Explore the wide range of 'At Risk' indicators that PerfectScale provides

Resilience indicators
OOM
Out-of-Memory events usually occur in the following situations:
The memory limit for a pod is set too low. An event is triggered when the pod's memory usage reaches the defined limit.
The node is experiencing memory pressure and tries to evict some pods (see the official Kubernetes documentation).
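As a minimal sketch, the fragment below shows the container spec fields involved in the first situation. The names and values are illustrative placeholders, not PerfectScale recommendations: when the container's memory usage reaches limits.memory, it is OOM-killed and restarted.

```python
# Sketch of a container spec fragment (illustrative names and values only).
# If the container's memory usage reaches limits.memory, the container is
# OOM-killed and Kubernetes reports the termination reason as "OOMKilled".
container_spec = {
    "name": "app",                       # hypothetical container name
    "image": "example/app:1.0",          # hypothetical image
    "resources": {
        "requests": {"memory": "256Mi"},
        "limits": {"memory": "512Mi"},   # usage at/above this triggers an OOM kill
    },
}
```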
CPU Throttling
CPU Throttling occurs when a pod reaches its defined CPU limit; the resulting throttling can add latency to application responses.
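For illustration only (the values below are placeholders, not recommendations), the CPU limit that triggers throttling is set in the same resources block:

```python
# Sketch of a container resources fragment (illustrative values only).
# When CPU usage hits limits.cpu, the Linux CFS quota throttles the
# container, which can surface as increased response latency.
cpu_resources = {
    "requests": {"cpu": "250m"},  # 0.25 cores reserved for scheduling
    "limits": {"cpu": "500m"},    # hard cap; usage above this is throttled
}
```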
RestartsObserved
Frequent restarts indicate a problem with a high potential to harm the desired SLA.
Eviction
Eviction indicates that a running pod has been forcefully terminated and removed from a node. Eviction events usually occur due to memory or CPU pressure on the node.
When an eviction is observed, an alert is triggered immediately to inform users. Make sure that you have configured an integration profile and assigned it to the cluster to receive timely notifications in your Slack or MS Teams channel.
HPAAtMaxReplicasObserved
As demand for a service or application increases, the HPA scales the system to handle the additional load by dynamically adding more replicas. Once the configured maximum number of replicas is reached, PerfectScale raises the HPAAtMaxReplicasObserved indicator, which means the system cannot scale further with the existing settings.
The severity of the indicator depends on how long the workload runs at maximum replicas: the longer it stays at the maximum, the higher the severity.
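As a hedged sketch (the names and values are illustrative, not taken from PerfectScale), these are the HPA fields involved: once the current replica count reaches maxReplicas, the autoscaler cannot add capacity even if load keeps growing.

```python
# Sketch of a HorizontalPodAutoscaler spec fragment (illustrative values).
# When status.currentReplicas equals spec.maxReplicas, the HPA cannot scale
# out any further, which is what HPAAtMaxReplicasObserved flags.
hpa_spec = {
    "minReplicas": 2,
    "maxReplicas": 10,  # scaling stops here even if load keeps growing
    "metrics": [{
        "type": "Resource",
        "resource": {
            "name": "cpu",
            "target": {"type": "Utilization", "averageUtilization": 70},
        },
    }],
}
```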
Limit/Request not set indicators
CpuRequestNotSet
Setting proper CPU requests helps the Kubernetes scheduler allocate the right amount of CPU for each container, ensuring that the capacity of the cluster's nodes meets the demand. When no CPU request is set, the scheduler has no information about the container's CPU needs when placing the pod.
MemRequestNotSet
Setting proper MEMORY requests helps the Kubernetes scheduler allocate the right amount of memory for each container, ensuring that the capacity of the cluster's nodes meets the demand. When no memory request is set, the scheduler has no information about the container's memory needs when placing the pod.
MemLimitNotSet
Setting a proper MEMORY limit helps protect your worker nodes from OOM by preventing the risk of memory over-allocation.
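For illustration, here is a minimal sketch, assuming the official kubernetes Python client and a local kubeconfig, that lists containers with missing CPU/memory requests or a missing memory limit. It is a rough check for these conditions, not how PerfectScale evaluates the indicators.

```python
from kubernetes import client, config

# Minimal sketch: flag containers whose requests/limits are not set.
# Assumes a local kubeconfig; this is not PerfectScale's detection logic.
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for c in pod.spec.containers:
        requests = (c.resources.requests if c.resources else None) or {}
        limits = (c.resources.limits if c.resources else None) or {}
        missing = []
        if "cpu" not in requests:
            missing.append("CpuRequestNotSet")
        if "memory" not in requests:
            missing.append("MemRequestNotSet")
        if "memory" not in limits:
            missing.append("MemLimitNotSet")
        if missing:
            print(f"{pod.metadata.namespace}/{pod.metadata.name}/{c.name}: {', '.join(missing)}")
```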
UnderProvisioning indicators
UnderProvisionedCpuRequest
Setting proper CPU requests helps the Kubernetes scheduler allocate the right amount of CPU for each container, ensuring that the capacity of the cluster's nodes meets the demand. When the CPU request is set lower than the container's actual usage, the node can become over-committed, leading to CPU contention.
UnderProvisionedMemRequest
Setting proper MEMORY requests helps the Kubernetes scheduler allocate the right amount of memory for each container, ensuring that the capacity of the cluster's nodes meets the demand. When the memory request is set lower than the container's actual usage, the node can run into memory pressure and start evicting pods.
UnderProvisionedMemLimit
Setting a proper MEMORY limit helps protect your worker nodes from OOM by preventing the risk of memory over-allocation. However, an under-provisioned MEMORY limit can cause unwanted OOM events at the pod level, potentially harming the desired SLA.
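As a rough, purely illustrative sketch (the values and threshold below are hypothetical, not PerfectScale's scoring), under-provisioning of a memory limit can be reasoned about by comparing observed peak usage with the configured limit:

```python
# Purely illustrative: compare observed peak memory usage (MiB) with the
# configured limit to gauge OOM risk. Values and threshold are hypothetical.
configured_limit_mib = 512
observed_peak_mib = 498

headroom = (configured_limit_mib - observed_peak_mib) / configured_limit_mib
if headroom < 0.10:  # less than 10% headroom left before an OOM kill
    print(f"UnderProvisionedMemLimit risk: only {headroom:.0%} headroom")
```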
Waste indicators
OverProvisionedCpuRequest
Setting proper CPU requests helps the Kubernetes scheduler allocate the right amount of CPU for each container, ensuring that the capacity of the cluster's nodes meets the demand. In cases of over-provisioned CPU requests, cloud resources are unnecessarily wasted because capacity is allocated but not utilized.
OverProvisionedMemoryRequest
Setting proper MEMORY requests helps the Kubernetes scheduler allocate the right amount of memory for each container, ensuring that the capacity of the cluster's nodes meets the demand. However, when a memory request is over-provisioned, cloud resources are wasted: memory is allocated but never used.
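As a final illustrative sketch (hypothetical values, not PerfectScale's formula), waste from an over-provisioned request can be approximated as the gap between what is requested and what is actually used:

```python
# Purely illustrative: estimate waste as the gap between the configured
# request and the observed average usage. Values are hypothetical.
requested_cpu_millicores = 1000
observed_avg_cpu_millicores = 120

wasted = requested_cpu_millicores - observed_avg_cpu_millicores
waste_ratio = wasted / requested_cpu_millicores
print(f"OverProvisionedCpuRequest: {wasted}m requested but unused ({waste_ratio:.0%} waste)")
```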