By default, containers run with unbounded compute resources on a Kubernetes cluster. With resource quotas, cluster administrators can restrict resource consumption and creation on a namespace basis. Within a namespace, a Pod or Container can consume as much CPU and memory as defined by the namespace’s resource quota. There is a concern that one Pod or Container could monopolize all available resources. A LimitRange is a policy to constrain resource allocations (to Pods or Containers) in a namespace.
A LimitRange provides constraints that can:
- Enforce minimum and maximum compute resource usage per Pod or Container in a namespace.
- Enforce minimum and maximum storage requests per PersistentVolumeClaim in a namespace.
- Enforce a ratio between request and limit for a resource in a namespace.
- Set default requests and limits for compute resources in a namespace and automatically inject them into Containers at runtime.
LimitRange support is enabled by default in many Kubernetes distributions. It is enabled when the API server's --enable-admission-plugins flag includes the LimitRanger admission controller as one of its arguments.
A LimitRange is enforced in a particular namespace when there is a LimitRange object in that namespace.
The name of a LimitRange object must be a valid DNS subdomain name.
Resource limits are applied as follows:
- The administrator creates one LimitRange in one namespace.
- The LimitRanger admission controller enforces defaults and limits for all Pods and Containers that do not set compute resource requirements, and tracks usage to ensure it does not exceed the resource minimum, maximum, and ratio defined in any LimitRange present in the namespace.
- If creating or updating a resource violates a LimitRange constraint, the request to the API server fails with HTTP status code 403 FORBIDDEN and a message explaining the constraint that has been violated.
- If a LimitRange is activated in a namespace for compute resources like cpu and memory, users must specify requests or limits for those values. Otherwise, the system may reject Pod creation.
Examples of policies that could be created using a LimitRange are:
- Constrain the minimum and maximum compute resources a Container or Pod may use in a namespace.
- Constrain the minimum and maximum storage a PersistentVolumeClaim may request in a namespace.
- Enforce a maximum ratio between limit and request for a resource in a namespace.
- Inject default requests and limits into Containers that do not specify their own.
In the case where the total limits of the namespace are less than the sum of the limits of the Pods/Containers, there may be contention for resources. In this case, the Containers or Pods will not be created.
Neither contention nor changes to a LimitRange will affect already created resources.
The following section discusses the creation of a LimitRange acting at the Container level.
A Pod with four Containers is first created. Each Container within the Pod has a specific resources
configuration.
Each Container within the Pod is handled differently by the LimitRanger
admission controller.
Create a namespace limitrange-demo
using the following kubectl command:
kubectl create namespace limitrange-demo
To avoid passing the target namespace limitrange-demo in each of the following kubectl commands, change your context with the following command:
kubectl config set-context --current --namespace=limitrange-demo
Here is the configuration file for a LimitRange object:
admin/resource/limit-mem-cpu-container.yaml
This object defines minimum and maximum CPU/memory limits, default CPU/memory requests, and default CPU/memory limits to be applied to Containers.
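The manifest at that URL is not reproduced inline here; based on the constraints reported by kubectl describe below, a LimitRange consistent with it would look roughly like the following sketch (values taken from that output, everything else assumed):

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-container
spec:
  limits:
  - max:              # hard cap per Container
      cpu: "800m"
      memory: "1Gi"
    min:              # floor per Container
      cpu: "100m"
      memory: "99Mi"
    default:          # default limits injected when a Container sets none
      cpu: "700m"
      memory: "900Mi"
    defaultRequest:   # default requests injected when a Container sets none
      cpu: "110m"
      memory: "111Mi"
    type: Container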
Create the limit-mem-cpu-per-container
LimitRange in the limitrange-demo
namespace with the following kubectl command:
kubectl create -f https://k8s.io/examples/admin/resource/limit-mem-cpu-container.yaml -n limitrange-demo
Then describe the created LimitRange:
kubectl describe limitrange/limit-mem-cpu-per-container -n limitrange-demo
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Container cpu 100m 800m 110m 700m -
Container memory 99Mi 1Gi 111Mi 900Mi -
Here is the configuration file for a Pod with four Containers to demonstrate LimitRange features:
admin/resource/limit-range-pod-1.yaml
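The Pod manifest itself is not reproduced inline. From the per-Container analysis that follows, a sketch consistent with it would be the following (image and command are assumptions; the resources blocks match the behaviour shown below):

apiVersion: v1
kind: Pod
metadata:
  name: busybox1
spec:
  containers:
  - name: busybox-cnt01            # requests and limits both set
    image: busybox                 # assumed image
    command: ["sleep", "3600"]     # assumed long-running command
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
      limits:
        cpu: "500m"
        memory: "200Mi"
  - name: busybox-cnt02            # requests only; limits will be defaulted
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
  - name: busybox-cnt03            # limits only; requests default to the limits
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        cpu: "500m"
        memory: "200Mi"
  - name: busybox-cnt04            # neither requests nor limits; both will be defaulted
    image: busybox
    command: ["sleep", "3600"]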
Create the busybox1
Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-1.yaml -n limitrange-demo
View the busybox-cnt01
resource configuration:
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[0].resources"
{
"limits": {
"cpu": "500m",
"memory": "200Mi"
},
"requests": {
"cpu": "100m",
"memory": "100Mi"
}
}
The busybox-cnt01 Container inside the busybox1 Pod defined requests.cpu=100m and requests.memory=100Mi, as well as limits.cpu=500m and limits.memory=200Mi.
- 100m <= 500m <= 800m: the Container CPU limit (500m) falls inside the authorized CPU limit range.
- 99Mi <= 200Mi <= 1Gi: the Container memory limit (200Mi) falls inside the authorized memory limit range.
View the busybox-cnt02 resource configuration:
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[1].resources"
{
"limits": {
"cpu": "700m",
"memory": "900Mi"
},
"requests": {
"cpu": "100m",
"memory": "100Mi"
}
}
The busybox-cnt02 Container inside the busybox1 Pod defined requests.cpu=100m and requests.memory=100Mi but no limits for cpu and memory. The default limits defined in the limit-mem-cpu-per-container LimitRange object are injected into this Container: limits.cpu=700m and limits.memory=900Mi.
- 100m <= 700m <= 800m: the Container CPU limit (700m) falls inside the authorized CPU limit range.
- 99Mi <= 900Mi <= 1Gi: the Container memory limit (900Mi) falls inside the authorized memory limit range.
View the busybox-cnt03 resource configuration:
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[2].resources"
{
"limits": {
"cpu": "500m",
"memory": "200Mi"
},
"requests": {
"cpu": "500m",
"memory": "200Mi"
}
}
The busybox-cnt03 Container inside the busybox1 Pod defined limits.cpu=500m and limits.memory=200Mi but no requests for cpu and memory. Because the Container specifies limits but no requests, its requests are set equal to its limits rather than being filled from the LimitRange default request values: requests.cpu=500m and requests.memory=200Mi.
- 100m <= 500m <= 800m: the Container CPU limit (500m) falls inside the authorized CPU limit range.
- 99Mi <= 200Mi <= 1Gi: the Container memory limit (200Mi) falls inside the authorized memory limit range.
View the busybox-cnt04 resource configuration:
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[3].resources"
{
"limits": {
"cpu": "700m",
"memory": "900Mi"
},
"requests": {
"cpu": "110m",
"memory": "111Mi"
}
}
The busybox-cnt04 Container inside the busybox1 Pod defines neither limits nor requests. The default limits defined in the limit-mem-cpu-per-container LimitRange object are injected into this Container: limits.cpu=700m and limits.memory=900Mi. The default requests defined in the same LimitRange are used to fill its request section: requests.cpu=110m and requests.memory=111Mi.
- 100m <= 700m <= 800m: the Container CPU limit (700m) falls inside the authorized CPU limit range.
- 99Mi <= 900Mi <= 1Gi: the Container memory limit (900Mi) falls inside the authorized memory limit range.
All Containers defined in the busybox1 Pod passed the LimitRange validations, so the Pod is valid and is created in the namespace.
The following section discusses how to constrain resources at the Pod level.
admin/resource/limit-mem-cpu-pod.yaml
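The referenced file is not shown inline; given the Pod-level maxima reported by kubectl describe below, it presumably looks roughly like this:

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-mem-cpu-per-pod
spec:
  limits:
  - max:              # cap on the sum of all Container limits in a Pod
      cpu: "2"
      memory: "2Gi"
    type: Pod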
Without having to delete the busybox1
Pod, create the limit-mem-cpu-pod
LimitRange in the limitrange-demo
namespace:
kubectl apply -f https://k8s.io/examples/admin/resource/limit-mem-cpu-pod.yaml -n limitrange-demo
The LimitRange is created and limits CPU to 2 cores and memory to 2Gi per Pod:
limitrange/limit-mem-cpu-per-pod created
Describe the limit-mem-cpu-per-pod
limit object using the following kubectl command:
kubectl describe limitrange/limit-mem-cpu-per-pod
Name: limit-mem-cpu-per-pod
Namespace: limitrange-demo
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Pod cpu - 2 - - -
Pod memory - 2Gi - - -
Now create the busybox2
Pod:
admin/resource/limit-range-pod-2.yaml
kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-2.yaml -n limitrange-demo
The busybox2
Pod definition is identical to busybox1
, but an error is reported since the Pod’s resources are now limited:
Error from server (Forbidden): error when creating "limit-range-pod-2.yaml": pods "busybox2" is forbidden: [maximum cpu usage per Pod is 2, but limit is 2400m., maximum memory usage per Pod is 2Gi, but limit is 2306867200.]
kubectl get po/busybox1 -n limitrange-demo -o json | jq ".spec.containers[].resources.limits.memory"
"200Mi"
"900Mi"
"200Mi"
"900Mi"
The busybox2 Pod is not admitted to the cluster, since the total memory limit of its Containers is greater than the limit defined in the LimitRange.
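To see why: the four Container memory limits sum to 200Mi + 900Mi + 200Mi + 900Mi = 2200Mi (2306867200 bytes), which exceeds the 2Gi (2147483648 bytes) Pod maximum, and the CPU limits sum to 500m + 700m + 500m + 700m = 2400m, which exceeds the 2-core Pod maximum. These are the values reported in the error message above.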
busybox1 is not evicted, since it was created and admitted on the cluster before the LimitRange was created.
You can enforce minimum and maximum size of storage resources that can be requested by each PersistentVolumeClaim in a namespace using a LimitRange:
admin/resource/storagelimits.yaml
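The referenced manifest is not reproduced inline; based on the describe output below, a sketch consistent with it is:

apiVersion: v1
kind: LimitRange
metadata:
  name: storagelimits
spec:
  limits:
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi    # smallest storage request a PVC may make
    max:
      storage: 2Gi    # largest storage request a PVC may make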
Apply the YAML using kubectl create
:
kubectl create -f https://k8s.io/examples/admin/resource/storagelimits.yaml -n limitrange-demo
limitrange/storagelimits created
Describe the created object:
kubectl describe limits/storagelimits
The output should look like:
Name: storagelimits
Namespace: limitrange-demo
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
PersistentVolumeClaim storage 1Gi 2Gi - - -
admin/resource/pvc-limit-lower.yaml
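That manifest is not shown inline; judging from the error below, it defines a PVC requesting 500Mi, roughly like this (access mode is an assumption):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-limit-lower
spec:
  accessModes:
  - ReadWriteOnce            # assumed access mode
  resources:
    requests:
      storage: 500Mi         # below the 1Gi minimum, so admission fails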
kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-lower.yaml -n limitrange-demo
When creating a PVC with requests.storage lower than the Min value in the LimitRange, an error is returned by the server:
Error from server (Forbidden): error when creating "pvc-limit-lower.yaml": persistentvolumeclaims "pvc-limit-lower" is forbidden: minimum storage usage per PersistentVolumeClaim is 1Gi, but request is 500Mi.
The same behaviour is observed if requests.storage is greater than the Max value in the LimitRange:
admin/resource/pvc-limit-greater.yaml
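Again the manifest is not shown inline; judging from the error below, it defines a PVC requesting 5Gi, roughly:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-limit-greater
spec:
  accessModes:
  - ReadWriteOnce            # assumed access mode
  resources:
    requests:
      storage: 5Gi           # above the 2Gi maximum, so admission fails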
kubectl create -f https://k8s.io/examples/admin/resource/pvc-limit-greater.yaml -n limitrange-demo
Error from server (Forbidden): error when creating "pvc-limit-greater.yaml": persistentvolumeclaims "pvc-limit-greater" is forbidden: maximum storage usage per PersistentVolumeClaim is 2Gi, but request is 5Gi.
If LimitRangeItem.maxLimitRequestRatio
is specified in the LimitRangeSpec
, the named resource must have a request and limit that are both non-zero where limit divided by request is less than or equal to the enumerated value.
The following LimitRange enforces memory limit to be at most twice the amount of the memory request for any Pod in the namespace:
admin/resource/limit-memory-ratio-pod.yaml
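The file is not reproduced inline; based on the describe output below, it presumably contains:

apiVersion: v1
kind: LimitRange
metadata:
  name: limit-memory-ratio-pod
spec:
  limits:
  - maxLimitRequestRatio:
      memory: 2              # memory limit may be at most 2x the request, per Pod
    type: Pod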
kubectl apply -f https://k8s.io/examples/admin/resource/limit-memory-ratio-pod.yaml
Describe the LimitRange with the following kubectl command:
kubectl describe limitrange/limit-memory-ratio-pod
Name: limit-memory-ratio-pod
Namespace: limitrange-demo
Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio
---- -------- --- --- --------------- ------------- -----------------------
Pod memory - - - - 2
Create a pod with requests.memory=100Mi
and limits.memory=300Mi
:
admin/resource/limit-range-pod-3.yaml
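The Pod manifest is not shown inline; from the request/limit values above and the error below, a sketch consistent with it is (container name, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox3
spec:
  containers:
  - name: busybox-cnt01          # illustrative name
    image: busybox               # assumed image
    command: ["sleep", "3600"]   # assumed long-running command
    resources:
      requests:
        memory: "100Mi"
      limits:
        memory: "300Mi"          # 300Mi / 100Mi = 3 > 2, so the Pod is rejected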
kubectl apply -f https://k8s.io/examples/admin/resource/limit-range-pod-3.yaml
The Pod creation fails because the ratio here (limits.memory / requests.memory = 300Mi / 100Mi = 3) is greater than the maximum ratio (2) enforced by the limit-memory-ratio-pod LimitRange:
Error from server (Forbidden): error when creating "limit-range-pod-3.yaml": pods "busybox3" is forbidden: memory max limit to request ratio per Pod is 2, but provided ratio is 3.000000.
Delete the limitrange-demo
namespace to free all resources:
kubectl delete ns limitrange-demo
See LimitRanger design doc for more information.