brunch

118. Limit, Request 8/8

by Master Seo

<1> Limit: explicitly cap a Pod's resource allocation at creation time

<2> Request: guarantee a minimum usage (lower bound)

<3> LimitRange: set the range or default values for resources allocated within a specific namespace

<4> ResourceQuota: limit the total resource usage allowed within a specific namespace



<1> Limit: explicitly cap a Pod's resource allocation at creation time


1

Limit

Caps the maximum usage (upper bound)


2

limit-pod.yaml


cat << EOF > limit-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: limit-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["tail"]
    args: ["-f", "/dev/null"]
    resources:
      limits:
        memory: "256Mi"
        cpu: "250m"
EOF


memory: "256Mi" → the pod's container is limited to a maximum of 256Mi of memory

cpu: "250m" → 250m (millicores) means 0.25 CPU, so the pod's container can use at most 1/4 of one CPU's cycles
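As a quick sanity check on the millicore arithmetic (1000m = 1 full CPU core), 250m is just 250 divided by 1000:

```shell
# 250 millicores as a fraction of one CPU core (1000m = 1 core)
awk 'BEGIN { printf "%.2f\n", 250 / 1000 }'
# prints 0.25
```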




3

kubectl apply -f limit-pod.yaml

[root@test11 ~]# kubectl apply -f limit-pod.yaml

pod/limit-pod created


kubectl describe pod limit-pod | grep Limits -A 5

[root@test11 ~]# kubectl describe pod limit-pod | grep Limits -A 5

Limits:
  cpu:     250m
  memory:  256Mi
Requests:
  cpu:     250m
  memory:  256Mi




4

# When Limits are set, Requests are automatically set to the same values

kubectl describe nodes <node where the pod is running>


[root@test11 ~]# k get nodes

NAME STATUS ROLES AGE VERSION

game1-nodepool-w-11gc Ready <none> 40h v1.21.9


kubectl describe nodes game1-nodepool-w-11gc


[root@test11 ~]# kubectl describe nodes game1-nodepool-w-11gc

Name: game1-nodepool-w-11gc

Roles: <none>

Labels: beta.kubernetes.io/arch=amd64

beta.kubernetes.io/instance-type=SVR.VSVR.STAND.C002.M008.NET.SSD.B050.G002

beta.kubernetes.io/os=linux

failure-domain.beta.kubernetes.io/region=1

failure-domain.beta.kubernetes.io/zone=2

gpu=ndivia

kubernetes.io/arch=amd64

kubernetes.io/hostname=game1-nodepool-w-11gc

kubernetes.io/os=linux

ncloud.com/nks-nodepool=game1-nodepool

nodeId=10292321

regionNo=1

zoneNo=2

Annotations: alpha.kubernetes.io/provided-node-ip: 10.0.2.14

csi.volume.kubernetes.io/nodeid: {"blk.csi.ncloud.com":"10292321","nas.csi.ncloud.com":"game1-nodepool-w-11gc"}

io.cilium.network.ipv4-cilium-host: 198.18.0.23

io.cilium.network.ipv4-health-ip: 198.18.0.13

io.cilium.network.ipv4-pod-cidr: 198.18.0.0/24

node.alpha.kubernetes.io/ttl: 0

volumes.kubernetes.io/controller-managed-attach-detach: true

CreationTimestamp: Wed, 09 Mar 2022 18:24:53 +0900

Taints: <none>

Unschedulable: false

Lease:

HolderIdentity: game1-nodepool-w-11gc

AcquireTime: <unset>

RenewTime: Fri, 11 Mar 2022 10:44:34 +0900

Conditions:

Type Status LastHeartbeatTime LastTransitionTime Reason Message

---- ------ ----------------- ------------------ ------ -------

NetworkUnavailable False Wed, 09 Mar 2022 18:25:49 +0900 Wed, 09 Mar 2022 18:25:49 +0900 CiliumIsUp Cilium is running on this node

MemoryPressure False Fri, 11 Mar 2022 10:44:30 +0900 Wed, 09 Mar 2022 18:24:53 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available

DiskPressure False Fri, 11 Mar 2022 10:44:30 +0900 Wed, 09 Mar 2022 18:24:53 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure

PIDPressure False Fri, 11 Mar 2022 10:44:30 +0900 Wed, 09 Mar 2022 18:24:53 +0900 KubeletHasSufficientPID kubelet has sufficient PID available

Ready True Fri, 11 Mar 2022 10:44:30 +0900 Wed, 09 Mar 2022 18:25:43 +0900 KubeletReady kubelet is posting ready status. AppArmor enabled

Addresses:

InternalIP: 10.0.2.14

Hostname: game1-nodepool-w-11gc

Capacity:

cpu: 2

ephemeral-storage: 51340768Ki

hugepages-1Gi: 0

hugepages-2Mi: 0

memory: 8116376Ki

pods: 110

Allocatable:

cpu: 1900m

ephemeral-storage: 47315651711

hugepages-1Gi: 0

hugepages-2Mi: 0

memory: 7751832Ki

pods: 110

System Info:

Machine ID: b7b033b860da49e1bcaa24ec03adde26

System UUID: 34f69dba-cbf0-814e-1028-696855f7f155

Boot ID: 2fa14a7c-e877-4357-8788-bd5c00f8584c

Kernel Version: 5.4.0-65-generic

OS Image: Ubuntu 18.04.5 LTS

Operating System: linux

Architecture: amd64

Container Runtime Version: containerd://1.3.7

Kubelet Version: v1.21.9

Kube-Proxy Version: v1.21.9

PodCIDR: 198.18.0.0/24

PodCIDRs: 198.18.0.0/24

ProviderID: navercloudplatform://10292321

Non-terminated Pods: (19 in total)

Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age

--------- ---- ------------ ---------- --------------- ------------- ---

default limit-pod 250m (13%) 250m (13%) 256Mi (3%) 256Mi (3%) 2m31s

kube-system cilium-operator-7c756b4ff5-77m6c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system cilium-qwjbg 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 40h

kube-system coredns-9bbc5d444-qh9pz 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 40h

kube-system csi-nks-controller-84d675d66d-28zvz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system csi-nks-node-tn9jw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system dns-autoscaler-5c578b9dfb-2pww5 20m (1%) 0 (0%) 10Mi (0%) 0 (0%) 40h

kube-system kube-proxy-fjhfz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system metrics-server-8589b99d8f-zb5ms 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 4h25m

kube-system ncloud-kubernetes-q7lsz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system nks-nas-csi-controller-68f4bf8779-7tf8m 40m (2%) 500m (26%) 80Mi (1%) 800Mi (10%) 40h

kube-system nks-nas-csi-node-7gzfd 30m (1%) 600m (31%) 60Mi (0%) 500Mi (6%) 40h

kube-system nodelocaldns-rrkgp 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 40h

kube-system snapshot-controller-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system startup-script-rfcgm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

ns1 pod1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h15m

ns1 pod2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h15m

ns2 pod1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h2m

ns2 pod2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h2m


Allocated resources:

(Total limits may be over 100 percent, i.e., overcommitted.)

Resource Requests Limits

-------- -------- ------

cpu 740m (38%) 1350m (71%)

memory 846Mi (11%) 1896Mi (25%)

ephemeral-storage 0 (0%) 0 (0%)

hugepages-1Gi 0 (0%) 0 (0%)

hugepages-2Mi 0 (0%) 0 (0%)

Events: <none>

[root@test11 ~]#
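The percentages in the describe output above are computed against the node's Allocatable values, not Capacity. For example, the 13% shown for limit-pod's 250m CPU comes from the 1900m of allocatable CPU (truncated to a whole percent):

```shell
# 250m request out of 1900m allocatable CPU, truncated to a whole percent
awk 'BEGIN { printf "%d%%\n", 250 / 1900 * 100 }'
# prints 13%
```

The memory percentage works the same way: 256Mi against 7751832Ki of allocatable memory gives roughly 3%.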




5

kubectl top pod --use-protocol-buffers=true


[root@test11 ~]# kubectl top pod --use-protocol-buffers=true

NAME CPU(cores) MEMORY(bytes)

limit-pod 1m 0Mi




6

Delete

kubectl delete -f limit-pod.yaml





<2> Request: guarantee a minimum usage (lower bound)


1

cat << EOF > request-limit-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-limit-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["tail"]
    args: ["-f", "/dev/null"]
    resources:
      limits:
        memory: "256Mi"
        cpu: "250m"
      requests:
        memory: "128Mi"
        cpu: "125m"
EOF


Requests guarantee a minimum usage (lower bound)



2

kubectl apply -f request-limit-pod.yaml


3


kubectl describe pod request-limit-pod | grep Limits -A 5


[root@test11 ~]# kubectl describe pod request-limit-pod | grep Limits -A 5

Limits:
  cpu:     250m
  memory:  256Mi
Requests:
  cpu:     125m
  memory:  128Mi


Requests guarantee a minimum usage (lower bound)



4

Check node information


kubectl describe nodes <node where the pod is running>


[root@test11 ~]# kubectl describe nodes game1-nodepool-w-11gc

Name: game1-nodepool-w-11gc

Roles: <none>

Labels: beta.kubernetes.io/arch=amd64

beta.kubernetes.io/instance-type=SVR.VSVR.STAND.C002.M008.NET.SSD.B050.G002

beta.kubernetes.io/os=linux

failure-domain.beta.kubernetes.io/region=1

failure-domain.beta.kubernetes.io/zone=2

gpu=ndivia

kubernetes.io/arch=amd64

kubernetes.io/hostname=game1-nodepool-w-11gc

kubernetes.io/os=linux

ncloud.com/nks-nodepool=game1-nodepool

nodeId=10292321

regionNo=1

zoneNo=2

Annotations: alpha.kubernetes.io/provided-node-ip: 10.0.2.14

csi.volume.kubernetes.io/nodeid: {"blk.csi.ncloud.com":"10292321","nas.csi.ncloud.com":"game1-nodepool-w-11gc"}

io.cilium.network.ipv4-cilium-host: 198.18.0.23

io.cilium.network.ipv4-health-ip: 198.18.0.13

io.cilium.network.ipv4-pod-cidr: 198.18.0.0/24

node.alpha.kubernetes.io/ttl: 0

volumes.kubernetes.io/controller-managed-attach-detach: true

CreationTimestamp: Wed, 09 Mar 2022 18:24:53 +0900

Taints: <none>

Unschedulable: false

Lease:

HolderIdentity: game1-nodepool-w-11gc

AcquireTime: <unset>

RenewTime: Fri, 11 Mar 2022 10:49:20 +0900

Conditions:

Type Status LastHeartbeatTime LastTransitionTime Reason Message

---- ------ ----------------- ------------------ ------ -------

NetworkUnavailable False Wed, 09 Mar 2022 18:25:49 +0900 Wed, 09 Mar 2022 18:25:49 +0900 CiliumIsUp Cilium is running on this node

MemoryPressure False Fri, 11 Mar 2022 10:49:21 +0900 Wed, 09 Mar 2022 18:24:53 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available

DiskPressure False Fri, 11 Mar 2022 10:49:21 +0900 Wed, 09 Mar 2022 18:24:53 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure

PIDPressure False Fri, 11 Mar 2022 10:49:21 +0900 Wed, 09 Mar 2022 18:24:53 +0900 KubeletHasSufficientPID kubelet has sufficient PID available

Ready True Fri, 11 Mar 2022 10:49:21 +0900 Wed, 09 Mar 2022 18:25:43 +0900 KubeletReady kubelet is posting ready status. AppArmor enabled

Addresses:

InternalIP: 10.0.2.14

Hostname: game1-nodepool-w-11gc

Capacity:

cpu: 2

ephemeral-storage: 51340768Ki

hugepages-1Gi: 0

hugepages-2Mi: 0

memory: 8116376Ki

pods: 110

Allocatable:

cpu: 1900m

ephemeral-storage: 47315651711

hugepages-1Gi: 0

hugepages-2Mi: 0

memory: 7751832Ki

pods: 110

System Info:

Machine ID: b7b033b860da49e1bcaa24ec03adde26

System UUID: 34f69dba-cbf0-814e-1028-696855f7f155

Boot ID: 2fa14a7c-e877-4357-8788-bd5c00f8584c

Kernel Version: 5.4.0-65-generic

OS Image: Ubuntu 18.04.5 LTS

Operating System: linux

Architecture: amd64

Container Runtime Version: containerd://1.3.7

Kubelet Version: v1.21.9

Kube-Proxy Version: v1.21.9

PodCIDR: 198.18.0.0/24

PodCIDRs: 198.18.0.0/24

ProviderID: navercloudplatform://10292321

Non-terminated Pods: (19 in total)

Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age

--------- ---- ------------ ---------- --------------- ------------- ---

default request-limit-pod 125m (6%) 250m (13%) 128Mi (1%) 256Mi (3%) 114s

kube-system cilium-operator-7c756b4ff5-77m6c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system cilium-qwjbg 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 40h

kube-system coredns-9bbc5d444-qh9pz 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 40h

kube-system csi-nks-controller-84d675d66d-28zvz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system csi-nks-node-tn9jw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system dns-autoscaler-5c578b9dfb-2pww5 20m (1%) 0 (0%) 10Mi (0%) 0 (0%) 40h

kube-system kube-proxy-fjhfz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system metrics-server-8589b99d8f-zb5ms 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 4h30m

kube-system ncloud-kubernetes-q7lsz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system nks-nas-csi-controller-68f4bf8779-7tf8m 40m (2%) 500m (26%) 80Mi (1%) 800Mi (10%) 40h

kube-system nks-nas-csi-node-7gzfd 30m (1%) 600m (31%) 60Mi (0%) 500Mi (6%) 40h

kube-system nodelocaldns-rrkgp 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 40h

kube-system snapshot-controller-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

kube-system startup-script-rfcgm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 40h

ns1 pod1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h20m

ns1 pod2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h20m

ns2 pod1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h7m

ns2 pod2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h7m


Allocated resources:

(Total limits may be over 100 percent, i.e., overcommitted.)

Resource Requests Limits

-------- -------- ------

cpu 615m (32%) 1350m (71%)

memory 718Mi (9%) 1896Mi (25%)

ephemeral-storage 0 (0%) 0 (0%)

hugepages-1Gi 0 (0%) 0 (0%)

hugepages-2Mi 0 (0%) 0 (0%)

Events: <none>





kubectl top pod --use-protocol-buffers=true



5

Delete

kubectl delete -f request-limit-pod.yaml





<3> LimitRange: set the range or default values for resources allocated within a specific namespace


1

LimitRange

Sets the range or default values for resources allocated within a specific namespace

Even if a user omits the resource specification, the Pod's resource usage is set automatically


2

limitrange.yaml


cat << EOF > limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limit-range
spec:
  limits:
  - default:          # 1. Default Limit values applied automatically
      memory: 256Mi
      cpu: 250m
    defaultRequest:   # 2. Default Request values applied automatically
      memory: 128Mi
      cpu: 125m
    max:              # 3. Maximum resource allocation
      memory: 0.5Gi
      cpu: 500m
    min:              # 4. Minimum resource allocation
      memory: 100Mi
      cpu: 100m
    type: Container   # 5. Applied to each container
EOF




3

kubectl apply -f limitrange.yaml


kubectl get limitranges


[root@test11 ~]# kubectl get limitranges

NAME CREATED AT

limit-range 2022-03-11T01:55:31Z



kubectl describe limitranges limit-range


[root@test11 ~]# kubectl describe limitranges limit-range

Name: limit-range

Namespace: default

Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio

---- -------- --- --- --------------- ------------- -----------------------

Container cpu 100m 500m 125m 250m -

Container memory 100Mi 512Mi 128Mi 256Mi -

[root@test11 ~]#
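Note that the max memory written as 0.5Gi in the manifest shows up as 512Mi in the describe output above, since 1Gi = 1024Mi:

```shell
# 0.5Gi expressed in Mi (1Gi = 1024Mi)
awk 'BEGIN { printf "%dMi\n", 0.5 * 1024 }'
# prints 512Mi
```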




4

Create a pod

kubectl run webpod --image nginx


kubectl describe pod webpod | grep Limits -A 5


pod/webpod created

[root@test11 ~]# kubectl describe pod webpod | grep Limits -A 5

Limits:
  cpu:     250m
  memory:  256Mi
Requests:
  cpu:     125m
  memory:  128Mi




5

Delete

kubectl delete pod webpod


6

What happens if we try to create a pod that exceeds the limits?


cat << EOF > pod-exceed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-exceed
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["tail"]
    args: ["-f", "/dev/null"]
    resources:
      limits:
        memory: "800Mi"
        cpu: "750m"
      requests:
        memory: "500Mi"
        cpu: "500m"
EOF





kubectl apply -f pod-exceed.yaml


[root@test11 ~]# kubectl apply -f pod-exceed.yaml

Error from server (Forbidden): error when creating "pod-exceed.yaml": pods "pod-exceed" is forbidden: [maximum cpu usage per Container is 500m, but limit is 750m, maximum memory usage per Container is 512Mi, but limit is 800Mi]



7

Delete

kubectl delete -f limitrange.yaml





<4> ResourceQuota: limit the total resource usage allowed within a specific namespace


1

ResourceQuota

Limits the total resource usage allowed within a specific namespace


2

It can cap the total amount of resources (CPU, memory, persistent volume claim size, etc.) that can be allocated in a namespace.

It can also cap the number of resources (Services, Deployments, etc.) that can be created in a namespace.
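As a sketch of the first case, a quota capping the sum of requests and limits in a namespace could look like the following (the file name and quota name here are illustrative, not from this walkthrough; the `requests.*`/`limits.*` keys are the standard ResourceQuota compute-resource keys):

```shell
# Illustrative ResourceQuota capping total requests/limits in namespace ns3
cat << EOF > ns3-quota-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-sum   # hypothetical name, for illustration
  namespace: ns3
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
EOF
```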


3

cat << EOF > ns3-quota-pod-count.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns3
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-quota-pod-count
  namespace: ns3
spec:
  hard:
    count/pods: 3
    count/services: 1
EOF




4

kubectl apply -f ns3-quota-pod-count.yaml


[root@test11 ~]# kubectl apply -f ns3-quota-pod-count.yaml

namespace/ns3 created

resourcequota/resource-quota-pod-count created



[root@test11 ~]# k get ns

NAME STATUS AGE

default Active 40h

kube-node-lease Active 40h

kube-public Active 40h

kube-system Active 40h

ns1 Active 4h32m

ns2 Active 4h32m

ns3 Active 3s




5


kubectl get namespaces ns3

kubectl get quota

kubectl get quota -n ns3


[root@test11 ~]# kubectl get quota

No resources found in default namespace.


[root@test11 ~]# kubectl get quota -n ns3

NAME AGE REQUEST LIMIT

resource-quota-pod-count 54s count/pods: 0/3, count/services: 0/1

[root@test11 ~]#



6

Create pods: up to 3 are created normally; the 4th fails because of the pod-count limit!

for i in {1..4}; do kubectl run -n ns3 over-pod-$i --image=nginx --restart=Never; echo "--------------"; sleep 10; done;


[root@test11 ~]# for i in {1..4}; do kubectl run -n ns3 over-pod-$i --image=nginx --restart=Never; echo "--------------"; sleep 10; done;

pod/over-pod-1 created

--------------

pod/over-pod-2 created

--------------

pod/over-pod-3 created

--------------

Error from server (Forbidden): pods "over-pod-4" is forbidden: exceeded quota: resource-quota-pod-count, requested: count/pods=1, used: count/pods=3, limited: count/pods=3

--------------



7

Delete

kubectl delete -f ns3-quota-pod-count.yaml




https://brunch.co.kr/@topasvga/2240


Thank you.






