EKS Part 8 - Week 3
This write-up is based on the CloudNet weekend study material.
https://gasidaseo.notion.site/gasidaseo/CloudNet-Blog-c9dfa44a27ff431dafdd2edacc8a1863
The content and explanations are updated as testing continues.
Using EBS from a Pod
Growing a volume
Creating a snapshot
Use the EBS controller to let Pods consume EBS volumes.
1
Reference
https://malwareanalysis.tistory.com/598
For a Pod to use EBS, both a csi-controller and a csi-node are required.
csi-controller: calls the AWS APIs
csi-node: mounts the AWS storage into the Pod
Configuration caveats
Set accessModes to ReadWriteOnce on the PersistentVolume and PersistentVolumeClaim.
This is because, by default, an EBS volume can only be attached to an EC2 instance (and thus the pods running on it) in the same AZ.
=> Solved by scheduling the Pod onto a node in that AZ.
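As a minimal sketch of that scheduling fix (the file name and the ap-northeast-2b zone value are illustrative assumptions), a Pod can be pinned to the volume's AZ with a nodeSelector on the standard topology label:

```shell
# Sketch only: pin a Pod to the AZ where an existing EBS volume lives,
# using the well-known topology.kubernetes.io/zone node label.
# Zone value and file name are illustrative assumptions.
cat <<EOT > az-pinned-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: az-pinned-app
spec:
  nodeSelector:
    topology.kubernetes.io/zone: ap-northeast-2b
  containers:
  - name: app
    image: centos
    command: ["sleep", "infinity"]
EOT
# kubectl apply -f az-pinned-pod.yaml   # would schedule only onto 2b nodes
```

With the WaitForFirstConsumer binding mode used later in this walkthrough, new volumes are instead created in whatever AZ the Pod lands in; explicit pinning matters mainly when reattaching an existing volume.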
2
Install the Amazon EBS CSI driver as an Amazon EKS add-on
https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/managing-ebs-csi.html
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md
3
# List all aws-ebs-csi-driver versions and the default install version (True)
aws eks describe-addon-versions --addon-name aws-ebs-csi-driver --kubernetes-version 1.24 --query "addons[].addonVersions[].[addonVersion,compatibilities[].defaultVersion]" --output text
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# aws eks describe-addon-versions --addon-name aws-ebs-csi-driver --kubernetes-version 1.24 --query "addons[].addonVersions[].[addonVersion,compatibilities[].defaultVersion]" --output text
v1.18.0-eksbuild.1
True
v1.17.0-eksbuild.1
False
:
False
v1.14.1-eksbuild.1
False
v1.14.0-eksbuild.1
4
# Set up IRSA for EBS as well
Uses the AWS managed policy AmazonEBSCSIDriverPolicy.
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster ${CLUSTER_NAME} --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name AmazonEKS_EBS_CSI_DriverRole
5
# Verify that IRSA is set up
kubectl get sa -n kube-system ebs-csi-controller-sa -o yaml | head -5
eksctl get iamserviceaccount --cluster myeks
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# eksctl get iamserviceaccount --cluster myeks
NAMESPACE NAME ROLE ARN
kube-system aws-load-balancer-controller arn:aws:iam::476286675138:role/eksctl-myeks-addon-iamserviceaccount-kube-sy-Role1-ILKQZ0MFQ6E5
kube-system ebs-csi-controller-sa arn:aws:iam::4768:role/AmazonEKS_EBS_CSI_DriverRole
// An IRSA role for the load balancer controller was created
// An IRSA role for ebs-csi was created
// A ServiceAccount is created in Kubernetes, and an IRSA role on the AWS side
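What eksctl ends up creating on the Kubernetes side is roughly the ServiceAccount below, linked to the IAM role via the eks.amazonaws.com/role-arn annotation (the account ID is a placeholder; eksctl creates this for you, the sketch only illustrates the linkage):

```shell
# Sketch of the IRSA-annotated ServiceAccount (placeholder account ID).
# eksctl creates this automatically; shown only to illustrate how the
# Kubernetes ServiceAccount points at the AWS IAM role.
cat <<EOT > ebs-csi-sa-sketch.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole
EOT
```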
5
# Add the Amazon EBS CSI driver as an addon
eksctl create addon --name aws-ebs-csi-driver --cluster ${CLUSTER_NAME} --service-account-role-arn arn:aws:iam::${ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole --force
6
# Confirm the addon was added
Check with get addon.
eksctl get addon --cluster ${CLUSTER_NAME}
NAME VERSION STATUS ISSUES IAMROLE UPDATE AVAILABLE CONFIGURATION VALUES
aws-ebs-csi-driver v1.18.0-eksbuild.1 CREATING 0 arn:aws:iam::476286675138:role/AmazonEKS_EBS_CSI_DriverRole
coredns v1.9.3-eksbuild.3 ACTIVE 0
kube-proxy v1.24.10-eksbuild.2 ACTIVE 0
vpc-cni v1.12.6-eksbuild.1 ACTIVE 0 arn:aws:iam::476286675138:role/eksctl-myeks-addon-vpc-cni-Role1-11DYQF2ETYXDT
kubectl get deploy,ds -l=app.kubernetes.io/name=aws-ebs-csi-driver -n kube-system
7
Check the ebs-csi-controller and ebs-csi-node pods.
kubectl get pod -n kube-system -l 'app in (ebs-csi-controller,ebs-csi-node)'
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl get pod -n kube-system -l 'app in (ebs-csi-controller,ebs-csi-node)'
NAME READY STATUS RESTARTS AGE
ebs-csi-controller-67658f895c-bllmf 6/6 Running 0 109s
ebs-csi-controller-67658f895c-j4mnw 6/6 Running 0 109s
ebs-csi-node-7htwr 3/3 Running 0 109s
ebs-csi-node-pchk5 3/3 Running 0 109s
ebs-csi-node-skmhb 3/3 Running 0 109s
// Confirm the csi-controller and csi-node pods were created
kubectl get pod -n kube-system -l app.kubernetes.io/component=csi-driver
8
# Check the 6 containers in the ebs-csi-controller pod
kubectl get pod -n kube-system -l app=ebs-csi-controller -o jsonpath='{.items[0].spec.containers[*].name}' ; echo
----------------------------------
ebs-plugin csi-provisioner csi-attacher csi-snapshotter csi-resizer liveness-probe
9
# Check csinodes
// The worker nodes act as csinodes.
kubectl get csinodes
NAME DRIVERS AGE
ip-192-168-1-140.ap-northeast-2.compute.internal 1 4h47m
ip-192-168-2-38.ap-northeast-2.compute.internal 1 4h48m
ip-192-168-2-85.ap-northeast-2.compute.internal 1 4h47m
10
Monitoring - check
Every 2.0s: kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
aws-load-balancer-controller-6fb4f86d9d-t59w5 1/1 Running 0 3h42m
aws-load-balancer-controller-6fb4f86d9d-t7gvp 1/1 Running 0 3h42m
aws-node-2rxtk 1/1 Running 0 4h52m
aws-node-gwvfl 1/1 Running 0 4h52m
aws-node-x4fvb 1/1 Running 0 4h52m
busybox 1/1 Running 0 175m
coredns-6777fcd775-7qhw8 1/1 Running 0 4h49m
coredns-6777fcd775-scllk 1/1 Running 0 4h49m
ebs-csi-controller-67658f895c-bllmf 6/6 Running 0 8m17s
ebs-csi-controller-67658f895c-j4mnw 6/6 Running 0 8m17s
ebs-csi-node-7htwr 3/3 Running 0 8m17s
ebs-csi-node-pchk5 3/3 Running 0 8m17s
ebs-csi-node-skmhb 3/3 Running 0 8m17s
external-dns-556896ffd4-tbfdd 1/1 Running 0 3h41m
kube-ops-view-558d87b798-lktv4 1/1 Running 0 3h40m
kube-proxy-992qw 1/1 Running 0 4h50m
kube-proxy-lzsq2 1/1 Running 0 4h50m
kube-proxy-rmznq 1/1 Running 0 4h50m
11
# Create a gp3 storage class
kubectl get sc
---------------
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 5h
local-path rancher.io/local-path Delete WaitForFirstConsumer false
12
cat <<EOT > gp3-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3
allowVolumeExpansion: true
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  allowAutoIOPSPerGBIncrease: 'true'
  encrypted: 'true'
  #fsType: ext4 # default is ext4; can be changed to xfs, etc.
EOT
// allowAutoIOPSPerGBIncrease: 'true' = automatic IOPS scaling
// encrypted: 'true' = EBS volume encryption
// allowVolumeExpansion: true = allow online volume expansion
More details on the parameters:
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md
13
Let's monitor.
http://kubeopsview.masterseo1.link:8080/
kubectl apply -f gp3-sc.yaml
14
kubectl get sc
// gp2 is the default
// Let's use gp3.
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 5h8m
gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 17s
local-path rancher.io/local-path Delete WaitForFirstConsumer false
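If gp3 should become the cluster default (an optional step, not done in this walkthrough), the standard mechanism is the is-default-class annotation; a sketch, with the kubectl calls left commented:

```shell
# Sketch: mark gp3 as the default StorageClass via the standard
# storageclass.kubernetes.io/is-default-class annotation.
# Optional; the apply lines are left commented.
cat <<EOT > gp3-default-patch.json
{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}
EOT
# kubectl patch storageclass gp3 --patch-file gp3-default-patch.json
# kubectl patch storageclass gp2 -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```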
15
kubectl describe sc gp3 | grep Parameters
1
Check the EBS volumes in the console
2
Check with the CLI
# Check the worker nodes' EBS volumes: filter by tag (key/value) - link
aws ec2 describe-volumes --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --output table
aws ec2 describe-volumes --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --query "Volumes[*].Attachments" | jq
aws ec2 describe-volumes --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --query "Volumes[*].{ID:VolumeId,Tag:Tags}" | jq
aws ec2 describe-volumes --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --query "Volumes[].[VolumeId, VolumeType, Attachments[].[InstanceId, State][]][]" | jq
aws ec2 describe-volumes --filters Name=tag:Name,Values=$CLUSTER_NAME-ng1-Node --query "Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}" | jq
# Check the EBS volumes attached to pods from the worker nodes
aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --output table
aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --query "Volumes[*].{ID:VolumeId,Tag:Tags}" | jq
aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --query "Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}" | jq
3
(Monitor in a separate terminal)
# Keep monitoring only the EBS volumes attached to pods on the worker nodes
while true; do aws ec2 describe-volumes --filters Name=tag:ebs.csi.aws.com/cluster,Values=true --query "Volumes[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}" --output text; date; sleep 1; done
4
# Create a PVC
cat <<EOT > awsebs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  storageClassName: gp3
EOT
kubectl apply -f awsebs-pvc.yaml
kubectl get pvc,pv
5
Create a pod and attach the volume.
Check in the console.
Creating the pod provisions the EBS volume.
It is created as gp3.
# Create the pod
cat <<EOT > awsebs-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  terminationGracePeriodSeconds: 3
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo \$(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-claim
EOT
kubectl apply -f awsebs-pod.yaml
6
Monitoring result?
The pod is created, and an EBS volume gets created!
Also check in the console: EC2 > EBS
Wed May 10 23:14:54 KST 2023
Wed May 10 23:14:57 KST 2023
Wed May 10 23:14:59 KST 2023
None None vol-06940ec15a1d5c0ab gp3
Wed May 10 23:15:02 KST 2023
None None vol-06940ec15a1d5c0ab gp3
Wed May 10 23:15:05 KST 2023
i-05e5e1b64d625eb95 attached vol-06940ec15a1d5c0ab
A 4GiB volume was created.
The pod is using the gp3 volume!
7
Also check in the console.
8
# Check the PVC and pod
kubectl get pvc,pv,pod
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ebs-claim Bound pvc-f7a0ca05-1098-49c2-bcfd-bfc751d2ca21 4Gi RWO gp3 31m
persistentvolumeclaim/localpath-claim Bound pvc-c37e1be5-de1a-4e4b-a7df-4984a60f6ee3 1Gi RWO local-path 3h27m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-c37e1be5-de1a-4e4b-a7df-4984a60f6ee3 1Gi RWO Delete Bound default/localpath-claim local-path 3h23m
persistentvolume/pvc-f7a0ca05-1098-49c2-bcfd-bfc751d2ca21 4Gi RWO Delete Bound default/ebs-claim gp3 3m25s
NAME READY STATUS RESTARTS AGE
pod/app 1/1 Running 0 3m29s
pod/nginx-deployment-6fb79bc456-lbhp2 1/1 Running 0 4h9m
kubectl get VolumeAttachment
6
Check with commands.
# Check the details of the added EBS volume
aws ec2 describe-volumes --volume-ids $(kubectl get pv -o jsonpath="{.items[0].spec.csi.volumeHandle}") | jq
7
# Inspect the PV in detail: what does the nodeAffinity mean?
kubectl get pv -o yaml | yh
...
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.ebs.csi.aws.com/zone
operator: In
values:
- ap-northeast-2b
...
8
View the nodes' label info
kubectl get node --label-columns=topology.ebs.csi.aws.com/zone,topology.kubernetes.io/zone
-----------------
NAME STATUS ROLES AGE VERSION ZONE ZONE
ip-192-168-1-140.ap-northeast-2.compute.internal Ready <none> 5h9m v1.24.11-eks-a59e1f0 ap-northeast-2a ap-northeast-2a
ip-192-168-2-38.ap-northeast-2.compute.internal Ready <none> 5h9m v1.24.11-eks-a59e1f0 ap-northeast-2b ap-northeast-2b
ip-192-168-2-85.ap-northeast-2.compute.internal Ready <none> 5h9m v1.24.11-eks-a59e1f0 ap-northeast-2b ap-northeast-2b
kubectl describe node | more
9
# Confirm the file contents keep being appended
kubectl exec app -- tail -f /data/out.txt
10
kubectl df-pv
# The command below can take a while to return
# Shows capacity and usage of the PVs that pods are using
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl df-pv
PV NAME PVC NAME NAMESPACE NODE NAME POD NAME VOLUME MOUNT NAME SIZE USED AVAILABLE %USED IUSED IFREE %IUSED
pvc-f7a0ca05-1098-49c2-bcfd-bfc751d2ca21 ebs-claim default ip-192-168-2-85.ap-northeast-2.compute.internal app persistent-storage 3Gi 28Ki 3Gi 0.00 12 262132 0.00
11
## Check volume info from inside the pod
kubectl exec -it app -- sh -c 'df -hT --type=overlay'
-------------
Filesystem Type Size Used Avail Use% Mounted on
overlay overlay 30G 3.6G 27G 12% /
// Shows 30G
kubectl exec -it app -- sh -c 'df -hT --type=ext4'
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme1n1 ext4 3.9G 16M 3.8G 1% /data
// /data is backed by the EBS volume
If logs grow too large, let's grow the volume~
1
Only increasing is possible.
Shrinking is not.
Let's grow it online~
The storage a pod uses can be expanded while online.
2
# Grow the current PV from 4G to 10G: change .spec.resources.requests.storage from 4Gi to 10Gi
kubectl get pvc ebs-claim -o jsonpath={.spec.resources.requests.storage} ; echo
---------
4Gi
kubectl get pvc ebs-claim -o jsonpath={.status.capacity.storage} ; echo
3
Grow it with patch.
kubectl patch pvc ebs-claim -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
kubectl patch pvc ebs-claim -p '{"status":{"capacity":{"storage":"10Gi"}}}'
# status is updated to 10Gi automatically once the EBS expansion triggered by the command above completes
4
# Verify the increase
The resize has to propagate, so the new size can take a moment to show up.
kubectl exec -it app -- sh -c 'df -hT --type=ext4'
---------------------------------------
Filesystem Type Size Used Avail Use% Mounted on
/dev/nvme2n1 ext4 9.8G 28K 9.7G 1% /data
// Grown to 10 GiB.
5
Also check in the console.
Check under EBS.
6
kubectl df-pv
Check again
aws ec2 describe-volumes --volume-ids $(kubectl get pv -o jsonpath="{.items[0].spec.csi.volumeHandle}") | jq
7
Deleting the PVC deletes the PV along with it.
kubectl delete pod app & kubectl delete pvc ebs-claim
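The PV disappears with the PVC because the gp3 class uses the default reclaimPolicy: Delete. If the EBS volume should survive PVC deletion, a class with Retain can be used instead (a sketch; the gp3-retain name is made up for illustration):

```shell
# Sketch: same gp3 class but with reclaimPolicy: Retain, so deleting the
# PVC leaves the PV (and the underlying EBS volume) behind.
# Class name is an illustrative assumption.
cat <<EOT > gp3-retain-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp3-retain
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: gp3
EOT
# kubectl apply -f gp3-retain-sc.yaml
```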
8
Monitoring result - the gp3 volume is deleted
i-05e5e1b64d625eb95 detaching vol-06940ec15a1d5c0ab gp3
Wed May 10 23:27:15 KST 2023
i-05e5e1b64d625eb95 detaching vol-06940ec15a1d5c0ab gp3
Wed May 10 23:27:17 KST 2023
None None vol-06940ec15a1d5c0ab gp3
Wed May 10 23:27:19 KST 2023
Wed May 10 23:27:21 KST 2023
Wed May 10 23:27:23 KST 2023
9
The parameter settings can be changed; see the parameter link below.
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/parameters.md
Snapshots are used when the EBS volume a Pod writes to holds important data.
1
Let's create a snapshot of the EBS volume.
Uses the volume snapshot controller.
https://github.com/kubernetes-csi/external-snapshotter
https://kubernetes.io/docs/concepts/storage/volume-snapshots/
2
# (Note) snapshot functionality looks likely to be bundled into the EBS CSI Driver
kubectl describe pod -n kube-system -l app=ebs-csi-controller
3
To proceed, the CRDs must be installed manually first.
// AWS improving this?
# Install Snapshot CRDs
curl -s -O https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
curl -s -O https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
curl -s -O https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl apply -f snapshot.storage.k8s.io_volumesnapshots.yaml,snapshot.storage.k8s.io_volumesnapshotclasses.yaml,snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl get crd | grep snapshot
---------
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl get crd | grep snapshot
volumesnapshotclasses.snapshot.storage.k8s.io 2023-05-10T14:35:04Z
volumesnapshotcontents.snapshot.storage.k8s.io 2023-05-10T14:35:04Z
volumesnapshots.snapshot.storage.k8s.io 2023-05-10T14:35:04Z
4
kubectl api-resources | grep snapshot
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl api-resources | grep snapshot
volumesnapshotclasses vsclass,vsclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents vsc,vscs snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots vs snapshot.storage.k8s.io/v1 true VolumeSnapshot
// Three resources were added to the API resources!
5
// Install the snapshot controller
# Install Common Snapshot Controller
curl -s -O https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml
curl -s -O https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml
kubectl apply -f rbac-snapshot-controller.yaml,setup-snapshot-controller.yaml
kubectl get deploy -n kube-system snapshot-controller
kubectl get pod -n kube-system -l app=snapshot-controller
6
Let's create the snapshot class!
# Install the Snapshotclass = create the volumesnapshotclasses resource.
curl -s -O https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/master/examples/kubernetes/snapshot/manifests/classes/snapshotclass.yaml
kubectl apply -f snapshotclass.yaml
kubectl get vsclass # 혹은 volumesnapshotclasses
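The downloaded snapshotclass.yaml boils down to roughly the following sketch: it ties the csi-aws-vsc class name (the name the example manifests use) to the EBS CSI driver.

```shell
# Sketch of the VolumeSnapshotClass the example installs; it links the
# csi-aws-vsc class name to the ebs.csi.aws.com driver.
cat <<EOT > snapshotclass-sketch.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
EOT
```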
1
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot
2
# Create the PVC
kubectl apply -f awsebs-pvc.yaml
# Create the pod
kubectl apply -f awsebs-pod.yaml
# Confirm the file keeps being appended to
kubectl exec app -- tail -f /data/out.txt
----------------
Wed May 10 14:39:08 UTC 2023
Wed May 10 14:39:13 UTC 2023
Wed May 10 14:39:18 UTC 2023
Wed May 10 14:39:23 UTC 2023
Wed May 10 14:39:28 UTC 2023
Wed May 10 14:39:33 UTC 2023
3
# Create a VolumeSnapshot
Create a VolumeSnapshot referencing the PersistentVolumeClaim name >> then check the EBS snapshot
curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/3/ebs-volume-snapshot.yaml
cat ebs-volume-snapshot.yaml | yh
----------------
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: ebs-claim
kubectl apply -f ebs-volume-snapshot.yaml
The snapshot gets created.
The volumesnapshot resource takes a snapshot of the underlying PV.
Check in the console.
4
# Check the VolumeSnapshot
Check with commands~
kubectl get volumesnapshot
----------------------
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
ebs-volume-snapshot true ebs-claim 4Gi csi-aws-vsc snapcontent-4fc0323e-18c9-4d26-a7d6-f7772d69a0d3 111s 112s
kubectl get volumesnapshot ebs-volume-snapshot -o jsonpath={.status.boundVolumeSnapshotContentName} ; echo
kubectl describe volumesnapshot.snapshot.storage.k8s.io ebs-volume-snapshot
kubectl get volumesnapshotcontents
# Check the VolumeSnapshot ID
kubectl get volumesnapshotcontents -o jsonpath='{.items[*].status.snapshotHandle}' ; echo
# Check the AWS EBS snapshot
aws ec2 describe-snapshots --owner-ids self | jq
aws ec2 describe-snapshots --owner-ids self --query 'Snapshots[]' --output table
1
# Remove the app & pvc: forcibly reproduce a failure
kubectl delete pod app && kubectl delete pvc ebs-claim
2
Check the EBS volumes in the console.
3
Restore from the snapshot?
# Restore from the snapshot into a PVC
kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/localpath-claim Bound pvc-c37e1be5-de1a-4e4b-a7df-4984a60f6ee3 1Gi RWO local-path 3h55m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-b1407ae6-25ee-47f4-9a19-58ac12e35145 4Gi RWO Delete Released default/ebs-claim gp3 7m6s
persistentvolume/pvc-c37e1be5-de1a-4e4b-a7df-4984a60f6ee3 1Gi RWO Delete Bound default/localpath-claim local-path 3h51m
4
Create the PVC
cat <<EOT > ebs-snapshot-restored-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-snapshot-restored-claim
spec:
  storageClassName: gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  dataSource:
    name: ebs-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOT
cat ebs-snapshot-restored-claim.yaml | yh
kubectl apply -f ebs-snapshot-restored-claim.yaml
5
# Verify
(takes about a minute)
kubectl get pvc,pv
------------
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/ebs-snapshot-restored-claim Pending gp3 97s
persistentvolumeclaim/localpath-claim Bound pvc-c37e1be5-de1a-4e4b-a7df-4984a60f6ee3 1Gi RWO local-path 3h57m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-c37e1be5-de1a-4e4b-a7df-4984a60f6ee3 1Gi RWO Delete Bound default/localpath-claim local-path 3h53m
6
# Create the pod
curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/3/ebs-snapshot-restored-pod.yaml
cat ebs-snapshot-restored-pod.yaml | yh
kubectl apply -f ebs-snapshot-restored-pod.yaml
7
# Confirm the saved file contents
: the records written before the pod was deleted are still there.
New records after the pod was recreated are also being saved.
kubectl exec app -- cat /data/out.txt
(masterseo@myeks:default) [root@myeks-bastion-EC2 ~]# kubectl exec app -- cat /data/out.txt
Wed May 10 14:39:08 UTC 2023
Wed May 10 14:39:13 UTC 2023
Wed May 10 14:39:18 UTC 2023
Wed May 10 14:39:23 UTC 2023
Wed May 10 14:39:28 UTC 2023
Wed May 10 14:39:33 UTC 2023
Wed May 10 14:39:38 UTC 2023
Wed May 10 14:39:43 UTC 2023
Wed May 10 14:39:48 UTC 2023
Wed May 10 14:39:53 UTC 2023
Wed May 10 14:39:58 UTC 2023
Wed May 10 14:40:03 UTC 2023
Wed May 10 14:40:08 UTC 2023
Wed May 10 14:40:13 UTC 2023
Wed May 10 14:40:18 UTC 2023
Wed May 10 14:40:23 UTC 2023
Wed May 10 14:40:28 UTC 2023
8
Reference
Backup
https://hanhorang31.github.io/post/pkos2-2-localstorage/
EBS volumes and snapshots provided by AWS can be used from Kubernetes.
Next up
https://brunch.co.kr/@topasvga/3235
https://brunch.co.kr/@topasvga/3217
Thank you.