Hands-on Part 1 = 4/7
Once you have built an EKS cluster, monitor it with CloudWatch Container Insights.
Create an ALB that uses an Ingress resource.
View the cluster with kube-ops-view.
echo ${AWS_REGION}
If you need to change the region
export AWS_REGION=us-west-1
export AWS_REGION=us-west-2
US West (N. California) = us-west-1
US West (Oregon) = us-west-2
echo ${AWS_REGION}
After the cluster installation is complete
Once Container Insights is configured, you can see the cluster resources as shown below.
They do not appear by default.
1
CloudWatch > Insights > Container Insights > create a dashboard to see the resources.
At the top right, adjust the time range to 5 minutes; the default is 3 hours.
Change it to Custom (5m).
2
Click View in Map on the right.
Now, let's configure Container Insights.
Install the CloudWatch agent and Fluent Bit (the steps are summarized below).
3
# Monitor from separate terminals
# Terminal 1
watch -d kubectl get ns -A
Every 2.0s: kubectl get ns -A demo1: Wed Dec 18 17:25:49 2024
NAME STATUS AGE
default Active 35m
eks-sample-app Active 22m
kube-node-lease Active 35m
kube-public Active 35m
kube-system Active 35m
# Terminal 2
watch -d kubectl get pod,ds,svc,ep,deployment -A
4
# Terminal 3
cd
cd ~/environment
mkdir -p manifests/cloudwatch-insight && cd manifests/cloudwatch-insight
Create the namespace
kubectl create ns amazon-cloudwatch
5
kubectl get ns
NAME STATUS AGE
amazon-cloudwatch Active 8s
cert-manager Active 138m
default Active 3h14m
kube-node-lease Active 3h14m
kube-public Active 3h14m
kube-system Active 3h14m
kubens amazon-cloudwatch
watch -d kubectl get pod,ds,svc,ep,deployment
6
# Install the CloudWatch agent and Fluent Bit
# Set the variables
ClusterName=eks-demo
RegionName=${AWS_REGION}
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
# For reference: the same variables with the region hard-coded
ClusterName=eks-demo
RegionName=us-west-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
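A quick sanity check of the derived toggle values before deploying; this echo line is my own addition, not part of the quickstart.
# Sketch: print the derived Fluent Bit toggles (expect HttpServer=On and ReadFromTail=On with the values above)
echo "HttpServer=${FluentBitHttpServer} HttpPort=${FluentBitHttpPort} ReadFromHead=${FluentBitReadFromHead} ReadFromTail=${FluentBitReadFromTail}"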
7
Deploy
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' | kubectl apply -f -
8
kubectl get po -n amazon-cloudwatch
NAME READY STATUS RESTARTS AGE
cloudwatch-agent-gh6lf 0/1 ContainerCreating 0 10s
cloudwatch-agent-jxglj 0/1 ContainerCreating 0 10s
cloudwatch-agent-r2ckc 0/1 ContainerCreating 0 10s
fluent-bit-2x6g8 0/1 ContainerCreating 0 10s
fluent-bit-8mjtp 0/1 ContainerCreating 0 10s
fluent-bit-plfnv 0/1 ContainerCreating 0 10s
9
kubectl get daemonsets -n amazon-cloudwatch
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
cloudwatch-agent 3 3 3 3 3 <none> 22s
fluent-bit 3 3 3 3 3 <none> 22s
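Optionally, confirm from the CLI that the Container Insights log groups were created. This is a sketch of my own; the prefix assumes the agent's default /aws/containerinsights/<cluster-name>/ log group naming.
# Sketch: list the Container Insights log groups for this cluster
aws logs describe-log-groups --log-group-name-prefix "/aws/containerinsights/${ClusterName}" --query "logGroups[].logGroupName" --output table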
10
# Check in the console
CloudWatch > Container Insights
If you need to change the region
export AWS_REGION=us-west-1
export AWS_REGION=us-west-2
US West (N. California) = us-west-1
US West (Oregon) = us-west-2
2
echo ${AWS_REGION}
ClusterName=eks-demo
echo $ClusterName
3
Configuration: Ingress ALB ----- Service ----- Pod
4
cd ~/environment
mkdir -p manifests/alb-ingress-controller && cd manifests/alb-ingress-controller
5
Create the IAM OIDC provider
To use IAM roles for service accounts, an IAM OIDC provider must exist for the eks-demo cluster.
eksctl utils associate-iam-oidc-provider --region ${AWS_REGION} --cluster ${ClusterName} --approve
2021-06-18 00:28:40 [ℹ] will create IAM Open ID Connect provider for cluster "eks-demo" in "us-west-1"
2021-06-18 00:28:40 [✔] created IAM Open ID Connect provider for cluster "eks-demo" in "us-west-1"
6
Verify
aws eks describe-cluster --name ${ClusterName} --query "cluster.identity.oidc.issuer" --output text
https://oidc.eks.us-east-2.amazonaws.com/id/28C478AFEF60726FD91F80A9E7E1EC2D
7
aws iam list-open-id-connect-providers | grep 28C478AFEF60726FD91F80A9E7E1EC2D
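Instead of copy-pasting the OIDC ID, the two checks above can be combined. A small sketch; the cut field index assumes the issuer URL format shown above.
# Sketch: extract the OIDC ID from the issuer URL, then grep the IAM providers for it
OIDC_ID=$(aws eks describe-cluster --name ${ClusterName} --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep ${OIDC_ID}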
8
Add the AWS Load Balancer Controller to the cluster; cert-manager must be installed first.
# Download and apply the latest version
kubectl apply --validate=false -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
# Check for the latest version at
https://github.com/cert-manager/cert-manager/releases/
# Older version
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.3.0/cert-manager.yaml
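Before moving on, wait until the cert-manager pods are Running; the controller manifest depends on its webhooks. A quick check, assuming cert-manager was installed into its default cert-manager namespace:
# Sketch: wait for all cert-manager pods to become Ready
kubectl get pods -n cert-manager
kubectl wait --for=condition=Ready pod --all -n cert-manager --timeout=120s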
9
Download the load balancer controller manifest
# Older version (v2.1.3)
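The download command itself is missing from these notes. A sketch of what it likely was, assuming the v2_1_3_full.yaml release asset published by the aws-load-balancer-controller project; check the project's releases page for the current equivalent.
# Sketch (assumed URL): download the v2.1.3 full manifest
curl -Lo v2_1_3_full.yaml https://github.com/kubernetes-sigs/aws-load-balancer-controller/releases/download/v2.1.3/v2_1_3_full.yaml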
10
vi v2_1_3_full.yaml
spec:
  containers:
  - args:
    - --cluster-name=${ClusterName}
or
spec:
  containers:
  - args:
    - --cluster-name=eks-demo   # enter the name of the cluster you created
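If you prefer not to open vi, the same edit can be done with sed. A sketch; it rewrites whatever value follows --cluster-name= in the downloaded manifest, so verify the result with the grep.
# Sketch: set the cluster name in the manifest and confirm the change
sed -i 's/--cluster-name=.*/--cluster-name=eks-demo/' v2_1_3_full.yaml
grep -- '--cluster-name' v2_1_3_full.yaml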
11
Reference
https://www.suse.com/ko-kr/support/kb/doc/?id=000020805
Deploy // errors occur here because the Kubernetes API versions have changed
kubectl apply -f v2_1_3_full.yaml
[root@ip-172-31-40-122 alb-ingress-controller]# kubectl apply -f v2_1_3_full.yaml
serviceaccount/aws-load-balancer-controller unchanged
role.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-role unchanged
clusterrole.rbac.authorization.k8s.io/aws-load-balancer-controller-role configured
rolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-leader-election-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/aws-load-balancer-controller-rolebinding unchanged
service/aws-load-balancer-webhook-service unchanged
deployment.apps/aws-load-balancer-controller unchanged
certificate.cert-manager.io/aws-load-balancer-serving-cert unchanged
issuer.cert-manager.io/aws-load-balancer-selfsigned-issuer unchanged
resource mapping not found for name: "targetgroupbindings.elbv2.k8s.aws" namespace: "" from "v2_1_3_full.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "aws-load-balancer-webhook" namespace: "" from "v2_1_3_full.yaml": no matches for kind "MutatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
ensure CRDs are installed first
resource mapping not found for name: "aws-load-balancer-webhook" namespace: "" from "v2_1_3_full.yaml": no matches for kind "ValidatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
ensure CRDs are installed first
[root@ip-172-31-40-122 alb-ingress-controller]#
12
Verify
kubectl get deployment -n kube-system aws-load-balancer-controller
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 1/1 1 1 49s
13
Check the service account
kubectl get sa aws-load-balancer-controller -n kube-system -o yaml
14
Check the logs
kubectl logs -n kube-system $(kubectl get po -n kube-system | egrep -o "aws-load-balancer[a-zA-Z0-9-]+")
15
Check the pod details
ALBPOD=$(kubectl get pod -n kube-system | egrep -o "aws-load-balancer[a-zA-Z0-9-]+")
kubectl describe pod -n kube-system ${ALBPOD}
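One more quick check: read the image tag off the Deployment to see which controller version is actually running (plain kubectl, no assumptions beyond the deployment name used above).
# Sketch: print the running controller image; the tag is the controller version
kubectl get deployment -n kube-system aws-load-balancer-controller -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'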
cd ~/environment
Prerequisites
The cluster installation must be complete.
The ALB controller installation must be complete.
Install kube-ops-view with Helm.
Helm is the Kubernetes package manager; kube-ops-view provides a visual overview of the cluster.
Work from Cloud9.
1
Install the Helm CLI tool
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
Check the current version
helm version --short
2
Add the stable chart repository
helm repo add stable https://charts.helm.sh/stable
3
List the charts (optional)
helm search repo stable
4
helm completion bash >> ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
source <(helm completion bash)
# Method 1 - download from git and install
1
git clone https://codeberg.org/hjacobs/kube-ops-view.git
cd kube-ops-view/
kubectl apply -k deploy
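A quick check that the kustomize apply brought everything up; the resource names assume the upstream manifests keep the name kube-ops-view (the Service name is confirmed by the edit step below).
# Sketch: confirm the kube-ops-view Deployment and Service exist
kubectl get deploy,svc kube-ops-view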
2
To access kube-ops-view from outside the cluster, change the Service type to LoadBalancer.
kubectl edit svc kube-ops-view
apiVersion: v1
kind: Service
metadata:
  annotations:
  name: kube-ops-view
spec:
  ....
  sessionAffinity: None
  type: LoadBalancer
status:
(takes about 3 minutes)
# kube-ops-view access URL
kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }'
2
# Method 2
Install with Helm
kubens kube-system
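The install command itself is not in these notes. A sketch of the command commonly used with the (now deprecated) stable repo added earlier, assuming the kube-ops-view chart and its service.type / rbac.create values are still served.
# Sketch (assumed chart and values): install kube-ops-view behind a LoadBalancer Service
helm install kube-ops-view stable/kube-ops-view --set service.type=LoadBalancer --set rbac.create=True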
(takes about 5 minutes)
2
kubens kube-system
Context "i-07fd28704cd927cb4@eks-demo.ap-northeast-2.eksctl.io" modified.
Active namespace is "kube-system".
k get svc
kube-ops-view LoadBalancer 10.100.18.156 a2f43379b1fb0440db35af6dc4a29f2b-1377535766.ap-northeast-2.elb.amazonaws.com 8080:30385/TCP 2m1s
[root@ip-172-31-40-122 alb-ingress-controller]#
Access on port 8080
a2f43379b1fb0440db35af6dc4a29f2b-1377535766.ap-northeast-2.elb.amazonaws.com:8080
Log in to the AWS console > EC2 > Load Balancers > confirm that the load balancer was created > check the DNS name.
Open it in a web browser.
3
Figure description
The 3 pods at the top are the cloudwatch-agent.
The 9 at the bottom are kube-system pods (CoreDNS, kube-proxy, etc.).
4
References
https://codeberg.org/hjacobs/kube-ops-view
1
kubens default
kubectl run myweb1 --image nginx
kubectl run myweb2 --image nginx
kubectl run myweb3 --image nginx
kubectl delete pod myweb1
kubectl delete pod myweb2
kubectl delete pod myweb3
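While creating and deleting the test pods, you can also follow them from one of the watch terminals, using the same watch pattern as earlier in this lab.
# Sketch: watch the myweb pods come and go in the default namespace
watch -d kubectl get pod -n default -o wide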
2
Confirm that the containers were added.
Three pods in the default namespace (shown in yellow) are added.
1
Delete the EKS cluster (set AWS_REGION to the region where the cluster was created)
export AWS_REGION=ap-south-1
eksctl delete cluster --name=eks-demo
Change the region if needed
export AWS_REGION=us-east-1
export AWS_REGION=us-east-2
export AWS_REGION=us-west-1
export AWS_REGION=us-west-2
US East (N. Virginia) = us-east-1
US East (Ohio) = us-east-2
US West (N. California) = us-west-1
US West (Oregon) = us-west-2
echo ${AWS_REGION}
or
If the EKS deletion fails:
delete the Auto Scaling group in EC2, then terminate the instances under EC2 > Instances,
then delete the cluster from the EKS console.
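If the delete keeps hanging, it can also help to look at the CloudFormation stacks eksctl created. A sketch; the name filter assumes eksctl's usual eksctl-<cluster-name>- stack prefix.
# Sketch: list eksctl CloudFormation stacks for this cluster and their status
aws cloudformation list-stacks --query "StackSummaries[?contains(StackName, 'eksctl-eks-demo')].[StackName,StackStatus]" --output table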
https://brunch.co.kr/@topasvga/1654