<1> Probe overview
<2> completed.yaml - sleeps 5 seconds, returns exit code 0 (success), then terminates
<3> onfailure.yaml - sleeps 5 seconds, returns exit code 1 (error), then terminates
<4> onfailure.yaml: restartPolicy: OnFailure - sleeps 5 seconds, returns exit code 0 (success), then terminates
<5> livenessprobe.yaml
<6> ReadinessProbe
<1> Probe overview
1
Liveness probe
Checks whether the application inside the container is alive.
If the check fails, the container is restarted according to its restartPolicy.
2
Readiness probe
Checks whether the application inside the container is ready to handle user requests.
If the check fails, the container is removed from the Service's routing targets.
In Kubernetes, the Service plays the load-balancer role here.
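A probe can use one of three handler types. A minimal sketch of each, using the standard Pod spec field names (the path, port, and command values below are placeholders, not values from this lab):

```yaml
livenessProbe:        # readinessProbe: accepts the same handlers
  httpGet:            # healthy if the HTTP status code is 200-399
    port: 80
    path: /index.html
# exec:               # healthy if the command exits with code 0
#   command: ["cat", "/tmp/healthy"]
# tcpSocket:          # healthy if a TCP connection can be opened
#   port: 80
```

Only one handler may be set per probe; the examples in this document all use httpGet.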
<2> completed.yaml - sleeps 5 seconds, returns exit code 0 (success), then terminates
1
topasvga@cloudshell:~ (ap-seoul-1)$ cat << EOF > completed.yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: completed-pod
> spec:
>   containers:
>   - name: completed-pod
>     image: busybox
>     command: ["sh"]
>     args: ["-c", "sleep 5 && exit 0"]
> EOF
2
Create and monitor
topasvga@cloudshell:~ (ap-seoul-1)$ watch -d "kubectl describe pod completed-pod | grep Events -A 12"
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl apply -f completed.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS      AGE
completed-pod   0/1     ContainerCreating   0             2s
completed-pod   1/1     Running             0             4s
completed-pod   0/1     Completed           0             9s
completed-pod   1/1     Running             1 (4s ago)    12s
completed-pod   0/1     Completed           1 (9s ago)    17s
completed-pod   0/1     CrashLoopBackOff    1 (12s ago)   29s
completed-pod   1/1     Running             2 (15s ago)   32s
completed-pod   0/1     Completed           2 (20s ago)   37s
completed-pod   0/1     CrashLoopBackOff    2 (11s ago)   48s
completed-pod   1/1     Running             3 (27s ago)   64s
completed-pod   0/1     Completed           3 (32s ago)   69s
3
Check the restartPolicy
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get pod completed-pod -o yaml | grep restartPolicy
restartPolicy: Always
// The restartPolicy is Always, so even though the container returned Completed (0), it is restarted back to Running and loops indefinitely.
4
Flow
Running > Completed > restarted because restartPolicy is Always > CrashLoopBackOff >
Running > Completed > restarted because restartPolicy is Always > CrashLoopBackOff > ...
5
Delete
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl delete pod --all
pod "completed-pod" deleted
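The exit-code behavior can be reproduced locally with plain sh, no cluster needed; `$?` holds the last command's exit code, which is the value the kubelet inspects:

```shell
# Run the same command the Pod runs, then inspect its exit code.
sh -c "sleep 1 && exit 0"
echo "exit code: $?"    # prints "exit code: 0" - a normal termination
# With restartPolicy: Always the kubelet restarts the container even on exit 0,
# which is why the Pod above keeps cycling through Running/Completed.
```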
<3> onfailure.yaml - sleeps 5 seconds, returns exit code 1 (error), then terminates
The normal exit code is 0.
This time the container sleeps 5 seconds and then returns exit code 1,
simulating an abnormal termination.
1
topasvga@cloudshell:~ (ap-seoul-1)$ cat << EOF > onfailure.yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: completed-pod
> spec:
>   restartPolicy: OnFailure
>   containers:
>   - name: completed-pod
>     image: busybox
>     command: ["sh"]
>     args: ["-c", "sleep 5 && exit 1"]
> EOF
2
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl apply -f onfailure.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS      AGE
completed-pod   0/1     ContainerCreating   0             2s
completed-pod   1/1     Running             0             4s
completed-pod   0/1     Error               0             9s
completed-pod   1/1     Running             1 (4s ago)    12s
completed-pod   0/1     Error               1 (9s ago)    17s
completed-pod   0/1     CrashLoopBackOff    1 (15s ago)   31s
completed-pod   1/1     Running             2 (18s ago)   34s
completed-pod   0/1     Error               2 (23s ago)   39s
Repeats indefinitely.
Flow
Running > Error (abnormal exit 1) > restarted because restartPolicy: OnFailure >
Running > Error (abnormal exit 1) > restarted because restartPolicy: OnFailure >
Running > Error (abnormal exit 1) > restarted because restartPolicy: OnFailure > ...
3
Check the restartPolicy
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get pod completed-pod -o yaml | grep restartPolicy
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"completed-pod","namespace":"default"},"spec":{"containers":[{"args":["-c","sleep 5 \u0026\u0026 exit 1"],"command":["sh"],"image":"busybox","name":"completed-pod"}],"restartPolicy":"OnFailure"}}
restartPolicy: OnFailure
// Running -> Error (exit 1). Repeats indefinitely.
4
topasvga@cloudshell:~ (ap-seoul-1)$ watch -d "kubectl describe pod completed-pod | grep Events -A 12"
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl delete pod --all
pod "completed-pod" deleted
<4> onfailure.yaml: restartPolicy: OnFailure - sleeps 5 seconds, returns exit code 0 (success), then terminates
1
topasvga@cloudshell:~ (ap-seoul-1)$ cat << EOF > onfailure.yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: completed-pod
> spec:
>   restartPolicy: OnFailure
>   containers:
>   - name: completed-pod
>     image: busybox
>     command: ["sh"]
>     args: ["-c", "sleep 5 && exit 0"]
> EOF
2
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl apply -f onfailure.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS   AGE
completed-pod   0/1     ContainerCreating   0          2s
completed-pod   1/1     Running             0          3s
completed-pod   0/1     Completed           0          8s
completed-pod   0/1     Completed           0          10s
// The container exited normally (0), so the Pod ends up Completed.
Because restartPolicy is OnFailure, it stays Completed.
OnFailure means: restart only on abnormal termination.
Exit code 0 is a normal termination, so the container is not restarted.
topasvga@cloudshell:~ (ap-seoul-1)$ watch -d "kubectl describe pod completed-pod | grep Events -A 12"
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl delete pod --all
pod "completed-pod" deleted
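The OnFailure rule can be sketched locally as a small shell function (restart_on_failure is a hypothetical name used only for this illustration; the real decision is made by the kubelet):

```shell
# Mimics the kubelet's OnFailure rule: restart only on a nonzero exit code.
restart_on_failure() {
  sh -c "$1"
  code=$?
  if [ "$code" -ne 0 ]; then
    echo "exit $code -> restart"
  else
    echo "exit $code -> no restart (Completed)"
  fi
}

restart_on_failure "exit 0"   # prints: exit 0 -> no restart (Completed)
restart_on_failure "exit 1"   # prints: exit 1 -> restart
```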
<5> livenessprobe.yaml
Checks whether the application inside the container is alive.
If the check fails, the container is restarted according to its restartPolicy.
1
topasvga@cloudshell:~ (ap-seoul-1)$ cat << EOF > livenessprobe.yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: livenessprobe
> spec:
>   containers:
>   - name: livenessprobe
>     image: nginx
>     livenessProbe:
>       httpGet:
>         port: 80
>         path: /index.html
> EOF
2
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl apply -f livenessprobe.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
pod/livenessprobe created
3
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl describe pod livenessprobe | grep Liveness
Liveness: http-get http://:80/index.html delay=0s timeout=1s period=10s #success=1 #failure=3
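The defaults shown above (delay=0s timeout=1s period=10s #success=1 #failure=3) map to probe fields that can be tuned in the Pod spec. A sketch with the standard field names (the values here are illustrative, not recommendations):

```yaml
livenessProbe:
  httpGet:
    port: 80
    path: /index.html
  initialDelaySeconds: 5   # wait before the first check (delay)
  timeoutSeconds: 1        # per-check timeout (timeout)
  periodSeconds: 10        # interval between checks (period)
  successThreshold: 1      # consecutive successes to count as healthy (#success)
  failureThreshold: 3      # consecutive failures before acting (#failure)
```

With the defaults, a dead endpoint is detected after roughly failureThreshold x periodSeconds = 30 seconds, which matches the timeline in the test below.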
4
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl logs livenessprobe -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
:
2023/02/21 02:53:27 [notice] 1#1: nginx/1.23.3
2023/02/21 02:53:27 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/02/21 02:53:27 [notice] 1#1: OS: Linux 5.4.17-2136.314.6.2.el8uek.x86_64
2023/02/21 02:53:27 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/02/21 02:53:27 [notice] 1#1: start worker processes
2023/02/21 02:53:27 [notice] 1#1: start worker process 29
2023/02/21 02:53:27 [notice] 1#1: start worker process 30
10.244.0.1 - - [21/Feb/2023:02:53:34 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:53:44 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:53:54 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:04 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:14 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:24 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:34 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
// The probe is checking /index.html every 10 seconds.
5
Test
Delete index.html, then watch the logs
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl exec livenessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs livenessprobe -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
:
10.244.0.1 - - [21/Feb/2023:02:54:24 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:34 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:44 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:54:54 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
2023/02/21 02:55:04 [error] 30#30: *10 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.244.0.1, server: localhost, request: "GET /index.html HTTP/1.1", host: "10.244.0.27:80"
10.244.0.1 - - [21/Feb/2023:02:55:04 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:55:14 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.25" "-"
2023/02/21 02:55:14 [error] 30#30: *11 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.244.0.1, server: localhost, request: "GET /index.html HTTP/1.1", host: "10.244.0.27:80"
2023/02/21 02:55:24 [error] 30#30: *12 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 10.244.0.1, server: localhost, request: "GET /index.html HTTP/1.1", host: "10.244.0.27:80"
10.244.0.1 - - [21/Feb/2023:02:55:24 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.25" "-"
2023/02/21 02:55:24 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2023/02/21 02:55:24 [notice] 29#29: gracefully shutting down
2023/02/21 02:55:24 [notice] 30#30: gracefully shutting down
2023/02/21 02:55:24 [notice] 29#29: exiting
2023/02/21 02:55:24 [notice] 30#30: exiting
2023/02/21 02:55:24 [notice] 29#29: exit
2023/02/21 02:55:24 [notice] 30#30: exit
2023/02/21 02:55:24 [notice] 1#1: signal 17 (SIGCHLD) received from 30
2023/02/21 02:55:24 [notice] 1#1: worker process 29 exited with code 0
2023/02/21 02:55:24 [notice] 1#1: worker process 30 exited with code 0
2023/02/21 02:55:24 [notice] 1#1: exit
// After /index.html fails 3 times in a row, the container is restarted.
The container is recreated from the image and started again.
6
Once recreated, the container is healthy again.
The fresh container image contains index.html, so the probe succeeds.
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl logs livenessprobe -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/02/21 02:55:27 [notice] 1#1: using the "epoll" event method
2023/02/21 02:55:27 [notice] 1#1: nginx/1.23.3
2023/02/21 02:55:27 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/02/21 02:55:27 [notice] 1#1: OS: Linux 5.4.17-2136.314.6.2.el8uek.x86_64
2023/02/21 02:55:27 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/02/21 02:55:27 [notice] 1#1: start worker processes
2023/02/21 02:55:27 [notice] 1#1: start worker process 29
2023/02/21 02:55:27 [notice] 1#1: start worker process 30
10.244.0.1 - - [21/Feb/2023:02:55:34 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:55:44 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:02:55:54 +0000] "GET /index.html HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
7
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get events --sort-by=.metadata.creationTimestamp -w
8
Delete the Pod
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl delete pod --all
<6> ReadinessProbe
Checks whether the application inside the container is ready to handle user requests.
If the check fails, the container is removed from the Service's routing targets.
In Kubernetes, the Service plays the load-balancer role here.
1
topasvga@cloudshell:~ (ap-seoul-1)$ cat << EOF > readinessprobe-service.yaml
> apiVersion: v1
> kind: Pod
> metadata:
>   name: readinessprobe
>   labels:
>     readinessprobe: first
> spec:
>   containers:
>   - name: readinessprobe
>     image: nginx
>     readinessProbe:
>       httpGet:
>         port: 80
>         path: /
> ---
> apiVersion: v1
> kind: Service
> metadata:
>   name: readinessprobe-service
> spec:
>   ports:
>   - name: nginx
>     port: 80
>     targetPort: 80
>   selector:
>     readinessprobe: first
>   type: ClusterIP
> EOF
2
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl apply -f readinessprobe-service.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
pod/readinessprobe created
service/readinessprobe-service created
warning: --watch or --watch-only requested, --sort-by will be ignored
LAST SEEN   TYPE      REASON      OBJECT               MESSAGE
52m         Normal    Scheduled   pod/completed-pod    Successfully assigned default/completed-pod to 10.0.10.80
50m         Normal    Pulling     pod/completed-pod    Pulling image "busybox"
:
:
38m         Normal    Pulled      pod/completed-pod    Successfully pulled image "busybox" in 2.496072185s
38m         Normal    Created     pod/completed-pod    Created container completed-pod
38m         Normal    Started     pod/completed-pod    Started container completed-pod
36m         Normal    Scheduled   pod/livenessprobe    Successfully assigned default/livenessprobe to 10.0.10.80
34m         Normal    Pulling     pod/livenessprobe    Pulling image "nginx"
34m         Normal    Started     pod/livenessprobe    Started container livenessprobe
34m         Normal    Created     pod/livenessprobe    Created container livenessprobe
36m         Normal    Pulled      pod/livenessprobe    Successfully pulled image "nginx" in 2.929686215s
34m         Warning   Unhealthy   pod/livenessprobe    Liveness probe failed: HTTP probe failed with statuscode: 404
34m         Normal    Killing     pod/livenessprobe    Container livenessprobe failed liveness probe, will be restarted
34m         Normal    Pulled      pod/livenessprobe    Successfully pulled image "nginx" in 2.793015453s
33m         Normal    Killing     pod/livenessprobe    Stopping container livenessprobe
2s          Normal    Scheduled   pod/readinessprobe   Successfully assigned default/readinessprobe to 10.0.10.80
2s          Normal    Pulling     pod/readinessprobe   Pulling image "nginx"
0s          Normal    Pulled      pod/readinessprobe   Successfully pulled image "nginx" in 3.575815811s
0s          Normal    Created     pod/readinessprobe   Created container readinessprobe
0s          Normal    Started     pod/readinessprobe   Started container readinessprobe
3
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl describe pod readinessprobe | grep Readiness
Readiness: http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
4
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl logs readinessprobe -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/02/21 03:29:54 [notice] 1#1: using the "epoll" event method
2023/02/21 03:29:54 [notice] 1#1: nginx/1.23.3
2023/02/21 03:29:54 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/02/21 03:29:54 [notice] 1#1: OS: Linux 5.4.17-2136.314.6.2.el8uek.x86_64
2023/02/21 03:29:54 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/02/21 03:29:54 [notice] 1#1: start worker processes
2023/02/21 03:29:54 [notice] 1#1: start worker process 29
2023/02/21 03:29:54 [notice] 1#1: start worker process 30
10.244.0.1 - - [21/Feb/2023:03:29:55 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:00 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:10 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:20 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
5
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get endpoints readinessprobe-service
NAME                     ENDPOINTS        AGE
readinessprobe-service   10.244.0.28:80   45s
6
topasvga@cloudshell:~ (ap-seoul-1)$ curl 10.244.0.28
7
Delete index.html so the readiness check fails, then observe again
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl exec readinessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs readinessprobe -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/02/21 03:29:54 [notice] 1#1: using the "epoll" event method
2023/02/21 03:29:54 [notice] 1#1: nginx/1.23.3
2023/02/21 03:29:54 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2023/02/21 03:29:54 [notice] 1#1: OS: Linux 5.4.17-2136.314.6.2.el8uek.x86_64
2023/02/21 03:29:54 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/02/21 03:29:54 [notice] 1#1: start worker processes
2023/02/21 03:29:54 [notice] 1#1: start worker process 29
2023/02/21 03:29:54 [notice] 1#1: start worker process 30
10.244.0.1 - - [21/Feb/2023:03:29:55 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:00 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:10 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:20 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:30 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:40 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:30:50 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:31:00 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
10.244.0.1 - - [21/Feb/2023:03:31:10 +0000] "GET / HTTP/1.1" 200 615 "-" "kube-probe/1.25" "-"
2023/02/21 03:31:20 [error] 29#29: *10 directory index of "/usr/share/nginx/html/" is forbidden, client: 10.244.0.1, server: localhost, request: "GET / HTTP/1.1", host: "10.244.0.28:80"
10.244.0.1 - - [21/Feb/2023:03:31:20 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.25" "-"
8
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
readinessprobe   1/1     Running   0          99s
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
readinessprobe   0/1     Running   0          113s
// READY drops to 0/1, but the container is NOT restarted.
9
The Service object itself is unchanged; EXTERNAL-IP is <none> simply because this is a ClusterIP Service.
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE    SELECTOR
readinessprobe-service   ClusterIP   10.96.59.15   <none>        80/TCP    2m7s   readinessprobe=first
10
When the readiness check fails, the ENDPOINTS disappear.
topasvga@cloudshell:~ (ap-seoul-1)$ kubectl get endpoints readinessprobe-service
NAME                     ENDPOINTS   AGE
readinessprobe-service               2m21s
The Pod is removed from the Service's routing.
11
Delete
kubectl delete pod --all
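In practice a container often declares both probes together: liveness to restart a dead process, readiness to take it out of the Service while it cannot serve. A minimal sketch combining the two patterns from this lab (the name "web" and its label are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:        # failure -> container restarted per restartPolicy
      httpGet:
        port: 80
        path: /
    readinessProbe:       # failure -> removed from Service endpoints only
      httpGet:
        port: 80
        path: /
```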
Thank you.