Install a fresh cluster for this hands-on exercise.
: introduces a database access layer; can handle authentication, monitoring, and logging; reuses connections - Link
https://cloudnative-pg.io/documentation/current/connection_pooling/
# Install a new cluster: synchronous replication
cat <<EOT> mycluster2.yaml
# Example of PostgreSQL cluster
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: mycluster
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:16.0
  instances: 3
  storage:
    size: 3Gi
  postgresql:
    pg_hba:
      - host all postgres all trust
  enableSuperuserAccess: true
  minSyncReplicas: 1
  maxSyncReplicas: 2
  monitoring:
    enablePodMonitor: true
EOT
kubectl apply -f mycluster2.yaml && kubectl get pod -w
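To see how minSyncReplicas/maxSyncReplicas translate into PostgreSQL settings, you can inspect the generated synchronous_standby_names directly on the primary (a minimal sketch; the exact quorum list it returns depends on which standbys are registered at the time):
kubectl exec -it mycluster-1 -- psql -U postgres -c "show synchronous_standby_names;"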
# Check the synchronous replication status
watch kubectl cnpg status mycluster
kubectl cnpg status mycluster
Streaming Replication status
Replication Slots Enabled
Name         Sent LSN   Write LSN  Flush LSN  Replay LSN  Write Lag  Flush Lag  Replay Lag  State      Sync State  Sync Priority  Replication Slot
----         --------   ---------  ---------  ----------  ---------  ---------  ----------  -----      ----------  -------------  ----------------
mycluster-2  0/604DF70  0/604DF70  0/604DF70  0/604DF70   00:00:00   00:00:00   00:00:00    streaming  quorum      1              active
mycluster-3  0/604DF70  0/604DF70  0/604DF70  0/604DF70   00:00:00   00:00:00   00:00:00    streaming  quorum      1              active

Unmanaged Replication Slot Status
No unmanaged replication slots found

Instances status
Name         Database Size  Current LSN  Replication role  Status  QoS         Manager Version  Node
----         -------------  -----------  ----------------  ------  ---------   ---------------  ----
mycluster-1  29 MB          0/604DF70    Primary           OK      BestEffort  1.21.0           ip-192-168-2-231.ap-northeast-2.compute.internal
mycluster-2  29 MB          0/604DF70    Standby (sync)    OK      BestEffort  1.21.0           ip-192-168-3-29.ap-northeast-2.compute.internal
mycluster-3  29 MB          0/604DF70    Standby (sync)    OK      BestEffort  1.21.0           ip-192-168-1-211.ap-northeast-2.compute.internal
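A quick end-to-end check of synchronous replication: write a row on the primary, then read it back on a sync standby (a sketch; the synctest table is purely illustrative, and since sync commit waits for flush rather than apply, the row should appear on the standby almost immediately):
kubectl exec -it mycluster-1 -- psql -U postgres -c "create table synctest (c1 int); insert into synctest values (1);"
kubectl exec -it mycluster-2 -- psql -U postgres -c "select * from synctest;"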
# Install the PgBouncer pods in the same namespace as the cluster (required)
cat <<EOT> pooler.yaml
apiVersion: postgresql.cnpg.io/v1
kind: Pooler
metadata:
  name: pooler-rw
spec:
  cluster:
    name: mycluster
  instances: 3
  type: rw
  pgbouncer:
    poolMode: session
    parameters:
      max_client_conn: "1000"
      default_pool_size: "10"
EOT
kubectl apply -f pooler.yaml && kubectl get pod -w
# Verify
kubectl get pooler
kubectl get svc,ep pooler-rw
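Behind the scenes the Pooler is backed by a regular Deployment of PgBouncer pods; a quick way to see it (assuming the Deployment is named after the Pooler, as the pod labels used at the end of this section suggest):
kubectl get deploy pooler-rw -owide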
PgBouncer can also perform authentication.
# Superuser account password
kubectl get secrets mycluster-superuser -o jsonpath={.data.password} | base64 -d ; echo
nhp8ymj6I7lSQcUk08FJtprwJzRR0ZojdCdx4sQbjjKW61JtrWRrMAioqI1xmzWz
# Connection test: the pooler's authentication settings are applied! Does the backing pod change on repeated connections?
for ((i=1; i<=3; i++)); do PODNAME=myclient$i VERSION=15.3.0 envsubst < myclient.yaml | kubectl apply -f - ; done
kubectl exec -it myclient1 -- psql -U postgres -h pooler-rw -p 5432 -c "select inet_server_addr();"
Password for user postgres: (enter the password)
inet_server_addr
------------------
192.168.3.200
(1 row)
...
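To answer the question above, it is easiest to repeat the connection with PGPASSWORD set so psql does not prompt each time; since pooler-rw always routes to the primary, inet_server_addr() should stay the same (a sketch; replace <password> with the secret value shown earlier):
for i in {1..5}; do kubectl exec -it myclient1 -- env PGPASSWORD=<password> psql -U postgres -h pooler-rw -p 5432 -t -c "select inet_server_addr();"; done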
# (Optional) Monitoring metrics
kubectl get pod -l cnpg.io/poolerName=pooler-rw -owide
curl <Pod IP>:9127/metrics
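For a quick look, filter the exporter output for the PgBouncer pool metrics (a sketch; the exact metric names depend on the CNPG PgBouncer exporter version):
curl -s <Pod IP>:9127/metrics | grep -i pgbouncer | head -20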
cat <<EOT> podmonitor-pooler-rw.yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pooler-rw
spec:
  selector:
    matchLabels:
      cnpg.io/poolerName: pooler-rw
  podMetricsEndpoints:
    - port: metrics
EOT
kubectl apply -f podmonitor-pooler-rw.yaml
The pooler now shows up as a Prometheus target.
You can view the metrics in Grafana!
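If the kube-prometheus-stack is installed, the new target can be checked from the Prometheus UI via a port-forward (a sketch; the namespace and service name depend on your monitoring installation):
kubectl port-forward -n monitoring svc/prometheus-operated 9090:9090
# then open http://localhost:9090/targets and look for the pooler-rw PodMonitor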
Sample YAML - Link / Cluster-Full - Link
# Delete the mycluster cluster
kubectl delete -f mycluster2.yaml
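The Pooler is an independent resource, so it must be deleted separately; afterwards, confirm that nothing is left behind:
kubectl delete -f pooler.yaml
kubectl get cluster,pooler,pod,pvc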
Deletion approach: pros (complete removal with a single one-line command), cons (the SSH session must stay open from when the deletion starts until it finishes)
eksctl delete cluster --name $CLUSTER_NAME && aws cloudformation delete-stack --stack-name $CLUSTER_NAME
This write-up is based on notes from the weekend CloudNet study.
https://gasidaseo.notion.site/gasidaseo/CloudNet-Blog-c9dfa44a27ff431dafdd2edacc8a1863
https://brunch.co.kr/@topasvga/3512