CI/CD Study Week 4: ArgoCD 1/3 (2)
This post summarizes Week 4 of the CI/CD study run by Gasida, together with material from the book 예제로 배우는 ArgoCD (Learning ArgoCD by Example).
3. Operating ArgoCD
Setting Up the Lab Environment
A new Git repository needs to be added first.
# Deploy a kind k8s cluster
kind create cluster --name myk8s --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  - containerPort: 30003
    hostPort: 30003
- role: worker
- role: worker
- role: worker
EOF
# Clone the Git repository
git clone https://github.com/ymir0804/my-sample-app.git
cd my-sample-app
mkdir resources
# Write and apply the manifests
cat << EOF > resources/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
EOF
kubectl apply -f resources/namespace.yaml
wget https://raw.githubusercontent.com/argoproj/argo-cd/refs/heads/master/manifests/ha/install.yaml
mv install.yaml resources/
kubectl apply -f resources/install.yaml -n argocd
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d ; echo
# jfc9E95-x3CHSh7X
# Run port-forward in a new terminal
kubectl port-forward svc/argocd-server -n argocd 8080:80
# Open the web UI
wslview https://127.0.0.1:8080
# Commit and push to the remote repository
git add . && git commit -m "Deploy Argo CD " && git push -u origin main
ArgoCD HA Mode
The core of ArgoCD HA mode is running ArgoCD's components (Controller, Repo Server, Server) and Redis with redundancy.
Redis uses Sentinel + HAProxy, so failover happens automatically on failure; the Controller uses leader election, keeping only one instance active while the rest stay on standby.
The Repo Server and Server are scaled out horizontally to spread the load.
All components watch the ArgoCD CRDs and carry out GitOps automatically.
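This topology can be inspected on the cluster with `kubectl -n argocd get statefulset,deployment` (cluster-dependent, so not shown here). The replica counts in the comments below are what the stock HA manifest deploys, stated as an assumption to verify on your own install; the Sentinel failover quorum, however, follows directly from the sentinel count:

```shell
# With the stock HA manifest (assumption): redis-ha-server runs 3 replicas,
# each with a Sentinel sidecar; repo-server, server, and haproxy run multiple
# replicas; application-controller keeps 1 active instance via leader election.
# Sentinel declares the Redis master down only when a majority of sentinels
# agrees, so the quorum for N sentinels is floor(N/2) + 1:
sentinels=3
quorum=$(( sentinels / 2 + 1 ))
echo "quorum=$quorum"   # → quorum=2
```

With 3 sentinels a quorum of 2 tolerates one sentinel failure, which is why the HA manifest spreads the redis-ha-server pods across nodes.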
ArgoCD Self-Management
By creating an ArgoCD Application whose manifest points at the folder holding ArgoCD's own manifests, ArgoCD can manage itself as an Application, much like Autopilot mode.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-repo-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/ymir0804/my-sample-app.git
  password: <PAT>
  username: ymir0804
EOF
# Create the Argo CD application
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    path: resources
    repoURL: https://github.com/ymir0804/my-sample-app.git
    targetRevision: main
  syncPolicy:
    automated: {}
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
EOF
kubectl get applications.argoproj.io -n argocd -owide
# NAME     SYNC STATUS   HEALTH STATUS   REVISION                                   PROJECT
# argocd   Synced        Healthy         7cc9d4627f2a0d247f3c1061e337fc1b94d8a367   default

Figure 3.1 Verifying ArgoCD self-management
Let's delete the Network Policy entries, push the change to the remote repository, and see how the state changes.
kubectl get networkpolicies.networking.k8s.io -n argocd
# NAME                                              POD-SELECTOR                                              AGE
# argocd-application-controller-network-policy      app.kubernetes.io/name=argocd-application-controller      15m
# argocd-applicationset-controller-network-policy   app.kubernetes.io/name=argocd-applicationset-controller   15m
# argocd-dex-server-network-policy                  app.kubernetes.io/name=argocd-dex-server                  15m
# argocd-notifications-controller-network-policy    app.kubernetes.io/name=argocd-notifications-controller    15m
# argocd-redis-ha-proxy-network-policy              app.kubernetes.io/name=argocd-redis-ha-haproxy            15m
# argocd-redis-ha-server-network-policy             app.kubernetes.io/name=argocd-redis-ha                    15m
# argocd-repo-server-network-policy                 app.kubernetes.io/name=argocd-repo-server                 15m
# argocd-server-network-policy                      app.kubernetes.io/name=argocd-server                      15m
# Commit and push to the remote repository
git add . && git commit -m "Delete Network Policy Resource" && git push -u origin main
# Monitor the change
watch -d kubectl get networkpolicies.networking.k8s.io -n argocd

Figure 3.2 Sync Policy prune applied to the deleted resources
Observability in ArgoCD
Let's look at the observability information needed to operate ArgoCD.
To collect it, we will install the Prometheus Stack (Prometheus and Grafana).
# Install Prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: "15s"
    evaluationInterval: "15s"
  service:
    type: NodePort
    nodePort: 30002
grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator
  service:
    type: NodePort
    nodePort: 30003
alertmanager:
  enabled: false
defaultRules:
  create: false
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
EOT
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 75.15.1 \
-f monitor-values.yaml --create-namespace --namespace monitoring
wslview http://127.0.0.1:30002
# Create the ServiceMonitors
cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-applicationset-controller-metrics
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-applicationset-controller
  endpoints:
  - port: metrics
  namespaceSelector:
    matchNames:
    - argocd
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-repo-server-metrics
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  endpoints:
  - port: metrics
  namespaceSelector:
    matchNames:
    - argocd
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-server-metrics
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server-metrics
  endpoints:
  - port: metrics
  namespaceSelector:
    matchNames:
    - argocd
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-dex-server
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-dex-server
  endpoints:
  - port: metrics
  namespaceSelector:
    matchNames:
    - argocd
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-redis-haproxy-metrics
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-redis-ha-haproxy
  endpoints:
  - port: http-exporter-port
  namespaceSelector:
    matchNames:
    - argocd
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-notifications-controller
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-notifications-controller-metrics
  endpoints:
  - port: metrics
  namespaceSelector:
    matchNames:
    - argocd
EOF
# Create the guestbook Helm chart application
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values.yaml
    path: helm-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
  syncPolicy:
    automated:
      enabled: true
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: guestbook
    server: https://kubernetes.default.svc
EOF
# Open the Grafana web dashboard and import the dashboard from https://github.com/argoproj/argo-cd/blob/master/examples/dashboard.json
wslview http://127.0.0.1:30003
When creating a ServiceMonitor, the release: kube-prometheus-stack label must be present, or its metrics will not be scraped.
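A cheap way to catch a missing label before applying is a grep over the manifest file. This is a minimal sketch on a hypothetical local file (`/tmp/sm.yaml` is an assumption for illustration); the value must match the Helm release name used at install time, `kube-prometheus-stack` above:

```shell
# Write a minimal ServiceMonitor metadata stanza and check for the label
# that kube-prometheus-stack's serviceMonitorSelector matches on.
cat > /tmp/sm.yaml <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example
  labels:
    release: kube-prometheus-stack
EOF
if grep -q 'release: kube-prometheus-stack' /tmp/sm.yaml; then
  echo "label ok"
else
  echo "label missing"
fi
```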

Figure 3.3 Metrics visible in Prometheus

Figure 3.4 ArgoCD information shown in Grafana
Backup and Restore
Let's back up ArgoCD and restore it to a newly created cluster.
# Log in to the argocd server (http)
ARGOPW=Ujfc9E95-x3CHSh7X
argocd login localhost:8080 --username admin --password $ARGOPW --insecure
# 'admin:login' logged in successfully
# Context 'localhost:8080' updated
# Create the backup
argocd admin export -n argocd > backup.yaml
# 244
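The `244` noted above likely records a count over the export; since `argocd admin export` emits one multi-document YAML stream, a quick sanity check on a backup file is to count its document separators. A minimal sketch using a mock file (the real `backup.yaml` is cluster-specific, so the file below is an assumption for illustration):

```shell
# Build a mock multi-document export; a real backup.yaml has the same shape.
printf 'kind: ConfigMap\n---\nkind: Secret\n---\nkind: Application\n' > /tmp/mock-backup.yaml
# Documents are separated by '---' lines, so count = separators + 1.
docs=$(( $(grep -c '^---$' /tmp/mock-backup.yaml) + 1 ))
echo "documents=$docs"   # → documents=3
```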
# Create a new cluster
kind create cluster --name myk8s2 --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000
    hostPort: 31000
  - containerPort: 31001
    hostPort: 31001
  - containerPort: 31002
    hostPort: 31002
  - containerPort: 31003
    hostPort: 31003
- role: worker
- role: worker
- role: worker
EOF
# Verify the installation
kubectl config get-contexts
# Set up aliases
alias k8s1='kubectl --context kind-myk8s'
alias k8s2='kubectl --context kind-myk8s2'
# Install ArgoCD HA on the new cluster
k8s2 apply -f resources/namespace.yaml
k8s2 apply -f resources/install.yaml -n argocd
# Port-forward to the new cluster (in a new terminal)
kubectl port-forward svc/argocd-server -n argocd 8081:80 --context kind-myk8s2
# Check the secret on the new cluster
k8s2 get secret -n argocd argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 -d ; echo
# 0wxRcRzjftraHZgl
# Restore to the new cluster
ARGOPW2=0wxRcRzjftraHZgl
argocd login localhost:8081 --username admin --password $ARGOPW2 --insecure
# 'admin:login' logged in successfully
# Context 'localhost:8081' updated
# After import, the admin password is replaced by the one from the backup (the source cluster's password)
argocd admin import -n argocd - < backup.yaml
# Import process completed successfully in namespace argocd at 2025-11-08T23:28:22+09:00, duration: 116.658115ms

Figure 3.5 Verifying the ArgoCD backup