MinIO Study Week 2: DirectPV
This post summarizes the Week 2 content of the MinIO Study run by Gasida.
1. Lab Environment Setup
The lab uses a single VM (4 vCPUs, 16 GB of memory, four 30 GB volumes).
The original lab is built on AWS, but since I have a home lab server running Proxmox, I used that server instead.
The script below installs the programs needed to set up the lab environment after the VM is created.
#!/bin/bash
hostnamectl --static set-hostname k3s-s
# Config convenience
echo 'alias vi=vim' >> /etc/profile
echo "sudo su -" >> /home/ubuntu/.bashrc
ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
# Disable ufw & apparmor
systemctl stop ufw && systemctl disable ufw
systemctl stop apparmor && systemctl disable apparmor
# Install packages
apt update && apt-get install bridge-utils net-tools conntrack ngrep jq yq tree unzip kubecolor fio tuned -y
# local dns - hosts file
echo "192.168.10.10 k3s-s" >> /etc/hosts
# Install k3s-server
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.33.4+k3s1 INSTALL_K3S_EXEC=" --disable=traefik" K3S_KUBECONFIG_MODE="644" sh -s - server --token miniotoken
# Change kubeconfig
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> /etc/profile
# Install Helm
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
# Alias kubectl to k
echo 'alias kc=kubecolor' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -o default -F __start_kubectl k' >> /etc/profile
# Source kubectl completion
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> /etc/profile
# Install Kubectx & Kubens
git clone https://github.com/ahmetb/kubectx /opt/kubectx
ln -s /opt/kubectx/kubens /usr/local/bin/kubens
ln -s /opt/kubectx/kubectx /usr/local/bin/kubectx
# Install kube-ps1 & set PS1
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
cat <<"EOT" >> ~/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
# Install MC : https://docs.min.io/community/minio-object-store/reference/minio-mc.html
curl -O https://dl.min.io/client/mc/release/linux-amd64/mc
chmod +x mc
cp mc /usr/bin
# Install Krew
wget -P /root "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew-linux_amd64.tar.gz"
tar zxvf "/root/krew-linux_amd64.tar.gz" --warning=no-unknown-keyword
./krew-linux_amd64 install krew
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH" # export PATH="$PATH:/root/.krew/bin"
echo 'export PATH="$PATH:/root/.krew/bin:/root/go/bin"' >> /etc/profile
kubectl krew install get-all neat rolesum pexec stern
kubectl krew list
Performance testing with fio
Because my lab VMs run on Proxmox, the disks show up as QEMU devices rather than EBS volumes.
lsblk -a -o NAME,KNAME,MAJ:MIN,SIZE,TYPE,MOUNTPOINT,FSTYPE,UUID,MODEL,SERIAL
# NAME KNAME MAJ:MIN SIZE TYPE M FSTYPE UUID MODEL SERIAL
# sda sda 8:0 30G disk QEMU HARDDISK drive-scsi0
# ├─sda1 sda1 8:1 1M part
# └─sda2 sda2 8:2 30G part / ext4 e9663254-f103-4834-80b2-ddd78482e1b6
# sdb sdb 8:16 30G disk QEMU HARDDISK drive-scsi1
# sdc sdc 8:32 30G disk QEMU HARDDISK drive-scsi2
# sdd sdd 8:48 30G disk QEMU HARDDISK drive-scsi3
# sde sde 8:64 30G disk QEMU HARDDISK drive-scsi4
When the lab runs on AWS it uses gp3 volumes, which are capped at 3,000 IOPS; let's test how the QEMU-backed disks perform.
fio --name=randrw_test \
--filename=/mnt/testfile \
--size=4G \
--rw=randrw \
--rwmixread=70 \
--bs=4k \
--iodepth=16 \
--numjobs=4 \
--time_based \
--runtime=60 \
--group_reporting
# read: IOPS=98.8k, BW=386MiB/s (405MB/s)(22.6GiB/60001msec)
# write: IOPS=42.4k, BW=165MiB/s (173MB/s)(9927MiB/60001msec); 0 zone resets
# Disk stats (read/write):
# sda: ios=1514607/1058766, sectors=12116856/15905992, merge=0/267, ticks=149702/389924, in_queue=539634, util=60.35%
# AWS results
## Average read IOPS: 2388
# read: IOPS=2388, BW=9553KiB/s (9782kB/s)(560MiB/60003msec)
## Average write IOPS: 1030
# write: IOPS=1030, BW=4121KiB/s (4220kB/s)(241MiB/60003msec); 0 zone resets
## Disk utilization: 132k read / 49k write IOs, 87% utilization
#Disk stats (read/write):
# nvme0n1: ios=132972/49709, sectors=1063776/420736, merge=0/8, ticks=232695/154017, in_queue=386713, util=87.09%
The local QEMU-backed disks clearly deliver far higher IOPS and bandwidth than the AWS gp3 volumes.
2. Tuning
Let's review the requirements recommended in the official MinIO documentation.
Kernel 6.6 or later
Time synchronization (NTP, timedatectl)
chronyc sources -v
chronyc tracking
Disable filesystem indexing, scanning, and auditing
# mlocate or plocate
systemctl disable --now plocate-updatedb
systemctl list-timers | grep locate
# updatedb
systemctl disable --now updatedb.timer
systemctl list-timers | grep updatedb
# auditd
systemctl disable --now auditd
systemctl list-timers | grep audit
Tuned Profile
Tuned is an open-source tool released by the Red Hat Performance Team and used by Red Hat for kernel tuning; let's use it to practice tuning kernel parameters.
# Capture the settings before tuning
sysctl -a > before.txt
# Start the tuned service
systemctl start tuned && systemctl enable tuned
# Check the currently active tuning profile
tuned-adm active
# Current active profile: virtual-guest
Let's compare fio results before and after applying the tuning profile.
# Turn tuning off
tuned-adm off
fio --name=randrw_test \
--filename=/mnt/testfile \
--size=16G \
--rw=randrw \
--ioengine=io_uring \
--rwmixread=70 \
--bs=4k \
--direct=1 \
--iodepth=32 \
--ramp_time=20 \
--numjobs=4 \
--time_based \
--runtime=300 \
--group_reporting
# Apply the profile
tuned-adm profile virtual-guest
# read: IOPS=78.0k, BW=305MiB/s (320MB/s)(89.3GiB/300001msec)
# write: IOPS=33.4k, BW=131MiB/s (137MB/s)(38.2GiB/300001msec); 0 zone resets
# Disk stats (read/write):
# sda: ios=24950932/10688076, sectors=199614432/85514776, merge=0/197, ticks=24533354/11321736, in_queue=35855267, util=59.48%
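To see which kernel parameters the virtual-guest profile actually changed, you can take a second sysctl snapshot and diff it against the one captured earlier (a minimal sketch; the after.txt filename is just an example).
# Snapshot after applying the profile and compare with the earlier one
sysctl -a > after.txt
diff before.txt after.txt
# virtual-guest commonly adjusts values such as vm.dirty_ratio and vm.swappiness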

Figure 2.1 Comparison before and after applying the profile
Judging by this test alone, there doesn't appear to be a meaningful performance difference.
3. DirectPV
✅ DirectPV
A Kubernetes CSI driver developed by MinIO: a distributed PV management layer that discovers, formats, and mounts each node's direct-attached disks (DAS) and dynamically provisions local PVs from them.
It was built to preserve the performance and simplicity of distributed storage such as MinIO by using each node's local disks instead of network storage (SAN/NAS).

Figure 3.1 DirectPV vs. network storage
DirectPV Architecture
DirectPV runs two kinds of pods: the Controller and the Node Server.

Figure 3.2 How the Controller works
By default the controller is deployed as a Deployment with 3 replicas, one of which acts as the leader.

Figure 3.3 How the Node Server works
Because the Node Server must do its work per node, it is deployed as a DaemonSet.
Installing DirectPV
k krew install directpv
k directpv install
k get crd | grep min
# directpvdrives.directpv.min.io 2025-09-20T07:29:02Z
# directpvinitrequests.directpv.min.io 2025-09-20T07:29:02Z
# directpvnodes.directpv.min.io 2025-09-20T07:29:02Z
# directpvvolumes.directpv.min.io 2025-09-20T07:29:02Z
k get sc
# NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
# directpv-min-io directpv-min-io Delete WaitForFirstConsumer true 38s
# local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 15h
# node-server logs
kubectl stern -n directpv -l selector.directpv.min.io.service=enabled
# controller logs
kubectl stern -n directpv -l selector.directpv.min.io=controller-4h5ww
# Check the elected leader
k get lease -n directpv
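To confirm the layout described above, list the controller Deployment (3 replicas) and the node-server DaemonSet directly:
# The controller runs as a Deployment, the node server as a DaemonSet
kubectl get deploy,ds,pods -n directpv -o wide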
Managing Disks with DirectPV
k directpv info
# ┌─────────┬──────────┬───────────┬─────────┬────────┐
# │ NODE │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
# ├─────────┼──────────┼───────────┼─────────┼────────┤
# │ • k3s-s │ - │ - │ - │ - │
# └─────────┴──────────┴───────────┴─────────┴────────┘
k directpv discover
# Discovered node 'k3s-s' ✔
# ┌─────────────────────┬───────┬───────┬────────┬────────────┬────────────────────┬───────────┬─────────────┐
# │ ID │ NODE │ DRIVE │ SIZE │ FILESYSTEM │ MAKE │ AVAILABLE │ DESCRIPTION │
# ├─────────────────────┼───────┼───────┼────────┼────────────┼────────────────────┼───────────┼─────────────┤
# │ 8:16$HXNakbzbhTv... │ k3s-s │ sdb │ 30 GiB │ - │ QEMU QEMU_HARDDISK │ YES │ - │
# │ 8:32$3j4QNKlh/zK... │ k3s-s │ sdc │ 30 GiB │ - │ QEMU QEMU_HARDDISK │ YES │ - │
# │ 8:48$rA3KmYdvNa9... │ k3s-s │ sdd │ 30 GiB │ - │ QEMU QEMU_HARDDISK │ YES │ - │
# │ 8:64$ouQvt0GrtRy... │ k3s-s │ sde │ 30 GiB │ - │ QEMU QEMU_HARDDISK │ YES │ - │
# └─────────────────────┴───────┴───────┴────────┴────────────┴────────────────────┴───────────┴─────────────┘
# Review the discovered disks (to exclude a disk, change its 'yes' to 'no')
cat drives.yaml | yq
# Review the plan first, then re-run with the --dangerous flag to actually initialize the drives
k directpv init drives.yaml
k directpv init drives.yaml --dangerous
# Processed initialization request '0aac5bee-1c7f-4a0a-9987-acf0baaf4944' for node 'k3s-s' ✔
# ┌──────────────────────────────────────┬───────┬───────┬─────────┐
# │ REQUEST_ID │ NODE │ DRIVE │ MESSAGE │
# ├──────────────────────────────────────┼───────┼───────┼─────────┤
# │ 0aac5bee-1c7f-4a0a-9987-acf0baaf4944 │ k3s-s │ sdb │ Success │
# │ 0aac5bee-1c7f-4a0a-9987-acf0baaf4944 │ k3s-s │ sdc │ Success │
# │ 0aac5bee-1c7f-4a0a-9987-acf0baaf4944 │ k3s-s │ sdd │ Success │
# │ 0aac5bee-1c7f-4a0a-9987-acf0baaf4944 │ k3s-s │ sde │ Success │
# └──────────────────────────────────────┴───────┴───────┴─────────┘
k directpv list drives
# ┌───────┬──────┬────────────────────┬────────┬────────┬─────────┬────────┐
# │ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │
# ├───────┼──────┼────────────────────┼────────┼────────┼─────────┼────────┤
# │ k3s-s │ sdb │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# │ k3s-s │ sdc │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# │ k3s-s │ sdd │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# │ k3s-s │ sde │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# └───────┴──────┴────────────────────┴────────┴────────┴─────────┴────────┘
lsblk
# ...
# sdb 8:16 0 30G 0 disk /var/lib/directpv/mnt/2871e2b0-b5e7-4b3e-9009-6066289704ea
# sdc 8:32 0 30G 0 disk /var/lib/directpv/mnt/bca22541-a8f4-4ff3-a08b-2e39d9892f23
# sdd 8:48 0 30G 0 disk /var/lib/directpv/mnt/204b588c-7d78-4aad-9ceb-97b8815ea05d
# sde 8:64 0 30G 0 disk /var/lib/directpv/mnt/9ea45ca8-9536-43bd-a720-5f7405ea8ec1
# ...
df -hT --type xfs
# /dev/sdc xfs 30G 247M 30G 1% /var/lib/directpv/mnt/bca22541-a8f4-4ff3-a08b-2e39d9892f23
# /dev/sde xfs 30G 247M 30G 1% /var/lib/directpv/mnt/9ea45ca8-9536-43bd-a720-5f7405ea8ec1
# /dev/sdd xfs 30G 247M 30G 1% /var/lib/directpv/mnt/204b588c-7d78-4aad-9ceb-97b8815ea05d
# /dev/sdb xfs 30G 247M 30G 1% /var/lib/directpv/mnt/2871e2b0-b5e7-4b3e-9009-6066289704ea
The attached disks are not registered in fstab automatically, so you must add them to `/etc/fstab` yourself.
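A minimal sketch of how such entries could look; the UUID below is one of the FSUUIDs from the lsblk output above, and the mount options should be copied from what DirectPV actually used (check with findmnt):
# Check how DirectPV mounted the drives
findmnt -t xfs -o SOURCE,TARGET,OPTIONS | grep directpv
# Example /etc/fstab entry (one per drive; UUID and options taken from the output above)
# UUID=2871e2b0-b5e7-4b3e-9009-6066289704ea /var/lib/directpv/mnt/2871e2b0-b5e7-4b3e-9009-6066289704ea xfs defaults 0 0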
Testing DirectPV
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  volumeMode: Filesystem
  storageClassName: directpv-min-io
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 8Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: nginx-volume
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx-container
      image: nginx:alpine
      volumeMounts:
        - mountPath: "/mnt"
          name: nginx-volume
EOF
k exec -it nginx-pod -- df -hT -t xfs
# Filesystem Type Size Used Available Use% Mounted on
# /dev/sdc xfs 8.0M 0 8.0M 0% /mnt
k exec -it nginx-pod -- sh -c 'echo hello > /mnt/hello.txt'
k exec -it nginx-pod -- sh -c 'cat /mnt/hello.txt'
lsblk
# .....
# sdc 8:32 0 30G 0 disk /var/lib/kubelet/pods/a65e4e0d-bdbf-4525-b2a5-ad62eceedbb6/volumes/kubernetes.io~csi/pvc-3f7d2e2b-a1c2-4907-8644-da4059e2b1ef/mount
# /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/199ab76a4aacd8c106e1a4b2b833d3e6ed806fb03bfa2dc3bafe592a80bf7e41/globalmount
# .....
tree -a /var/lib/directpv/mnt
# ......
# └── bca22541-a8f4-4ff3-a08b-2e39d9892f23
# ├── .directpv
# │ └── meta.info
# ├── .FSUUID.bca22541-a8f4-4ff3-a08b-2e39d9892f23 -> .
# └── pvc-3f7d2e2b-a1c2-4907-8644-da4059e2b1ef
# └── hello.txt
# ......
# Delete the test resources
k delete pod nginx-pod
k delete pvc nginx-pvc
Using Volumes on a Specific Drive
Let's create a StorageClass with the create-storage-class.sh script provided in the official documentation and then use the resulting StorageClass.
k directpv list drives
# ┌───────┬──────┬────────────────────┬────────┬────────┬─────────┬────────┐
# │ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │
# ├───────┼──────┼────────────────────┼────────┼────────┼─────────┼────────┤
# │ k3s-s │ sdb │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# │ k3s-s │ sdc │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# │ k3s-s │ sdd │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# │ k3s-s │ sde │ QEMU QEMU_HARDDISK │ 30 GiB │ 30 GiB │ - │ Ready │
# └───────┴──────┴────────────────────┴────────┴────────┴─────────┴────────┘
# Apply a label to the first disk
k directpv label drives --drives=sdb tier=fast
# Label 'directpv.min.io/tier:fast' successfully set on k3s-s/sdb
chmod +x create-storage-class.sh
source create-storage-class.sh fast-tier-storage 'directpv.min.io/tier: fast'
kc describe sc fast-tier-storage
# Parameters: directpv.min.io/tier=fast,fstype=xfs
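The generated StorageClass should be roughly equivalent to the following (a sketch based on the describe output above and the default directpv-min-io class; the exact fields the script emits may differ slightly):
kubectl get sc fast-tier-storage -o yaml
# apiVersion: storage.k8s.io/v1
# kind: StorageClass
# metadata:
#   name: fast-tier-storage
# provisioner: directpv-min-io
# parameters:
#   directpv.min.io/tier: fast
#   fstype: xfs
# reclaimPolicy: Delete
# volumeBindingMode: WaitForFirstConsumer
# allowVolumeExpansion: true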
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  volumeMode: Filesystem
  storageClassName: fast-tier-storage
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 8Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: nginx-volume
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx-container
      image: nginx:alpine
      volumeMounts:
        - mountPath: "/mnt"
          name: nginx-volume
EOF
kc describe pv
# .....
# VolumeAttributes: directpv.min.io/tier=fast
# fstype=xfs
# storage.kubernetes.io/csiProvisionerIdentity=1758353380487-2269-directpv-min-io
# .....
kubectl delete pod nginx-pod && kubectl delete pvc nginx-pvc
# Apply a label to the node
k label nodes k3s-s directpv.min.io/rack=rack01 --overwrite
kc describe directpvdrives.directpv.min.io
# Topology:
# directpv.min.io/identity: directpv-min-io
# directpv.min.io/node: k3s-s
# directpv.min.io/rack: default
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  volumeMode: Filesystem
  storageClassName: fast-tier-storage
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 8Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  volumes:
    - name: nginx-volume
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: nginx-container
      image: nginx:alpine
      volumeMounts:
        - mountPath: "/mnt"
          name: nginx-volume
EOF
k get pod,pvc,pv
# NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
# persistentvolumeclaim/nginx-pvc Pending fast-tier-storage <unset> 30s
kc describe pvc
# Warning ProvisioningFailed 8s (x12 over 115s) directpv-min-io_controller-6685dc7db9-jwtgd_49941720-897d-4152-bf76-13a5bfef91a5 failed to provision volume with StorageClass "fast-tier-storage": rpc error: code = ResourceExhausted desc = no drive found for requested topology; requested node(s): k3s-s; requested size: 8388608 bytes
# Fix 1: revert the node label
k label nodes k3s-s directpv.min.io/rack=default --overwrite
# Fix 2: edit the directpvdrives CR so its topology label reads directpv.min.io/rack=rack01
kubectl delete pod nginx-pod && kubectl delete pvc nginx-pvc
# The drive topology still says rack=default while the node says rack01, so no drive matches; the directpv.min.io/ labels drive volume scheduling
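One way to compare the node label with the topology actually recorded on the drives (a hedged sketch; grep avoids relying on the exact CR field path):
# Inspect the topology recorded on each DirectPV drive
kubectl get directpvdrives.directpv.min.io -o yaml | grep -iA4 topology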
Installing MinIO and Creating a Bucket
helm repo add minio-operator https://operator.min.io
cat << EOF > minio-operator-values.yaml
operator:
  env:
    - name: MINIO_OPERATOR_RUNTIME
      value: "Rancher"
  replicaCount: 1
EOF
helm install --namespace minio-operator --create-namespace minio-operator minio-operator/operator --values minio-operator-values.yaml
# The operator management web UI is currently not provided
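A quick sanity check after the Helm install (assuming the namespace and release name used above):
# The operator pod should be Running; no console Deployment is created anymore
kubectl get deploy,pods -n minio-operator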
cat << EOF > minio-tenant-1-values.yaml
tenant:
  name: tenant1
  configSecret:
    name: tenant1-env-configuration
    accessKey: minio
    secretKey: minio123
  pools:
    - servers: 1
      name: pool-0
      volumesPerServer: 4
      size: 10Gi
      storageClassName: directpv-min-io
  env:
    - name: MINIO_STORAGE_CLASS_STANDARD
      value: "EC:1"
  metrics:
    enabled: true
    port: 9000
    protocol: http
EOF
helm install --namespace tenant1 --create-namespace --values minio-tenant-1-values.yaml tenant1 minio-operator/tenant \
&& kubectl get tenants -A -w
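Once the tenant is up, the pool pod and its four PVCs (one per volumesPerServer) should be visible:
kubectl get pods,pvc -n tenant1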
lsblk
# sdb 8:16 0 30G 0 disk /var/lib/kubelet/pods/eec567e8-6365-40c3-b3ad-c0958949823d/volumes/kubernetes.io~csi/pvc-ee6e98f1-7535-44a3-92e6-551d9f23cd5a/mount
# /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/bb96d8a9ed5d4969a4c547122b4f46aef7e6b6bd73e464d848d71bbcd0d34a91/globalmount
# /var/lib/directpv/mnt/2871e2b0-b5e7-4b3e-9009-6066289704ea
# sdc 8:32 0 30G 0 disk /var/lib/kubelet/pods/eec567e8-6365-40c3-b3ad-c0958949823d/volumes/kubernetes.io~csi/pvc-93bf5f18-d12c-422a-addf-290eebca7e5c/mount
# /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/d465a200bc28751c4fd8515206642d8537a8ac049af8f895539ea1c56d0f3b74/globalmount
# /var/lib/directpv/mnt/bca22541-a8f4-4ff3-a08b-2e39d9892f23
# sdd 8:48 0 30G 0 disk /var/lib/kubelet/pods/eec567e8-6365-40c3-b3ad-c0958949823d/volumes/kubernetes.io~csi/pvc-56dca7a8-9fa7-41a5-a191-2176d8592801/mount
# /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/f30421a08a2aebe39361cd73c421736b05d7f2bcb63ce4527c41f2d08c275f47/globalmount
# /var/lib/directpv/mnt/204b588c-7d78-4aad-9ceb-97b8815ea05d
# sde 8:64 0 30G 0 disk /var/lib/kubelet/pods/eec567e8-6365-40c3-b3ad-c0958949823d/volumes/kubernetes.io~csi/pvc-b171fee1-9ab2-458c-90e0-bfd37efa8a8c/mount
# /var/lib/kubelet/plugins/kubernetes.io/csi/directpv-min-io/90ba5be4e91022a9b2f2124eb39772150fbf34203ddfe9ebc4cfc6716cee41a6/globalmount
# /var/lib/directpv/mnt/9ea45ca8-9536-43bd-a720-5f7405ea8ec1
k directpv list volumes
# ┌──────────────────────────────────────────┬──────────┬───────┬───────┬──────────────────┬──────────────┬─────────┐
# │ VOLUME │ CAPACITY │ NODE │ DRIVE │ PODNAME │ PODNAMESPACE │ STATUS │
# ├──────────────────────────────────────────┼──────────┼───────┼───────┼──────────────────┼──────────────┼─────────┤
# │ pvc-ee6e98f1-7535-44a3-92e6-551d9f23cd5a │ 10 GiB │ k3s-s │ sdb │ tenant1-pool-0-0 │ tenant1 │ Bounded │
# │ pvc-93bf5f18-d12c-422a-addf-290eebca7e5c │ 10 GiB │ k3s-s │ sdc │ tenant1-pool-0-0 │ tenant1 │ Bounded │
# │ pvc-56dca7a8-9fa7-41a5-a191-2176d8592801 │ 10 GiB │ k3s-s │ sdd │ tenant1-pool-0-0 │ tenant1 │ Bounded │
# │ pvc-b171fee1-9ab2-458c-90e0-bfd37efa8a8c │ 10 GiB │ k3s-s │ sde │ tenant1-pool-0-0 │ tenant1 │ Bounded │
# └──────────────────────────────────────────┴──────────┴───────┴───────┴──────────────────┴──────────────┴─────────┘
kubectl stern -n tenant1 -l v1.min.io/pool=pool-0
# + tenant1-pool-0-0 › minio
# + tenant1-pool-0-0 › validate-arguments
# + tenant1-pool-0-0 › sidecar
# tenant1-pool-0-0 minio INFO: Formatting 1st pool, 1 set(s), 4 drives per set.
# tenant1-pool-0-0 minio INFO: WARNING: Host local has more than 1 drives of set. A host failure will result in data becoming unavailable.
# tenant1-pool-0-0 sidecar 2025/09/20 08:42:57 sidecar_utils.go:50: Starting Sidecar
# tenant1-pool-0-0 minio MinIO Object Storage Server
# tenant1-pool-0-0 minio Copyright: 2015-2025 MinIO, Inc.
# tenant1-pool-0-0 minio License: GNU AGPLv3 - https://www.gnu.org/licenses/agpl-3.0.html
# tenant1-pool-0-0 minio Version: RELEASE.2025-04-08T15-41-24Z (go1.24.2 linux/amd64)
# tenant1-pool-0-0 minio
# tenant1-pool-0-0 minio API: https://minio.tenant1.svc.cluster.local
# tenant1-pool-0-0 minio WebUI: https://10.42.0.13:9443 https://127.0.0.1:9443
# tenant1-pool-0-0 minio
# tenant1-pool-0-0 minio Docs: https://docs.min.io
# tenant1-pool-0-0 minio INFO:
# tenant1-pool-0-0 minio You are running an older version of MinIO released 5 months before the latest release
# tenant1-pool-0-0 minio Update: Run `mc admin update ALIAS`
# tenant1-pool-0-0 minio
# tenant1-pool-0-0 minio
# - tenant1-pool-0-0 › validate-arguments
kubectl patch svc -n tenant1 tenant1-console -p '{"spec": {"type": "NodePort", "ports": [{"port": 9443, "targetPort": 9443, "nodePort": 30001}]}}'
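The mc alias below targets NodePort 30002, which assumes the tenant's S3 API service was also exposed; a hedged sketch of that patch, assuming the default service name minio serving HTTPS on port 443 to target port 9000:
kubectl patch svc -n tenant1 minio -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "targetPort": 9000, "nodePort": 30002}]}}'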
# mc alias
mc alias set k8s-tenant1 https://127.0.0.1:30002 minio minio123 --insecure
mc alias list
mc admin info k8s-tenant1 --insecure
# ┌──────┬──────────────────────┬─────────────────────┬──────────────┐
# │ Pool │ Drives Usage │ Erasure stripe size │ Erasure sets │
# │ 1st │ 0.0% (total: 30 GiB) │ 4 │ 1 │
# └──────┴──────────────────────┴─────────────────────┴──────────────┘
# Create a bucket
mc mb k8s-tenant1/mybucket --insecure
mc ls k8s-tenant1 --insecure
# [2025-09-20 18:40:09 KST] 0B mybucket/
Usage Exercises
# In a new terminal, enable mc admin trace to capture MinIO API calls
mc admin trace -v -a k8s-tenant1 --insecure
echo hello > hello.txt
mc cp ./hello.txt k8s-tenant1/mybucket/ --insecure
# 127.0.0.1:30002 [REQUEST s3.PutObject] [2025-09-20T18:50:53.526] [Client IP: 10.42.0.1]
# 127.0.0.1:30002 PUT /mybucket/hello.txt
mc cat k8s-tenant1/mybucket/hello.txt --insecure
# Create a 100 MB file
</dev/urandom tr -dc 'A-Za-z0-9' | head -c 100M > randtext.txt
# Files of this size are uploaded as multipart uploads by default
mc cp ./randtext.txt k8s-tenant1/mybucket/ --insecure
# Upload without multipart
cp randtext.txt randtext2.txt
mc cp ./randtext2.txt k8s-tenant1/mybucket/ --disable-multipart --insecure
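To compare how the two uploads were stored, mc stat prints each object's metadata; the multipart upload's ETag typically ends in -N (the part count), while the single-part copy has a plain ETag:
mc stat k8s-tenant1/mybucket/randtext.txt --insecure
mc stat k8s-tenant1/mybucket/randtext2.txt --insecure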
# Delete an object from the bucket
mc rm k8s-tenant1/mybucket/randtext2.txt --insecure
Next, let's expand DirectPV volumes without restarting the pod.
k directpv list drives
# ┌───────┬──────┬────────────────────┬────────┬────────┬─────────┬────────┐
# │ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │
# ├───────┼──────┼────────────────────┼────────┼────────┼─────────┼────────┤
# │ k3s-s │ sdb │ QEMU QEMU_HARDDISK │ 30 GiB │ 20 GiB │ 1 │ Ready │
# │ k3s-s │ sdc │ QEMU QEMU_HARDDISK │ 30 GiB │ 20 GiB │ 1 │ Ready │
# │ k3s-s │ sdd │ QEMU QEMU_HARDDISK │ 30 GiB │ 20 GiB │ 1 │ Ready │
# │ k3s-s │ sde │ QEMU QEMU_HARDDISK │ 30 GiB │ 20 GiB │ 1 │ Ready │
# └───────┴──────┴────────────────────┴────────┴────────┴─────────┴────────┘
kubectl patch pvc -n tenant1 data0-tenant1-pool-0-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
kubectl patch pvc -n tenant1 data1-tenant1-pool-0-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
kubectl patch pvc -n tenant1 data2-tenant1-pool-0-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
kubectl patch pvc -n tenant1 data3-tenant1-pool-0-0 -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
k directpv list drives
# ┌───────┬──────┬────────────────────┬────────┬────────┬─────────┬────────┐
# │ NODE │ NAME │ MAKE │ SIZE │ FREE │ VOLUMES │ STATUS │
# ├───────┼──────┼────────────────────┼────────┼────────┼─────────┼────────┤
# │ k3s-s │ sdb │ QEMU QEMU_HARDDISK │ 30 GiB │ 10 GiB │ 1 │ Ready │
# │ k3s-s │ sdc │ QEMU QEMU_HARDDISK │ 30 GiB │ 10 GiB │ 1 │ Ready │
# │ k3s-s │ sdd │ QEMU QEMU_HARDDISK │ 30 GiB │ 10 GiB │ 1 │ Ready │
# │ k3s-s │ sde │ QEMU QEMU_HARDDISK │ 30 GiB │ 10 GiB │ 1 │ Ready │
# └───────┴──────┴────────────────────┴────────┴────────┴─────────┴────────┘
# The resize can take a while to show up inside the pod
k exec -it -n tenant1 tenant1-pool-0-0 -c minio -- sh -c 'df -hT --type xfs'
# Filesystem Type Size Used Avail Use% Mounted on
# /dev/sdb xfs 20G 34M 20G 1% /export0
# /dev/sdd xfs 20G 34M 20G 1% /export1
# /dev/sde xfs 20G 34M 20G 1% /export2
# /dev/sdc xfs 20G 34M 20G 1% /export3
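The expansion can also be confirmed from the Kubernetes side: the four data PVCs should now report 20Gi.
kubectl get pvc -n tenant1
kubectl get pv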