Upgrade the cluster from v1.29.0 to v1.30.0 using kubeadm. Before each node is upgraded, the gold-nginx deployment must first be rescheduled onto another node.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
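Since gold-nginx must move before its node is upgraded, the per-node flow looks roughly like this (a sketch; the node name is an assumption, not given by the task):

```shell
# Find the node currently running the gold-nginx pod(s).
kubectl get pods -o wide | grep gold-nginx

# Drain that node (node01 assumed) so gold-nginx is rescheduled elsewhere,
# upgrade the node, then make it schedulable again.
kubectl drain node01 --ignore-daemonsets
# ... upgrade kubeadm / kubelet on node01 as shown below ...
kubectl uncordon node01
```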
/etc/apt/keyrings does not exist by default, so it must be created before the curl command.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Create the keyring directory if it does not exist.
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt-cache madison kubeadm
kubeadm | 1.30.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
kubeadm | 1.30.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
kubeadm | 1.30.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
kubeadm | 1.30.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.30.0-1.1' && \
sudo apt-mark hold kubeadm
kubeadm version
kubeadm upgrade plan
...
COMPONENT   NODE           CURRENT   TARGET
kubelet     controlplane   v1.29.0   v1.30.3
kubelet     node01         v1.29.0   v1.30.3

Upgrade to the latest stable version:

COMPONENT                 NODE           CURRENT    TARGET
kube-apiserver            controlplane   v1.29.0    v1.30.3
kube-controller-manager   controlplane   v1.29.0    v1.30.3
kube-scheduler            controlplane   v1.29.0    v1.30.3
kube-proxy                               1.29.0     v1.30.3
CoreDNS                                  v1.10.1    v1.11.1
etcd                      controlplane   3.5.10-0   3.5.12-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.30.3
This confirms that all components can be upgraded to the v1.30 series. Apply v1.30.0:
kubeadm upgrade apply v1.30.0
kubeadm upgrade plan
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   NODE           CURRENT   TARGET
kubelet     controlplane   v1.29.0   v1.30.3
kubelet     node01         v1.29.0   v1.30.3

Upgrade to the latest version in the v1.30 series:

COMPONENT                 NODE           CURRENT    TARGET
kube-apiserver            controlplane   v1.30.0    v1.30.3
kube-controller-manager   controlplane   v1.30.0    v1.30.3
kube-scheduler            controlplane   v1.30.0    v1.30.3
kube-proxy                               1.30.0     v1.30.3
CoreDNS                                  v1.11.1    v1.11.1
etcd                      controlplane   3.5.12-0   3.5.12-0
Only the kubelet is still at 1.29.0, so upgrade it to v1.30.*.
kubectl drain controlplane --ignore-daemonsets
node/controlplane cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-5jpxh, kube-system/weave-net-bvnlq
evicting pod kube-system/coredns-7db6d8ff4d-g4lpb
...
node/controlplane drained
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list # helps tools such as command-not-found to work correctly
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
# Create the keyring directory first if it does not exist.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg # allow unprivileged APT programs to read this keyring
apt-cache madison kubelet
kubelet | 1.30.3-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
kubelet | 1.30.2-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
kubelet | 1.30.1-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
kubelet | 1.30.0-1.1 | https://pkgs.k8s.io/core:/stable:/v1.30/deb Packages
# replace x in 1.30.x-* with the latest patch version
sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.30.0-1.1' kubectl='1.30.0-1.1' && \
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl uncordon controlplane
SSH into node01 and repeat the same kubelet upgrade steps there.
kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
controlplane   Ready    control-plane   84m   v1.30.0
node01         Ready    <none>          83m   v1.30.0
Print the names of all deployments in the admin2406 namespace in the following format:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
<deployment name> <container image used> <ready replica count> <Namespace>
The rows must be sorted in ascending order by deployment name. For example:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
deploy0 nginx:alpine 1 admin2406
Save the output to /opt/admin2406_data.
kubectl -n admin2406 get deployment -o custom-columns=DEPLOYMENT:.metadata.name,CONTAINER_IMAGE:.spec.template.spec.containers[].image,READY_REPLICAS:.status.readyReplicas,NAMESPACE:.metadata.namespace --sort-by=.metadata.name > /opt/admin2406_data
Find and fix the problem in admin.kubeconfig, located in /root/CKA. Check whether the server field matches the kube-apiserver endpoint; the port must be corrected to 6443.
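For reference, a corrected cluster entry in admin.kubeconfig would look roughly like this (certificate data and server name are placeholders):

```yaml
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://controlplane:6443   # the broken file pointed at a wrong port
  name: kubernetes
```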
Create a deployment named nginx-deploy with the image nginx:1.16 and 1 replica. Then use a rolling update to bump the image version to 1.17.
kubectl create deployment nginx-deploy --image=nginx:1.16 --replicas=1
kubectl set image deployment/nginx-deploy nginx=nginx:1.17 --record
With --record (deprecated, but still functional), the rolling update leaves a change-cause in the rollout history. Note that updating the image with kubectl edit records no change-cause.
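Afterwards the rollout can be verified (a sketch, assuming the deployment created above):

```shell
# Wait for the new ReplicaSet to finish rolling out.
kubectl rollout status deployment/nginx-deploy
# The revision created with --record carries its change-cause here.
kubectl rollout history deployment/nginx-deploy
```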
A new deployment named alpha-mysql has been created in the alpha namespace, but its pod is not running; fix the problem. The deployment must use the alpha-pv PV, mounted at /var/lib/mysql, and the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 must be set.
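The relevant parts of the pod template would look roughly like this sketch (the container and volume names are assumptions; only the mount path, env var, and claim name are fixed by the task):

```yaml
spec:
  containers:
  - name: mysql            # name assumed
    image: mysql
    env:
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      value: "1"
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-data       # name assumed
    persistentVolumeClaim:
      claimName: mysql-alpha-pvc
```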
kubectl get pvc -n alpha alpha-claim -o yaml > claim.yaml
kubectl describe -n alpha pod alpha-mysql-5b9
...
Volumes:
  mysql-data:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   mysql-alpha-pvc
    ReadOnly:    false
The PVC name must be corrected to mysql-alpha-pvc. Before doing that, check the PV details:
kubectl describe pv alpha-pv
Name: alpha-pv
Labels: <none>
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: slow
Status: Bound
Claim: alpha/mysql-alpha-pvc
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /opt/pv-1
HostPathType:
Events: <none>
Check the PV's Access Modes, StorageClass, and Capacity, and set the same values on the PVC.
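Carrying those PV values over, the claim manifest would look roughly like this sketch:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-alpha-pvc
  namespace: alpha
spec:
  accessModes:
  - ReadWriteOnce          # matches the PV's RWO
  storageClassName: slow   # matches the PV's StorageClass
  resources:
    requests:
      storage: 1Gi         # matches the PV's Capacity
```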
kubectl describe pvc -n alpha mysql-alpha-pvc
Name: mysql-alpha-pvc
Namespace: alpha
StorageClass: slow
Status: Bound
Volume: alpha-pv
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: alpha-mysql-5b944d484-p8rcc
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForPodScheduled 4m12s persistentvolume-controller waiting for pod alpha-mysql-5b944d484-p8rcc to be scheduled
The pod now deploys successfully.
Save an etcd backup to /opt/etcd-backup.db on the controlplane node.
To take a backup with etcdctl, you need the etcd endpoint and the PKI material: the CA certificate (cacert), the server certificate (cert), and the server's private key (key).
kubectl describe -n kube-system pod etcd-controlplane
...
Host Port: <none>
Command:
etcd
--advertise-client-urls=https://192.35.74.9:2379
--cert-file=/etc/kubernetes/pki/etcd/server.crt
--client-cert-auth=true
--data-dir=/var/lib/etcd
--experimental-initial-corrupt-check=true
--experimental-watch-progress-notify-interval=5s
--initial-advertise-peer-urls=https://192.35.74.9:2380
--initial-cluster=controlplane=https://192.35.74.9:2380
--key-file=/etc/kubernetes/pki/etcd/server.key
--listen-client-urls=https://127.0.0.1:2379,https://192.35.74.9:2379
--listen-metrics-urls=http://127.0.0.1:2381
--listen-peer-urls=https://192.35.74.9:2380
--name=controlplane
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
--snapshot-count=10000
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
State: Running
...
--listen-client-urls: the etcd server endpoint
--trusted-ca-file: the etcd CA certificate
--cert-file: the etcd server certificate
--key-file: the etcd server private key

ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
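The snapshot can be sanity-checked right away (a sketch; `etcdctl snapshot status` is deprecated in newer releases in favor of `etcdutl`, but still works):

```shell
# Print hash, revision, total keys, and size of the backup file.
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db --write-out=table
```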
Create a pod named secret-1401 in the admin1401 namespace, using the busybox image, with the container named secret-admin. The container command should sleep for 4800 seconds.
Finally, mount the secret dotfile-secret into the pod at /etc/secret-volume as a read-only volume named secret-volume.
A plain pod can be created with kubectl run <name> --image=...; deployments, jobs, secrets, and the like are created with kubectl create.
kubectl run secret-1401 -n admin1401 --image busybox -o yaml --dry-run=client > pod.yaml
Now edit the generated pod spec as follows:
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-1401
  name: secret-1401
  namespace: admin1401
spec:
  volumes:
  - name: secret-volume
    # secret volume
    secret:
      secretName: dotfile-secret
  containers:
  - command:
    - sleep
    - "4800"
    image: busybox
    name: secret-admin
    # volume mount path
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"