[Kubernetes] Day 7 Class Notes

soyeon · October 11, 2022

DaemonSet

: a controller that deploys one pod on every node in the cluster
Used mainly for log collectors and monitoring agents.

e.g. Prometheus/Grafana: metric collection (utilization)
ELK (Elasticsearch + Logstash + Kibana): collection of logs and streaming (real-time) data
└ search-engine DB (JSON) -> filtering (Logstash, keeping only the needed data) -> visualization (Kibana)

GKE -> fluentd -> EFK: Logstash's memory usage is excessive, so fluentd is often used in its place (the EFK stack, as on GKE).

Shinhan Bank -> anomalous-transaction analysis -> voice-phishing detection
Lotte.com -> regional analysis -> by region / by sales / ...

## Write the YAML file
kevin@k8s-master:~/LABs/daemonset$ vi daemonset-ef.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
          
## apply
kevin@k8s-master:~/LABs/daemonset$ kubectl apply -f daemonset-ef.yaml

## Verify
kevin@k8s-master:~/LABs/daemonset$ kubectl get po -A
NAMESPACE              NAME                                        READY   STATUS      RESTARTS       AGE
kube-system            fluentd-elasticsearch-svb9j                 1/1     Running     0              28s
kube-system            fluentd-elasticsearch-xmlm8                 1/1     Running     0              28s
...

## Verify
kevin@k8s-master:~/LABs/daemonset$ kubectl get po -o wide -n kube-system
NAME                                       READY   STATUS    RESTARTS       AGE     IP               NODE         NOMINATED NODE   READINESS GATES
fluentd-elasticsearch-svb9j                1/1     Running   0              110s    10.111.156.73    k8s-node1    <none>           <none>
fluentd-elasticsearch-xmlm8                1/1     Running   0              110s    10.109.131.11    k8s-node2    <none>           <none>

CronJob

: schedule fields are minute hour day-of-month month day-of-week
Used for periodic work such as report generation and backups.

## Create the cronjob
kevin@k8s-master:~/LABs$ kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml -- /bin/sh -c "date && echo 'hello from kubernetes cluster'" > my-job.yaml

## Inspect the generated YAML file
kevin@k8s-master:~/LABs$ vi my-job.yaml
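The generated file should look roughly like this — a sketch of what `kubectl create cronjob ... --dry-run=client -o yaml` emits (field order and empty default fields may differ slightly):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-job
spec:
  schedule: '*/1 * * * *'        # minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - command:
            - /bin/sh
            - -c
            - date && echo 'hello from kubernetes cluster'
            image: busybox
            name: my-job
          restartPolicy: OnFailure  # generator default for jobs
```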

## apply
kevin@k8s-master:~/LABs$ kubectl apply -f my-job.yaml

## A my-job pod has been created
kevin@k8s-master:~/LABs$ kubectl get cj,job,po -o wide
NAME                   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES    SELECTOR
cronjob.batch/my-job   */1 * * * *   False     0        10s             25s   my-job       busybox   <none>

NAME                        COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES    SELECTOR
job.batch/my-job-27757506   1/1           7s         10s   my-job       busybox   controller-uid=169019e5-3d67-486b-b1d9-587bfce6b76a

NAME                        READY   STATUS      RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
pod/my-job-27757506-22tf8   0/1     Completed   0          10s   10.109.131.28   k8s-node2   <none>           <none>

## Check the logs
kevin@k8s-master:~/LABs$ kubectl logs pod/my-job-27757506-22tf8
Tue Oct 11 01:06:04 UTC 2022
hello from kubernetes cluster

## After one minute, another job is created
kevin@k8s-master:~/LABs$ kubectl get cj,job,po -o wide
NAME                   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES    SELECTOR
cronjob.batch/my-job   */1 * * * *   False     0        20s             95s   my-job       busybox   <none>

NAME                        COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES    SELECTOR
job.batch/my-job-27757506   1/1           7s         80s   my-job       busybox   controller-uid=169019e5-3d67-486b-b1d9-587bfce6b76a
job.batch/my-job-27757507   1/1           8s         20s   my-job       busybox   controller-uid=2f8d3dbd-e292-4c53-a91a-c3542415fffd

NAME                        READY   STATUS      RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
pod/my-job-27757506-22tf8   0/1     Completed   0          80s   10.109.131.28   k8s-node2   <none>           <none>
pod/my-job-27757507-n4rvc   0/1     Completed   0          20s   10.109.131.5    k8s-node2   <none>           <none>

Node management

taint

: repels pods that do not tolerate it; can be used for node management and also for resource management.
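A taint is stored on the Node object as a key/value/effect triple. A sketch of the relevant fragment (values mirror the lab below; the three possible effects are noted as comments):

```yaml
# Fragment of a Node object after tainting (illustrative)
spec:
  taints:
  - key: disktype          # arbitrary key
    value: ssd             # optional value
    effect: NoSchedule     # one of: NoSchedule | PreferNoSchedule | NoExecute
```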

## Check taints
kevin@k8s-master:~/LABs$ kubectl describe no | grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>

## Set a taint on node1
kevin@k8s-master:~/LABs$ kubectl taint node k8s-node1 disktype=ssd:NoSchedule
node/k8s-node1 tainted

## Verify
kevin@k8s-master:~/LABs$ kubectl describe no | grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             disktype=ssd:NoSchedule
Taints:             <none>

## Write the YAML file
kevin@k8s-master:~/LABs/mynode$ vi mynode3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node1
  containers:
  - image: dbgurum/mynode:1.0
    name: mynode-container
    ports:
    - containerPort: 8000

## apply
kevin@k8s-master:~/LABs/mynode$ kubectl apply -f mynode3.yaml

## The pod is Pending
kevin@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint-pod   0/1     Pending   0          10s   <none>   <none>   <none>           <none>

## Remove the NoSchedule taint
kevin@k8s-master:~/LABs/mynode$ kubectl taint node k8s-node1 disktype=ssd:NoSchedule-

## The pod is scheduled immediately
kevin@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
taint-pod   0/1     ContainerCreating   0          43s   <none>   k8s-node1   <none>           <none>
  • Tolerating a taint
## Set the taint again
kevin@k8s-master:~/LABs/mynode$ kubectl taint node k8s-node1 disktype=ssd:NoSchedule

## Set a label
kevin@k8s-master:~/LABs/mynode$ kubectl label no k8s-node1 disktype=ssd

## Write the YAML file
kevin@k8s-master:~/LABs/mynode$ vi mynode3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - image: dbgurum/mynode:1.0
    name: mynode-container
    ports:
    - containerPort: 8000
  tolerations:
  - key: disktype
    operator: Equal
    value: ssd
    effect: NoSchedule

## apply
kevin@k8s-master:~/LABs/mynode$ kubectl apply -f mynode3.yaml

## Verify
kevin@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
taint-pod   1/1     Running   0          5s    10.111.156.84   k8s-node1   <none>           <none>

## Clean up
kevin@k8s-master:~/LABs/mynode$ kubectl delete -f mynode3.yaml
kevin@k8s-master:~/LABs/mynode$ kubectl taint node k8s-node1 disktype=ssd:NoSchedule-
kevin@k8s-master:~/LABs/mynode$ kubectl label no k8s-node1 disktype-

cordon vs drain

  • cordon
    : leaves currently running pods in place while blocking new pods from being scheduled on the node.

  • drain
    : evicts every pod running on the node and empties it.

=> Both cordon and drain are released with uncordon.

## node1 drain
kevin@k8s-master:~/LABs$ kubectl drain k8s-node1 --delete-emptydir-data --ignore-daemonsets --force

## Verify
kevin@k8s-master:~/LABs$ kubectl get no
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready                      control-plane,master   11d   v1.24.5
k8s-node1    Ready,SchedulingDisabled   worker                 11d   v1.24.5
k8s-node2    Ready                      worker                 11d   v1.24.5

## apply
kevin@k8s-master:~/LABs$ kubectl apply -f mynode/mynode3.yaml

## Verify - the pod stays Pending and is not assigned
kevin@k8s-master:~/LABs$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint-pod   0/1     Pending   0          10s   <none>   <none>   <none>           <none>

## Release the drain
kevin@k8s-master:~/LABs$ kubectl uncordon k8s-node1

## Verify
kevin@k8s-master:~/LABs$ kubectl get no
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11d   v1.24.5
k8s-node1    Ready    worker                 11d   v1.24.5
k8s-node2    Ready    worker                 11d   v1.24.5

## Attach a label
kevin@k8s-master:~/LABs/mynode$ kubectl label no k8s-node1 disktype=ssd

## apply
kevin@k8s-master:~/LABs$ kubectl apply -f mynode/mynode3.yaml

## Set cordon
kevin@k8s-master:~/LABs$ kubectl cordon k8s-node1

## Write the YAML file
kevin@k8s-master:~/LABs/mynode$ vi mynode-cordon.yaml
apiVersion: v1
kind: Pod
metadata:
  name: cordon-pod
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - image: dbgurum/mynode:1.0
    name: mynode-container
    ports:
    - containerPort: 8000

## apply
kevin@k8s-master:~/LABs/mynode$ kubectl apply -f mynode-cordon.yaml

## The pod is not scheduled
kevin@k8s-master:~/LABs/mynode$ kubectl get po
NAME         READY   STATUS    RESTARTS   AGE
cordon-pod   0/1     Pending   0          2s
taint-pod    1/1     Running   0          36s

nodeName -> direct scheduling via the kubelet, bypassing the scheduler.
With nodeName, the pod is assigned even if the node is cordoned.
nodeSelector -> instructs the scheduler to schedule manually onto matching nodes.
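For contrast with the nodeSelector manifests above, a minimal nodeName pod (hypothetical pod name; since the kubelet runs it directly, it lands on k8s-node1 even while that node is cordoned):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodename-pod           # hypothetical pod name
spec:
  nodeName: k8s-node1          # bypasses the scheduler; kubelet on k8s-node1 starts it
  containers:
  - image: dbgurum/mynode:1.0
    name: mynode-container
    ports:
    - containerPort: 8000
```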

NotReady

kevin@k8s-master:~/LABs/mynode$ kubectl get no
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   Ready      control-plane,master   11d   v1.24.5
k8s-node1    NotReady   worker                 11d   v1.24.5
k8s-node2    Ready      worker                 11d   v1.24.5

=> On that node, run sudo systemctl start kubelet.service

upgrade

Here: upgrade from 1.24.5 -> 1.24.6.

Principle: zero downtime.

  • kubeadm -> supports bootstrap, init, join, and upgrade operations
    "api-server" (the core of the engine), scheduler, controller-manager, cloud-controller-manager
    => must be upgraded first. etcd can be excluded from the upgrade.
    A skew of +-1 minor version is allowed.

  • kubelet, kubectl
    A skew of +-2 minor versions is allowed.

Steps

  1. drain the node
  2. control-plane -> kubeadm upgrade
  3. control-plane -> kubelet, kubectl upgrade
  4. worker node -> kubeadm upgrade
  5. worker node -> kubelet, kubectl upgrade
  6. kubeadm reset
  7. systemctl restart kubelet

kubeadm upgrade steps

  1. apt update
  2. version check -> apt-cache policy kubeadm | grep 1.24
  3. drain the control-plane
  4. check the plan with kubeadm upgrade plan
  5. kubeadm upgrade apply [version]
    + apt -y install kubectl=[version] kubelet=[version]
  6. uncordon
  • control-plane upgrade
## master drain
kevin@k8s-master:~/LABs/mynode$ kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force

## Switch to root
kevin@k8s-master:~/LABs/mynode$ sudo -i

## update
root@k8s-master:~# apt-get update

## version check
root@k8s-master:~# apt-cache policy kubeadm | grep 1.24
  Installed: 1.24.5-00
     1.24.6-00 500
 *** 1.24.5-00 500
 ...

## kubeadm upgrade plan check
root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.6
[upgrade/versions] kubeadm version: v1.24.5
I1011 11:16:42.169791   81662 version.go:255] remote version is much newer: v1.25.2; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.6
[upgrade/versions] Latest version in the v1.24 series: v1.24.6

## kubeadm upgrade apply
root@k8s-master:~# kubeadm upgrade apply 1.24.6 --etcd-upgrade=false --force
...
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.6". Enjoy!

## kubelet, kubectl upgrade
root@k8s-master:~# apt-get -y install kubelet=1.24.6-00 kubectl=1.24.6-00

## Verify
root@k8s-master:~# systemctl daemon-reload
root@k8s-master:~# systemctl restart kubelet.service
root@k8s-master:~# systemctl status kubelet.service
     Active: active (running) since Tue 2022-10-11 11:21:55 KST; 4s ago
...

## Exit root
root@k8s-master:~# exit

## Confirm the upgraded version
kevin@k8s-master:~/LABs/mynode$ kubectl get no
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready,SchedulingDisabled   control-plane,master   11d   v1.24.6
k8s-node1    Ready                      worker                 11d   v1.24.5
k8s-node2    Ready                      worker                 11d   v1.24.5

## master uncordon
kevin@k8s-master:~/LABs/mynode$ kubectl uncordon k8s-master

## Install the new kubeadm version
kevin@k8s-master:~/LABs/mynode$ apt -y install kubeadm=1.24.6-00

## Final check
kevin@k8s-master:~/LABs/mynode$ kubectl get no
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11d   v1.24.6
k8s-node1    Ready    worker                 11d   v1.24.5
k8s-node2    Ready    worker                 11d   v1.24.5

## Hold the packages so they are not upgraded unintentionally
kevin@k8s-master:~/LABs/mynode$ sudo apt-mark hold kubeadm kubectl kubelet
  • worker upgrade
## node1 drain
kevin@k8s-master:~/LABs/mynode$ kubectl drain k8s-node1 --delete-emptydir-data --ignore-daemonsets --force

## Switch to root
kevin@k8s-node1:/data_dir$ sudo -i

## update
root@k8s-node1:~# apt-get update

## version check
root@k8s-node1:~# apt-cache policy kubeadm | grep 1.24
  Installed: 1.24.5-00
     1.24.6-00 500
 *** 1.24.5-00 500
...

## Upgrade the node (kubeadm upgrade node is normally run on the worker being upgraded)
kevin@k8s-master:~/LABs/mynode$ sudo kubeadm upgrade node --etcd-upgrade=false

## kubelet, kubectl upgrade
root@k8s-node1:~# apt-get -y install kubelet=1.24.6-00 kubectl=1.24.6-00

## Restart kubelet
root@k8s-node1:~# systemctl daemon-reload
root@k8s-node1:~# systemctl restart kubelet.service
root@k8s-node1:~# systemctl status kubelet.service

## Release the drain
kevin@k8s-master:~/LABs/mynode$ kubectl uncordon k8s-node1

## Final check
kevin@k8s-master:~/LABs/mynode$ kubectl get no
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11d   v1.24.6
k8s-node1    Ready    worker                 11d   v1.24.6
k8s-node2    Ready    worker                 11d   v1.24.5
