10/11 (Tue) Kubernetes 7

Yuri JI · October 11, 2022

Today's agenda

🐳 DaemonSet

  • A controller that deploys one pod on every node in the cluster

    • Its pods can be checked with kubectl get po -A (see below)
    • node-exporter is actually a DaemonSet
  • What is it used for?

    • Mainly for log collectors and monitoring agents
    • Prometheus / Grafana = metric collection
      • ELK -> Elasticsearch (search-engine DB) + Logstash + Kibana -> for collecting logs and streaming data
        -> why? to avoid data piling up and to keep only the data you need
    • Shinhan Bank -> anomalous-transaction detection -> voice phishing
    • Lotte.com -> regional analysis -> by region / by sales
      • streaming data: real-time data
  • These days EFK is used rather than ELK -> why? Logstash's memory usage is excessive; Fluentd is lightweight.

👻 Elasticsearch + Fluentd 
yji@k8s-master:~/LABs$ mkdir daemonset && cd $_

yji@k8s-master:~/LABs/daemonset$ vim daemonset-ef.yaml
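
The manifest itself isn't pasted in these notes. A minimal sketch of what daemonset-ef.yaml probably looks like, based on the upstream fluentd-elasticsearch DaemonSet example in the Kubernetes docs (the image tag and resource values come from that example and may differ from the file used in class):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration lets the pod run on the control-plane node as well
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

Note there is no replicas field: a DaemonSet runs exactly one pod per eligible node, which is why three pods show up below (master + node1 + node2).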

yji@k8s-master:~/LABs/daemonset$ kubectl apply -f daemonset-ef.yaml
daemonset.apps/fluentd-elasticsearch created

yji@k8s-master:~/LABs/daemonset$ kubectl get po -A | grep fluentd
kube-system            fluentd-elasticsearch-2zf98                 1/1     Running     0               70s
kube-system            fluentd-elasticsearch-clcdq                 1/1     Running     0               70s
kube-system            fluentd-elasticsearch-x5r6c                 1/1     Running     0               70s


yji@k8s-master:~/LABs/daemonset$ kubectl get po -o wide -n kube-system | grep fluentd
fluentd-elasticsearch-2zf98                1/1     Running   0              2m19s   10.109.131.45    k8s-node2    <none>           <none>
fluentd-elasticsearch-clcdq                1/1     Running   0              2m19s   10.108.82.193    k8s-master   <none>           <none>
fluentd-elasticsearch-x5r6c                1/1     Running   0              2m19s   10.111.156.99    k8s-node1    <none>   

🐳 CronJob

  • minute hour day-of-month month day-of-week (a few examples below)
  • periodic report generation, backups
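
A few examples of the five schedule fields, for reference:

"*/1 * * * *"  -> every minute
"0 2 * * *"    -> every day at 02:00 (e.g. a nightly backup)
"0 9 * * 1"    -> every Monday at 09:00 (e.g. a weekly report)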

Q1

Create a CronJob with the busybox image that prints 'date' and the message 'hello from kubernetes cluster' every minute.
kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *"
args: [/bin/sh, -c, "date && echo 'hello from kubernetes cluster'"]

=> kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml -- /bin/sh -c "date && echo 'hello from k8s cluster'"

👻 Check how the command and args end up in the manifest. -> the date gets printed every minute!
yji@k8s-master:~/LABs/daemonset$ kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" --dry-run=client -o yaml -- /bin/sh -c "date && echo 'hello from k8s cluster'" > my-job.yaml
yji@k8s-master:~/LABs/daemonset$ vim my-job.yaml
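
For reference, the generated my-job.yaml looks roughly like this (trimmed; the exact field order and defaults depend on the kubectl version). Note that here the arguments after -- become the container's command:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-job
spec:
  schedule: '*/1 * * * *'
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: busybox
            command:
            - /bin/sh
            - -c
            - date && echo 'hello from k8s cluster'
          restartPolicy: OnFailure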

yji@k8s-master:~/LABs/daemonset$ kubectl apply -f my-job.yaml
cronjob.batch/my-job created

yji@k8s-master:~/LABs/daemonset$ kubectl get cj,job,po -o wide
NAME                   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES    SELECTOR
cronjob.batch/my-job   */1 * * * *   False     0        <none>          18s   my-job       busybox   <none>
yji@k8s-master:~/LABs/daemonset$ kubectl get cj,job,po -o wide
NAME                   SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES    SELECTOR
cronjob.batch/my-job   */1 * * * *   False     0        17s             54s   my-job       busybox   <none>

NAME                        COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES    SELECTOR
job.batch/my-job-27757506   1/1           6s         17s   my-job       busybox   controller-uid=5c9ba883-e313-4cf4-a7f0-f483a185c713

NAME                        READY   STATUS      RESTARTS   AGE   IP              NODE        NOMINATED NODE   READINESS GATES
 ⭐pod/my-job-27757506-xbrfl   0/1     Completed   0          17s   10.111.156.90   k8s-node1   <none>           <none>

👻 Check the logs 
yji@k8s-master:~/LABs/daemonset$ kubectl logs  ⭐pod/my-job-27757506-xbrfl
Tue Oct 11 01:06:03 UTC 2022
hello from k8s cluster

👻 Run get po again a minute later and a new pod has appeared
NAME                        READY   STATUS      RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod/my-job-27757506-xbrfl   0/1     Completed   0          93s   10.111.156.90    k8s-node1   <none>           <none>
pod/my-job-27757507-v585s   0/1     Completed   0          33s   10.111.156.100   k8s-node1   <none>           <none>

yji@k8s-master:~/LABs/daemonset$ kubectl logs pod/my-job-27757506-xbrfl
Tue Oct 11 01:06:03 UTC 2022
hello from k8s cluster

👻 The newly created pod
yji@k8s-master:~/LABs/daemonset$ kubectl logs pod/my-job-27757507-v585s
Tue Oct 11 01:07:03 UTC 2022
hello from k8s cluster


👻 Clean up! 
kubectl delete -f my-job.yaml

yji@k8s-master:~/LABs/daemonset$ kubectl create cronjob -h
Create a cron job with the specified name.

Aliases:
cronjob, cj

Examples:
  # Create a cron job
  kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *"

  # Create a cron job with a command
  kubectl create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date

🐳 Node management: the lab at the very bottom of this section needs to be redone ㅠ I wrote it up without applying the taint

  • taint, cordon, drain, uncordon (release)

  • Scaling out (adding nodes)

  • Upgrading Kubernetes from 1.19 to 1.24.5

    • Possible in one go? NO
      • Minor versions can only be stepped up a couple at a time (version-skew limit).
      • Between 1.19 and the 1.24.5 target (minor version 1.24) there are several intermediate minor versions.
    • You have to plan carefully before an upgrade.
  • taint

    • When do you use it?
      • Node management, resource management (this node is short on resources -> move the pods to another node); syntax sketch after this list
  • drain

    • Evicts every pod running on the node and empties it
      vs
  • cordon

    • Pods that are already running stay as they are; new pods are blocked from scheduling
  • uncordon -> releases a drain or cordon
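
The general taint syntax used in the lab below (effects are NoSchedule, PreferNoSchedule, NoExecute; NoExecute also evicts pods that are already running and don't tolerate the taint):

# apply a taint
kubectl taint node <node-name> key=value:effect     # e.g. disktype=ssd:NoSchedule
# remove it again: same key and effect with a trailing '-'
kubectl taint node <node-name> key=value:effect-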

/*
 * taint lab 
 */
👻 The control plane is the control layer, so it doesn't run application workloads ~
yji@k8s-master:~/LABs/daemonset$ kubectl get no
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    👻control-plane,master   11d   v1.24.5
k8s-node1    Ready    worker                 11d   v1.24.5
k8s-node2    Ready    worker                 11d   v1.24.5


👻 Check the taints -> pod scheduling is blocked on the master node
yji@k8s-master:~/LABs/daemonset$ kubectl describe no | grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>

---


👻 Apply a taint to node1
yji@k8s-master:~/LABs/daemonset$ kubectl taint node k8s-node1 disktype=ssd:NoSchedule
node/k8s-node1 tainted

yji@k8s-master:~/LABs/daemonset$ kubectl describe no | grep -i taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             disktype=ssd:NoSchedule
Taints:             <none>

👻 Create a Pod aimed at node 1
vim taint-node.yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-node1
  containers:
  - name: nodejs-container
    image: ur2e/mynode:1.0
    ports:
    - containerPort: 8000


yji@k8s-master:~/LABs/mynode$ kubectl apply -f taint-node.yaml
pod/taint-pod created

⭐ You can see it is stuck in the Pending state 
yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint-pod   0/1     ⭐Pending⭐   0          5s    <none>   <none>   <none>           <none>

👻 Remove the taint from k8s-node1 
yji@k8s-master:~/LABs/mynode$ kubectl taint node k8s-node1 disktype=ssd:NoSchedule-
node/k8s-node1 untainted

⭐ After removing the taint, listing the pods shows that the pod was immediately created on node1. 
yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
taint-pod   1/1     Running   0          80s   10.111.156.112   k8s-node1   <none>           <none>

---
📔 Let's look at how to tolerate a taint -> tolerations

👻 Apply a taint to node1
kubectl taint node k8s-node1 disktype=ssd:NoSchedule

👻 Add a label to node 1 for the nodeSelector 
kubectl label no k8s-node1 disktype=ssd

👻 vim taint-node-tolerations.yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-pod
spec:
  ⭐ nodeSelector:
    disktype: ssd
  containers:
  - name: nodejs-container
    image: ur2e/mynode:1.0
    ports:
    - containerPort: 8000
  ⭐ tolerations:
  - key: disktype
    operator: Equal
    value: ssd
    effect: NoSchedule

✍✍ The taint disktype=ssd:NoSchedule is written out field by field under tolerations! 

yji@k8s-master:~/LABs/mynode$ kubectl apply -f yurinode.yaml
pod/taint-pod created
yji@k8s-master:~/LABs/mynode$ kubectl get po
NAME        READY   STATUS    RESTARTS   AGE
taint-pod   0/1     Pending   0          3s
yji@k8s-master:~/LABs/mynode$ kubectl get po
NAME        READY   STATUS    RESTARTS   AGE
taint-pod   0/1     Pending   0          5s
yji@k8s-master:~/LABs/mynode$ kubectl get po
NAME        READY   STATUS    RESTARTS   AGE
taint-pod   0/1     Pending   0          6s
yji@k8s-master:~/LABs/mynode$ kubectl label no k8s-node1 disktype=ssd
node/k8s-node1 labeled
yji@k8s-master:~/LABs/mynode$ kubectl get po
NAME        READY   STATUS              RESTARTS   AGE
taint-pod   0/1     ContainerCreating   0          11s
yji@k8s-master:~/LABs/mynode$ kubectl get po
NAME        READY   STATUS    RESTARTS   AGE
taint-pod   1/1     Running   0          15s

👻 Remove the taint and the label! 


yji@k8s-master:~/LABs/mynode$ kubectl label no k8s-node1 disktype-
node/k8s-node1 unlabeled
/*
 * drain lab (flag notes below)
   -> no more pods get scheduled onto the node
   -> the applications currently running there also get evicted
   -> but it doesn't wipe out the system itself
 */
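
What each flag of the drain command below does:

# --ignore-daemonsets     : DaemonSet-managed pods can't be evicted, proceed anyway
# --delete-emptydir-data  : allow evicting pods that use emptyDir volumes (that local data is lost)
# --force                 : also evict bare pods that aren't managed by a controller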

👻 Drain node1
yji@k8s-master:~/LABs/mynode$ kubectl drain k8s-node1 --delete-emptydir-data --ignore-daemonsets --force

👻 Check the drain: node1 now shows SchedulingDisabled
yji@k8s-master:~/LABs/mynode$ kubectl get no
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready                      control-plane,master   11d   v1.24.5
k8s-node1    Ready,SchedulingDisabled   worker                 11d   v1.24.5
k8s-node2    Ready                      worker                 11d   v1.24.5



yji@k8s-master:~/LABs/mynode$ kubectl apply -f yurinode.yaml
pod/taint-pod created

yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint-pod   0/1     👻Pending   0          6s    <none>   <none>   <none>           <none>

yji@k8s-master:~/LABs/mynode$ kubectl uncordon k8s-node1
node/k8s-node1 uncordoned

yji@k8s-master:~/LABs/mynode$ kubectl get no
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11d   v1.24.5
k8s-node1    Ready    worker                 11d   v1.24.5
k8s-node2    Ready    worker                 11d   v1.24.5

yji@k8s-master:/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
taint-pod   0/1     👻Pending   0          28s   <none>   <none>   <none>           <none>

yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
taint-pod   0/1     Pending   0          2m15s   <none>   <none>   <none>           <none>

---
If the label is attached again, does it run again?
yji@k8s-master:~/LABs/mynode$ kubectl label no k8s-node disktype=ssd
Error from server (NotFound): nodes "k8s-node" not found

yji@k8s-master:~/LABs/mynode$ kubectl label no k8s-node1 disktype=ssd
node/k8s-node1 labeled

yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
taint-pod   1/1     Running   0          2m46s   10.111.156.114   k8s-node1   <none>           <none>

yji@k8s-master:~/LABs/mynode$ kubectl cordon k8s-node1
node/k8s-node1 cordoned

yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE    IP               NODE        NOMINATED NODE   READINESS GATES
taint-pod   1/1     Running   0          3m6s   10.111.156.114   k8s-node1   <none>           <none>


yji@k8s-master:~/LABs/mynode$ kubectl apply -f mynode-cordon.yaml // same yaml as above, only the pod name is different 
pod/cordon-pod created
yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
cordon-pod   0/1     Pending   0          29s   <none>   <none>   <none>           <none>
yji@k8s-master:~/LABs/mynode$ kubectl uncordon k8s-node1
node/k8s-node1 uncordoned
yji@k8s-master:~/LABs/mynode$ kubectl get po -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
cordon-pod   1/1     Running   0          49s   10.111.156.111   k8s-node1   <none>           <none>


kubectl cordon k8s-node1

🤔🤔🤔 nodeName vs nodeSelector

  • nodeSelector manually tells the scheduler on the master where to place the pod
  • nodeName is direct scheduling through the kubelet
    🤔 Even when a cordon is in place, nodeName still works. Why?
    🤔 Because cordon is ... something only the scheduler honors?
    The node's state is registered with the scheduler (more precisely, in etcd) -> see the nodeName sketch below
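
A sketch (hypothetical pod name) of direct scheduling with nodeName: the kubelet on that node starts the pod itself, so the kube-scheduler is skipped, and with it the cordon, which only marks the node unschedulable for the scheduler.

apiVersion: v1
kind: Pod
metadata:
  name: nodename-pod            # hypothetical name, for illustration
spec:
  nodeName: k8s-node1           # bound directly to this node; the scheduler is bypassed
  containers:
  - name: nodejs-container
    image: ur2e/mynode:1.0
    ports:
    - containerPort: 8000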

📔 Changing a node's status: Ready <-> NotReady (CKA)

Go to the terminal of the node you want and run
sudo systemctl stop kubelet.service

To flip it back to Ready:
sudo systemctl start kubelet.service
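
From the master you can watch the status flip (by default it takes roughly 40s, the node-monitor grace period, before the node shows NotReady):

kubectl get no -w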

📔 Upgrade procedure

The goal is to update the engine

  • 1.24.5 (current) -> 1.24.6

    • Principle: no downtime

    • kubeadm -> supports the bootstrap, init, join, and upgrade operations

    • kubelet, kubectl: can only be moved up to ±2 minor versions in one step

0) drain, cordon
1) control-plane: kubeadm upgrade
2) control-plane: kubelet, kubectl upgrade
3) worker-node: kubeadm upgrade
4) worker-node: kubelet, kubectl upgrade
5) kubeadm reset
6) systemctl restart kubelet


  • kubeadm upgrade (package-upgrade sketch after this list)
    1) apt update
    2) version check -> apt-cache policy kubeadm | grep 1.24
    3) control-plane -> drain
    4) kubeadm upgrade plan check
    5) kubeadm upgrade apply with the version specified
       • if no version is specified, :latest is used
    6) uncordon (release the drain)
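
Note: the upstream kubeadm upgrade procedure also upgrades the kubeadm package itself before running kubeadm upgrade apply; a sketch for this cluster, with the version pinned as an example:

sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm=1.24.6-00
sudo apt-mark hold kubeadm
kubeadm version      # should now report v1.24.6
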
👻 As a rule, work as root
👻 Work carefully so you don't blow away the server

👻 0) drain the master node
yji@k8s-master:~$ kubectl drain k8s-master --delete-emptydir-data --ignore-daemonsets --force
node/k8s-master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-qwvqz, kube-system/kube-proxy-4hnhk
evicting pod kube-system/calico-kube-controllers-6799f5f4b4-c44lc
pod/calico-kube-controllers-6799f5f4b4-c44lc evicted
node/k8s-master drained

👻 1) kubeadm upgrade
yji@k8s-master:~$ sudo -i

👻 1-1)
root@k8s-master:~# apt-get update

👻 1-2) version check
root@k8s-master:~# apt-cache policy kubeadm | grep 1.24
  Installed: 1.24.5-00
     1.24.6-00 500 🐣 1.24.6, let's go here ~
 *** 1.24.5-00 500
     1.24.4-00 500
     1.24.3-00 500
     1.24.2-00 500
     1.24.1-00 500
     1.24.0-00 500

👻 1-3) Already done (the control plane was drained in step 0).
👻 1-4) kubeadm upgrade plan check
root@k8s-master:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.6
[upgrade/versions] kubeadm version: v1.24.5
I1011 11:16:44.365696  421229 version.go:255] remote version is much newer: v1.25.2; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.6 🐣 so that's where we're headed ~ 
[upgrade/versions] Latest version in the v1.24 series: v1.24.6

👻 1-5) kubeadm upgrade apply with the version specified
⭐ etcd is excluded from the upgrade.
⭐ why? the DB hasn't been backed up 
kubeadm upgrade apply 1.24.6 --etcd-upgrade=false --force
...a bunch of output scrolls by, and it succeeded once SUCCESS appears
[upgrade/successful] 🥳 SUCCESS! Your cluster was upgraded to "v1.24.6". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

👻 2) kubelet, kubectl upgrade
root@k8s-master:~# apt-get -y install kubelet=1.24.6-00 kubectl=1.24.6-00

👻 2-1) Afterwards, restart kubelet.service 
root@k8s-master:~# systemctl daemon-reload

root@k8s-master:~# systemctl restart kubelet.service

root@k8s-master:~# systemctl status  kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since Tue 2022-10-11 11:21:40 KST; 21s ago

👻 2-2) Check the master node's version
root@k8s-master:~# exit
logout

yji@k8s-master:~$ kubectl get no
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready,SchedulingDisabled   control-plane,master   11d   v1.24.6
k8s-node1    Ready                      worker                 11d   v1.24.5
k8s-node2    Ready                      worker                 11d   v1.24.5


👻 2-3) Release the cordon on the master node! 
yji@k8s-master:~$ kubectl uncordon k8s-master
node/k8s-master uncordoned
yji@k8s-master:~$ kubectl get no
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11d   v1.24.6
k8s-node1    Ready    worker                 11d   v1.24.5
k8s-node2    Ready    worker                 11d   v1.24.5


👻 Bring worker-node1 and worker-node2 up to the same version too~ 

# From the master node, drain the worker1 node
kubectl drain k8s-node1 --delete-emptydir-data --ignore-daemonsets --force

yji@k8s-master:~$ kubectl get no
NAME         STATUS                     ROLES                  AGE   VERSION
k8s-master   Ready                      control-plane,master   11d   v1.24.6
k8s-node1    Ready,SchedulingDisabled   worker                 11d   v1.24.5
k8s-node2    Ready                      worker                 11d   v1.24.5

kubelet --version
kubectl version
kubeadm version -> kubeadm isn't part of the engine... so it has to be installed separately with apt -y install kubeadm=1.24.6-00

-- 👻 Worker node upgrade
👻 On the workers we're not swapping the engine itself...

sudo apt update
sudo apt -y install kubeadm=1.24.6-00
kubeadm version
sudo apt-mark hold kubeadm
sudo apt -y install kubelet=1.24.6-00 kubectl=1.24.6-00
sudo apt-mark hold kubectl kubelet
kubectl version
kubelet --version
sudo systemctl daemon-reload
sudo systemctl restart kubelet


root@k8s-node1:~# sudo apt-mark hold kubeadm
kubeadm was already set on hold.
root@k8s-node1:~# sudo apt -y upgrade kubelet=1.24.6-00 kubectl=1.24.6-00
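
Once a worker's packages are upgraded and kubelet has been restarted, release its drain from the master (step 6 of the plan above); a sketch:

kubectl uncordon k8s-node1
kubectl get no      # k8s-node1 should be Ready again and report v1.24.6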






Scratchpad

⭐ 📘 📗 💭 🤔 📕 📔 🐳 ✍ 🥳 ⭐ 🐣 👻

Coming up

  • Through next Thursday: classes fully wrap up!
  • Next Friday: project presentations
  • jenkins, argoCD
  • We'll do this on AWS -> the EKS service, with jenkins + argoCD attached..
    • The Amazon EKS managed service makes it easy to use k8s without building the control plane yourself
  • or ECS