completed.yaml : restartPolicy: Always (the default); sleeps 5 seconds, then exits with code 0
cat <<EOF > completed.yaml
apiVersion: v1
kind: Pod
metadata:
  name: completed-pod
spec:
  containers:
    - name: completed-pod
      image: busybox
      command: ["sh"]
      args: ["-c", "sleep 5 && exit 0"]
EOF
[root@master aiden (⎈ |kube:default)]# kubectl apply -f completed.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS   AGE
completed-pod   0/1     ContainerCreating   0          0s
completed-pod   0/1     ContainerCreating   0          1s
completed-pod   1/1     Running             0          4s
completed-pod   0/1     Completed           0          9s
completed-pod   1/1     Running             1          12s
completed-pod   0/1     Completed           1          17s
completed-pod   0/1     CrashLoopBackOff    1          28s
completed-pod   1/1     Running             2          31s
completed-pod   0/1     Completed           2          36s
completed-pod   0/1     CrashLoopBackOff    2          48s
[root@master aiden (⎈ |kube:default)]# kubectl get pod completed-pod -o yaml | grep restartPolicy
  restartPolicy: Always
⇒ Each time the container fails, the restart back-off grows exponentially (10s, 20s, 40s, ... capped at 5 minutes), so the Pod spends progressively longer in CrashLoopBackOff; the back-off resets after the container runs successfully for 10 minutes.
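The growing delay shows up as Back-off events on the Pod; one way to watch them (command only; output omitted):
kubectl describe pod completed-pod | grep Events -A 12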
onfailure.yaml : restartPolicy: OnFailure; sleeps 5 seconds, then exits with exit code 1
cat <<EOF > onfailure.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: completed-pod
spec:
  restartPolicy: OnFailure
  containers:
    - name: completed-pod
      image: busybox
      command: ["sh"]
      args: ["-c", "sleep 5 && exit 1"]
EOF
Create the Pod and verify
[root@master aiden (⎈ |kube:default)]# kubectl apply -f onfailure.yaml && kubectl get pod -w
pod/completed-pod created
NAME            READY   STATUS              RESTARTS   AGE
completed-pod   0/1     ContainerCreating   0          0s
completed-pod   0/1     ContainerCreating   0          1s
completed-pod   1/1     Running             0          3s
completed-pod   0/1     Error               0          9s
completed-pod   1/1     Running             1          12s
completed-pod   0/1     Error               1          17s
completed-pod   0/1     CrashLoopBackOff    1          27s
completed-pod   1/1     Running             2          31s
completed-pod   0/1     Error               2          35s
completed-pod   0/1     CrashLoopBackOff    2          46s
liveness Probe : checks whether the application inside the container is alive (liveness); on failure the container is killed and restarted according to restartPolicy.
readiness Probe : checks whether the application inside the container is ready (readiness) to handle user requests.
startup Probe : a container that is slow to start can be killed or marked failed by the liveness & readiness probes during startup; a startup probe suspends the other probes until it succeeds.
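A minimal sketch of a startup probe (hypothetical startupprobe.yaml; the nginx image and the thresholds are assumptions): here the application gets up to failureThreshold x periodSeconds = 300 seconds to start before liveness checks begin.
cat << EOF > startupprobe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: startupprobe
spec:
  containers:
  - name: startupprobe
    image: nginx
    startupProbe:            # runs first; liveness/readiness stay disabled until it succeeds
      httpGet:
        port: 80
        path: /
      failureThreshold: 30   # allows up to 30 x 10s = 300s for startup
      periodSeconds: 10
    livenessProbe:
      httpGet:
        port: 80
        path: /
EOF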
httpGet : sends an HTTP request to check state; a response status code outside the 2xx/3xx range counts as a failed check.
tcpSocket : checks state by testing whether a TCP connection can be established.
exec : runs a command inside the container to check state; a non-zero exit code counts as a failed check.
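httpGet is demonstrated below; for reference, a minimal sketch of the other two handlers (hypothetical probe-handlers.yaml; the redis image and the redis-cli ping command are assumptions):
cat << EOF > probe-handlers.yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-handlers
spec:
  containers:
  - name: probe-handlers
    image: redis
    livenessProbe:
      tcpSocket:        # passes if a TCP connection to 6379 succeeds
        port: 6379
    readinessProbe:
      exec:             # passes if the command exits with code 0
        command: ["redis-cli", "ping"]
EOF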
livenessprobe.yaml
cat << EOF > livenessprobe.yaml
apiVersion: v1
kind: Pod
metadata:
  name: livenessprobe
spec:
  containers:
  - name: livenessprobe
    image: nginx
    livenessProbe: 
      httpGet:     
        port: 80
        path: /index.html
EOF
Create the Pod and watch its events
[root@master aiden (⎈ |kube:default)]# kubectl apply -f livenessprobe.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
pod/livenessprobe created
... omitted ...
0s          Normal    Scheduled   pod/livenessprobe   Successfully assigned default/livenessprobe to worker1
0s          Normal    Pulling     pod/livenessprobe   Pulling image "nginx"
0s          Normal    Pulled      pod/livenessprobe   Successfully pulled image "nginx" in 2.655459739s
0s          Normal    Created     pod/livenessprobe   Created container livenessprobe
0s          Normal    Started     pod/livenessprobe   Started container livenessprobe
[root@master aiden (⎈ |kube:default)]# kubectl describe pod livenessprobe | grep Liveness
    Liveness:       http-get http://:80/index.html delay=0s timeout=1s period=10s #success=1 #failure=3
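Each field in that summary maps to a probe setting; a sketch for overriding the defaults (the values here are arbitrary):
livenessProbe:
  httpGet:
    port: 80
    path: /index.html
  initialDelaySeconds: 5   # delay
  timeoutSeconds: 2        # timeout
  periodSeconds: 15        # period
  successThreshold: 1      # #success
  failureThreshold: 3      # #failure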
Delete index.html so the check fails, then watch the logs
[root@master aiden (⎈ |kube:default)]# kubectl exec livenessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs livenessprobe -f
... omitted ...
192.168.1.212 - - [27/Jun/2021:10:32:52 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:33:02 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:33:12 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
2021/06/27 10:33:22 [error] 31#31: *16 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.1.212, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.235.150:80"
192.168.1.212 - - [27/Jun/2021:10:33:22 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/27 10:33:32 [error] 31#31: *17 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.1.212, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.235.150:80"
192.168.1.212 - - [27/Jun/2021:10:33:32 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:33:42 +0000] "GET /index.html HTTP/1.1" 404 153 "-" "kube-probe/1.21" "-"
2021/06/27 10:33:42 [error] 31#31: *18 open() "/usr/share/nginx/html/index.html" failed (2: No such file or directory), client: 192.168.1.212, server: localhost, request: "GET /index.html HTTP/1.1", host: "172.16.235.150:80"
2021/06/27 10:33:42 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2021/06/27 10:33:42 [notice] 31#31: gracefully shutting down
2021/06/27 10:33:42 [notice] 31#31: exiting
2021/06/27 10:33:42 [notice] 31#31: exit
2021/06/27 10:33:42 [notice] 1#1: signal 17 (SIGCHLD) received from 31
2021/06/27 10:33:42 [notice] 1#1: worker process 31 exited with code 0
2021/06/27 10:33:42 [notice] 1#1: exit
Re-check the logs: the liveness probe failed failureThreshold (3) times, so the kubelet restarted the container
[root@master aiden (⎈ |kube:default)]# kubectl logs livenessprobe -f
... omitted ...
2021/06/27 10:33:45 [notice] 1#1: using the "epoll" event method
2021/06/27 10:33:45 [notice] 1#1: nginx/1.21.0
2021/06/27 10:33:45 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6) 
2021/06/27 10:33:45 [notice] 1#1: OS: Linux 5.4.0-74-generic
2021/06/27 10:33:45 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2021/06/27 10:33:45 [notice] 1#1: start worker processes
2021/06/27 10:33:45 [notice] 1#1: start worker process 31
192.168.1.212 - - [27/Jun/2021:10:33:52 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:34:02 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:10:34:12 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
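After the restart, the Pod's RESTARTS column should read 1 (command only; output omitted):
kubectl get pod livenessprobe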
readiness Probe : checks whether the application inside the container is ready (readiness) to handle user requests. Unlike a liveness failure, a readiness failure does not restart the container; the Pod is removed from the Service's endpoints instead.
readinessprobe-service.yaml
cat << EOF > readinessprobe-service.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readinessprobe
  labels:
    readinessprobe: first
spec:
  containers:
  - name: readinessprobe
    image: nginx       
    readinessProbe:
      httpGet:
        port: 80
        path: /
---
apiVersion: v1
kind: Service
metadata:
  name: readinessprobe-service
spec:
  ports:
    - name: nginx
      port: 80
      targetPort: 80
  selector:
    readinessprobe: first
  type: ClusterIP
EOF
Create the Pod and watch its events
[root@master aiden (⎈ |kube:default)]# kubectl apply -f readinessprobe-service.yaml && kubectl get events --sort-by=.metadata.creationTimestamp -w
... omitted ...
0s          Normal    Scheduled   pod/readinessprobe   Successfully assigned default/readinessprobe to worker2
0s          Normal    Pulling     pod/readinessprobe   Pulling image "nginx"
0s          Normal    Pulled      pod/readinessprobe   Successfully pulled image "nginx" in 2.593637347s
0s          Normal    Created     pod/readinessprobe   Created container readinessprobe
0s          Normal    Started     pod/readinessprobe   Started container readinessprobe
[root@master aiden (⎈ |kube:default)]# kubectl describe pod readinessprobe | grep Readiness
    Readiness:      http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3
[root@master aiden (⎈ |kube:default)]# kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
readinessprobe-service   ClusterIP   10.103.179.125   <none>        80/TCP    87s   readinessprobe=first
[root@master aiden (⎈ |kube:default)]# kubectl get endpoints readinessprobe-service
NAME                     ENDPOINTS          AGE
readinessprobe-service   172.16.189.68:80   113s
Test access through the Service
[root@master aiden (⎈ |kube:default)]# curl 10.103.179.125
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
... omitted ...
Delete index.html so the readiness check fails, then watch the logs
[root@master aiden (⎈ |kube:default)]# kubectl exec readinessprobe -- rm /usr/share/nginx/html/index.html && kubectl logs readinessprobe -f
... omitted ...
192.168.1.212 - - [27/Jun/2021:11:04:49 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:11:04:55 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:11:05:05 +0000] "GET / HTTP/1.1" 200 612 "-" "kube-probe/1.21" "-"
2021/06/27 11:05:15 [error] 30#30: *4 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
192.168.1.212 - - [27/Jun/2021:11:05:15 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.21" "-"
2021/06/27 11:05:25 [error] 30#30: *5 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
192.168.1.212 - - [27/Jun/2021:11:05:25 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.21" "-"
192.168.1.212 - - [27/Jun/2021:11:05:35 +0000] "GET / HTTP/1.1" 403 153 "-" "kube-probe/1.21" "-"
2021/06/27 11:05:35 [error] 30#30: *6 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
2021/06/27 11:05:45 [error] 30#30: *7 directory index of "/usr/share/nginx/html/" is forbidden, client: 192.168.1.212, server: localhost, request: "GET / HTTP/1.1", host: "172.16.235.151:80"
[root@master aiden (⎈ |kube:default)]# kubectl get pod
NAME             READY   STATUS    RESTARTS   AGE
readinessprobe   0/1     Running   0          2m28s
[root@master aiden (⎈ |kube:default)]# kubectl get service readinessprobe-service -o wide
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
readinessprobe-service   ClusterIP   10.103.179.125   <none>        80/TCP    25m   readinessprobe=first
⇒ Because the readiness probe fails, the Pod becomes NotReady (0/1) and is removed from the Service's endpoints; unlike a liveness failure, the container is not restarted (RESTARTS is still 0)
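To confirm, re-check the endpoints; while the probe is failing, the address list should be empty (command only; output omitted):
kubectl get endpoints readinessprobe-service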
init.yaml : initContainers run to completion, in order, before the app container starts; here each init container loops until its Service's DNS name resolves. The heredoc delimiter is quoted ('EOF') so that $(cat ...) is not expanded by the shell on the master but runs inside the container.
cat << 'EOF' > init.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
EOF
[root@master aiden (⎈ |kube:default)]# kubectl apply -f init.yaml && kubectl get pod -w
pod/myapp-pod created
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:0/2   0          0s
myapp-pod   0/1     Init:0/2   0          1s
myapp-pod   0/1     Init:0/2   0          7s
[root@master aiden (⎈ |kube:default)]# kubectl get pod -o wide
NAME        READY   STATUS     RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
myapp-pod   0/1     Init:0/2   0          64s   172.16.235.183   worker1   <none>           <none>
cat << EOF | kubectl apply -f - && watch -d "kubectl describe pod myapp-pod | grep Events -A 12"
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
EOF
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m26s  default-scheduler  Successfully assigned default/myapp-pod to worker1
  Normal  Pulling    2m25s  kubelet            Pulling image "busybox:1.28"
  Normal  Pulled     2m19s  kubelet            Successfully pulled image "busybox:1.28" in 5.457684006s
  Normal  Created    2m19s  kubelet            Created container init-myservice
  Normal  Started    2m19s  kubelet            Started container init-myservice
  Normal  Pulled     8s     kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    8s     kubelet            Created container init-mydb
  Normal  Started    8s     kubelet            Started container init-mydb
[root@master aiden (⎈ |kube:default)]# kubectl get pod
NAME        READY   STATUS     RESTARTS   AGE
myapp-pod   0/1     Init:1/2   0          2m38s
⇒ Once the myservice Service exists, its DNS name resolves, so init-myservice exits 0 and init-mydb starts running (Init:1/2)
cat << EOF | kubectl apply -f - && watch -d "kubectl describe pod myapp-pod | grep Events -A 12"
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
EOF
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m24s  default-scheduler  Successfully assigned default/myapp-pod to worker1
  Normal  Pulling    4m24s  kubelet            Pulling image "busybox:1.28"
  Normal  Pulled     4m18s  kubelet            Successfully pulled image "busybox:1.28" in 5.457684006s
  Normal  Created    4m18s  kubelet            Created container init-myservice
  Normal  Started    4m18s  kubelet            Started container init-myservice
  Normal  Pulled     2m7s   kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    2m7s   kubelet            Created container init-mydb
  Normal  Started    2m7s   kubelet            Started container init-mydb
  Normal  Pulled     32s    kubelet            Container image "busybox:1.28" already present on machine
  Normal  Created    32s    kubelet            Created container myapp-container
[root@master aiden (⎈ |kube:default)]# kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
myapp-pod   1/1     Running   0          4m35s
⇒ After the mydb Service is created, the second init container also completes and the app container starts
[root@master aiden (⎈ |kube:default)]# kubectl delete pod --all
pod "myapp-pod" deleted
ConfigMap : stores configuration as key-value pairs that can be injected into Pods as environment variables or mounted as files.
kubectl create configmap log-level --from-literal LOG_LEVEL=DEBUG
[root@master aiden (⎈ |kube:default)]# kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      11d
log-level          1      6s
[root@master aiden (⎈ |kube:default)]# kubectl describe configmaps log-level
Name:         log-level
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
LOG_LEVEL:
----
DEBUG
Events:  <none>
[root@master aiden (⎈ |kube:default)]# kubectl get configmaps log-level -o yaml
apiVersion: v1
data:
  LOG_LEVEL: DEBUG
kind: ConfigMap
metadata:
  creationTimestamp: "2021-06-27T14:05:23Z"
  name: log-level
  namespace: default
  resourceVersion: "1493907"
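For reference, a ConfigMap can also be created from a file, in which case the file name becomes the key (app.properties here is hypothetical):
kubectl create configmap app-config --from-file=app.properties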
cat << EOF > configmap-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: configmap-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      envFrom:
      - configMapRef:
          name: log-level
EOF
[root@master aiden (⎈ |kube:default)]# kubectl apply -f configmap-pod.yaml
pod/configmap-pod created
[root@master aiden (⎈ |kube:default)]# kubectl exec configmap-pod -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=configmap-pod
LOG_LEVEL=DEBUG
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
HOME=/root
⇒ LOG_LEVEL=DEBUG from the ConfigMap is present in the container's environment
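envFrom injects keys as environment variables; a ConfigMap can instead be mounted as files. A minimal sketch (hypothetical configmap-volume-pod.yaml), where each key of log-level becomes a file under /etc/config:
cat << EOF > configmap-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod
spec:
  containers:
    - name: configmap-volume-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      volumeMounts:
      - name: log-level-volume
        mountPath: /etc/config    # LOG_LEVEL appears as /etc/config/LOG_LEVEL
  volumes:
  - name: log-level-volume
    configMap:
      name: log-level
EOF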
[root@master aiden (⎈ |kube:default)]# kubectl delete pod --all && kubectl delete configmaps log-level
pod "configmap-pod" deleted
configmap "log-level" deleted
Secret : like a ConfigMap, but intended for sensitive values; data is stored base64-encoded (encoded, not encrypted).
[root@master aiden (⎈ |kube:default)]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-kgbtl   kubernetes.io/service-account-token   3      11d
[root@master aiden (⎈ |kube:default)]# kubectl create secret generic my-password --from-literal password=1q2w3e4r
secret/my-password created
[root@master aiden (⎈ |kube:default)]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-kgbtl   kubernetes.io/service-account-token   3      11d
my-password           Opaque                                1      32s
[root@master aiden (⎈ |kube:default)]# kubectl describe secrets my-password
Name:         my-password
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
password:  8 bytes
[root@master aiden (⎈ |kube:default)]# kubectl get secrets my-password -o jsonpath='{.data.password}' ; echo
MXEydzNlNHI=
[root@master aiden (⎈ |kube:default)]# echo MXEydzNlNHI= |base64 -d ;echo
1q2w3e4r
⇒ base64 is an encoding, not encryption; anyone who can read the Secret object can recover the value
cat << EOF > secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
    - name: secret-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      envFrom:
      - secretRef:
          name: my-password
EOF
[root@master aiden (⎈ |kube:default)]# kubectl apply -f secret-pod.yaml
pod/secret-pod created
[root@master aiden (⎈ |kube:default)]# kubectl get pod -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
secret-pod   1/1     Running   0          37s   172.16.235.182   worker1   <none>           <none>
root@worker1:~# cat /proc/`ps -ef | grep tail | grep -v auto | awk '{print $2}'`/environ
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/binHOSTNAME=secret-podpassword=1q2w3e4rKUBERNETES_PORT_443_TCP_PORT=443KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1KUBERNETES_SERVICE_HOST=10.96.0.1KUBERNETES_SERVICE_PORT=443KUBERNETES_SERVICE_PORT_HTTPS=443KUBERNETES_PORT=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443KUBERNETES_PORT_443_TCP_PROTO=tcpHOME=/rootroot@
⇒ The Secret value is exposed in plain text on the worker node, readable from the process's environment
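Mounting the Secret as a volume avoids putting the value in the process environment; on the node the mounted files are backed by tmpfs. A minimal sketch (hypothetical secret-volume-pod.yaml):
cat << EOF > secret-volume-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-pod
spec:
  containers:
    - name: secret-volume-pod
      image: busybox
      args: ['tail', '-f', '/dev/null']
      volumeMounts:
      - name: password-volume
        mountPath: /etc/secret    # the password key appears as /etc/secret/password
        readOnly: true
  volumes:
  - name: password-volume
    secret:
      secretName: my-password
EOF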
[root@master aiden (⎈ |kube:default)]# kubectl delete pod --all && kubectl delete secret my-password
pod "secret-pod" deleted
secret "my-password" deleted