Let's Prepare for the CKA, Day 42 - Mock Exam 4

Problem 1

Create a new service account named pvviewer and a ClusterRole named pvviewer-role that grants list permission on PersistentVolumes. Also create a ClusterRoleBinding named pvviewer-role-binding that binds the two together. Then create a pod named pvviewer with the redis image, and have the pod use the ServiceAccount pvviewer.

  • Create the service account
kubectl create serviceaccount pvviewer
  • Create the ClusterRole
kubectl create clusterrole pvviewer-role --resource=pv --verb=list
  • Create the ClusterRoleBinding (bind it to the service account, not a user)
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
  • Generate the pod manifest
kubectl run pvviewer --image=redis -o yaml --dry-run=client > pvviewer.yaml
  • Add serviceAccountName to the pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pvviewer
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  serviceAccountName: pvviewer
status: {}
  • Create the pod
kubectl create -f ./pvviewer.yaml
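
To sanity-check the RBAC setup (an optional check, not part of the required answer), kubectl auth can-i can impersonate the service account; it should print yes once the binding is correct:

kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer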

Problem 2

List the IPs of all nodes in the cluster and save them to /root/CKA/node_ips.

Save them in the following format: <Node1-IP> <Node2-IP> ...

  • Get the node addresses
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
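
If the exact jsonpath is hard to recall, dumping the addresses array of a single node first makes the structure obvious; each entry has a type (InternalIP, Hostname, ...) and an address, which is what the ?(@.type=="InternalIP") filter above selects on:

kubectl get nodes -o jsonpath='{.items[0].status.addresses}'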

Problem 3

Create a pod named multi-pod with the following two containers:

  • container1: named alpha, using the nginx image.
  • container2: named beta, using the busybox image, running sleep 4800.

For environment variables, container1 gets name: alpha and container2 gets name: beta.

  • Generate the pod manifest
kubectl run multi-pod --image nginx -o yaml --dry-run=client > multi-pod.yaml
  • Edit the pod manifest
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    resources: {}
    env:
    - name: alpha
      value: "alpha"
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: beta
      value: "beta"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
  • Create the pod
kubectl create -f ./multi-pod.yaml
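
Once the pod shows 2/2 Running, the environment variables can be spot-checked per container (an optional check; printenv is available in both the nginx and busybox images):

kubectl exec multi-pod -c alpha -- printenv alpha
kubectl exec multi-pod -c beta -- printenv beta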

Problem 4

Create a pod named non-root-pod with the redis:alpine image and the following securityContext settings:

  • runAsUser: 1000

  • fsGroup: 2000

  • Generate the pod manifest

kubectl run non-root-pod --image redis:alpine -o yaml --dry-run=client > non-root-pod.yaml
  • Edit the pod manifest
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: non-root-pod
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
  • Create the pod
kubectl create -f ./non-root-pod.yaml
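
To confirm the securityContext took effect, run id inside the container (alpine's busybox provides id); it should report uid 1000, with the fsGroup 2000 appearing among the supplementary groups:

kubectl exec non-root-pod -- id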

Problem 5

A pod named np-test-1 and a service named np-test-service have been deployed, but incoming connections to the service do not work. Investigate and fix this: create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service on port 80.

kubectl get networkpolicies.networking.k8s.io default-deny -o yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"default-deny","namespace":"default"},"spec":{"podSelector":{},"policyTypes":["Ingress"]}}
  creationTimestamp: "2024-08-11T14:13:05Z"
  generation: 1
  name: default-deny
  namespace: default
  resourceVersion: "5320"
  uid: cb9370b5-4958-4ee4-986d-4bcae390ed19
spec:
  podSelector: {}
  policyTypes:
  - Ingress

We can see that all ingress traffic is being denied. So let's allow ingress on port 80 to the np-test-1 pod.

  • Check the pod labels
kubectl get pod np-test-1 --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
np-test-1   1/1     Running   0          10m   run=np-test-1
  • Create the NetworkPolicy manifest
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80

Since connections from every source must be allowed, we do not set any namespaceSelector or podSelector under the ingress rule; a rule that specifies only ports admits traffic from all sources on those ports.
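
After applying the policy (assuming the manifest was saved as ingress-to-nptest.yaml), connectivity can be verified from a throwaway pod; the name test-np is arbitrary, and busybox's nc is enough for a simple TCP check:

kubectl apply -f ingress-to-nptest.yaml
kubectl run test-np --image=busybox --rm -it --restart=Never -- nc -z -v -w 2 np-test-service 80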

Problem 6

Taint node01 so that pods cannot be scheduled on it. Create a pod named dev-redis with the redis:alpine image and confirm it is not scheduled on node01. Then create a pod named prod-redis with the redis:alpine image that has a toleration for node01's taint.

  • key: env_type, value: production

  • operator: Equal, effect: NoSchedule

  • Taint node01

kubectl taint nodes node01 env_type=production:NoSchedule
  • Create the dev-redis pod
kubectl run dev-redis --image redis:alpine
  • Generate the prod-redis pod manifest
kubectl run prod-redis --image redis:alpine -o yaml --dry-run=client > prod-redis.yaml
  • Add a toleration to the prod-redis manifest
cat ./prod-redis.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: prod-redis
  name: prod-redis
spec:
  containers:
  - image: redis:alpine
    name: prod-redis
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:
  - key: "env_type"
    value: "production"
    operator: "Equal"
    effect: "NoSchedule"
status: {}
  • Deploy
kubectl create -f ./prod-redis.yaml 
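
Checking placement with -o wide should show prod-redis scheduled on node01 thanks to its toleration, while dev-redis cannot land there (where it ends up, or whether it stays Pending, depends on which other nodes are schedulable):

kubectl get pods -o wide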

Problem 7

Create a pod named hr-pod in the hr namespace, giving it the labels environment: production and tier: frontend. The image is redis:alpine.

  • Create the namespace
kubectl create namespace hr
  • Create the pod
kubectl run hr-pod --image=redis:alpine --namespace=hr --labels="environment=production,tier=frontend"
pod/hr-pod created
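
A quick label check confirms the result:

kubectl get pod hr-pod -n hr --show-labels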

Problem 8

A kubeconfig file named super.kubeconfig has been created under /root/CKA. Something is wrong with it; let's troubleshoot.

  • Inspect super.kubeconfig
cat /root/CKA/super.kubeconfig

apiVersion: v1
clusters:
- cluster:
...
    server: https://controlplane:9999

Let's check whether kube-apiserver really listens on port 9999.

kubectl get po -n kube-system kube-apiserver-controlplane -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --secure-port=6443
...

We can see it is 6443. Let's fix the kubeconfig.

  • vi /root/CKA/super.kubeconfig
apiVersion: v1
clusters:
- cluster:
...
    server: https://controlplane:6443
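
The fix can be verified by pointing kubectl directly at the file with --kubeconfig; any simple read should now succeed:

kubectl get nodes --kubeconfig /root/CKA/super.kubeconfig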

Problem 9

A deployment named nginx-deploy was created with 3 replicas. Verify that 3 replicas were really created, and fix the problem.

  • Check the deployment
kubectl get deployments.apps nginx-deploy -o wide
NAME           READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR
nginx-deploy   1/1     1            1           8m36s   nginx        nginx    app=nginx-deploy

Only 1 replica exists. Let's scale up the replica count.

  • Scale to 3 replicas
kubectl scale deployment nginx-deploy --replicas 3
  • Verify
kubectl get deployments.apps nginx-deploy 
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/3     1            1           10m

It is stuck at 1/3 and not going up. Let's track down the problem:

  1. Check the status of all pods
  2. Check the status of the deployment
  • Check the status of all pods
kubectl get po -A
NAMESPACE     NAME                                   READY   STATUS             RESTARTS      AGE
...
kube-system   kube-contro1ler-manager-controlplane   0/1     ImagePullBackOff   0

We can see that kube-contro1ler-manager is in ImagePullBackOff. Since kube-controller-manager is not running, controllers such as the Deployment controller are not doing their work.

kubectl describe -n kube-system pod kube-contro1ler-manager-controlplane

Name:                 kube-contro1ler-manager-controlplane
Namespace:            kube-system
...
Normal   BackOff  112s (x87 over 21m)  kubelet  Back-off pulling image "registry.k8s.io/kube-contro1ler-manager:v1.30.0"

The image name is misspelled as kube-contro1ler-manager (with the digit 1 in place of an l). Let's correct it to kube-controller-manager.

For reference, core components like kube-controller-manager are deployed as static pods, so their manifests can be found under /etc/kubernetes/manifests/.
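
On a kubeadm cluster this directory typically holds the manifests for etcd, kube-apiserver, kube-controller-manager, and kube-scheduler; editing kube-controller-manager.yaml in place is all that is needed, since the kubelet watches the directory and recreates the pod without any kubectl apply:

ls /etc/kubernetes/manifests/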

  • Fix kube-controller-manager (the corrected manifest is shown below)
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: registry.k8s.io/kube-controller-manager:v1.30.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
  • Verify kube-controller-manager
kubectl get po -n kube-system kube-controller-manager-controlplane 
NAME                                   READY   STATUS    RESTARTS   AGE
kube-controller-manager-controlplane   1/1     Running   0          3m8s
  • Confirm nginx-deploy has 3 replicas
kubectl get deployments.apps nginx-deploy 
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           40m
