Let's create a new service account named pvviewer, then a cluster role named pvviewer-role that grants list permission on PVs, and a ClusterRoleBinding named pvviewer-role-binding. Next, create a pod named pvviewer with the redis image, and have the pod use pvviewer as its ServiceAccount.
kubectl create serviceaccount pvviewer
kubectl create clusterrole pvviewer-role --resource=pv --verb=list
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
kubectl run pvviewer --image redis -o=yaml --dry-run=client > pvviewer.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pvviewer
name: pvviewer
spec:
containers:
- image: redis
name: pvviewer
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
serviceAccountName: pvviewer
status: {}
kubectl create -f ./pvviewer.yaml
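To check that the binding actually grants the permission, we can impersonate the service account (a quick sanity check; the default namespace is assumed here):
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer
yes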
Let's list the IPs of all nodes in the cluster and save them to /root/CKA/node_ips, using the following format: <Node1-IP> <Node2-IP> ...
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
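The jsonpath expression filters each node's addresses for entries whose type is InternalIP. Verify the result (the IPs below are placeholders):
cat /root/CKA/node_ips
192.168.121.111 192.168.121.222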
Let's create a pod named multi-pod with the following two containers: alpha, with the nginx image, and beta, with the busybox image and the command sleep 4800. As environment variables, container1 gets name: alpha and container2 gets name: beta.
kubectl run multi-pod --image nginx -o yaml --dry-run=client > multi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: multi-pod
name: multi-pod
spec:
containers:
- image: nginx
name: alpha
resources: {}
env:
- name: alpha
value: "alpha"
- image: busybox
name: beta
command: ["sleep", "4800"]
env:
- name: beta
value: "beta"
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
kubectl create -f ./multi-pod.yaml
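To confirm each container got its own environment variable, print it from inside the containers (a quick check; env runs inside the container and grep filters the output locally):
kubectl exec multi-pod -c alpha -- env | grep alpha
alpha=alpha
kubectl exec multi-pod -c beta -- env | grep beta
beta=beta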
Let's create a pod named non-root-pod with the redis:alpine image and the following securityContext:
runAsUser: 1000
fsGroup: 2000
Generate the pod manifest:
kubectl run non-root-pod --image redis:alpine -o yaml --dry-run=client > non-root-pod.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: non-root-pod
name: non-root-pod
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
containers:
- image: redis:alpine
name: non-root-pod
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
kubectl create -f ./non-root-pod.yaml
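We can confirm the securityContext took effect by checking the UID and groups inside the container; the output should look similar to this:
kubectl exec non-root-pod -- id
uid=1000 gid=0(root) groups=2000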
A pod named np-test-1 and a service named np-test-service have been deployed. Incoming connections to the service are not working, so let's find out why and fix it. Create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service on port 80.
Do not modify any of the existing objects.
https://kubernetes.io/docs/concepts/services-networking/network-policies/#default-deny-all-ingress-traffic
Check the current NetworkPolicy:
kubectl get networkpolicies.networking.k8s.io default-deny -o yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1","kind":"NetworkPolicy","metadata":{"annotations":{},"name":"default-deny","namespace":"default"},"spec":{"podSelector":{},"policyTypes":["Ingress"]}}
creationTimestamp: "2024-08-11T14:13:05Z"
generation: 1
name: default-deny
namespace: default
resourceVersion: "5320"
uid: cb9370b5-4958-4ee4-986d-4bcae390ed19
spec:
podSelector: {}
policyTypes:
- Ingress
We can see that all ingress traffic is being denied. So let's allow ingress on port 80 to the np-test-1 pod.
kubectl get pod np-test-1 --show-labels
NAME READY STATUS RESTARTS AGE LABELS
np-test-1 1/1 Running 0 10m run=np-test-1
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: ingress-to-nptest
spec:
podSelector:
matchLabels:
run: np-test-1
policyTypes:
- Ingress
ingress:
- from:
ports:
- protocol: TCP
port: 80
Since every ingress connection must be allowed, we do not set a namespaceSelector on the ingress side.
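To test connectivity through the policy, we can run a temporary client pod (np-test-client is a hypothetical name; busybox:1.28 is used here because its nc supports a zero-I/O port check) and probe the service on port 80:
kubectl run np-test-client --rm -it --image busybox:1.28 -- sh
/ # nc -z -v -w 2 np-test-service 80
np-test-service (10.96.44.148:80) open
The service IP shown is a placeholder; what matters is the open result.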
Let's taint node01 so that pods cannot be scheduled on it, create a pod named dev-redis with the redis:alpine image, and make sure this pod cannot be scheduled on node01. Next, create a pod named prod-redis with the redis:alpine image and give it a toleration for node01:
key: env_type, value: production, operator: Equal, effect: NoSchedule
Taint node01:
kubectl taint nodes node01 env_type=production:NoSchedule
kubectl run dev-redis --image redis:alpine
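dev-redis has no toleration for the taint, so it must not land on node01. Verify where it was scheduled (it should run on another node, or stay Pending if no untainted node is available):
kubectl get pod dev-redis -o wide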
cat ./prod-redis.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: prod-redis
name: prod-redis
spec:
containers:
- image: redis:alpine
name: prod-redis
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
tolerations:
- key: "env_type"
value: "production"
operator: "Equal"
effect: "NoSchedule"
status: {}
kubectl create -f ./prod-redis.yaml
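Note that the toleration permits, but does not force, scheduling onto node01; check where the pod actually landed:
kubectl get pod prod-redis -o wide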
Let's create a pod named hr-pod in the hr namespace, with the labels environment: production and tier: frontend. The image is redis:alpine.
kubectl create namespace hr
kubectl run hr-pod --image=redis:alpine --namespace=hr --labels="environment=production,tier=frontend"
pod/hr-pod created
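Verify the labels (output along these lines; since --labels was given explicitly, the default run label is not added):
kubectl get pod hr-pod -n hr --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
hr-pod   1/1     Running   0          10s   environment=production,tier=frontend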
A kubeconfig file named super.kubeconfig has been created in /root/CKA. There is a problem with it, so let's troubleshoot.
cat /root/CKA/super.kubeconfig
apiVersion: v1
clusters:
- cluster:
...
server: https://controlplane:9999
Let's check whether the kube-apiserver is really listening on port 9999:
kubectl get po -n kube-system kube-apiserver-controlplane -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
containers:
- command:
- kube-apiserver
...
- --secure-port=6443
...
We can see the secure port is 6443. Let's fix the kubeconfig:
apiVersion: v1
clusters:
- cluster:
...
server: https://controlplane:6443
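Confirm the fixed kubeconfig can now reach the cluster:
kubectl get nodes --kubeconfig /root/CKA/super.kubeconfig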
A deployment named nginx-deploy was created with 3 replicas. Let's check whether 3 were actually created, and fix any problem.
kubectl get deployments.apps nginx-deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx-deploy 1/1 1 1 8m36s nginx nginx app=nginx-deploy
Only 1 replica exists. Let's increase the replica count:
kubectl scale deployment nginx-deploy --replicas 3
kubectl get deployments.apps nginx-deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 1/3 1 1 10m
It is stuck at 1/3. Let's find out what the problem is:
kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
...
kube-system kube-contro1ler-manager-controlplane 0/1 ImagePullBackOff 0
kube-contro1ler-manager is in the ImagePullBackOff state. Since kube-contro1ler-manager is not working properly, controllers such as the one backing Deployments are not doing their job.
kubectl describe -n kube-system pod kube-contro1ler-manager-controlplane
Name: kube-contro1ler-manager-controlplane
Namespace: kube-system
...
Normal BackOff 112s (x87 over 21m) kubelet Back-off pulling image "registry.k8s.io/kube-contro1ler-manager:v1.30.0"
The image name is misspelled as kube-contro1ler-manager; let's fix it to kube-controller-manager.
For reference, core components like kube-controller-manager are deployed as static pods, so they can be found under /etc/kubernetes/manifests/.
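Listing the manifest directory shows the control-plane components managed as static pods (typical kubeadm layout):
ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml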
Fix kube-controller-manager:
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
image: registry.k8s.io/kube-controller-manager:v1.30.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
startupProbe:
failureThreshold: 24
httpGet:
host: 127.0.0.1
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priority: 2000001000
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
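Instead of editing by hand, the typo can also be fixed with a one-line sed (a sketch; it assumes the string contro1ler only appears in the broken name). The kubelet watches this directory and recreates the static pod as soon as the file changes:
sed -i 's/contro1ler/controller/g' /etc/kubernetes/manifests/kube-controller-manager.yaml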
kubectl get po -n kube-system kube-controller-manager-controlplane
NAME READY STATUS RESTARTS AGE
kube-controller-manager-controlplane 1/1 Running 0 3m8s
kubectl get deployments.apps nginx-deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deploy 3/3 3 3 40m