🚨 EC2 Cluster Build Troubleshooting

๊น€์„ฑ์ธยท2023๋…„ 10์›” 26์ผ

[DevOps] 🐳 Docker & Kubernetes

๋ชฉ๋ก ๋ณด๊ธฐ
58/62

https://www.youtube.com/watch?v=aYdUmISXzKI

Cluster Setup

apt-key deprecated

Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).

W: https://apt.kubernetes.io/dists/kubernetes-xenial/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
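
The warning itself is harmless, but the clean fix is to stop using apt-key and register the repository key as a dedicated keyring file referenced via signed-by. A minimal sketch, assuming the newer community-hosted pkgs.k8s.io repository for the v1.28 line (the legacy apt.kubernetes.io repo in the warning above has been deprecated):

# hedged sketch: keyring file + signed-by instead of apt-key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update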

First kubeadm Error

kubeadm init error

~$ sudo kubeadm init
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1026 13:39:34.222733    9717 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-9-118.ap-northeast-2.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.9.118]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-9-118.ap-northeast-2.compute.internal localhost] and IPs [172.31.9.118 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-9-118.ap-northeast-2.compute.internal localhost] and IPs [172.31.9.118 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.008838 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ip-172-31-9-118.ap-northeast-2.compute.internal as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ip-172-31-9-118.ap-northeast-2.compute.internal as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: blex1o.96r68lcnqm1nilms
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
error execution phase addon/coredns: unable to create RBAC clusterrolebinding: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
To see the stack trace of this error execute with --v=5 or higher

~$ kubectl get nodes
E1026 13:41:19.145608   11226 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1026 13:41:19.146037   11226 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1026 13:41:19.147528   11226 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1026 13:41:19.148948   11226 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1026 13:41:19.150346   11226 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
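
As an aside, the W1026 warning about pause:3.6 vs pause:3.9 in the log above comes from containerd defaulting to an older sandbox image than the one kubeadm expects. It is not what broke this run, but it can be silenced by pointing containerd at the same image. A sketch, assuming containerd is the CRI runtime and its config lives at /etc/containerd/config.toml:

# /etc/containerd/config.toml (CRI plugin section)
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"

# then restart the runtime
sudo systemctl restart containerd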

Fix

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

$ kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR IsPrivilegedUser]: user is not running as root
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
ubuntu@ip-172-31-9-118:/var/run/containerd$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1026 14:45:27.856706   17558 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-9-118.ap-northeast-2.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.9.118]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-9-118.ap-northeast-2.compute.internal localhost] and IPs [172.31.9.118 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-9-118.ap-northeast-2.compute.internal localhost] and IPs [172.31.9.118 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.002909 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ip-172-31-9-118.ap-northeast-2.compute.internal as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ip-172-31-9-118.ap-northeast-2.compute.internal as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 1bv3c7.5gqxgnlq79bc55hv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.9.118:6443 --token 1bv3c7.5gqxgnlq79bc55hv \
        --discovery-token-ca-cert-hash sha256:0b129eb9702932968acdf93d5406bfbf0b92617ab89c5a149bf2cc94a94ff005

I thought that command had fixed it, but kubectl commands still wouldn't work.
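
In hindsight, the missing piece was the kubeconfig step that kubeadm prints at the end of init: until admin.conf is copied into place (or KUBECONFIG is exported), kubectl falls back to localhost:8080, which is exactly the connection-refused error seen earlier. The commands are the ones from the init output above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes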


Second kubeadm Install

sudo kubeadm init --pod-network-cidr=192.168.0.0/16  --control-plane-endpoint=k8s-master.insung.local --apiserver-cert-extra-sans=k8s-master.insung.local
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1028 15:35:38.812340   13858 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master.insung.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.26.159]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master.insung.local localhost] and IPs [172.31.26.159 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master.insung.local localhost] and IPs [172.31.26.159 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
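
Looking back, one thing worth checking when passing --control-plane-endpoint is that the name actually resolves on the node itself; k8s-master.insung.local was not in any DNS I controlled, so a hosts entry like the following (using the private IP from the log above) would probably have been needed. A hedged sketch, not something I verified at the time:

echo '172.31.26.159 k8s-master.insung.local' | sudo tee -a /etc/hosts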

์™œ ์•ˆ๋˜์ง€ ํ•˜๊ณ  ec2์ธ์Šคํ„ด์Šค ์ง€์šฐ๊ณ  ๋‹ค์‹œ ์„ค์น˜ํ•ด๋ด„.

๊ทธ๋ž˜๋„ kube-etlstart ์—์„œ ๊ณ„์† ์˜ค๋ฅ˜ ๋‚ฌ์—ˆ์Œ.
port ๊ฐœ๋ฐฉ์„ ์•ˆํ•ด์„œ ๋ฌธ์ œ์˜€๋˜๊ฒƒ๊ฐ™์Œ.


Third kubeadm Install

I tried again after opening the ports.
Since the CIDR of the instance's private IPv4 addresses is 172.31.0.0/16, I opened the inbound security-group ports for that range, and passed the same range as --pod-network-cidr below. (In hindsight, reusing the VPC CIDR as the pod network CIDR makes the pod network overlap with the node network, which is itself a likely source of trouble.)

root@ip-172-31-28-195:~# sudo kubeadm init --pod-network-cidr=172.31.0.0/16
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1028 16:30:06.942781    4055 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-172-31-28-195 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.28.195]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ip-172-31-28-195 localhost] and IPs [172.31.28.195 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ip-172-31-28-195 localhost] and IPs [172.31.28.195 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.502632 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ip-172-31-28-195 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ip-172-31-28-195 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: j41tlh.8xijl5hsththzauh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://172.31.28.195:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": dial tcp 172.31.28.195:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher
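
The connection refused on 6443 here means kube-apiserver had already died by the time kubeadm reached the CoreDNS addon phase. A couple of checks that make that visible (assuming crictl is set up to talk to containerd):

sudo ss -tlnp | grep 6443
sudo crictl ps -a | grep kube-apiserver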

https://velog.io/@koo8624/Kubernetes-AWS-EC2-%EC%9D%B8%EC%8A%A4%ED%84%B4%EC%8A%A4%EC%97%90-Kubernetes-%ED%81%B4%EB%9F%AC%EC%8A%A4%ED%84%B0-%EA%B5%AC%EC%B6%95%ED%95%98%EA%B8%B0#%EB%84%A4%ED%8A%B8%EC%9B%8C%ED%81%AC

https://velog.io/@99_insung/GKE-VM#%EC%BB%A8%ED%85%8C%EC%9D%B4%EB%84%88-%EB%9F%B0%ED%83%80%EC%9E%84-%EA%B5%AC%EC%84%B1
Following the post above, I downloaded Cilium as the CNI and ran the install command, but kube-apiserver ended up getting killed.


Fourth kubeadm Install

์ธ๋ฐ”์šด๋“œ ํฌํŠธ ๊ฐœ๋ฐฉ ํ›„ sudo kubeadm init ๊ทธ๋ƒฅ ์ด๋ ‡๊ฒŒ ์‹คํ–‰ํ•ด๋ด„ -> ์•ˆ๋จ

Restarted the kubelet with sudo systemctl restart kubelet.
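
Between retries, the half-initialized state from the previous attempt also has to be cleared, otherwise the next init trips over leftover manifests and CNI config. Roughly what I run before retrying (default paths assumed):

sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d $HOME/.kube/config
sudo systemctl restart containerd kubelet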

ํฌํŠธ ๊ฐœ๋ฐฉ์˜ CIDR ์„ 192.168.0.0/16์œผ๋กœ ์ง€์ •ํ•˜๊ณ ,
sudo kubeadm init --pod-network-cidr=192.168.0.0/16๋ช…๋ น์–ด๋กœ ์ง„ํ–‰ํ•ด๋ด„

์ž˜๋จ.
๋กœ๊ทธ๋ฅผ ๋ชป ์ฐ์Œ...
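
For the record, the Calico CNI in the next section was installed with the stock manifest, roughly like this; the exact version is an assumption since I didn't note it down:

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
kubectl -n kube-system get pods -w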


kubectl killed after installing the Calico CNI

ubuntu@ip-172-31-28-195:/var/log/calico/cni$ cat cni.log
2023-10-28 17:33:17.629 [INFO][24933] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0 calico-kube-controllers-7ddc4f45bc- kube-system  925ddf84-5b6c-48bf-9b8e-6cd08e39a683 1252 0 2023-10-28 17:24:51 +0000 UTC <nil> <nil> map[k8s-app:calico-kube-controllers pod-template-hash:7ddc4f45bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  ip-172-31-28-195  calico-kube-controllers-7ddc4f45bc-5g59d eth0 calico-kube-controllers [] []   [kns.kube-system ksa.kube-system.calico-kube-controllers] caliba25c8bff33  [] []}} ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-"
2023-10-28 17:33:17.629 [INFO][24933] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:17.637 [INFO][24950] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0 coredns-5dd5756b68- kube-system  2cb07ab2-ec51-4a3f-a207-7568a32d44bc 1253 0 2023-10-28 17:19:58 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ip-172-31-28-195  coredns-5dd5756b68-gjxbn eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali019fff392e6  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-"
2023-10-28 17:33:17.637 [INFO][24950] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.650 [INFO][24944] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0 coredns-5dd5756b68- kube-system  e2d2eb0b-8f2e-438b-916a-0cf973a7c57f 1254 0 2023-10-28 17:19:58 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  ip-172-31-28-195  coredns-5dd5756b68-68k5g eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calia130c5c7720  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-"
2023-10-28 17:33:17.650 [INFO][24944] k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:17.706 [INFO][24967] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" HandleID="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Workload="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:17.713 [INFO][24971] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" HandleID="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Workload="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.735 [INFO][24976] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" HandleID="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Workload="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:17.752 [INFO][24971] ipam_plugin.go 268: Auto assigning IP ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" HandleID="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Workload="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c0480), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-195", "pod":"coredns-5dd5756b68-gjxbn", "timestamp":"2023-10-28 17:33:17.712987926 +0000 UTC"}, Hostname:"ip-172-31-28-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
2023-10-28 17:33:17.752 [INFO][24971] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
2023-10-28 17:33:17.752 [INFO][24971] ipam_plugin.go 371: Acquired host-wide IPAM lock.
2023-10-28 17:33:17.752 [INFO][24971] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-195'
2023-10-28 17:33:17.754 [INFO][24967] ipam_plugin.go 268: Auto assigning IP ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" HandleID="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Workload="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000460180), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-195", "pod":"calico-kube-controllers-7ddc4f45bc-5g59d", "timestamp":"2023-10-28 17:33:17.70649293 +0000 UTC"}, Hostname:"ip-172-31-28-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
2023-10-28 17:33:17.754 [INFO][24967] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
2023-10-28 17:33:17.761 [INFO][24976] ipam_plugin.go 268: Auto assigning IP ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" HandleID="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Workload="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b9e30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-195", "pod":"coredns-5dd5756b68-68k5g", "timestamp":"2023-10-28 17:33:17.735599982 +0000 UTC"}, Hostname:"ip-172-31-28-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
2023-10-28 17:33:17.761 [INFO][24976] ipam_plugin.go 356: About to acquire host-wide IPAM lock.
2023-10-28 17:33:17.773 [INFO][24971] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" host="ip-172-31-28-195"
2023-10-28 17:33:17.780 [INFO][24971] ipam.go 372: Looking up existing affinities for host host="ip-172-31-28-195"
2023-10-28 17:33:17.801 [INFO][24971] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:17.804 [INFO][24971] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:17.807 [INFO][24971] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:17.807 [INFO][24971] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" host="ip-172-31-28-195"
2023-10-28 17:33:17.809 [INFO][24971] ipam.go 1682: Creating new handle: k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc
2023-10-28 17:33:17.816 [INFO][24971] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" host="ip-172-31-28-195"
2023-10-28 17:33:17.825 [INFO][24971] ipam.go 1216: Successfully claimed IPs: [192.168.72.1/26] block=192.168.72.0/26 handle="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" host="ip-172-31-28-195"
2023-10-28 17:33:17.825 [INFO][24971] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.1/26] handle="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" host="ip-172-31-28-195"
2023-10-28 17:33:17.825 [INFO][24971] ipam_plugin.go 377: Released host-wide IPAM lock.
2023-10-28 17:33:17.825 [INFO][24967] ipam_plugin.go 371: Acquired host-wide IPAM lock.
2023-10-28 17:33:17.825 [INFO][24967] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-195'
2023-10-28 17:33:17.826 [INFO][24971] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.72.1/26] IPv6=[] ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" HandleID="k8s-pod-network.8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Workload="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.828 [INFO][24950] k8s.go 383: Populated endpoint ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2cb07ab2-ec51-4a3f-a207-7568a32d44bc", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2023, time.October, 28, 17, 19, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-195", ContainerID:"", Pod:"coredns-5dd5756b68-gjxbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali019fff392e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
2023-10-28 17:33:17.829 [INFO][24950] k8s.go 384: Calico CNI using IPs: [192.168.72.1/32] ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.829 [INFO][24950] dataplane_linux.go 68: Setting the host side veth name to cali019fff392e6 ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.830 [INFO][24950] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.832 [INFO][24967] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" host="ip-172-31-28-195"
2023-10-28 17:33:17.839 [INFO][24967] ipam.go 372: Looking up existing affinities for host host="ip-172-31-28-195"
2023-10-28 17:33:17.846 [INFO][24967] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:17.857 [INFO][24950] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"2cb07ab2-ec51-4a3f-a207-7568a32d44bc", ResourceVersion:"1253", Generation:0, CreationTimestamp:time.Date(2023, time.October, 28, 17, 19, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-195", ContainerID:"8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc", Pod:"coredns-5dd5756b68-gjxbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali019fff392e6", MAC:"1e:2d:d0:58:6c:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
2023-10-28 17:33:17.865 [INFO][24967] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:17.884 [INFO][24967] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:17.884 [INFO][24967] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" host="ip-172-31-28-195"
2023-10-28 17:33:17.902 [INFO][24967] ipam.go 1682: Creating new handle: k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b
2023-10-28 17:33:17.908 [INFO][24967] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" host="ip-172-31-28-195"
2023-10-28 17:33:17.913 [INFO][24950] k8s.go 489: Wrote updated endpoint to datastore ContainerID="8f564cd36820637cbd9cfa0e181bb83c2419d0ff370826740ac7af03679881cc" Namespace="kube-system" Pod="coredns-5dd5756b68-gjxbn" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--gjxbn-eth0"
2023-10-28 17:33:17.926 [INFO][24967] ipam.go 1216: Successfully claimed IPs: [192.168.72.2/26] block=192.168.72.0/26 handle="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" host="ip-172-31-28-195"
2023-10-28 17:33:17.926 [INFO][24967] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.2/26] handle="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" host="ip-172-31-28-195"
2023-10-28 17:33:17.926 [INFO][24967] ipam_plugin.go 377: Released host-wide IPAM lock.
2023-10-28 17:33:17.926 [INFO][24967] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.72.2/26] IPv6=[] ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" HandleID="k8s-pod-network.aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Workload="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:17.926 [INFO][24976] ipam_plugin.go 371: Acquired host-wide IPAM lock.
2023-10-28 17:33:17.926 [INFO][24976] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-195'
2023-10-28 17:33:17.929 [INFO][24933] k8s.go 383: Populated endpoint ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0", GenerateName:"calico-kube-controllers-7ddc4f45bc-", Namespace:"kube-system", SelfLink:"", UID:"925ddf84-5b6c-48bf-9b8e-6cd08e39a683", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2023, time.October, 28, 17, 24, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddc4f45bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-195", ContainerID:"", Pod:"calico-kube-controllers-7ddc4f45bc-5g59d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.calico-kube-controllers"}, InterfaceName:"caliba25c8bff33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
2023-10-28 17:33:17.929 [INFO][24933] k8s.go 384: Calico CNI using IPs: [192.168.72.2/32] ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:17.929 [INFO][24933] dataplane_linux.go 68: Setting the host side veth name to caliba25c8bff33 ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:17.930 [INFO][24933] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:17.936 [INFO][24933] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0", GenerateName:"calico-kube-controllers-7ddc4f45bc-", Namespace:"kube-system", SelfLink:"", UID:"925ddf84-5b6c-48bf-9b8e-6cd08e39a683", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2023, time.October, 28, 17, 24, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"7ddc4f45bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-195", ContainerID:"aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b", Pod:"calico-kube-controllers-7ddc4f45bc-5g59d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.calico-kube-controllers"}, InterfaceName:"caliba25c8bff33", MAC:"d6:76:81:d9:73:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
2023-10-28 17:33:17.977 [INFO][24976] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" host="ip-172-31-28-195"
2023-10-28 17:33:18.029 [INFO][24976] ipam.go 372: Looking up existing affinities for host host="ip-172-31-28-195"
2023-10-28 17:33:18.029 [INFO][24933] k8s.go 489: Wrote updated endpoint to datastore ContainerID="aadcbd010ca3bfde8228e2c78959753b4d85cb94366dca1141f07e8b612b7d6b" Namespace="kube-system" Pod="calico-kube-controllers-7ddc4f45bc-5g59d" WorkloadEndpoint="ip--172--31--28--195-k8s-calico--kube--controllers--7ddc4f45bc--5g59d-eth0"
2023-10-28 17:33:18.070 [INFO][24976] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:18.103 [INFO][24976] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:18.117 [INFO][24976] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-28-195"
2023-10-28 17:33:18.117 [INFO][24976] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" host="ip-172-31-28-195"
2023-10-28 17:33:18.123 [INFO][24976] ipam.go 1682: Creating new handle: k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f
2023-10-28 17:33:18.140 [INFO][24976] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" host="ip-172-31-28-195"
2023-10-28 17:33:18.170 [INFO][24976] ipam.go 1216: Successfully claimed IPs: [192.168.72.3/26] block=192.168.72.0/26 handle="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" host="ip-172-31-28-195"
2023-10-28 17:33:18.170 [INFO][24976] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.3/26] handle="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" host="ip-172-31-28-195"
2023-10-28 17:33:18.170 [INFO][24976] ipam_plugin.go 377: Released host-wide IPAM lock.
2023-10-28 17:33:18.170 [INFO][24976] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.72.3/26] IPv6=[] ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" HandleID="k8s-pod-network.fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Workload="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:18.174 [INFO][24944] k8s.go 383: Populated endpoint ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e2d2eb0b-8f2e-438b-916a-0cf973a7c57f", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2023, time.October, 28, 17, 19, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-195", ContainerID:"", Pod:"coredns-5dd5756b68-68k5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia130c5c7720", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
2023-10-28 17:33:18.175 [INFO][24944] k8s.go 384: Calico CNI using IPs: [192.168.72.3/32] ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:18.175 [INFO][24944] dataplane_linux.go 68: Setting the host side veth name to calia130c5c7720 ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:18.176 [INFO][24944] dataplane_linux.go 473: Disabling IPv4 forwarding ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:18.199 [INFO][24944] k8s.go 411: Added Mac, interface name, and active container ID to endpoint ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e2d2eb0b-8f2e-438b-916a-0cf973a7c57f", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2023, time.October, 28, 17, 19, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-195", ContainerID:"fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f", Pod:"coredns-5dd5756b68-68k5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia130c5c7720", MAC:"7e:91:61:25:ca:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
2023-10-28 17:33:18.255 [INFO][24944] k8s.go 489: Wrote updated endpoint to datastore ContainerID="fd9c1641865498cc74e7a99e87c731dbf3933200e447cbb43228b51639663e9f" Namespace="kube-system" Pod="coredns-5dd5756b68-68k5g" WorkloadEndpoint="ip--172--31--28--195-k8s-coredns--5dd5756b68--68k5g-eth0"
2023-10-28 17:33:25.265 [ERROR][25524] plugin.go 580: Final result of CNI DEL was an error. error=error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
2023-10-28 17:33:25.284 [ERROR][25513] plugin.go 580: Final result of CNI DEL was an error. error=error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
(... the same "connection refused" error repeats every few seconds until 17:35:08 ...)
2023-10-28 17:35:25.958 [ERROR][27296] plugin.go 580: Final result of CNI DEL was an error. error=error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
2023-10-28 17:35:28.992 [ERROR][27315] plugin.go 580: Final result of CNI DEL was an error. error=error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
2023-10-28 17:35:31.964 [ERROR][27443] plugin.go 580: Final result of CNI DEL was an error. error=error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": net/http: TLS handshake timeout
(... then "connection refused" again, every few seconds, until 17:37:55 ...)
2023-10-28 17:37:55.992 [ERROR][28468] plugin.go 580: Final result of CNI DEL was an error. error=error getting ClusterInformation: Get "https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": dial tcp 10.96.0.1:443: connect: connection refused
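
All of these errors boil down to one thing: the Calico CNI plugin cannot reach the kube-apiserver through the kubernetes Service ClusterIP (10.96.0.1:443). A few checks that narrow this down, as a sketch (assumed to be run on the control-plane node, not taken from the original session):

sudo ss -tlnp | grep 6443                    # is anything listening on the API server port at all?
kubectl get svc kubernetes                   # the default Service whose ClusterIP is 10.96.0.1
kubectl get endpoints kubernetes             # should list the control-plane node's :6443 endpoint
curl -k https://172.31.28.195:6443/healthz   # direct to the node IP: expect an HTTP response, not "connection refused"
curl -k https://10.96.0.1:443/healthz        # via the Service IP: same expectation if kube-proxy/iptables is healthy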

kube-proxy logs

/var/log/containers$ sudo cat kube-proxy-2l5w5_kube-system_kube-proxy-ef5b7ed99e8b36d551d122ba77cbc33bc99a061d588ae0af4dab23350ff46b8d.log
2023-10-28T17:38:13.251141137Z stderr F I1028 17:38:13.250995       1 server_others.go:69] "Using iptables proxy"
2023-10-28T17:38:13.25851484Z stderr F E1028 17:38:13.258391       1 node.go:130] Failed to retrieve node info: Get "https://172.31.28.195:6443/api/v1/nodes/ip-172-31-28-195": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:14.330071285Z stderr F E1028 17:38:14.329960       1 node.go:130] Failed to retrieve node info: Get "https://172.31.28.195:6443/api/v1/nodes/ip-172-31-28-195": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:16.528345121Z stderr F E1028 17:38:16.528231       1 node.go:130] Failed to retrieve node info: Get "https://172.31.28.195:6443/api/v1/nodes/ip-172-31-28-195": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:20.970683641Z stderr F E1028 17:38:20.970577       1 node.go:130] Failed to retrieve node info: Get "https://172.31.28.195:6443/api/v1/nodes/ip-172-31-28-195": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:29.361270894Z stderr F E1028 17:38:29.361115       1 node.go:130] Failed to retrieve node info: Get "https://172.31.28.195:6443/api/v1/nodes/ip-172-31-28-195": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.310268628Z stderr F E1028 17:38:48.310149       1 node.go:130] Failed to retrieve node info: Get "https://172.31.28.195:6443/api/v1/nodes/ip-172-31-28-195": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.310309729Z stderr F I1028 17:38:48.310184       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
2023-10-28T17:38:48.311774388Z stderr F I1028 17:38:48.311637       1 conntrack.go:52] "Setting nf_conntrack_max" nfConntrackMax=131072
2023-10-28T17:38:48.347948272Z stderr F I1028 17:38:48.347832       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
2023-10-28T17:38:48.350771252Z stderr F I1028 17:38:48.350637       1 server_others.go:152] "Using iptables Proxier"
2023-10-28T17:38:48.35082166Z stderr F I1028 17:38:48.350674       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
2023-10-28T17:38:48.350827008Z stderr F I1028 17:38:48.350683       1 server_others.go:438] "Defaulting to no-op detect-local"
2023-10-28T17:38:48.351702201Z stderr F I1028 17:38:48.351614       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
2023-10-28T17:38:48.352476165Z stderr F I1028 17:38:48.352388       1 server.go:846] "Version info" version="v1.28.3"
2023-10-28T17:38:48.352489637Z stderr F I1028 17:38:48.352421       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2023-10-28T17:38:48.357436755Z stderr F I1028 17:38:48.357329       1 config.go:188] "Starting service config controller"
2023-10-28T17:38:48.357567808Z stderr F I1028 17:38:48.357515       1 shared_informer.go:311] Waiting for caches to sync for service config
2023-10-28T17:38:48.357793614Z stderr F I1028 17:38:48.357733       1 config.go:97] "Starting endpoint slice config controller"
2023-10-28T17:38:48.357882082Z stderr F I1028 17:38:48.357826       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
2023-10-28T17:38:48.358701563Z stderr F I1028 17:38:48.358620       1 config.go:315] "Starting node config controller"
2023-10-28T17:38:48.358807998Z stderr F I1028 17:38:48.358761       1 shared_informer.go:311] Waiting for caches to sync for node config
2023-10-28T17:38:48.359789768Z stderr F E1028 17:38:48.359708       1 event_broadcaster.go:274] Unable to write event: 'Post "https://172.31.28.195:6443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 172.31.28.195:6443: connect: connection refused' (may retry after sleeping)
2023-10-28T17:38:48.360034548Z stderr F W1028 17:38:48.359953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.360195875Z stderr F E1028 17:38:48.360129       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.360419593Z stderr F W1028 17:38:48.360333       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.360563125Z stderr F E1028 17:38:48.360515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.360847737Z stderr F W1028 17:38:48.360728       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:48.360994856Z stderr F E1028 17:38:48.360928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:49.321657481Z stderr F W1028 17:38:49.321503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:49.321682409Z stderr F E1028 17:38:49.321556       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:49.359150522Z stderr F W1028 17:38:49.359019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:49.359175214Z stderr F E1028 17:38:49.359062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:49.494577879Z stderr F W1028 17:38:49.494334       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:49.494659225Z stderr F E1028 17:38:49.494491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:51.535161805Z stderr F W1028 17:38:51.535036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:51.535192976Z stderr F E1028 17:38:51.535084       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:51.619371889Z stderr F W1028 17:38:51.619239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:51.619397228Z stderr F E1028 17:38:51.619285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:52.489504432Z stderr F W1028 17:38:52.489373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:52.489527952Z stderr F E1028 17:38:52.489420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:56.41950465Z stderr F W1028 17:38:56.419340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:56.419533944Z stderr F E1028 17:38:56.419383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:57.610280827Z stderr F W1028 17:38:57.610076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:57.610325216Z stderr F E1028 17:38:57.610119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:57.934152566Z stderr F W1028 17:38:57.934031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:57.934178118Z stderr F E1028 17:38:57.934071       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:38:59.61930354Z stderr F E1028 17:38:59.619143       1 event_broadcaster.go:274] Unable to write event: 'Post "https://172.31.28.195:6443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 172.31.28.195:6443: connect: connection refused' (may retry after sleeping)
2023-10-28T17:39:03.318078198Z stderr F W1028 17:39:03.317943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:03.318103657Z stderr F E1028 17:39:03.317997       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:05.64498081Z stderr F W1028 17:39:05.644840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:05.645233809Z stderr F E1028 17:39:05.644896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:07.212620304Z stderr F W1028 17:39:07.212508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:07.212661916Z stderr F E1028 17:39:07.212550       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:09.764626113Z stderr F E1028 17:39:09.764507       1 event_broadcaster.go:274] Unable to write event: 'Post "https://172.31.28.195:6443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 172.31.28.195:6443: connect: connection refused' (may retry after sleeping)
2023-10-28T17:39:20.087821498Z stderr F E1028 17:39:20.087632       1 event_broadcaster.go:274] Unable to write event: 'Post "https://172.31.28.195:6443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 172.31.28.195:6443: connect: connection refused' (may retry after sleeping)
2023-10-28T17:39:20.771337946Z stderr F W1028 17:39:20.771196       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:20.77137853Z stderr F E1028 17:39:20.771292       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://172.31.28.195:6443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:28.841136758Z stderr F W1028 17:39:28.841015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:28.841170246Z stderr F E1028 17:39:28.841066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.195:6443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:29.29950882Z stderr F W1028 17:39:29.299375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
2023-10-28T17:39:29.299536973Z stderr F E1028 17:39:29.299422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-195&limit=500&resourceVersion=0": dial tcp 172.31.28.195:6443: connect: connection refused
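
kube-proxy tells the same story as Calico: it cannot reach the API server either, this time directly on the node's private IP (172.31.28.195:6443). Since the API server runs as a static Pod managed by the kubelet, a quick way to see whether it is up or crash-looping is the following sketch (assuming containerd and crictl are in use):

sudo ss -tlnp | grep 6443                           # anything actually listening on the API server port?
sudo crictl ps -a | grep -E 'kube-apiserver|etcd'   # repeatedly Exited containers mean a restart loop
sudo journalctl -u kubelet --no-pager | tail -n 50  # the kubelet log usually says why it restarted a static Pod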

Opening every port..

I had no idea what else to try, so I opened every port kube-apiserver uses (edited the security group's inbound rules); the minimal set of required ports is sketched just below for reference.
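
Opening everything does work around the problem, but the kubeadm "Ports and Protocols" page lists a much smaller set for the control plane. A sketch with the AWS CLI (sg-xxxxxxxx and 172.31.0.0/16 are placeholders for the real security group ID and VPC CIDR):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 6443      --cidr 172.31.0.0/16   # kube-apiserver
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 2379-2380 --cidr 172.31.0.0/16   # etcd client/peer
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 10250     --cidr 172.31.0.0/16   # kubelet API
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 10257     --cidr 172.31.0.0/16   # kube-controller-manager
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 10259     --cidr 172.31.0.0/16   # kube-scheduler
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 179       --cidr 172.31.0.0/16   # Calico BGP (only if Calico runs in BGP mode)

Note that the errors above were all "connection refused" from the node itself, which points at the API server process rather than the security group, so this is more of a hardening note than the actual fix.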


Checking the kube-apiserver logs

2023-10-28T17:53:10.675654623Z stderr F I1028 17:53:10.675481       1 options.go:220] external host was not specified, using 172.31.28.195
2023-10-28T17:53:10.676690389Z stderr F I1028 17:53:10.676599       1 server.go:148] Version: v1.28.3
2023-10-28T17:53:10.676705785Z stderr F I1028 17:53:10.676630       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
2023-10-28T17:53:10.886131223Z stderr F I1028 17:53:10.886022       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
2023-10-28T17:53:10.897780296Z stderr F I1028 17:53:10.897673       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
2023-10-28T17:53:10.897812518Z stderr F I1028 17:53:10.897767       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
2023-10-28T17:53:10.898141536Z stderr F I1028 17:53:10.898090       1 instance.go:298] Using reconciler: lease
2023-10-28T17:53:10.942337328Z stderr F I1028 17:53:10.942209       1 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
2023-10-28T17:53:10.942398203Z stderr F W1028 17:53:10.942237       1 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.198794459Z stderr F I1028 17:53:11.198672       1 handler.go:232] Adding GroupVersion  v1 to ResourceManager
2023-10-28T17:53:11.199074496Z stderr F I1028 17:53:11.199006       1 instance.go:709] API group "internal.apiserver.k8s.io" is not enabled, skipping.
2023-10-28T17:53:11.727109665Z stderr F I1028 17:53:11.726977       1 instance.go:709] API group "resource.k8s.io" is not enabled, skipping.
2023-10-28T17:53:11.73812671Z stderr F I1028 17:53:11.737997       1 handler.go:232] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.738155336Z stderr F W1028 17:53:11.738024       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.738159422Z stderr F W1028 17:53:11.738032       1 genericapiserver.go:744] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.738619466Z stderr F I1028 17:53:11.738542       1 handler.go:232] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.738632851Z stderr F W1028 17:53:11.738560       1 genericapiserver.go:744] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.739751559Z stderr F I1028 17:53:11.739633       1 handler.go:232] Adding GroupVersion autoscaling v2 to ResourceManager
2023-10-28T17:53:11.740591068Z stderr F I1028 17:53:11.740490       1 handler.go:232] Adding GroupVersion autoscaling v1 to ResourceManager
2023-10-28T17:53:11.740602548Z stderr F W1028 17:53:11.740508       1 genericapiserver.go:744] Skipping API autoscaling/v2beta1 because it has no resources.
2023-10-28T17:53:11.740606042Z stderr F W1028 17:53:11.740514       1 genericapiserver.go:744] Skipping API autoscaling/v2beta2 because it has no resources.
2023-10-28T17:53:11.742424923Z stderr F I1028 17:53:11.742357       1 handler.go:232] Adding GroupVersion batch v1 to ResourceManager
2023-10-28T17:53:11.742435361Z stderr F W1028 17:53:11.742374       1 genericapiserver.go:744] Skipping API batch/v1beta1 because it has no resources.
2023-10-28T17:53:11.743566246Z stderr F I1028 17:53:11.743475       1 handler.go:232] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.743607717Z stderr F W1028 17:53:11.743523       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.743624456Z stderr F W1028 17:53:11.743532       1 genericapiserver.go:744] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.744408331Z stderr F I1028 17:53:11.744339       1 handler.go:232] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.744418616Z stderr F W1028 17:53:11.744357       1 genericapiserver.go:744] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.744510579Z stderr F W1028 17:53:11.744463       1 genericapiserver.go:744] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.745210431Z stderr F I1028 17:53:11.745137       1 handler.go:232] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.747033458Z stderr F I1028 17:53:11.746949       1 handler.go:232] Adding GroupVersion networking.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.747043866Z stderr F W1028 17:53:11.746969       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.747047903Z stderr F W1028 17:53:11.746976       1 genericapiserver.go:744] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.747623138Z stderr F I1028 17:53:11.747555       1 handler.go:232] Adding GroupVersion node.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.747633393Z stderr F W1028 17:53:11.747571       1 genericapiserver.go:744] Skipping API node.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.747636885Z stderr F W1028 17:53:11.747578       1 genericapiserver.go:744] Skipping API node.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.748569133Z stderr F I1028 17:53:11.748504       1 handler.go:232] Adding GroupVersion policy v1 to ResourceManager
2023-10-28T17:53:11.748578681Z stderr F W1028 17:53:11.748519       1 genericapiserver.go:744] Skipping API policy/v1beta1 because it has no resources.
2023-10-28T17:53:11.750536224Z stderr F I1028 17:53:11.750413       1 handler.go:232] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.750546703Z stderr F W1028 17:53:11.750431       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.7505507Z stderr F W1028 17:53:11.750438       1 genericapiserver.go:744] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.751024773Z stderr F I1028 17:53:11.750956       1 handler.go:232] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.751034551Z stderr F W1028 17:53:11.750975       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.751038109Z stderr F W1028 17:53:11.750981       1 genericapiserver.go:744] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.753321116Z stderr F I1028 17:53:11.753247       1 handler.go:232] Adding GroupVersion storage.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.753333005Z stderr F W1028 17:53:11.753268       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.753337025Z stderr F W1028 17:53:11.753275       1 genericapiserver.go:744] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.755529216Z stderr F I1028 17:53:11.755450       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
2023-10-28T17:53:11.756694632Z stderr F I1028 17:53:11.756613       1 handler.go:232] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta2 to ResourceManager
2023-10-28T17:53:11.756706438Z stderr F W1028 17:53:11.756632       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.756710644Z stderr F W1028 17:53:11.756639       1 genericapiserver.go:744] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.760827706Z stderr F I1028 17:53:11.760725       1 handler.go:232] Adding GroupVersion apps v1 to ResourceManager
2023-10-28T17:53:11.760845097Z stderr F W1028 17:53:11.760748       1 genericapiserver.go:744] Skipping API apps/v1beta2 because it has no resources.
2023-10-28T17:53:11.760848979Z stderr F W1028 17:53:11.760754       1 genericapiserver.go:744] Skipping API apps/v1beta1 because it has no resources.
2023-10-28T17:53:11.762078747Z stderr F I1028 17:53:11.761990       1 handler.go:232] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.762091472Z stderr F W1028 17:53:11.762009       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.762095783Z stderr F W1028 17:53:11.762019       1 genericapiserver.go:744] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
2023-10-28T17:53:11.763164146Z stderr F I1028 17:53:11.763088       1 handler.go:232] Adding GroupVersion events.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.763315977Z stderr F W1028 17:53:11.763257       1 genericapiserver.go:744] Skipping API events.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:11.792989331Z stderr F I1028 17:53:11.792793       1 handler.go:232] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
2023-10-28T17:53:11.793021703Z stderr F W1028 17:53:11.792818       1 genericapiserver.go:744] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
2023-10-28T17:53:12.522110465Z stderr F I1028 17:53:12.521972       1 secure_serving.go:213] Serving securely on [::]:6443
2023-10-28T17:53:12.522198047Z stderr F I1028 17:53:12.522158       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2023-10-28T17:53:12.522356415Z stderr F I1028 17:53:12.522292       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
2023-10-28T17:53:12.522393334Z stderr F I1028 17:53:12.522367       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
2023-10-28T17:53:12.52578876Z stderr F I1028 17:53:12.525686       1 apf_controller.go:372] Starting API Priority and Fairness config controller
2023-10-28T17:53:12.525968661Z stderr F I1028 17:53:12.525926       1 system_namespaces_controller.go:67] Starting system namespaces controller
2023-10-28T17:53:12.526053118Z stderr F I1028 17:53:12.526007       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
2023-10-28T17:53:12.526120569Z stderr F I1028 17:53:12.526088       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
2023-10-28T17:53:12.526219117Z stderr F I1028 17:53:12.526170       1 handler_discovery.go:412] Starting ResourceDiscoveryManager
2023-10-28T17:53:12.526357309Z stderr F I1028 17:53:12.526309       1 available_controller.go:423] Starting AvailableConditionController
2023-10-28T17:53:12.526421743Z stderr F I1028 17:53:12.526376       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
2023-10-28T17:53:12.526812111Z stderr F I1028 17:53:12.526663       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
2023-10-28T17:53:12.526827107Z stderr F I1028 17:53:12.526777       1 controller.go:78] Starting OpenAPI AggregationController
2023-10-28T17:53:12.527344578Z stderr F I1028 17:53:12.527039       1 gc_controller.go:78] Starting apiserver lease garbage collector
2023-10-28T17:53:12.527658721Z stderr F I1028 17:53:12.527481       1 controller.go:116] Starting legacy_token_tracking_controller
2023-10-28T17:53:12.527668185Z stderr F I1028 17:53:12.527520       1 shared_informer.go:311] Waiting for caches to sync for configmaps
2023-10-28T17:53:12.52778416Z stderr F I1028 17:53:12.527724       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2023-10-28T17:53:12.534714309Z stderr F I1028 17:53:12.534591       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
2023-10-28T17:53:12.534847657Z stderr F I1028 17:53:12.534783       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
2023-10-28T17:53:12.534971958Z stderr F I1028 17:53:12.534922       1 controller.go:80] Starting OpenAPI V3 AggregationController
2023-10-28T17:53:12.535322271Z stderr F I1028 17:53:12.535260       1 customresource_discovery_controller.go:289] Starting DiscoveryController
2023-10-28T17:53:12.535492046Z stderr F I1028 17:53:12.535440       1 aggregator.go:164] waiting for initial CRD sync...
2023-10-28T17:53:12.536758523Z stderr F I1028 17:53:12.536615       1 gc_controller.go:78] Starting apiserver lease garbage collector
2023-10-28T17:53:12.541638944Z stderr F I1028 17:53:12.541479       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2023-10-28T17:53:12.541853446Z stderr F I1028 17:53:12.541796       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2023-10-28T17:53:12.542359048Z stderr F I1028 17:53:12.542280       1 controller.go:134] Starting OpenAPI controller
2023-10-28T17:53:12.542487424Z stderr F I1028 17:53:12.542435       1 controller.go:85] Starting OpenAPI V3 controller
2023-10-28T17:53:12.542607913Z stderr F I1028 17:53:12.542559       1 naming_controller.go:291] Starting NamingConditionController
2023-10-28T17:53:12.542695949Z stderr F I1028 17:53:12.542638       1 establishing_controller.go:76] Starting EstablishingController
2023-10-28T17:53:12.542801818Z stderr F I1028 17:53:12.542743       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2023-10-28T17:53:12.542889575Z stderr F I1028 17:53:12.542831       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2023-10-28T17:53:12.54298293Z stderr F I1028 17:53:12.542926       1 crd_finalizer.go:266] Starting CRDFinalizer
2023-10-28T17:53:12.543513569Z stderr F I1028 17:53:12.543437       1 crdregistration_controller.go:111] Starting crd-autoregister controller
2023-10-28T17:53:12.543618347Z stderr F I1028 17:53:12.543567       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
2023-10-28T17:53:12.660244404Z stderr F I1028 17:53:12.660051       1 apf_controller.go:377] Running API Priority and Fairness config worker
2023-10-28T17:53:12.660268253Z stderr F I1028 17:53:12.660079       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
2023-10-28T17:53:12.660430111Z stderr F I1028 17:53:12.660346       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
2023-10-28T17:53:12.661980615Z stderr F I1028 17:53:12.661883       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2023-10-28T17:53:12.666050495Z stderr F I1028 17:53:12.665929       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
2023-10-28T17:53:12.687205363Z stderr F I1028 17:53:12.686201       1 shared_informer.go:318] Caches are synced for node_authorizer
2023-10-28T17:53:12.727286856Z stderr F I1028 17:53:12.727130       1 cache.go:39] Caches are synced for AvailableConditionController controller
2023-10-28T17:53:12.727689269Z stderr F I1028 17:53:12.727612       1 shared_informer.go:318] Caches are synced for configmaps
2023-10-28T17:53:12.744280542Z stderr F I1028 17:53:12.744153       1 shared_informer.go:318] Caches are synced for crd-autoregister
2023-10-28T17:53:12.7443023Z stderr F I1028 17:53:12.744192       1 aggregator.go:166] initial CRD sync complete...
2023-10-28T17:53:12.744306386Z stderr F I1028 17:53:12.744199       1 autoregister_controller.go:141] Starting autoregister controller
2023-10-28T17:53:12.744326847Z stderr F I1028 17:53:12.744231       1 cache.go:32] Waiting for caches to sync for autoregister controller
2023-10-28T17:53:12.744367881Z stderr F I1028 17:53:12.744323       1 cache.go:39] Caches are synced for autoregister controller
2023-10-28T17:53:13.539608439Z stderr F I1028 17:53:13.539345       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
2023-10-28T17:53:13.840218284Z stderr F W1028 17:53:13.840109       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.31.28.195]
2023-10-28T17:53:13.841684003Z stderr F I1028 17:53:13.841480       1 controller.go:624] quota admission added evaluator for: endpoints
2023-10-28T17:53:13.849297708Z stderr F I1028 17:53:13.848962       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
2023-10-28T17:53:26.959645768Z stderr F I1028 17:53:26.959544       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
2023-10-28T17:53:26.970006246Z stderr F W1028 17:53:26.969899       1 lease.go:263] Resetting endpoints for master service "kubernetes" to []
2023-10-28T17:53:26.977509812Z stderr F I1028 17:53:26.977395       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2023-10-28T17:53:26.977767497Z stderr F I1028 17:53:26.977683       1 autoregister_controller.go:165] Shutting down autoregister controller
2023-10-28T17:53:26.977780551Z stderr F I1028 17:53:26.977708       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
2023-10-28T17:53:26.97778421Z stderr F I1028 17:53:26.977726       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
2023-10-28T17:53:26.977841894Z stderr F I1028 17:53:26.977737       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
2023-10-28T17:53:26.97784745Z stderr F I1028 17:53:26.977754       1 controller.go:129] Ending legacy_token_tracking_controller
2023-10-28T17:53:26.977850935Z stderr F I1028 17:53:26.977759       1 controller.go:130] Shutting down legacy_token_tracking_controller
2023-10-28T17:53:26.977854284Z stderr F I1028 17:53:26.977795       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
2023-10-28T17:53:26.977857645Z stderr F I1028 17:53:26.977803       1 available_controller.go:439] Shutting down AvailableConditionController
2023-10-28T17:53:26.978358814Z stderr F I1028 17:53:26.978279       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
2023-10-28T17:53:26.978369244Z stderr F I1028 17:53:26.978298       1 crd_finalizer.go:278] Shutting down CRDFinalizer
2023-10-28T17:53:26.978373441Z stderr F I1028 17:53:26.978313       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
2023-10-28T17:53:26.978377195Z stderr F I1028 17:53:26.978328       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
2023-10-28T17:53:26.981201388Z stderr F I1028 17:53:26.978495       1 establishing_controller.go:87] Shutting down EstablishingController
2023-10-28T17:53:26.98128024Z stderr F I1028 17:53:26.978504       1 naming_controller.go:302] Shutting down NamingConditionController
2023-10-28T17:53:26.981286621Z stderr F I1028 17:53:26.978512       1 controller.go:115] Shutting down OpenAPI V3 controller
2023-10-28T17:53:26.981290852Z stderr F I1028 17:53:26.978519       1 controller.go:162] Shutting down OpenAPI controller
2023-10-28T17:53:26.981301532Z stderr F I1028 17:53:26.978528       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
2023-10-28T17:53:26.981316153Z stderr F I1028 17:53:26.978538       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
2023-10-28T17:53:26.981368922Z stderr F I1028 17:53:26.978564       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
2023-10-28T17:53:26.981394439Z stderr F I1028 17:53:26.978572       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
2023-10-28T17:53:26.981398882Z stderr F I1028 17:53:26.978695       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key"
2023-10-28T17:53:26.981419153Z stderr F I1028 17:53:26.979126       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
2023-10-28T17:53:26.981458076Z stderr F I1028 17:53:26.979199       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2023-10-28T17:53:26.981482897Z stderr F I1028 17:53:26.979259       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
2023-10-28T17:53:26.981522814Z stderr F I1028 17:53:26.979269       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
2023-10-28T17:53:26.981527653Z stderr F I1028 17:53:26.979277       1 controller.go:84] Shutting down OpenAPI AggregationController
2023-10-28T17:53:26.98156274Z stderr F I1028 17:53:26.979337       1 secure_serving.go:258] Stopped listening on [::]:6443
2023-10-28T17:53:26.981582769Z stderr F I1028 17:53:26.979354       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
2023-10-28T17:53:26.98161059Z stderr F I1028 17:53:26.979425       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
2023-10-28T17:53:26.981657223Z stderr F I1028 17:53:26.979584       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key"
2023-10-28T17:53:26.981665538Z stderr F I1028 17:53:26.980528       1 controller.go:159] Shutting down quota evaluator
2023-10-28T17:53:26.981713774Z stderr F I1028 17:53:26.981685       1 controller.go:178] quota evaluator worker shutdown
2023-10-28T17:53:26.981762375Z stderr F I1028 17:53:26.981726       1 controller.go:178] quota evaluator worker shutdown
2023-10-28T17:53:26.981800124Z stderr F I1028 17:53:26.981767       1 controller.go:178] quota evaluator worker shutdown
2023-10-28T17:53:26.981823248Z stderr F I1028 17:53:26.981800       1 controller.go:178] quota evaluator worker shutdown
2023-10-28T17:53:26.981868552Z stderr F I1028 17:53:26.981834       1 controller.go:178] quota evaluator worker shutdown
  • "external host was not specified, using 172.31.28.195" - ์™ธ๋ถ€ ํ˜ธ์ŠคํŠธ๊ฐ€ ๋ช…์‹œ๋˜์ง€ ์•Š์•˜๊ณ  IP ์ฃผ์†Œ 172.31.28.195๋ฅผ ์‚ฌ์šฉํ•˜๋„๋ก ์„ค์ •๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

  • "Version: v1.28.3" - Kubernetes API ์„œ๋ฒ„ ๋ฒ„์ „์€ v1.28.3์ž…๋‹ˆ๋‹ค.

  • "Golang settings" - Golang ์‹คํ–‰ ์„ค์ •์ด ํ‘œ์‹œ๋˜๋ฉฐ GOGC, GOMAXPROCS ๋ฐ GOTRACEBACK ๊ฐ’์ด ๋น„์–ด ์žˆ์Šต๋‹ˆ๋‹ค.

  • "Loaded 12 mutating admission controller(s) successfully" - 12๊ฐœ์˜ Mutating Admission Controller๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ๋กœ๋“œ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

  • "Loaded 13 validating admission controller(s) successfully" - 13๊ฐœ์˜ Validating Admission Controller๊ฐ€ ์„ฑ๊ณต์ ์œผ๋กœ ๋กœ๋“œ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

  • "Using reconciler: lease" - ๋ ˆ์ฝ”๋“œ๊ฐ€ ๋ฆฌ์ฝ˜์‹ค๋Ÿฌ๋กœ ์‚ฌ์šฉ๋จ์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค.

  • ๋‹ค์–‘ํ•œ API ๊ทธ๋ฃน ๋ฐ ๋ฒ„์ „๋“ค์ด ๋ฆฌ์†Œ์Šค ๋งค๋‹ˆ์ €์— ์ถ”๊ฐ€๋˜๊ณ  ๋ช‡๋ช‡ API ๋ฒ„์ „์€ ๋ฆฌ์†Œ์Šค๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ์ง€ ์•Š์•„ ์Šคํ‚ต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. (์˜ˆ: apiextensions.k8s.io/v1beta1, authentication.k8s.io/v1alpha1 ๋“ฑ)

  • "Serving securely on [::]:6443" - ์•ˆ์ „ํ•˜๊ฒŒ 6443 ํฌํŠธ์—์„œ ์„œ๋น„์Šค ์ค‘์ธ ๊ฒƒ์œผ๋กœ ๋ณด์ž…๋‹ˆ๋‹ค.

  • ์—ฌ๋Ÿฌ ์ปจํŠธ๋กค๋Ÿฌ์™€ ๋ฆฌ์ฝ˜์‹ค๋Ÿฌ๊ฐ€ ์‹œ์ž‘๋˜์—ˆ๊ณ , API Priority ๋ฐ Fairness ๊ด€๋ จ ์„ค์ •๋„ ๋กœ๋”ฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

  • "Resetting endpoints for master service "kubernetes" to [172.31.28.195]" - ๋งˆ์Šคํ„ฐ ์„œ๋น„์Šค "kubernetes"์˜ ์—”๋“œํฌ์ธํŠธ๊ฐ€ 172.31.28.195๋กœ ์žฌ์„ค์ •๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

  • ๋‹ค์–‘ํ•œ ์ปจํŠธ๋กค๋Ÿฌ์™€ ๋ฆฌ์†Œ์Šค ๋งค๋‹ˆ์ €๊ฐ€ ์ข…๋ฃŒ๋˜์—ˆ๊ณ , API ์„œ๋ฒ„ ๋ฆด๋ฆฌ์Šค ๊ฐ€๋น„์ง€ ์ปฌ๋ ‰ํ„ฐ๋„ ์ข…๋ฃŒ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

๋กœ๊ทธ๋ฅผ ๋ณด๋‹ˆ ๋ญ”๊ฐ€ pod ์Šค์ผ€์ค„๋ง์— ๊ด€ํ•ด์„œ kube-apiserver ํŒŒ๋“œ๊ฐ€ ์žฌ์‹œ์ž‘ํ•˜๋Š” ๊ณผ์ •์ด ๋ฐ˜๋ณต๋œ๊ฒƒ๊ฐ™๊ธฐ๋„ํ•˜๊ณ ... ๋ญ๊ฐ€ ๋ฌธ์ œ์ธ์ง€ ์•„์ง‚๊นŒ์ง€ ๋ชจ๋ฅด๊ฒ ๋‹ค ใ… 


๊ฒฐ๋ก 

ec2 ์ธ๋ฐ”์šด๋“œ์˜ kube-apiserver, scheduler์™€ ๊ฐ™์€ ํŒŒ๋“œ์— ๊ด€ํ•œ ํฌํŠธ๋ฅผ ๋ฌด์กฐ๊ฑด ๊ฐœ๋ฐฉํ•˜๋Š”๊ฒƒ์ด ์˜ณ๋‹ค..
