Network & Storage

Spark · March 17, 2023

PKOS


📌 Goals

μ΄λ²ˆμ£ΌλŠ” VPC CNIλ₯Ό μ‚¬μš©ν•œ μΏ λ²„λ„€ν‹°μŠ€ ν΄λŸ¬μŠ€ν„° ꡬ성을 μ§€λ‚œλ²ˆκ³Ό λ§ˆμ°¬κ°€μ§€λ‘œ kops λ₯Ό 톡해 AWS ν™˜κ²½μ— κ΅¬μ„±ν•˜κ²Œ λœλ‹€.
PODκ°„ 톡신을 νŒ¨ν‚·λ€ν”„λ₯Ό 톡해 ν™•μΈν•˜κ²Œ 되며,
VPC CNIμ—μ„œ νŒŒλ“œ 생성 개수의 μ œν•œλ„ ν™•μΈν•˜κ²Œ λœλ‹€.
AWS EBSλ₯Ό 톡해 PV, PVC λ₯Ό 닀루며
λ³Όλ₯¨ μŠ€λƒ…μƒ·λ„ ν™•μΈν•œλ‹€.
μΆ”κ°€λ‘œ AWS EFS,FSx,File cache 도 ν™•μΈν•˜μž.

Analyzing the kops-oneclick script

Before building the lab environment, let's analyze the one-shot install script (kops-oneclick-f1.yaml) that Gasida provides. (This is the fun part.)

curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/kops-oneclick-f1.yaml

# It is written as an AWS CloudFormation template; since I'm familiar with OpenStack Heat templates,
# I can roughly follow it. (Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html )

AWSTemplateFormatVersion: '2010-09-09'
...
# νŒŒλΌλ―Έν„°κ°’μ€ keypair, IAM정보, S3, λ…Έλ“œ 개수, VPC block 정보 그리고 배포 νƒ€κ²Ÿ 리전등이 μ„€μ •λœλ‹€.
...
## Resources is where each AWS resource is created; the Ref function can be used to reference other resources.
Resources:
## Creates the VPC and declares its network CIDR.
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
     EnableDnsSupport: true
     EnableDnsHostnames: true
     CidrBlock: 10.0.0.0/16
     Tags:
        - Key: Name
          Value: My-VPC
## Creates the internet gateway
  MyIGW:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: My-IGW
          
## Attaches the Internet Gateway created above to the VPC.
  MyIGWAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref MyIGW
      VpcId: !Ref MyVPC

## "MyPVC"에 λΌμš°νŒ… ν…Œμ΄λΈ”μ„ μƒμ„±ν•œλ‹€.
  MyPublicRT:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref MyVPC
      Tags:
        - Key: Name
          Value: My-Public-RT

## Sets the default route for 0.0.0.0/0.
## The internet gateway created earlier is used as the default gateway.
  DefaultPublicRoute:
    Type: AWS::EC2::Route
    DependsOn: MyIGWAttachment
    Properties:
      RouteTableId: !Ref MyPublicRT
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref MyIGW

## "MyPVC"에 μ„œλΈŒλ„·μ„ μ„ μ–Έν•˜λ©°, AZλŠ” GetAZs ν•¨μˆ˜λ₯Ό μ΄μš©ν•΄μ„œ array둜 λ¦¬ν„΄λœ 첫번째 κ°’μœΌλ‘œ μ„€μ •ν•œλ‹€.  
  MyPublicSN:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      AvailabilityZone: !Select [ 0, !GetAZs '' ]
      CidrBlock: 10.0.0.0/24
      Tags:
        - Key: Name
          Value: My-Public-SN

## Associates the routing table created above with the MyPublicSN subnet.
  MyPublicSNRouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref MyPublicRT
      SubnetId: !Ref MyPublicSN

## EC2 security group settings: only the SgIngressSshCidr range received as a parameter
## (SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32) is allowed on ports 22 and 80.
  KOPSEC2SG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: kops ec2 Security Group
      VpcId: !Ref MyVPC
      Tags:
        - Key: Name
          Value: KOPS-EC2-SG
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: '22'
        ToPort: '22'
        CidrIp: !Ref SgIngressSshCidr
      - IpProtocol: tcp
        FromPort: '80'
        ToPort: '80'
        CidrIp: !Ref SgIngressSshCidr      
        
# EC2 instance creation: a t3.small instance, using the AMI image and keypair passed in as parameters;
# a network interface is created in the subnet created earlier.
  KOPSEC2:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.small
      ImageId: !Ref LatestAmiId
      KeyName: !Ref KeyName
      Tags:
        - Key: Name
          Value: kops-ec2
      NetworkInterfaces:
        - DeviceIndex: 0
          SubnetId: !Ref MyPublicSN
          GroupSet:
          - !Ref KOPSEC2SG
          AssociatePublicIpAddress: true
          PrivateIpAddress: 10.0.0.10
          
## μΈμŠ€ν„΄μŠ€κ°€ κΈ°λ™λœ ν›„ GuestOSλ‚΄μ—μ„œ μ‹€ν–‰λ˜λŠ” 슀크립트 λ‚΄μš©μ΄λ‹€. cloud-init에 μ˜ν•΄ μ‹€ν–‰λœλ‹€. 
      UserData: 
## The userdata string is Base64-encoded.
        Fn::Base64:
          !Sub |
            #!/bin/bash
            ## Change the hostname
            hostnamectl --static set-hostname kops-ec2

            # Change Timezone
            ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime

            # Install Packages
            cd /root
            
            yum -y install tree jq git htop
            ## Download and install the latest stable kubectl.
            curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
            install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
            ## μ΅œμ‹ λ²„μ „μ˜ kopsλ₯Ό λ‚΄λ €λ°›μ•„ μ„€μΉ˜ν•œλ‹€.
            curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
            chmod +x kops
            mv kops /usr/local/bin/kops
            ## Download and install the AWS CLI.
            curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
            unzip awscliv2.zip >/dev/null 2>&1
            sudo ./aws/install
            export PATH=/usr/local/bin:$PATH
            source ~/.bash_profile
            ## Install aws bash auto-completion
            complete -C '/usr/local/bin/aws_completer' aws
            ## Generate an ssh rsa key (without a passphrase)
            ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
            
            echo 'alias vi=vim' >> /etc/profile
            ## Switch straight to the root user when logging in as ec2-user
            echo 'sudo su -' >> /home/ec2-user/.bashrc
            ## Download and install helm3 and yh (a YAML highlighter)
            curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
            wget https://github.com/andreazorzetto/yh/releases/download/v0.4.0/yh-linux-amd64.zip
            unzip yh-linux-amd64.zip
            mv yh /usr/local/bin/

            ## Use the KubernetesVersion parameter for the K8S version.
            export KUBERNETES_VERSION=${KubernetesVersion}
            echo "export KUBERNETES_VERSION=${KubernetesVersion}" >> /etc/profile

            ## Configure IAM from the IAM user credential parameters.
            export AWS_ACCESS_KEY_ID=${MyIamUserAccessKeyID}
            export AWS_SECRET_ACCESS_KEY=${MyIamUserSecretAccessKey}
            export AWS_DEFAULT_REGION=${AWS::Region}
            export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
            echo "export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" >> /etc/profile
            echo "export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" >> /etc/profile
            echo "export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION" >> /etc/profile
            echo 'export AWS_PAGER=""' >>/etc/profile
            echo "export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)" >> /etc/profile

            ## Use the ClusterBaseName parameter for CLUSTER_NAME.
            export KOPS_CLUSTER_NAME=${ClusterBaseName}
            echo "export KOPS_CLUSTER_NAME=$KOPS_CLUSTER_NAME" >> /etc/profile

            ## Set the S3 state store bucket name
            export KOPS_STATE_STORE=s3://${S3StateStore}
            echo "export KOPS_STATE_STORE=s3://${S3StateStore}" >> /etc/profile

            ## κ°€μ‹œλ‹€λ‹˜μ˜ PKOS κΉƒν—ˆλΈŒ 클둠
            git clone https://github.com/gasida/PKOS.git /root/pkos

            ## Download and install "krew", the kubectl plugin manager
            curl -LO https://github.com/kubernetes-sigs/krew/releases/download/v0.4.3/krew-linux_amd64.tar.gz
            tar zxvf krew-linux_amd64.tar.gz
            ./krew-linux_amd64 install krew
            export PATH="$PATH:/root/.krew/bin"
            echo 'export PATH="$PATH:/root/.krew/bin"' >> /etc/profile

            ## Set up kubectl autocompletion and install kube-ps1 (shows the context/namespace in the prompt)
            echo 'source <(kubectl completion bash)' >> /etc/profile
            echo 'alias k=kubectl' >> /etc/profile
            echo 'complete -F __start_kubectl k' >> /etc/profile

            git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
            cat <<"EOT" >> /root/.bash_profile
            source /root/kube-ps1/kube-ps1.sh
            KUBE_PS1_SYMBOL_ENABLE=false
            function get_cluster_short() {
              echo "$1" | cut -d . -f1
            }
            KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
            KUBE_PS1_SUFFIX=') '
            PS1='$(kube_ps1)'$PS1
            EOT
            
            ## Install krew plugins
            ## ctx (switch context), ns (switch namespace)
            ## get-all (shows a wider range of objects than 'get all')
            ## ktop (visual management tool, like k9s)
            ## df-pv (usage per PV)
            ## mtail (tail logs from multiple pods)
            ## tree (shows objects as a tree)
            kubectl krew install ctx ns get-all ktop # df-pv mtail tree

            ## Install Docker
            amazon-linux-extras install docker -y
            systemctl start docker && systemctl enable docker

            ## Create the EC2 instances and deploy the k8s cluster with kops;
            ## first generate kops.yaml with a dry run.
            kops create cluster --zones=${AvailabilityZone1},${AvailabilityZone2} --networking amazonvpc --cloud aws \
            --master-size ${MasterNodeInstanceType} --node-size ${WorkerNodeInstanceType} --node-count=${WorkerNodeCount} \
            --network-cidr ${VpcBlock} --ssh-public-key ~/.ssh/id_rsa.pub --kubernetes-version "${KubernetesVersion}" --dry-run \
            --output yaml > kops.yaml

            ## Add the settings below.
            cat <<EOT > addon.yaml
              certManager:
                enabled: true
              awsLoadBalancerController:
                enabled: true
              externalDns:
                provider: external-dns
              metricsServer:
                enabled: true
              kubeProxy:
                metricsBindAddress: 0.0.0.0
              kubeDNS:
                provider: CoreDNS
                nodeLocalDNS:
                  enabled: true
                  memoryRequest: 5Mi
                  cpuRequest: 25m
            EOT
            sed -i -n -e '/aws$/r addon.yaml' -e '1,$p' kops.yaml

            ## Also add the max-pods-per-node setting
            cat <<EOT > maxpod.yaml
                maxPods: 100
            EOT
            sed -i -n -e '/anonymousAuth/r maxpod.yaml' -e '1,$p' kops.yaml

            ## Add the VPC ENABLE_PREFIX_DELEGATION setting
            sed -i 's/amazonvpc: {}/amazonvpc:/g' kops.yaml
            cat <<EOT > awsvpc.yaml
                  env:
                  - name: ENABLE_PREFIX_DELEGATION
                    value: "true"
            EOT
            sed -i -n -e '/amazonvpc/r awsvpc.yaml' -e '1,$p' kops.yaml

            ## Create the cluster using the prepared kops.yaml.
            cat kops.yaml | kops create -f -
            kops update cluster --name $KOPS_CLUSTER_NAME --ssh-public-key ~/.ssh/id_rsa.pub --yes

            ## Configure the shell to use the kubeconfig of the kops-created k8s cluster
            echo "kops export kubeconfig --admin" >> /etc/profile

## CloudFormation Outputs: the public IP of the EC2 instance being created is captured and returned as a stack output.
Outputs:
  KOPSEC2IP:
    Value: !GetAtt KOPSEC2.PublicIp                    

Deploying the lab environment

Now let's deploy the lab environment. This lab uses the higher-spec c5d instance type, which means it's expensive. Deploy it when practicing, delete it when not in use, and redeploy it whenever you want to check something again. (Thrift is a virtue.)

## Download the kops YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/kops-oneclick-f1.yaml

## Deploy the CloudFormation stack : change the node instance types - MasterNodeInstanceType=t3.medium WorkerNodeInstanceType=c5d.large
# Save the command below as "kops-oneclick-f1.sh" (for reuse when needed).
aws cloudformation deploy --template-file kops-oneclick-f1.yaml --stack-name mykops --parameter-overrides \
KeyName=spark SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32  \
MyIamUserAccessKeyID=AKI..57W MyIamUserSecretAccessKey='F4KdY..og2d' \
ClusterBaseName='sparkandassociates.net' S3StateStore='pkos2' \
MasterNodeInstanceType=t3.medium WorkerNodeInstanceType=c5d.large \
--region ap-northeast-2

# After the CloudFormation stack finishes deploying, print the kOps EC2 IP (uses the k/v defined under outputs in the yaml)
aws cloudformation describe-stacks --stack-name mykops --query 'Stacks[*].Outputs[0].OutputValue' --output text

# Check that ssh access works.
ssh -i ./spark.pem ec2-user@$(aws cloudformation describe-stacks --stack-name mykops --query 'Stacks[*].Outputs[0].OutputValue' --output text)

## The post-install script's progress can be followed in cloud-init-output.log (on the kops-ec2 node).
## If the k8s master and two worker nodes are not deployed after about 15 minutes, check the error logs.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
I0314 14:42:18.254922    3105 create_cluster.go:878] Using SSH public key: /root/.ssh/id_rsa.pub
Error: cluster "sparkandassociates.net" already exists; use 'kops update cluster' to apply changes
Error: error parsing file "-": Object 'Kind' is missing in 'null'
--ssh-public-key on update is deprecated - please use `kops create secret --name sparkandassociates.net sshpublickey admin -i ~/.ssh/id_rsa.pub` instead
I0314 14:42:18.661378    3125 update_cluster.go:238] Using SSH public key: /root/.ssh/id_rsa.pub
Error: exactly one 'admin' SSH public key can be specified when running with AWS; please delete a key using `kops delete secret`
Cloud-init v. 19.3-46.amzn2 finished at Tue, 14 Mar 2023 05:42:25 +0000. Datasource DataSourceEc2.  Up 695.61 seconds

It says the sparkandassociates.net cluster already exists; leftovers from a few reinstall attempts seem to be the cause. Let's check.

Cluster state is still left in the s3 pkos2 bucket.
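
One way to clean this up (a sketch; pkos2 and sparkandassociates.net are the names used in this lab) is to check and remove the stale kops state before re-running the install:

# See what kops still has in the state store, then wipe it
kops get cluster --state s3://pkos2
kops delete cluster --name sparkandassociates.net --state s3://pkos2 --yes
# or simply empty and remove the bucket to start over
aws s3 rm s3://pkos2 --recursive
aws s3 rb s3://pkos2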

# After deleting the s3 bucket, create it again.
[root@san-1 pkos]# aws s3 mb s3://pkos2 --region ap-northeast-2
make_bucket: pkos2
[root@san-1 pkos]# aws s3 ls
2023-03-14 06:23:54 pkos2

λ‘λ²ˆμ§Έ 문제

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
/var/lib/cloud/instance/scripts/part-001: line 90:  3104 Segmentation fault      kops create cluster --zones=ap-northeast-2a,ap-northeast-2c --networking amazonvpc --cloud aws --master-size t3.medium --node-size c5d.large --node-count=2 --network-cidr 172.30.0.0/16 --ssh-public-key ~/.ssh/id_rsa.pub --kubernetes-version "1.24.11" --dry-run --output yaml > kops.yaml
/var/lib/cloud/instance/scripts/part-001: line 127:  3115 Done                    cat kops.yaml
      3116 Segmentation fault      | kops create -f -
/var/lib/cloud/instance/scripts/part-001: line 128:  3117 Segmentation fault      kops update cluster --name $KOPS_CLUSTER_NAME --ssh-public-key ~/.ssh/id_rsa.pub --yes
Cloud-init v. 19.3-46.amzn2 finished at Tue, 14 Mar 2023 06:41:53 +0000. Datasource DataSourceEc2.  Up 999.85 seconds

The dry run failed.

## Check the script generated by cloud-init with the variables substituted.
/var/lib/cloud/instance/scripts/part-001

[root@kops-ec2 ~]# cat kops.yaml
[root@kops-ec2 ~]#

route53λ‚΄μ—μ„œ λ§Œλ“€μ–΄μ§„ A-record μ‚­μ œν›„ μ •μƒμ μœΌλ‘œ 생성됨.
cloud-init-output.log 파일 확인

kOps has set your kubectl context to sparkandassociates.net
W0315 13:37:59.908065    3042 update_cluster.go:347] Exported kubeconfig with no user authentication; use --admin, --user or --auth-plugin flags with `kops export kubeconfig`

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to a control-plane node: ssh -i ~/.ssh/id_rsa ubuntu@api.sparkandassociates.net
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/addons.

Cloud-init v. 19.3-46.amzn2 finished at Wed, 15 Mar 2023 04:37:59 +0000. Datasource DataSourceEc2.  Up 125.37 seconds

You can check the progress with kops validate cluster.

(sparkandassociates:N/A) [root@kops-ec2 ~]# kops validate cluster --wait 10m
Validating cluster sparkandassociates.net

INSTANCE GROUPS
NAME                            ROLE            MACHINETYPE     MIN     MAX     SUBNETS
control-plane-ap-northeast-2a   ControlPlane    t3.medium       1       1       ap-northeast-2a
nodes-ap-northeast-2a           Node            c5d.large       1       1       ap-northeast-2a
nodes-ap-northeast-2c           Node            c5d.large       1       1       ap-northeast-2c

NODE STATUS
NAME    ROLE    READY

VALIDATION ERRORS
KIND    NAME            MESSAGE
dns     apiserver       Validation Failed

The external-dns Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a control plane node to start, external-dns to launch, and DNS to propagate.  The protokube container and external-dns deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
W0315 13:44:52.957150    4265 validate_cluster.go:232] (will retry): cluster not yet healthy

## μƒμ„±μ™„λ£Œ 확인.
(sparkandassociates:N/A) [root@kops-ec2 ~]# kops validate cluster --wait 10m
Validating cluster sparkandassociates.net

INSTANCE GROUPS
NAME                            ROLE            MACHINETYPE     MIN     MAX     SUBNETS
control-plane-ap-northeast-2a   ControlPlane    t3.medium       1       1       ap-northeast-2a
nodes-ap-northeast-2a           Node            c5d.large       1       1       ap-northeast-2a
nodes-ap-northeast-2c           Node            c5d.large       1       1       ap-northeast-2c

NODE STATUS
NAME                    ROLE            READY
i-05538a0cedc2ceac8     node            True
i-08a4f488be204357c     control-plane   True
i-0f94a3e2d9abe939f     node            True

Your cluster sparkandassociates.net is ready
# Check the metrics server: metrics are collected from cAdvisor every 15 seconds
kubectl top node
(sparkandassociates:N/A) [root@kops-ec2 ~]# k top node
NAME                  CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
i-05538a0cedc2ceac8   32m          1%     1009Mi          28%
i-08a4f488be204357c   192m         9%     2027Mi          53%
i-0f94a3e2d9abe939f   24m          1%     965Mi           26%

# The default LimitRange policy guarantees a minimum of 0.1 CPU (100m request) per container,
# which gets in the way of launching 100 test pods on a single worker node, so delete it for the test.

(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl describe limitranges
Name:       limits
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    -    100m             -              -
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl delete limitranges limits
limitrange "limits" deleted
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl get limitranges
No resources found in default namespace.
(sparkandassociates:N/A) [root@kops-ec2 ~]#

Now, let's get started.
Next up: networking.

Kubernetes Networking

1. Understanding the AWS VPC CNI

The single diagram below is enough to get a grasp of the VPC CNI.


Source - PKOS study materials

Let's check it directly.

# Check CNI info
(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
amazon-k8s-cni-init:v1.12.2
amazon-k8s-cni:v1.12.2

# Check node IPs
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

(sparkandassociates:N/A) [root@kops-ec2 ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table | grep -v 172.31.
----------------------------------------------------------------------------------------------------------------
|                                               DescribeInstances                                              |
+---------------------------------------------------------------+----------------+-----------------+-----------+
|                         InstanceName                          | PrivateIPAdd   |   PublicIPAdd   |  Status   |
+---------------------------------------------------------------+----------------+-----------------+-----------+
|  nodes-ap-northeast-2c.sparkandassociates.net                 |  172.30.66.26  |  15.164.221.63  |  running  |
|  kops-ec2                                                     |  10.0.0.10     |  13.124.35.72   |  running  |
|  control-plane-ap-northeast-2a.masters.sparkandassociates.net |  172.30.63.222 |  13.125.181.109 |  running  |
|  nodes-ap-northeast-2a.sparkandassociates.net                 |  172.30.32.225 |  3.35.141.26    |  running  |
+---------------------------------------------------------------+----------------+-----------------+-----------+


# Check pod IPs
kubectl get pod -n kube-system -o=custom-columns=NAME:.metadata.name,IP:.status.podIP,STATUS:.status.phase

# Check pod names
kubectl get pod -A -o name

# Check the number of pods
kubectl get pod -A -o name | wc -l
kubectl ktop  # it takes a little while for pod info to appear

Connect to the master node and check

# Install tools
sudo apt install -y tree jq net-tools

# Check CNI info
ls /var/log/aws-routed-eni
cat /var/log/aws-routed-eni/plugin.log | jq 
cat /var/log/aws-routed-eni/ipamd.log | jq

# λ„€νŠΈμ›Œν¬ 정보 확인 : eniYλŠ” pod network λ„€μž„μŠ€νŽ˜μ΄μŠ€μ™€ veth pair
ip -br -c addr
ip -c addr
ip -c route
sudo iptables -t nat -S
sudo iptables -t nat -L -n -v

ubuntu@i-08a4f488be204357c:~$ ip -br -c addr
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens5             UP             172.30.63.222/19 fe80::d1:a1ff:febf:2b56/64
nodelocaldns     DOWN           169.254.20.10/32
eni6d8cdfa2db1@if3 UP             fe80::70f8:9fff:fe74:618d/64
eni95ee4851614@if3 UP             fe80::e0fe:b9ff:fe1f:f5aa/64
enib6e94747ace@if3 UP             fe80::5017:fdff:fe5a:24f6/64
enibafb7cbc19f@if3 UP             fe80::b45f:96ff:febf:bd42/64

ubuntu@i-08a4f488be204357c:~$ ip -c route
default via 172.30.32.1 dev ens5 proto dhcp src 172.30.63.222 metric 100
172.30.32.0/19 dev ens5 proto kernel scope link src 172.30.63.222
172.30.32.1 dev ens5 proto dhcp scope link src 172.30.63.222 metric 100
172.30.56.192 dev eni6d8cdfa2db1 scope link
172.30.56.193 dev eni95ee4851614 scope link
172.30.56.194 dev enib6e94747ace scope link
172.30.56.195 dev enibafb7cbc19f scope link

μ›Œμ»€λ…Έλ“œλ„ λ§ˆμ°¬κ°€μ§€λ‘œ μ ‘μ†ν•΄μ„œ ν™•μΈν•΄λ³΄μž.

# Check the worker nodes' public IPs
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value}" --filters Name=instance-state-name,Values=running --output table

# Set worker node public IP variables
W1PIP=15.164.221.63
W2PIP=3.35.141.26

# [Worker node 1-2] SSH in: after connecting, install the tools and check the info below on each node
ssh -i ~/.ssh/id_rsa ubuntu@$W1PIP
ssh -i ~/.ssh/id_rsa ubuntu@$W2PIP
--------------------------------------------------
# Install tools
sudo apt install -y tree jq net-tools

# Check CNI info
ls /var/log/aws-routed-eni
cat /var/log/aws-routed-eni/plugin.log | jq
cat /var/log/aws-routed-eni/ipamd.log | jq

# λ„€νŠΈμ›Œν¬ 정보 확인
ip -br -c addr
ip -c addr
ip -c route
sudo iptables -t nat -S
sudo iptables -t nat -L -n -v

2. Checking basic network information

  • Network namespaces are split between the host (Root) namespace and a per-pod namespace
  • Certain pods (kube-proxy, aws-node) use the host (Root) IP directly
  • A t3.medium can have up to 6 IPs per ENI
  • With ENI0 and ENI1, each of the two ENIs can hold 5 secondary private IPs in addition to its own IP
  • The coredns pod is connected by a veth pair: eniY@ifN on the host and eth0 inside the pod (see the sketch after this list)
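
As a quick sketch of that last point (run on a node; it only uses the ip tooling shown below), each eniY host interface carries a /32 route to exactly one pod IP, so the veth-to-pod mapping can be listed and cross-checked against kubectl get pod -o wide:

# List host-side eniY interfaces and the pod IP routed through each one
for ENI in $(ip -o link show | awk -F': ' '/eni/{print $2}' | cut -d@ -f1); do
  echo "$ENI -> $(ip route show dev $ENI | awk '{print $1}')"
done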

Check IP addresses

(sparkandassociates:N/A) [root@kops-ec2 ~]# kubectl get pod -n kube-system -l app=ebs-csi-node -owide
NAME                 READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
ebs-csi-node-7gd5l   3/3     Running   0          19h   172.30.56.192   i-08a4f488be204357c   <none>           <none>
ebs-csi-node-jghkn   3/3     Running   0          19h   172.30.94.160   i-05538a0cedc2ceac8   <none>           <none>
ebs-csi-node-rz2x7   3/3     Running   0          19h   172.30.60.112   i-0f94a3e2d9abe939f   <none>           <none>
(sparkandassociates:N/A) [root@kops-ec2 ~]#

ubuntu@i-05538a0cedc2ceac8:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.30.64.1     0.0.0.0         UG    100    0        0 ens5
172.30.64.0     0.0.0.0         255.255.224.0   U     0      0        0 ens5
172.30.64.1     0.0.0.0         255.255.255.255 UH    100    0        0 ens5
172.30.88.64    0.0.0.0         255.255.255.255 UH    0      0        0 eniff5a9530b8d
172.30.94.160   0.0.0.0         255.255.255.255 UH    0      0        0 enif8808f94e33
172.30.94.161   0.0.0.0         255.255.255.255 UH    0      0        0 eni4122c9c8c4f
172.30.94.162   0.0.0.0         255.255.255.255 UH    0      0        0 eni942832c30bf
172.30.94.163   0.0.0.0         255.255.255.255 UH    0      0        0 eni3e424b82595

3. Pod-to-pod communication between nodes


(Source: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md)

With the VPC CNI, the pod network range and the worker node range inside the VPC are the same, so pods can communicate directly without a separate overlay network.
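
A quick sanity check (a sketch, run from kops-ec2): node internal IPs and pod IPs should both come out of the same VPC range (172.30.0.0/16 in this lab), with no separate overlay CIDR.

kubectl get node -o wide | awk 'NR>1 {print $1, $6}'    # node name, InternalIP
kubectl get pod -A -o wide | awk 'NR>1 {print $2, $7}'  # pod name, pod IP (host-network pods show the node IP)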

Let's see it for ourselves. Create test pods - nicolaka/netshoot

# [Terminal 1-2] Monitor worker nodes 1-2
ssh -i ~/.ssh/id_rsa ubuntu@$W1PIP
watch -d "ip link | egrep 'ens5|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"

ssh -i ~/.ssh/id_rsa ubuntu@$W2PIP
watch -d "ip link | egrep 'ens5|eni' ;echo;echo "[ROUTE TABLE]"; route -n | grep eni"

# Create the netshoot-pod test pods
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot-pod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: netshoot-pod
  template:
    metadata:
      labels:
        app: netshoot-pod
    spec:
      containers:
      - name: netshoot-pod
        image: nicolaka/netshoot
        command: ["tail"]
        args: ["-f", "/dev/null"]
      terminationGracePeriodSeconds: 0
EOF

# Set pod name variables
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].metadata.name})
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].metadata.name})

# Check the pods
kubectl get pod -o wide
kubectl get pod -o=custom-columns=NAME:.metadata.name,IP:.status.podIP

When a pod is created, an eniY@ifN interface is added and a route for the pod IP is added to the routing table.

The MTU is set to 9001 (jumbo frames).
Check the AWS MTU

You can see that the eniY@ifN interface has been added.
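
A quick way to double-check the MTU (a sketch; it uses the netshoot pods created above):

ip link show ens5 | grep mtu                                 # on the node: primary ENI, mtu 9001
kubectl exec -it $PODNAME1 -- ip link show eth0 | grep mtu   # inside the pod: also 9001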

Let's find the process that writes the aws-routed-eni logs; that process creates the CNI interfaces and updates the routing table.
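
One way to find it (a sketch; run on a node, installing lsof first if it isn't there):

sudo apt install -y lsof
sudo lsof /var/log/aws-routed-eni/ipamd.log   # the writer (ipamd, running inside the aws-node pod) should show up here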

The k8s cluster runs several aws-related pods; let's take a look at the aws-node-* pods.

(sparkandassociates:default) [root@kops-ec2 ~]# k get pod -A -o wide | grep aws
kube-system   aws-cloud-controller-manager-hr2qz              1/1     Running   0             27h   172.30.63.222   i-08a4f488be204357c   <none>           <none>
kube-system   aws-load-balancer-controller-55bd49cfc7-5kq7q   1/1     Running   0             27h   172.30.63.222   i-08a4f488be204357c   <none>           <none>
kube-system   aws-node-9d9h7                                  1/1     Running   0             27h   172.30.63.222   i-08a4f488be204357c   <none>           <none>
kube-system   aws-node-lv8gm                                  1/1     Running   0             27h   172.30.66.26    i-05538a0cedc2ceac8   <none>           <none>
kube-system   aws-node-r5g6c                                  1/1     Running   0             27h   172.30.32.225   i-0f94a3e2d9abe939f   <none>           <none>

(sparkandassociates:default) [root@kops-ec2 ~]# kubectl get ds -n kube-system
NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
aws-cloud-controller-manager   1         1         1       1            1           <none>                   2d
aws-node                       3         3         3       3            3           <none>                   2d
ebs-csi-node                   3         3         3       3            3           kubernetes.io/os=linux   2d
kops-controller                1         1         1       1            1           <none>                   2d
node-local-dns                 3         3         3       3            3           <none>                   2d

It can be found by inspecting the aws-node daemonset.

Check the grpc-health-probe code used for the liveness and readiness probes:
https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/grpc-health-probe/main.go

Check the aws-vpc-cni code:
https://github.com/aws/amazon-vpc-cni-k8s/blob/master/cmd/aws-vpc-cni/main.go

The VPC ENIs appear to be controlled through the eniconfigs.crd.k8s.amazonaws.com CRD (CustomResourceDefinition).

VPC CNI documentation:

https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/

In any case, we have now seen roughly what characteristics a kops-deployed k8s cluster has.

POD-to-POD communication test

# Set pod IP variables
PODIP1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].status.podIP})
PODIP2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].status.podIP})

PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].metadata.name})
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].metadata.name})

# Ping test from pod 1's shell to pod 2
kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2

# Ping test from pod 2's shell to pod 1
kubectl exec -it $PODNAME2 -- ping -c 2 $PODIP1

(sparkandassociates:default) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- ping -c 2 $PODIP2
PING 172.30.94.164 (172.30.94.164) 56(84) bytes of data.
64 bytes from 172.30.94.164: icmp_seq=1 ttl=62 time=1.12 ms
64 bytes from 172.30.94.164: icmp_seq=2 ttl=62 time=1.04 ms

--- 172.30.94.164 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 1.035/1.076/1.117/0.041 ms
(sparkandassociates:default) [root@kops-ec2 ~]# ^C
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- ping -c 2 $PODIP1
PING 172.30.60.114 (172.30.60.114) 56(84) bytes of data.
64 bytes from 172.30.60.114: icmp_seq=1 ttl=62 time=1.03 ms
64 bytes from 172.30.60.114: icmp_seq=2 ttl=62 time=1.02 ms

# On the worker node EC2: check with tcpdump
sudo tcpdump -i any -nn icmp
sudo tcpdump -i ens5 -nn icmp

4. Pod-to-external communication

Traffic flow: outbound traffic is SNATed by iptables to the node interface IP.

(Reference: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md )

# From the working EC2: ping an external host from pod-1's shell
kubectl exec -it $PODNAME1 -- ping -c 1 www.google.com
kubectl exec -it $PODNAME1 -- ping -i 0.1 www.google.com

# On the worker node EC2: check the public IP, check with tcpdump
curl -s ipinfo.io/ip ; echo
sudo tcpdump -i any -nn icmp
sudo tcpdump -i ens5 -nn icmp

(sparkandassociates:default) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- ping -c 1 www.google.com
PING www.google.com (142.250.196.100) 56(84) bytes of data.
64 bytes from nrt12s35-in-f4.1e100.net (142.250.196.100): icmp_seq=1 ttl=45 time=26.4 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 26.374/26.374/26.374/0.000 ms
(sparkandassociates:default) [root@kops-ec2 ~]#
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
root@i-05538a0cedc2ceac8:~#
root@i-05538a0cedc2ceac8:~#
root@i-05538a0cedc2ceac8:~# sudo tcpdump -i any -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
06:35:55.774315 IP 172.30.94.164 > 142.250.196.100: ICMP echo request, id 63106, seq 1, length 64
06:35:55.774339 IP 172.30.66.26 > 142.250.196.100: ICMP echo request, id 21114, seq 1, length 64
06:35:55.800666 IP 142.250.196.100 > 172.30.66.26: ICMP echo reply, id 21114, seq 1, length 64
06:35:55.800682 IP 142.250.196.100 > 172.30.94.164: ICMP echo reply, id 63106, seq 1, length 64

# Traffic from the pod IP (172.30.94.164) is SNATed to the node IP (172.30.66.26).

# When a pod talks to the outside world, it is SNATed by the 'AWS-SNAT-CHAIN-0, AWS-SNAT-CHAIN-1' rules shown below!
root@i-05538a0cedc2ceac8:~# sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 ! -d 172.30.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j AWS-SNAT-CHAIN-1
-A AWS-SNAT-CHAIN-1 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 172.30.66.26 --random-fully
root@i-05538a0cedc2ceac8:~#



# For reference, the trailing IP is the IP address of eth0 (the first ENI)
# --random-fully behavior - link 1, link 2
sudo iptables -t nat -S | grep 'A AWS-SNAT-CHAIN'
-A AWS-SNAT-CHAIN-0 ! -d 172.30.0.0/16 -m comment --comment "AWS SNAT CHAIN" -j AWS-SNAT-CHAIN-1
-A AWS-SNAT-CHAIN-1 ! -o vlan+ -m comment --comment "AWS, SNAT" -m addrtype ! --dst-type LOCAL -j SNAT --to-source 172.30.85.242 --random-fully

## RETURN happens below because 'mark 0x4000/0x4000' does not match!
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
...

# Check the packet counters
Every 2.0s: sudo iptables -v --numeric --table nat --list AWS-SNAT-CHAIN-0; echo ; sudo iptables -v --numeric --table nat --list ...  i-05538a0cedc2ceac8: Fri Mar 17 06:40:43 2023

Chain AWS-SNAT-CHAIN-0 (1 references)
 pkts bytes target     prot opt in     out     source               destination
 264K   16M AWS-SNAT-CHAIN-1  all  --  *      *       0.0.0.0/0           !172.30.0.0/16        /* AWS SNAT CHAIN */

Chain AWS-SNAT-CHAIN-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination
33030 1983K SNAT       all  --  *      !vlan+  0.0.0.0/0            0.0.0.0/0            /* AWS, SNAT */ ADDRTYPE match dst-type !LOCAL to:172.30.66.26 random-fully

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination
 2973  185K RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match ! 0x4000/0x4000
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK xor 0x4000
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ random-fully



# Check conntrack
sudo conntrack -L -n |grep -v '169.254.169'

conntrack v1.4.5 (conntrack-tools): 24 flow entries have been shown.
icmp     1 24 src=172.30.94.164 dst=142.251.42.132 type=8 code=0 id=33110 src=142.251.42.132 dst=172.30.66.26 type=0 code=0 id=29865 mark=128 use=1
tcp      6 102 TIME_WAIT src=172.30.94.164 dst=142.251.42.132 sport=60882 dport=80 src=142.251.42.132 dst=172.30.66.26 sport=80 dport=18054 [ASSURED] mark=128 use=1

5. Limit on the number of pods per node

# Check the (filtered) info for t3 instance types

 [root@san-1 yaml]# aws ec2 describe-instance-types --filters Name=instance-type,Values=t3.* \
>  --query "InstanceTypes[].{Type: InstanceType, MaxENI: NetworkInfo.MaximumNetworkInterfaces, IPv4addr: NetworkInfo.Ipv4AddressesPerInterface}" \
>  --output table
--------------------------------------
|        DescribeInstanceTypes       |
+----------+----------+--------------+
| IPv4addr | MaxENI   |    Type      |
+----------+----------+--------------+
|  15      |  4       |  t3.2xlarge  |
|  15      |  4       |  t3.xlarge   |
|  12      |  3       |  t3.large    |
|  6       |  3       |  t3.medium   |
|  2       |  2       |  t3.nano     |
|  2       |  2       |  t3.micro    |
|  4       |  3       |  t3.small    |
+----------+----------+--------------+

# Example calculation of usable pods: the aws-node and kube-proxy pods use host networking, leaving 2 extra slots
(MaxENI * (IPv4addr - 1)) + 2
For a t3.medium: (3 * (6 - 1)) + 2 = 17 pods >> excluding aws-node and kube-proxy, 15 pods

However, IP prefix delegation has already been configured, so each worker node can run up to 100 pods.
(IPv4 Prefix Delegation: delegates IPv4 /28 subnets (prefixes), with max-pods set to the recommended maximum for the number of assignable IPs and the instance type.)
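
As a small sketch of where the default figure comes from (assuming the aws CLI is configured as on kops-ec2), the ENI/IP limits can be pulled for any instance type and run through the formula above; prefix delegation then hands out /28 prefixes per slot instead of single IPs, which is why maxPods: 100 is safe here.

TYPE=t3.medium
read MAXENI IPV4 < <(aws ec2 describe-instance-types --instance-types $TYPE \
  --query "InstanceTypes[0].NetworkInfo.[MaximumNetworkInterfaces,Ipv4AddressesPerInterface]" \
  --output text)
echo "$TYPE: ($MAXENI * ($IPV4 - 1)) + 2 = $(( MAXENI * (IPV4 - 1) + 2 )) pods"   # t3.medium -> 17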

(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe daemonsets.apps -n kube-system aws-node | egrep 'ENABLE_PREFIX_DELEGATION|WARM_PREFIX_TARGET'
      ENABLE_PREFIX_DELEGATION:               true
      WARM_PREFIX_TARGET:                     1
      
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe node | grep Allocatable: -A6
Allocatable:
  cpu:                2
  ephemeral-storage:  119703055367
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3670004Ki
  pods:               100
--
Allocatable:
  cpu:                2
  ephemeral-storage:  59763732382
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3854320Ki
  pods:               100
--
Allocatable:
  cpu:                2
  ephemeral-storage:  119703055367
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3698676Ki
  pods:               100
# Create 200 pods
(sparkandassociates:default) [root@kops-ec2 ~]# kubectl apply -f ~/pkos/2/nginx-dp.yaml
deployment.apps/nginx-deployment created

(sparkandassociates:default) [root@kops-ec2 ~]# kubectl scale deployment nginx-deployment --replicas=200
deployment.apps/nginx-deployment scaled

(sparkandassociates:default) [root@kops-ec2 pkos]# k get pod | grep Pend| wc -l
13

Each worker node is full at 100 pods, and 13 pods are Pending. (*The 200-pod ceiling was hit early because the worker nodes already had some pods running.)
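
A quick way to see how full each node is (a sketch):

# Running pods per node vs. the allocatable ceiling of 100
kubectl get pod -A -o wide --field-selector=status.phase=Running \
  | awk 'NR>1 {count[$8]++} END {for (n in count) print n, count[n]}'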

Check the maxPods setting

(sparkandassociates:default) [root@kops-ec2 pkos]# kops edit cluster
...
  kubelet:
    anonymousAuth: false
    maxPods: 100
...

μΈμŠ€ν„΄μŠ€ νƒ€μž… 확인

(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe nodes | grep "node.kubernetes.io/instance-type"
                    node.kubernetes.io/instance-type=c5d.large
                    node.kubernetes.io/instance-type=t3.medium
                    node.kubernetes.io/instance-type=c5d.large
                    
# Check Nitro instance types
aws ec2 describe-instance-types --filters Name=hypervisor,Values=nitro --query "InstanceTypes[*].[InstanceType]" --output text | sort | egrep 't3\.|c5\.|c5d\.'

Ingress

Service LoadBalancer Controller : AWS Load Balancer Controller + NLB IP mode operation with AWS VPC CNI


(Source: Gasida's study group materials)

Set up EC2 instance profiles, deploy the AWS Load Balancer Controller & install and deploy ExternalDNS (already configured, so for reference only)

# Add the Policy (AWSLoadBalancerControllerIAMPolicy) to the EC2 IAM Role of the master/worker nodes
## Create the IAM Policy: it was already created in week 2, so skip
curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.4.5/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json

# Attach the IAM Policy to the EC2 instance profiles
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy --role-name masters.$KOPS_CLUSTER_NAME
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy --role-name nodes.$KOPS_CLUSTER_NAME

# Create the IAM Policy: it was already created in week 2, so skip
curl -s -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/AKOS/externaldns/externaldns-aws-r53-policy.json
aws iam create-policy --policy-name AllowExternalDNSUpdates --policy-document file://externaldns-aws-r53-policy.json

# Attach the IAM Policy to the EC2 instance profiles
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AllowExternalDNSUpdates --role-name masters.$KOPS_CLUSTER_NAME
aws iam attach-role-policy --policy-arn arn:aws:iam::$ACCOUNT_ID:policy/AllowExternalDNSUpdates --role-name nodes.$KOPS_CLUSTER_NAME

# Edit the kOps cluster: add the content below
kops edit cluster
-----
spec:
  certManager:
    enabled: true
  awsLoadBalancerController:
    enabled: true
  externalDns:
    provider: external-dns
-----

# μ—…λ°μ΄νŠΈ 적용
kops update cluster --yes && echo && sleep 3 && kops rolling-update cluster

Service/pod deployment test with Ingress (ALB)

Check the contents of ingress1.yaml.
It creates the "2048" game and an ingress.

apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-2048
              port:
                number: 80

# Verify creation
kubectl get-all -n game-2048
kubectl get ingress,svc,ep,pod -n game-2048

(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl get targetgroupbindings -n game-2048
NAME                               SERVICE-NAME   SERVICE-PORT   TARGET-TYPE   AGE
k8s-game2048-service2-c9310624f4   service-2048   80             ip            49s

# Check the Ingress
kubectl describe ingress -n game-2048 ingress-2048
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl describe ingress -n game-2048 ingress-2048
Name:             ingress-2048
Labels:           <none>
Namespace:        game-2048
Address:          k8s-game2048-ingress2-56100cdd1f-2017222223.ap-northeast-2.elb.amazonaws.com
Ingress Class:    alb
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /   service-2048:80 (172.30.61.112:80,172.30.93.48:80)
Annotations:  alb.ingress.kubernetes.io/scheme: internet-facing
              alb.ingress.kubernetes.io/target-type: ip
Events:
  Type    Reason                  Age   From     Message
  ----    ------                  ----  ----     -------
  Normal  SuccessfullyReconciled  67s   ingress  Successfully reconciled
  
# Access the game: open the ALB address in a browser
kubectl get ingress -n game-2048 ingress-2048 -o jsonpath={.status.loadBalancer.ingress[0].hostname} | awk '{ print "Game URL = http://"$1 }'
kubectl logs -n game-2048 -l app.kubernetes.io/name=app-2048

(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl get ingress -n game-2048 ingress-2048 -o jsonpath={.status.loadBalancer.ingress[0].hostname} | awk '{ print "Game URL = http://"$1 }'
Game URL = http://k8s-game2048-ingress2-56100cdd1f-2017222223.ap-northeast-2.elb.amazonaws.com

# Check pod IPs
(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl get pod -n game-2048 -owide
NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE                  NOMINATED NODE   READINESS GATES
deployment-2048-6bc9fd6bf5-9q9rj   1/1     Running   0          3m24s   172.30.61.112   i-0f94a3e2d9abe939f   <none>           <none>
deployment-2048-6bc9fd6bf5-q79sp   1/1     Running   0          3m24s   172.30.93.48    i-05538a0cedc2ceac8   <none>           <none>

You can check them under EC2 > Target Groups.

(sparkandassociates:default) [root@kops-ec2 pkos]# kubectl scale deployment -n game-2048 deployment-2048 --replicas 3
deployment.apps/deployment-2048 scaled

Scaling the pods up to 3 is reflected in the ALB within seconds. (So cool...)

Conversely, scaling down to 1 pod immediately starts draining the target in the target group. (soooooo cool)
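
The same register/drain activity can be watched from the CLI as well (a sketch; the JMESPath filter on the target group name is an assumption based on the k8s-game2048-... name shown above):

TG_ARN=$(aws elbv2 describe-target-groups \
  --query "TargetGroups[?contains(TargetGroupName,'game2048')].TargetGroupArn | [0]" --output text)
aws elbv2 describe-target-health --target-group-arn $TG_ARN \
  --query "TargetHealthDescriptions[].{IP:Target.Id,State:TargetHealth.State}" --output table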

Configuring ExternalDNS

# No separate DNS record is registered yet, so only the ELB address is set.
(sparkandassociates:default) [root@kops-ec2 pkos]# k -n game-2048 get ingress
NAME           CLASS   HOSTS   ADDRESS                                                                        PORTS   AGE
ingress-2048   alb     *       k8s-game2048-ingress2-56100cdd1f-2017222223.ap-northeast-2.elb.amazonaws.com   80      11m

# Add a host.
(sparkandassociates:default) [root@kops-ec2 pkos]# k -n game-2048 edit ingress ingress-2048
...
  ingressClassName: alb
  rules:
  - host: albweb.sparkandassociates.net
    http:
      paths:
      - backend:
...


Confirm access via the configured domain

Check URI-based routing

Additionally deploy the Tetris and Mario games

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tetris
  labels:
    app: tetris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tetris
  template:
    metadata:
      labels:
        app: tetris
    spec:
      containers:
      - name: tetris
        image: bsord/tetris
---
apiVersion: v1
kind: Service
metadata:
   name: tetris
spec:
  selector:
    app: tetris
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
EOF

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mario
  labels:
    app: mario
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mario
  template:
    metadata:
      labels:
        app: mario
    spec:
      containers:
      - name: mario
        image: pengbai/docker-supermario
---
apiVersion: v1
kind: Service
metadata:
   name: mario
spec:
  selector:
    app: mario
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  type: NodePort
  externalTrafficPolicy: Local
EOF

Create the Ingress

cat <<EOF | kubectl create -f - 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-ps5
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: albps5.sparkandassociates.net
      http:
        paths:
        - path: /mario
          pathType: Prefix
          backend:
            service:
              name: mario
              port:
                number: 80
        - path: /tetris
          pathType: Prefix
          backend:
            service:
              name: tetris
              port:
                number: 80
EOF                    

A rule for ports 80-8080 with the description "elb target group binding" is added to the EC2 security group.

https://aws.amazon.com/ko/blogs/containers/how-to-expose-multiple-applications-on-amazon-eks-using-a-single-application-load-balancer/

SSL μΈμ¦μ„œ λ°œκΈ‰ν•˜κ³ , CNAME μΆ”κ°€ν•΄μ„œ 도메인 인증후 μ‚¬μš©κ°€λŠ₯.
ingress에 μΈμ¦μ„œλ₯Ό μ„€μ •ν•˜λŠ” 방법은 κ°„λ‹¨ν•˜λ‹€.
μ•„λž˜μ™€ 같이 cert arn 을 λ„£μ–΄μ£ΌκΈ°λ§Œ ν•˜λ©΄λ˜κ³ 
listen portλ₯Ό μ–΄λ…Έν…Œμ΄μ…˜μ— λ„£μ–΄μ£Όλ©΄ λœλ‹€.
(μ°Έκ³  : https://guide.ncloud-docs.com/docs/k8s-k8suse-albingress)

kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:7842695:certificate/57533b9f3
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip

μΈμ¦μ„œκΉŒμ§€ μ˜¬λ Έμ§€λ§Œ μ—¬μ „νžˆ

404μ—λŸ¬κ°€ λ‚œλ‹€.

Check the tetris pod logs.

It's looking for the /usr/share/nginx/html/tetris path?

The intent was that a request to /tetris would go to the / path on port 80 of the tetris endpoint, but it is forwarded to the /tetris path inside the tetris pod instead.
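
This is easy to reproduce from the command line (a sketch; the host is the one set on the ingress above):

curl -sk -o /dev/null -w "%{http_code}\n" https://albps5.sparkandassociates.net/tetris   # 404 - nginx has no /tetris under its web root
curl -sk -o /dev/null -w "%{http_code}\n" https://albps5.sparkandassociates.net/mario    # 404 - tomcat has no /mario context either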

    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:784246164695:certificate/57533136-de19-4485-b686-1f85e604b9f3
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # custom annotations (redirects, header versioning) (if any):
    alb.ingress.kubernetes.io/actions.viewer-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "Path":"/", "Query": "#{query}", "StatusCode": "HTTP_301"}}'

(Reference: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/ingress/annotations/#:~:text=alb.ingress.kubernetes.io/actions.%24%7Baction-name%7D%20Provides%20a%20method%20for%20configuring%20custom%20actions%20on%20a%20listener%2C%20such%20as%20Redirect%20Actions )

  annotations:
    alb.ingress.kubernetes.io/actions.mario: '{"Type":"redirect","RedirectConfig":{"Host":"mario.sparkandassociates.net","Port":"443","Protocol":"HTTPS","Query":"#{query}","path":"/","StatusCode":"HTTP_301"}}'
    alb.ingress.kubernetes.io/actions.tetris: '{"Type":"redirect","RedirectConfig":{"Host":"tetris.sparkandassociates.net","Port":"443","Protocol":"HTTPS","Query":"#{query}","path":"/","StatusCode":"HTTP_301"}}'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-northeast-2:784246164695:certificate/57533136-de19-4485-b686-1f85e604b9f3
    alb.ingress.kubernetes.io/conditions.mario: |
      [{"field":"host-header","hostHeaderConfig":{"values":["mario.sparkandassociates.net"]}}]
    alb.ingress.kubernetes.io/conditions.tetris: |
      [{"field":"host-header","hostHeaderConfig":{"values":["tetris.sparkandassociates.net"]}}]
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    external-dns.alpha.kubernetes.io/hostname: mario.sparkandassociates.net,tetris.sparkandassociates.net
    
spec:
  ingressClassName: alb
  rules:
  - host: albps5.sparkandassociates.net
    http:
      paths:
      - backend:
          service:
            name: mario
            port:
              name: use-annotation
        path: /mario
        pathType: Prefix
      - backend:
          service:
            name: tetris
            port:
              name: use-annotation
        path: /tetris
        pathType: Prefix
        

(*Updated March 24)

The ALB ingress is an L7 load balancer, but it does not provide every L7 feature.
It has no rewrite capability, so I tried a redirect instead, but that didn't work properly either, so I changed the approach. (If anyone knows how, please let me know.)

Change the web root path inside each container to /mario and /tetris respectively.
Rather than changing the container images themselves, use a postStart lifecycle hook to run a script after the pod starts.
(Lifecycle reference: https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/ )

Modify the mario deployment as follows.

    spec:
      containers:
      - image: pengbai/docker-supermario
        imagePullPolicy: Always
        name: mario
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "mkdir -p /usr/local/tomcat/webapps/mario && cp -R /usr/local/tomcat/webapps/ROOT/* /usr/local/tomcat/webapps/mario; mv /usr/local/tomcat/webapps/mario /usr/local/tomcat/webapps/ROOT/mario"]

Modify the tetris deployment in the same way.

    spec:
      containers:
      - image: bsord/tetris
        imagePullPolicy: Always
        name: tetris
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "mkdir -p /usr/share/nginx/tetris && cp -R /usr/share/nginx/html/* /usr/share/nginx/tetris; mv /usr/share/nginx/tetris /usr/share/nginx/html/tetris"]

Verify
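
After the pods restart with the postStart hooks in place, the path-based routes should resolve (a sketch; trailing slashes may matter depending on the app):

curl -sk -o /dev/null -w "%{http_code}\n" https://albps5.sparkandassociates.net/tetris/   # expect 200
curl -sk -o /dev/null -w "%{http_code}\n" https://albps5.sparkandassociates.net/mario/    # expect 200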

μ‹€μŠ΅μžμ› μ‚­μ œ
kOps ν΄λŸ¬μŠ€ν„° μ‚­μ œ & AWS CloudFormation μŠ€νƒ μ‚­μ œ

kops delete cluster --yes && aws cloudformation delete-stack --stack-name mykops

DNS records are not deleted automatically,
so delete the records that ExternalDNS created in Route 53 by hand.
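
To see what is left over before deleting (a sketch; assumes the hosted zone is the one used throughout this post):

ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name sparkandassociates.net \
  --query "HostedZones[0].Id" --output text)
aws route53 list-resource-record-sets --hosted-zone-id $ZONE_ID \
  --query "ResourceRecordSets[?Type=='A' || Type=='TXT'].[Name,Type]" --output table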
