Security

Spark Β· April 7, 2023

PKOS

λͺ©λ‘ 보기
5/6

πŸ“Œ Goal

μ˜€λŠ˜μ€ PKOS μŠ€ν„°λ””μ˜ λ§ˆμ§€λ§‰ 5주차의 주제인 λ³΄μ•ˆμ— λŒ€ν•΄μ„œ μ•Œμ•„κ°€ 보도둝 ν•˜μž!

EC2 IAM Role and Instance Metadata

Pod λ‚΄μ—μ„œ EC2 메타데이터 IAM 토큰정보λ₯Ό μ‚¬μš©ν•΄ AWS μ„œλΉ„μŠ€ μ‚¬μš©ν•΄λ³΄λŠ” 것을 μ‹€μŠ΅ν•΄λ³΄μž.

μΈμŠ€ν„΄μŠ€λ‚΄μ—μ„œ EC2μ—μ„œ μ‚¬μš©ν•˜λŠ” 메타데이터λ₯Ό μ‘°νšŒν•˜κ³  ν™œμš©ν•  수 μžˆλ„λ‘ μ œκ³΅ν•˜λ©°
"http://169.254.169.254/latest/meta-data/" λ₯Ό 톡해 확인 ν•  수 μžˆλ‹€.

curl 169.254.169.254/latest/meta-data/

(sparkandassociates:harbor) [root@kops-ec2 ~]# curl 169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
identity-credentials/
instance-action
instance-id
instance-life-cycle
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
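As a quick illustration (a sketch of my own, not from the study material), the same queries can be issued from Python; the link-local address only answers on an actual instance, so the live call is left commented out:

```python
import urllib.request

IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(path):
    """Build the URL for a metadata category such as 'instance-id'."""
    return IMDS_BASE + path.lstrip("/")

def fetch(path, timeout=2.0):
    """GET one metadata category; fails outside EC2 (link-local address)."""
    with urllib.request.urlopen(metadata_url(path), timeout=timeout) as resp:
        return resp.read().decode()

# On an actual instance:
# print(fetch("instance-id"))
```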

For reference, instances on OpenStack can use the metadata server in exactly the same way.

root@y-1:~# curl 169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
root@y-1:~#

pod λ‚΄μ—μ„œ 기본적으둜 μ‘°νšŒκ°€ μ•ˆλ˜λ„λ‘ λ³΄μ•ˆμ„€μ •μ΄ λ˜μ–΄μžˆλ‹€.

# Create the netshoot-pod deployment
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot-pod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: netshoot-pod
  template:
    metadata:
      labels:
        app: netshoot-pod
    spec:
      containers:
      - name: netshoot-pod
        image: nicolaka/netshoot
        command: ["tail"]
        args: ["-f", "/dev/null"]
      terminationGracePeriodSeconds: 0
EOF

# Store the pod names in variables
PODNAME1=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[0].metadata.name})
PODNAME2=$(kubectl get pod -l app=netshoot-pod -o jsonpath={.items[1].metadata.name})

# Check EC2 metadata from each pod
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254 ;echo

(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- curl 169.254.169.254 ;echo

Let's remove the EC2 metadata protection on one worker node and try again.
(Of the two workers, nodes-ap-northeast-2a and nodes-ap-northeast-2c, only the first is changed.)

#
kops edit ig nodes-ap-northeast-2a
---
# Remove the 3 lines below
spec:
  instanceMetadata:
    httpPutResponseHopLimit: 1
    httpTokens: required
---
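For context on the lines removed above: `httpTokens: required` enforces IMDSv2, where every metadata request must carry a session token obtained via an HTTP PUT, and `httpPutResponseHopLimit: 1` caps the token response at one network hop, so a pod (one hop further away than the node) never receives it. A minimal sketch of the two-step IMDSv2 flow, using the standard AWS endpoint and headers (the helper names are mine):

```python
import urllib.request

TOKEN_URL = "http://169.254.169.254/latest/api/token"
TOKEN_TTL_HEADER = "X-aws-ec2-metadata-token-ttl-seconds"
TOKEN_HEADER = "X-aws-ec2-metadata-token"

def token_request(ttl_seconds=21600):
    """Step 1 of IMDSv2: a PUT request that obtains a session token."""
    return urllib.request.Request(
        TOKEN_URL, method="PUT", headers={TOKEN_TTL_HEADER: str(ttl_seconds)}
    )

def metadata_request(path, token):
    """Step 2 of IMDSv2: a GET request carrying the session token."""
    return urllib.request.Request(
        "http://169.254.169.254/latest/meta-data/" + path,
        headers={TOKEN_HEADER: token},
    )

# On an actual EC2 instance you would then do:
# token = urllib.request.urlopen(token_request()).read().decode()
# iid = urllib.request.urlopen(metadata_request("instance-id", token)).read().decode()
```

With `httpTokens: required`, plain GETs like the ones above simply return 401 unless this token dance is performed first.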

# μ—…λ°μ΄νŠΈ 적용 : λ…Έλ“œ1λŒ€ λ‘€λ§μ—…λ°μ΄νŠΈ
kops update cluster --yes && echo && sleep 3 && kops rolling-update cluster --yes
..
..
Detected single-control-plane cluster; won't detach before draining
NAME                            STATUS          NEEDUPDATE      READY   MIN     TARGET  MAX     NODES
control-plane-ap-northeast-2a   Ready           0               1       1       1       1       1
nodes-ap-northeast-2a           NeedsUpdate     1               0       1       1       1       1
nodes-ap-northeast-2c           Ready           0               1       1       1       1       1
..
..
I0403 10:40:08.777396   23960 instancegroups.go:467] waiting for 15s after terminating instance
I0403 10:40:23.779129   23960 instancegroups.go:501] Validating the cluster.
I0403 10:40:24.601875   23960 instancegroups.go:540] Cluster validated; revalidating in 10s to make sure it does not flap.
I0403 10:40:35.248158   23960 instancegroups.go:537] Cluster validated.
I0403 10:40:35.248192   23960 rollingupdate.go:234] Rolling update completed for cluster "sparkandassociates.net"!

Let's check the EC2 metadata again from pods 1 and 2.

(sparkandassociates:harbor) [root@kops-ec2 ~]# kops get instances

ID                      NODE-NAME               STATUS          ROLES           STATE   INTERNAL-IP     INSTANCE-GROUP                                                  MACHINE-TYPE
i-066cf2f8937746e50     i-066cf2f8937746e50     UpToDate        node                    172.30.83.26    nodes-ap-northeast-2c.sparkandassociates.net                    c5a.2xlarge
i-0c421069027ec2d2d     i-0c421069027ec2d2d     UpToDate        node                    172.30.34.113   nodes-ap-northeast-2a.sparkandassociates.net                    c5a.2xlarge
i-0d3de3051f46d267d     i-0d3de3051f46d267d     UpToDate        control-plane           172.30.55.185   control-plane-ap-northeast-2a.masters.sparkandassociates.net    c5a.2xlarge

(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl get pod -l app=netshoot-pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE                  NOMINATED NODE   READINESS GATES
netshoot-pod-7757d5dd99-qhgjv   1/1     Running   0          8m1s   172.30.49.190   i-0c421069027ec2d2d   <none>           <none>
netshoot-pod-7757d5dd99-x5lts   1/1     Running   0          17m    172.30.83.71    i-066cf2f8937746e50   <none>           <none>
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get nodes i-0c421069027ec2d2d -o yaml | grep topology.kubernetes.io/zone
    topology.kubernetes.io/zone: ap-northeast-2a
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get nodes i-066cf2f8937746e50 -o yaml | grep topology.kubernetes.io/zone
    topology.kubernetes.io/zone: ap-northeast-2c
(sparkandassociates:harbor) [root@kops-ec2 ~]#

# From the pod on node i-0c421069027ec2d2d, where the EC2 metadata protection was removed, the metadata is now accessible!!
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254 ;echo
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
2011-01-01
2011-05-01
2012-01-12
2014-02-25
2014-11-05
2015-10-20
2016-04-19
2016-06-30
2016-09-02
2018-03-28
2018-08-17
2018-09-24
2019-10-01
2020-10-27
2021-01-03
2021-03-23
2021-07-15
2022-09-24
latest

## From the netshoot pod on node i-066cf2f8937746e50, the query is still blocked.
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME2 -- curl 169.254.169.254 ;echo

(sparkandassociates:harbor) [root@kops-ec2 ~]#

## Now let's grab the token information from pod 1.
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254/latest/meta-data/iam/security-credentials/ ;echo
nodes.sparkandassociates.net
(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl exec -it $PODNAME1 -- curl 169.254.169.254/latest/meta-data/iam/security-credentials/nodes.$KOPS_CLUSTER_NAME | jq
{
  "Code": "Success",
  "LastUpdated": "2023-04-03T01:40:01Z",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIA3NGF...UUFH6IL5",
  "SecretAccessKey": "avvnBTS+zHB...55vl9Qp",
  "Token": "IQoJb3JpZ2luX2VjEPL//////////wEaDmFwLW5vcnRoZWFzdC0yIkgwRgIhAO1bRK...xjCJ3aihBjqwARkFYS+ye6qItlZqbjxOZbA4CEE79Pnn8Qt3UuRn+QqyFW1b7cseWPf24+LlKuyefBUENCpsoNNdJyF0+8FRCl2bG3vWRxfDl0TjlMmjlcB/k/tkdB8NrhAbjA/Y1g7q1va0Zgvu5so1R6yWFPrOSpp6PL6smibnVGb120++BLMC1VDgUEKM6wBX5mkJ2azjuCcWdj7Qbq3VXBNOGw/PCBVpF4jwbofcQxVKaxEvJc1m",
  "Expiration": "2023-04-03T08:15:25Z"
}
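Loosely sketching what an attacker could do next (a hypothetical helper, not part of the post): the fields of this JSON map directly onto the explicit-credential parameters of `boto3.client()`, so the leaked role can be used from anywhere until the token expires.

```python
import json

def creds_to_boto3_kwargs(imds_document):
    """Map an IMDS security-credentials JSON document onto boto3 kwargs."""
    doc = json.loads(imds_document)
    return {
        "aws_access_key_id": doc["AccessKeyId"],
        "aws_secret_access_key": doc["SecretAccessKey"],
        "aws_session_token": doc["Token"],
    }

# Placeholder values; a real document looks like the output above.
leaked = '{"AccessKeyId": "ASIAEXAMPLE", "SecretAccessKey": "s3cr3t", "Token": "tok"}'

# With boto3 installed, off-host use would look like:
# import boto3
# ec2 = boto3.client("ec2", region_name="ap-northeast-2",
#                    **creds_to_boto3_kwargs(leaked))
```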

πŸ‘‰ λ„μ „κ³Όμ œ1 boto3 톡해 AWS μ„œλΉ„μŠ€ μ œμ–΄

νŒŒλ“œμ—μ„œ νƒˆμ·¨ν•œ EC2 메타데이터 IAM role token 정보λ₯Ό ν™œμš©ν•΄μ„œ
python boto3λ₯Ό 톡해 SDK둜 AWS μ„œλΉ„μŠ€λ₯Ό μ‚¬μš©ν•΄λ³΄μž.

Deploying boto3

# Create pods for using boto3
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: boto3-pod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: boto3
  template:
    metadata:
      labels:
        app: boto3
    spec:
      containers:
      - name: boto3
        image: jpbarto/boto3
        command: ["tail"]
        args: ["-f", "/dev/null"]
      terminationGracePeriodSeconds: 0
EOF

# Verify
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get pod -o wide -l app=boto3
NAME                         READY   STATUS    RESTARTS   AGE     IP             NODE                  NOMINATED NODE   READINESS GATES
boto3-pod-7944d7b4db-d4z4n   1/1     Running   0          2m35s   172.30.58.42   i-0c421069027ec2d2d   <none>           <none>
boto3-pod-7944d7b4db-gnrpj   1/1     Running   0          12s     172.30.83.72   i-066cf2f8937746e50   <none>           <none>

νƒˆμ·¨ν•œ PODλ‚΄μ—μ„œ μΈμŠ€ν„΄μŠ€ μ •λ³΄μ‘°νšŒ

Let's warm up with an instance information query.
"Code to query instance information"

import boto3

ec2 = boto3.client('ec2', region_name = 'ap-northeast-2')
response = ec2.describe_instances()
print(response)

The sample code in the docs omits the region, which raises an error, so be sure to pass region_name to boto3.client.
(ref: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-managing-instances.html)


μΈμŠ€ν„΄μŠ€ λͺ¨λ“  상세정보가 좜λ ₯λœλ‹€.

λ³΄μ•ˆμ΄ μ„€μ •λœ λ…Έλ“œμ—μ„œ μƒμ„±λœ podμ—μ„œ μ‹€ν–‰


It reports that no credentials are available.

νƒˆμ·¨ν•œ PODλ‚΄μ—μ„œ S3 μ„œλΉ„μŠ€ μ ‘κ·Ό 및 파일 λ‹€μš΄λ‘œλ“œ

"S3 file download code"

import boto3

s3 = boto3.client('s3')
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME')

Let's download the admin secret file from the S3 bucket "pkos2".

# cat s3.py
import boto3

BUCKET_NAME = 'pkos2'
OBJECT_NAME = 'sparkandassociates.net/secrets/admin'
FILE_NAME = 'admin'

s3 = boto3.client('s3', region_name = 'ap-northeast-2')
s3.download_file(BUCKET_NAME, OBJECT_NAME, FILE_NAME)
~/dev #

# Run the download code
~/dev # python s3.py


403μ—λŸ¬κ°€ λ°œμƒν•˜λŠ”λ°
ν•΄λ‹Ή EC2 μΈμŠ€ν„΄μŠ€ IAM role에 S3 κΆŒν•œμ΄ λΆ€μ‘±ν•΄μ„œ κ·Έλ ‡λ‹€.
κΆŒν•œμ„ μΆ”κ°€ν•΄μ£Όμž.


The role currently has only list permission;
without object get permission, objects cannot be downloaded.

Add the "GetObject" permission.
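The exact policy edit appears only in the (omitted) screenshots, so as an assumption, a minimal statement along these lines (bucket name taken from the post) would grant it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::pkos2/*"
    }
  ]
}
```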

κΆŒν•œμ„ μΆ”κ°€ν•˜κ³  λ‹€μ‹œ λ‹€μš΄λ‘œλ“œ μ½”λ“œ μ‹€ν–‰.

We downloaded a file from the S3 bucket from inside the compromised pod.

kubescape

kubescape is a tool that scans a k8s cluster for vulnerabilities;
a distinctive feature is that it can also scan YAML manifests and Helm charts.

(μ΄λ―Έμ§€μΆœμ²˜ : https://github.com/kubescape/kubescape/blob/master/docs/architecture.md )

Installing kubescape

curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

## download artifacts
kubescape download artifacts

(sparkandassociates:harbor) [root@kops-ec2 ~]# tree ~/.kubescape/
/root/.kubescape/
β”œβ”€β”€ allcontrols.json
β”œβ”€β”€ armobest.json
β”œβ”€β”€ attack-tracks.json
β”œβ”€β”€ cis-eks-t1.2.0.json
β”œβ”€β”€ cis-v1.23-t1.0.1.json
β”œβ”€β”€ controls-inputs.json
β”œβ”€β”€ devopsbest.json
β”œβ”€β”€ exceptions.json
β”œβ”€β”€ mitre.json
└── nsa.json

# The provided controls can be listed as follows.
kubescape list controls

(sparkandassociates:harbor) [root@kops-ec2 ~]# kubescape list controls

+------------+---------------------------------------------------------------+------------------------------------+------------+
| CONTROL ID |                         CONTROL NAME                          |                DOCS                | FRAMEWORKS |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0001     | Forbidden Container Registries                                | https://hub.armosec.io/docs/c-0001 |            |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0002     | Exec into container                                           | https://hub.armosec.io/docs/c-0002 |            |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0004     | Resources memory limit and                                    | https://hub.armosec.io/docs/c-0004 |            |
|            | request                                                       |                                    |            |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0005     | API server insecure port is                                   | https://hub.armosec.io/docs/c-0005 |            |
|            | enabled                                                       |                                    |            |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0007     | Data Destruction                                              | https://hub.armosec.io/docs/c-0007 |            |
+------------+---------------------------------------------------------------+------------------------------------+------------+
| C-0009     | Resource limits                                               | https://hub.armosec.io/docs/c-0009 |            |
...

Scan

(sparkandassociates:harbor) [root@kops-ec2 ~]# kubescape scan --enable-host-scan --verbose

A pod called host-scanner is launched on each node, and the cluster check runs.

kubescape-host-scanner   host-scanner-j4t5z                                          1/1     Running   0               6s
kubescape-host-scanner   host-scanner-n2v78                                          1/1     Running   0               6s
kubescape-host-scanner   host-scanner-v7j7d                                          1/1     Running   0               6s

The scan results can be checked as below.

Controls: 65 (Failed: 35, Passed: 22, Action Required: 8)
Failed Resources by Severity: Critical β€” 0, High β€” 83, Medium β€” 370, Low β€” 128

+----------+-------------------------------------------------------+------------------+---------------+--------------------+
| SEVERITY |                     CONTROL NAME                      | FAILED RESOURCES | ALL RESOURCES |    % RISK-SCORE    |
+----------+-------------------------------------------------------+------------------+---------------+--------------------+
| Critical | API server insecure port is enabled                   |        0         |       1       |         0%         |
| Critical | Disable anonymous access to Kubelet service           |        0         |       3       |         0%         |
| Critical | Enforce Kubelet client TLS authentication             |        0         |       6       |         0%         |
| Critical | CVE-2022-39328-grafana-auth-bypass                    |        0         |       1       |         0%         |
| High     | Forbidden Container Registries                        |        0         |      65       | Action Required *  |
| High     | Resources memory limit and request                    |        0         |      65       | Action Required *  |
| High     | Resource limits                                       |        49        |      65       |        76%         |
| High     | Applications credentials in configuration files       |        0         |      147      | Action Required *  |
| High     | List Kubernetes secrets                               |        20        |      108      |        19%         |
| High     | Host PID/IPC privileges                               |        1         |      65       |         1%         |
| High     | HostNetwork access                                    |        6         |      65       |         8%         |
| High     | Writable hostPath mount                               |        3         |      65       |         4%         |
| High     | Insecure capabilities                                 |        0         |      65       |         0%         |
| High     | HostPath mount                                        |        3         |      65       |         4%         |
| High     | Resources CPU limit and request                       |        0         |      65       | Action Required *  |
| High     | Instance Metadata API                                 |        0         |       0       |         0%         |
| High     | Privileged container                                  |        1         |      65       |         1%         |
| High     | CVE-2021-25742-nginx-ingress-snippet-annotation-vu... |        0         |       1       |         0%         |
| High     | Workloads with Critical vulnerabilities exposed to... |        0         |       0       | Action Required ** |
| High     | Workloads with RCE vulnerabilities exposed to exte... |        0         |       0       | Action Required ** |
| High     | CVE-2022-23648-containerd-fs-escape                   |        0         |       3       |         0%         |
| High     | RBAC enabled                                          |        0         |       1       |         0%         |
| High     | CVE-2022-47633-kyverno-signature-bypass               |        0         |       0       |         0%         |
| Medium   | Exec into container                                   |        2         |      108      |         2%         |
| Medium   | Data Destruction                                      |        9         |      108      |         8%         |

πŸ‘‰ λ„μ „κ³Όμ œ2 : kubescape armo μ›Ή μ‚¬μš©

kubescapeμ—μ„œ armo λΌλŠ” μ›Ήμ„œλΉ„μŠ€λ₯Ό μ œκ³΅ν•œλ‹€.
λΈŒλΌμš°μ €μ—μ„œ portal.armo.cloud 접속
νšŒμ›κ°€μž…ν›„ μ•„λž˜ 진단을 μ›ν•˜λŠ” ν΄λŸ¬μŠ€ν„°μ— μ‹€ν–‰ν• 
helm repo, chart install λͺ…령을 μ•ˆλ‚΄ν•΄μ£Όλ©°, κ·ΈλŒ€λ‘œ λ³΅μ‚¬ν•΄μ„œ μ‹€ν–‰ν•΄μ£Όλ©΄ λœλ‹€.

λ‚΄ ν΄λŸ¬μŠ€ν„°μ— μ‹€ν–‰

(sparkandassociates:harbor) [root@kops-ec2 ~]# helm repo add kubescape https://kubescape.github.io/helm-charts/ ; helm repo update ; helm upgrade --install kubescape kubescape/kubescape-cloud-operator -n kubescape --create-namespace --set clusterName=`kubectl config current-context` --set account=a014fa2a-98a6df

"kubescape" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubescape" chart repository
...Successfully got an update from the "harbor" chart repository
...Successfully got an update from the "argo" chart repository
...Successfully got an update from the "prometheus-community" chart repository
...Successfully got an update from the "gitlab" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Release "kubescape" does not exist. Installing it now.
NAME: kubescape
LAST DEPLOYED: Mon Apr  3 17:03:15 2023
NAMESPACE: kubescape
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing kubescape-cloud-operator version 1.10.8.

You can see and change the values of your's recurring configurations daily scan in the following link:
https://cloud.armosec.io/settings/assets/clusters/scheduled-scans?cluster=sparkandassociates-net
> kubectl -n kubescape get cj kubescape-scheduler -o=jsonpath='{.metadata.name}{"\t"}{.spec.schedule}{"\n"}'

You can see and change the values of your's recurring images daily scan in the following link:
https://cloud.armosec.io/settings/assets/images
> kubectl -n kubescape get cj kubevuln-scheduler -o=jsonpath='{.metadata.name}{"\t"}{.spec.schedule}{"\n"}'

See you!!!

(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl -n kubescape get all
NAME                                     READY   STATUS      RESTARTS   AGE
pod/gateway-5b987fff9f-98shv             1/1     Running     0          22h
pod/kollector-0                          1/1     Running     0          22h
pod/kubescape-6884bcf5b7-22vtp           1/1     Running     0          22h
pod/kubescape-scheduler-28009247-dbbzv   0/1     Completed   0          9h
pod/kubevuln-6d964b688c-m45jm            1/1     Running     0          22h
pod/kubevuln-scheduler-28009808-mfblx    0/1     Completed   0          10m
pod/operator-867c5bcdff-gj7v8            1/1     Running     0          22h
pod/otel-collector-5f69f464d7-cr48x      1/1     Running     0          22h

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/gateway          ClusterIP   100.68.29.246   <none>        8001/TCP,8002/TCP   22h
service/kubescape        ClusterIP   100.69.30.16    <none>        8080/TCP            22h
service/kubevuln         ClusterIP   100.69.146.86   <none>        8080/TCP,8000/TCP   22h
service/operator         ClusterIP   100.64.97.181   <none>        4002/TCP            22h
service/otel-collector   ClusterIP   100.64.43.12    <none>        4317/TCP            22h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/gateway          1/1     1            1           22h
deployment.apps/kubescape        1/1     1            1           22h
deployment.apps/kubevuln         1/1     1            1           22h
deployment.apps/operator         1/1     1            1           22h
deployment.apps/otel-collector   1/1     1            1           22h

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/gateway-5b987fff9f          1         1         1       22h
replicaset.apps/kubescape-6884bcf5b7        1         1         1       22h
replicaset.apps/kubevuln-6d964b688c         1         1         1       22h
replicaset.apps/operator-867c5bcdff         1         1         1       22h
replicaset.apps/otel-collector-5f69f464d7   1         1         1       22h

NAME                         READY   AGE
statefulset.apps/kollector   1/1     22h

NAME                                SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/kubescape-scheduler   47 20 * * *   False     0        9h              22h
cronjob.batch/kubevuln-scheduler    8 6 * * *     False     0        10m             22h

NAME                                     COMPLETIONS   DURATION   AGE
job.batch/kubescape-scheduler-28009247   1/1           6s         9h
job.batch/kubevuln-scheduler-28009808    1/1           4s         10m

The ARMO web service shows the list of clusters I registered along with their details,
and the security findings scanned for each cluster.

Clicking the "FIX" button opens a YAML editor that highlights the offending lines
and suggests the values to add so the finding can be remediated.

That said, a security scan measures against generic security baselines,
so apply only what fits your environment and configuration, and check "Ignore" to skip the rest.

SecurityContext

This reminded me of my CKS certification prep, so I dug out a killer.sh exercise again.

First, deploy a pod with the following Deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: immutable-deployment
  labels:
    app: immutable-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: immutable-deployment
  template:
    metadata:
      labels:
        app: immutable-deployment
    spec:
      containers:
      - image: busybox:1.32.0
        command: ['sh', '-c', 'tail -f /dev/null']
        imagePullPolicy: IfNotPresent
        name: busybox
      restartPolicy: Always

λ°°ν¬ν•œ pod의 / κ²½λ‘œμ— νŒŒμΌμ„ 생성해본닀.

(sparkandassociates:harbor) [root@kops-ec2 ~]# k exec immutable-deployment-698dc94df9-xsdpt -- touch /abc.txt
(sparkandassociates:harbor) [root@kops-ec2 ~]# k exec immutable-deployment-698dc94df9-xsdpt -- ls -al /abc.txt
-rw-r--r--    1 root     root             0 Apr  7 13:56 /abc.txt
(sparkandassociates:harbor) [root@kops-ec2 ~]#

μ΄λ²ˆμ—λŠ” security context의 readOnlyRootFilesystem 을 μ μš©ν•΄λ³΄μž.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: immutable-deployment
  labels:
    app: immutable-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: immutable-deployment
  template:
    metadata:
      labels:
        app: immutable-deployment
    spec:
      containers:
      - image: busybox:1.32.0
        command: ['sh', '-c', 'tail -f /dev/null']
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:                  # add
          readOnlyRootFilesystem: true    # add
        volumeMounts:                     # add
        - mountPath: /tmp                 # add
          name: temp-vol                  # add
      volumes:                            # add
      - name: temp-vol                    # add
        emptyDir: {}                      # add
      restartPolicy: Always

μž¬μƒμ„±ν›„ νŒŒμΌμ„ λ‹€μ‹œ μƒμ„±ν•΄λ³΄μž.

(sparkandassociates:harbor) [root@kops-ec2 ~]# k delete -f 1.yaml
deployment.apps "immutable-deployment" deleted
(sparkandassociates:harbor) [root@kops-ec2 ~]# k create -f 1.yaml
deployment.apps/immutable-deployment created
(sparkandassociates:harbor) [root@kops-ec2 ~]#


(sparkandassociates:harbor) [root@kops-ec2 ~]# k exec immutable-deployment-6dc8987698-7stxq -- touch /abc.txt
touch: /abc.txt: Read-only file system
command terminated with exit code 1
(sparkandassociates:harbor) [root@kops-ec2 ~]#

File creation fails with a "Read-only file system" error; in other words, the security setting has taken effect.

πŸ‘‰ λ„μ „κ³Όμ œ3 : Polaris μ‚¬μš©

Polaris is a security audit tool that provides a web dashboard.

# Install
kubectl create ns polaris

#
cat <<EOT > polaris-values.yaml
dashboard:
  replicas: 1
  service:
    type: LoadBalancer
EOT

# Deploy
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm install polaris fairwinds-stable/polaris --namespace polaris --version 5.7.2 -f polaris-values.yaml

# Attach a domain to the CLB via ExternalDNS
kubectl annotate service polaris-dashboard "external-dns.alpha.kubernetes.io/hostname=polaris.$KOPS_CLUSTER_NAME" -n polaris

# Check the web URL and open it
(sparkandassociates:harbor) [root@kops-ec2 ~]# echo -e "Polaris Web URL = http://polaris.$KOPS_CLUSTER_NAME"
Polaris Web URL = http://polaris.sparkandassociates.net

μ΄λ ‡κ²Œ λ³΄μ•ˆμ·¨μ•½μ μ„ μ•ˆλ‚΄ν•΄μ£Όκ³  κ°€μ΄λ“œλ„ μ œκ³΅ν•œλ‹€.

Running on-premises is a plus from a security standpoint,
but in usability, security checks, and remediation guidance, the kubescape ARMO service used earlier seems superior.

πŸ‘‰ λ„μ „κ³Όμ œ4 : SA (Service Account) 생성 및 role ν• λ‹Ή

Create a new service account (SA), grant it read-only permission at the cluster level (all namespaces), and test it.

(sparkandassociates:harbor) [root@kops-ec2 ~]# k create sa master

(sparkandassociates:harbor) [root@kops-ec2 ~]# kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
clusterrole.rbac.authorization.k8s.io/pod-reader created

(sparkandassociates:harbor) [root@kops-ec2 ~]# k create clusterrolebinding master --clusterrole pod-reader --serviceaccount default:master
clusterrolebinding.rbac.authorization.k8s.io/master created

# 확인
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get sa master -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-04-07T14:13:33Z"
  name: master
  namespace: harbor
  resourceVersion: "5061627"
  uid: 4527c4ba-f8e6-4987-a86a-007538ede6ab
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get clusterrole pod-reader -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2023-04-07T14:23:06Z"
  name: pod-reader
  resourceVersion: "5063951"
  uid: 9b4f48a6-810c-40fb-99e6-68276adf9ece
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get clusterrolebindings master
NAME     ROLE                     AGE
master   ClusterRole/pod-reader   52s
(sparkandassociates:harbor) [root@kops-ec2 ~]# k get clusterrolebindings master -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2023-04-07T14:25:21Z"
  name: master
  resourceVersion: "5064502"
  uid: 1485d98d-d7ed-410f-833a-05a3716530f2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: master
  namespace: default
  

μƒμ„±ν•œ SA 인 "master" κ³„μ •μ˜ κΆŒν•œ 확인은 kubectl 의 auth can-i λ₯Ό ν™œμš©ν•˜λ©΄ λœλ‹€.

## Check pod create permission -> "no"
(sparkandassociates:harbor) [root@kops-ec2 ~]# k auth can-i create pod --as system:serviceaccount:default:master
no

## Pod read permission -> "yes"
(sparkandassociates:harbor) [root@kops-ec2 ~]# k auth can-i get pod --as system:serviceaccount:default:master
yes

## Check the secret permission that was not granted -> "no"
(sparkandassociates:harbor) [root@kops-ec2 ~]# k auth can-i get secret --as system:serviceaccount:default:master
no