
Cilium CNI

by 식사법 2024. 10. 26.

Week 8: Cilium CNI

1. Cilium Overview

  • What is Cilium?
    • Cilium is a Kubernetes CNI plugin built on eBPF (extended Berkeley Packet Filter). It is mainly used to provide network security and observability.
  • Why choose Cilium?
    • Instead of the traditional iptables/kube-proxy-based Kubernetes networking, Cilium uses eBPF to deliver faster and more flexible networking.
  • Core concepts of Cilium
    • eBPF-based operation:
      • eBPF programs run directly in the kernel, giving fine-grained control over packet processing and security policy enforcement.
      • Network traffic is handled with eBPF instead of iptables, which improves performance and lowers latency.
    • Kubernetes networking:
      • Cilium handles Pod-to-Pod communication inside the Kubernetes cluster and can apply network security policies dynamically (a minimal policy sketch follows below).
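As a sketch of what such a dynamically applied policy looks like (hypothetical app=frontend / app=backend labels, not part of this lab), a minimal L3/L4 CiliumNetworkPolicy could be applied like this:

# Minimal example (hypothetical labels): only app=frontend may reach app=backend on TCP 80
$ cat <<EOF | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
EOF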
  2. Deploying Cilium
    • Cilium is deployed to the test Kubernetes environment via Helm.

# Monitoring
$ watch -d kubectl get node,pod -A -owide
NAME          STATUS     ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node/k8s-s    NotReady   control-plane   73s   v1.30.6   192.168.10.10    <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws   containerd://1.7.22
node/k8s-w1   NotReady   <none>          51s   v1.30.6   192.168.10.101   <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws   containerd://1.7.22
node/k8s-w2   NotReady   <none>          54s   v1.30.6   192.168.10.102   <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws   containerd://1.7.22

NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
kube-system   pod/coredns-55cb58b774-s7d25        0/1     Pending   0          56s   <none>          <none>   <none>           <none>
kube-system   pod/coredns-55cb58b774-v4gj8        0/1     Pending   0          56s   <none>          <none>   <none>           <none>
kube-system   pod/etcd-k8s-s                      1/1     Running   0          71s   192.168.10.10   k8s-s    <none>           <none>
kube-system   pod/kube-apiserver-k8s-s            1/1     Running   0          71s   192.168.10.10   k8s-s    <none>           <none>
kube-system   pod/kube-controller-manager-k8s-s   1/1     Running   0          71s   192.168.10.10   k8s-s    <none>           <none>
kube-system   pod/kube-scheduler-k8s-s            1/1     Running   0          71s   192.168.10.10   k8s-s    <none>           <none>

# Add the cilium Helm repo.
$ helm repo add cilium https://helm.cilium.io/
"cilium" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cilium" chart repository
Update Complete. ⎈Happy Helming!⎈

# Deploy Cilium with Helm.
$ helm install cilium cilium/cilium --version 1.16.3 --namespace kube-system \
--set k8sServiceHost=192.168.10.10 --set k8sServicePort=6443 --set debug.enabled=true \
--set rollOutCiliumPods=true --set routingMode=native --set autoDirectNodeRoutes=true \
--set bpf.masquerade=true --set bpf.hostRouting=true --set endpointRoutes.enabled=true \
--set ipam.mode=kubernetes --set k8s.requireIPv4PodCIDR=true --set kubeProxyReplacement=true \
--set ipv4NativeRoutingCIDR=192.168.0.0/16 --set installNoConntrackIptablesRules=true \
--set hubble.ui.enabled=true --set hubble.relay.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.metrics.enableOpenMetrics=true \
--set hubble.metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}" \
--set operator.replicas=1
NAME: cilium
LAST DEPLOYED: Wed Oct 23 18:25:52 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.16.3.

For any further help, visit https://docs.cilium.io/en/v1.16/gettinghelp

## Key parameters
--set debug.enabled=true # set the cilium pods' log level to debug
--set autoDirectNodeRoutes=true # nodes in the same subnet automatically get routes to each other's podCIDR ranges
--set endpointRoutes.enabled=true # install a per-endpoint (per-pod) route on the host
--set hubble.relay.enabled=true --set hubble.ui.enabled=true # enable Hubble
--set ipam.mode=kubernetes --set k8s.requireIPv4PodCIDR=true # use Kubernetes IPAM
--set kubeProxyReplacement=true # replace kube-proxy (as far as possible)
--set ipv4NativeRoutingCIDR=192.168.0.0/16 # no IP masquerading toward this range; usually the internal network range
--set operator.replicas=1 # one cilium-operator pod by default
--set enableIPv4Masquerade=true --set bpf.masquerade=true # masquerading for pods, additionally handled in BPF >> bpf.masquerade=true can be applied on top of enableIPv4Masquerade=true
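
As a quick sanity check of autoDirectNodeRoutes with native routing (a sketch; the exact podCIDR values depend on your cluster), each node's routing table should contain direct routes to the other nodes' pod ranges:

# On k8s-s: routes to the peer podCIDRs (172.16.x.0/24 here) should point at the peer node IPs, with no tunnel device
$ ip -c route | grep 172.16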

# Configuration and verification
$ ip -c addr
3: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
    link/ether 1a:d7:bf:37:c2:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::18d7:bfff:fe37:c203/64 scope link
       valid_lft forever preferred_lft forever
4: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:7c:41:c7:5e:48 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.193/32 scope global cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::5c7c:41ff:fec7:5e48/64 scope link
       valid_lft forever preferred_lft forever

$ kubectl get node,pod,svc -A -owide
NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node/k8s-s    Ready    control-plane   3m26s   v1.30.6   192.168.10.10    <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws   containerd://1.7.22
node/k8s-w1   Ready    <none>          3m4s    v1.30.6   192.168.10.101   <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws   containerd://1.7.22
node/k8s-w2   Ready    <none>          3m7s    v1.30.6   192.168.10.102   <none>        Ubuntu 22.04.5 LTS   6.8.0-1015-aws   containerd://1.7.22

NAMESPACE     NAME                                   READY   STATUS              RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   pod/cilium-4btzk                       1/1     Running             0          58s     192.168.10.102   k8s-w2   <none>           <none>
kube-system   pod/cilium-envoy-24szd                 1/1     Running             0          58s     192.168.10.101   k8s-w1   <none>           <none>
kube-system   pod/cilium-envoy-p8w5r                 1/1     Running             0          58s     192.168.10.10    k8s-s    <none>           <none>
kube-system   pod/cilium-envoy-w62pn                 1/1     Running             0          58s     192.168.10.102   k8s-w2   <none>           <none>
kube-system   pod/cilium-f98zq                       1/1     Running             0          58s     192.168.10.10    k8s-s    <none>           <none>
kube-system   pod/cilium-operator-76bb588dbc-n2v2v   1/1     Running             0          58s     192.168.10.102   k8s-w2   <none>           <none>
kube-system   pod/cilium-qcm4z                       1/1     Running             0          58s     192.168.10.101   k8s-w1   <none>           <none>
kube-system   pod/coredns-55cb58b774-s7d25           1/1     Running             0          3m9s    172.16.1.22      k8s-w2   <none>           <none>
kube-system   pod/coredns-55cb58b774-v4gj8           1/1     Running             0          3m9s    172.16.1.20      k8s-w2   <none>           <none>
kube-system   pod/etcd-k8s-s                         1/1     Running             0          3m24s   192.168.10.10    k8s-s    <none>           <none>
kube-system   pod/hubble-relay-88f7f89d4-ncvqw       0/1     Running             0          58s     172.16.1.142     k8s-w2   <none>           <none>
kube-system   pod/hubble-ui-59bb4cb67b-kvf88         0/2     ContainerCreating   0          58s     <none>           k8s-w2   <none>           <none>
kube-system   pod/kube-apiserver-k8s-s               1/1     Running             0          3m24s   192.168.10.10    k8s-s    <none>           <none>
kube-system   pod/kube-controller-manager-k8s-s      1/1     Running             0          3m24s   192.168.10.10    k8s-s    <none>           <none>
kube-system   pod/kube-scheduler-k8s-s               1/1     Running             0          3m24s   192.168.10.10    k8s-s    <none>           <none>

NAMESPACE     NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
default       service/kubernetes       ClusterIP   10.10.0.1       <none>        443/TCP                  3m26s   <none>
kube-system   service/cilium-envoy     ClusterIP   None            <none>        9964/TCP                 58s     k8s-app=cilium-envoy
kube-system   service/hubble-metrics   ClusterIP   None            <none>        9965/TCP                 58s     k8s-app=cilium
kube-system   service/hubble-peer      ClusterIP   10.10.92.207    <none>        443/TCP                  58s     k8s-app=cilium
kube-system   service/hubble-relay     ClusterIP   10.10.212.194   <none>        80/TCP                   58s     k8s-app=hubble-relay
kube-system   service/hubble-ui        ClusterIP   10.10.238.206   <none>        80/TCP                   58s     k8s-app=hubble-ui
kube-system   service/kube-dns         ClusterIP   10.10.0.10      <none>        53/UDP,53/TCP,9153/TCP   3m24s   k8s-app=kube-dns

$ iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N CILIUM_OUTPUT_nat
-N CILIUM_POST_nat
-N CILIUM_PRE_nat
-N KUBE-KUBELET-CANARY
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat

$ iptables -t filter -S
...
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A OUTPUT -j KUBE-FIREWALL
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -o lxc+ -m comment --comment "cilium: any->cluster on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m comment --comment "cilium: ACCEPT for l7 proxy upstream traffic" -j ACCEPT
-A CILIUM_OUTPUT -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP

$ conntrack -L
tcp      6 431995 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=52738 dport=2379 src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=52738 [ASSURED] mark=0 use=1
tcp      6 431908 ESTABLISHED src=192.168.10.10 dst=192.168.10.10 sport=48348 dport=6443 src=192.168.10.10 dst=192.168.10.10 sport=6443 dport=48348 [ASSURED] mark=0 use=1
tcp      6 431908 ESTABLISHED src=192.168.10.102 dst=192.168.10.10 sport=52010 dport=6443 src=192.168.10.10 dst=192.168.10.102 sport=6443 dport=52010 [ASSURED] mark=0 use=1
tcp      6 431997 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=52652 dport=2379 src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=52652 [ASSURED] mark=0 use=1
tcp      6 431996 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=52604 dport=2379 src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=52604 [ASSURED] mark=0 use=1
...

$ kubectl get crd
NAME                                         CREATED AT
ciliumcidrgroups.cilium.io                   2024-10-23T09:26:20Z
ciliumclusterwidenetworkpolicies.cilium.io   2024-10-23T09:26:22Z
ciliumendpoints.cilium.io                    2024-10-23T09:26:21Z
ciliumexternalworkloads.cilium.io            2024-10-23T09:26:20Z
ciliumidentities.cilium.io                   2024-10-23T09:26:21Z
ciliuml2announcementpolicies.cilium.io       2024-10-23T09:26:20Z
ciliumloadbalancerippools.cilium.io          2024-10-23T09:26:21Z
ciliumnetworkpolicies.cilium.io              2024-10-23T09:26:22Z
ciliumnodeconfigs.cilium.io                  2024-10-23T09:26:20Z
ciliumnodes.cilium.io                        2024-10-23T09:26:21Z
ciliumpodippools.cilium.io                   2024-10-23T09:26:21Z

$ kubectl get ciliumnodes # Check the cilium_host interface IP: CILIUMINTERNALIP
NAME     CILIUMINTERNALIP   INTERNALIP       AGE
k8s-s    172.16.0.193       192.168.10.10    2m4s
k8s-w1   172.16.2.238       192.168.10.101   116s
k8s-w2   172.16.1.48        192.168.10.102   2m6s

$ kubectl get ciliumendpoints -A
NAMESPACE     NAME                           SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
kube-system   coredns-55cb58b774-s7d25       21890               ready            172.16.1.22
kube-system   coredns-55cb58b774-v4gj8       21890               ready            172.16.1.20
kube-system   hubble-relay-88f7f89d4-ncvqw   64108               ready            172.16.1.142
kube-system   hubble-ui-59bb4cb67b-kvf88     54953               ready            172.16.1.218

$ kubectl get cm -n kube-system cilium-config -o json | jq
{
  "apiVersion": "v1",
  "data": {
    "agent-not-ready-taint-key": "node.cilium.io/agent-not-ready",
    "arping-refresh-period": "30s",
    "auto-direct-node-routes": "true",
    "bpf-events-drop-enabled": "true",
    "bpf-events-policy-verdict-enabled": "true",
    "bpf-events-trace-enabled": "true",
    "bpf-lb-acceleration": "disabled",
    "bpf-lb-external-clusterip": "false",
    ...

$ kubetail -n kube-system -l k8s-app=cilium --since 1h
[cilium-f98zq cilium-agent] time="2024-10-23T09:26:25Z" level=info msg="  --install-iptables-rules='true'" subsys=daemon
[cilium-4btzk cilium-agent] time="2024-10-23T09:26:23Z" level=info msg="  --hubble-disable-tls='false'" subsys=daemon
[cilium-qcm4z cilium-agent] time="2024-10-23T09:26:33Z" level=info msg="  --hubble-metrics-server=':9965'" subsys=daemon
[cilium-f98zq cilium-agent] time="2024-10-23T09:26:25Z" level=info msg="  --install-no-conntrack-iptables-rules='true'" subsys=daemon

$ kubetail -n kube-system -l k8s-app=cilium-envoy --since 1h
[cilium-envoy-p8w5r] [2024-10-23 09:26:32.295][7][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:240] cm init: all clusters initialized
[cilium-envoy-p8w5r] [2024-10-23 09:26:32.295][7][info][main] [external/envoy/source/server/server.cc:932] all clusters initialized. initializing init manager
[cilium-envoy-p8w5r] [2024-10-23 09:26:32.299][7][info][config] [external/envoy/source/common/listener_manager/listener_manager_impl.cc:926] all dependencies initialized. starting workers
[cilium-envoy-p8w5r] [2024-10-23 09:26:33.829][7][info][upstream] [external/envoy/source/common/upstream/cds_api_helper.cc:32] cds: add 0 cluster(s), remove 6 cluster(s)
[cilium-envoy-p8w5r] [2024-10-23 09:26:33.829][7][info][upstream] [external/envoy/source/common/upstream/cds_api_helper.cc:71] cds: added/updated 0 cluster(s), skipped 0 unmodified cluster(s)

# Check whether the NIC driver supports native XDP: https://docs.cilium.io/en/stable/bpf/progtypes/#xdp-drivers
$ ethtool -i ens5
driver: ena
version: 6.8.0-1015-aws
firmware-version:
expansion-rom-version:
bus-info: 0000:00:05.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no

# https://docs.cilium.io/en/stable/operations/performance/tuning/#bypass-iptables-connection-tracking
$ watch -d kubectl get pod -A # monitoring
$ helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values --set installNoConntrackIptablesRules=true

# Verify: the rules below are added to the existing raw table
$ iptables -t raw -S | grep notrack
-A CILIUM_PRE_raw -d 192.168.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack
-A CILIUM_PRE_raw -s 192.168.0.0/16 -m comment --comment "cilium: NOTRACK for pod traffic" -j CT --notrack

$ conntrack -F
conntrack v1.4.6 (conntrack-tools): connection tracking table has been emptied.
$ conntrack -L
tcp      6 431997 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=52976 src=127.0.0.1 dst=127.0.0.1 sport=52976 dport=2379 [ASSURED] mark=0 use=1
tcp      6 431996 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=50516 src=127.0.0.1 dst=127.0.0.1 sport=50516 dport=2379 [ASSURED] mark=0 use=1
tcp      6 431998 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=51472 src=127.0.0.1 dst=127.0.0.1 sport=51472 dport=2379 [ASSURED] mark=0 use=1
tcp      6 295 ESTABLISHED src=127.0.0.1 dst=127.0.0.1 sport=52576 dport=2379 src=127.0.0.1 dst=127.0.0.1 sport=2379 dport=52576 mark=0 use=1
conntrack v1.4.6 (conntrack-tools): 74 flow entries have been shown.

$ conntrack -L |grep -v 2379
257 dport=55552 [ASSURED] mark=0 use=1
tcp      6 90 TIME_WAIT src=127.0.0.1 dst=127.0.0.1 sport=60526 dport=2381 src=127.0.0.1 dst=127.0.0.1 sport=2381 dport=60526 [ASSURED] mark=0 use=1
tcp      6 102 TIME_WAIT src=127.0.0.1 dst=127.0.0.1 sport=45724 dport=9878 src=127.0.0.1 dst=127.0.0.1 sport=9878 dport=45724 [ASSURED] mark=0 use=1
conntrack v1.4.6 (conntrack-tools): 82 flow entries have been shown.
  • conntrack
    • The conntrack tool used in this verification manages and monitors connection tracking in Linux on top of netfilter.
    • Connection tracking follows the state of packets sent and received over the network, so you can see what state each connection is in.
    • With it, we checked the connection states and packets flowing through Cilium (a filtered example follows below).
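conntrack also accepts filters, which makes the table easier to read; for example (a sketch using standard conntrack-tools options):

# Only TCP entries toward the API server port, or stream new events live
$ conntrack -L -p tcp --dport 6443
$ conntrack -E -p tcp --dport 6443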
  3. Installing the Cilium CLI
    • As with earlier labs such as Istio, some of the tools supporting Kubernetes (i.e., container environments) ship with their own CLI.
    • Cilium is one of them, and it can be installed as follows.
# Install the Cilium CLI
$ CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
$ CLI_ARCH=amd64
$ if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
$ curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
$ sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 46.8M  100 46.8M    0     0  18.8M      0  0:00:02  0:00:02 --:--:-- 42.5M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    92  100    92    0     0     74      0  0:00:01  0:00:01 --:--:--     0
cilium-linux-amd64.tar.gz: OK

$ sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
$ rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

# Verify
$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy       Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 3
                       cilium-envoy       Running: 3
                       cilium-operator    Running: 1
                       hubble-relay       Running: 1
                       hubble-ui          Running: 1
...                       
$ cilium config view
agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
arping-refresh-period                             30s
auto-direct-node-routes                           true
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
...

# Check status with the cilium command inside the cilium DaemonSet pod
$ export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-s  -o jsonpath='{.items[0].metadata.name}')
$ alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
$ c0 status --verbose
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.30 (v1.30.6) [linux/amd64]
Kubernetes APIs:        ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   True   [ens5   192.168.10.10 fe80::47:deff:febc:3511 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
CNI Config file:        successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                 Ok   1.16.3 (v1.16.3-f2217191)
NodeMonitor:            Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:   Ok
IPAM:                   IPv4: 2/254 allocated from 172.16.0.0/24,
...
Routing:                Network: Native   Host: BPF
...
Device Mode:            veth
Masquerading:           BPF   [ens5]   192.168.0.0/16 [IPv4: Enabled, IPv6: Disabled]
...  
Proxy Status:            OK, ip 172.16.0.159, 0 redirects active on ports 10000-20000, Envoy: external
...
Cluster health:       3/3 reachable    (2024-10-23T09:38:38Z)
  Name                IP               Node        Endpoints
  k8s-s (localhost)   192.168.10.10    reachable   reachable
  k8s-w1              192.168.10.101   reachable   reachable
  k8s-w2              192.168.10.102   reachable   reachable

...

# Check native routing: the 192.168.0.0/16 range is routed without IP masquerading
$ c0 status | grep KubeProxyReplacement
KubeProxyReplacement:    True   [ens5   192.168.10.10 fe80::47:deff:febc:3511 (Direct Routing)]

# Verify enableIPv4Masquerade=true (default) and bpf.masquerade=true
$ cilium config view | egrep 'enable-ipv4-masquerade|enable-bpf-masquerade'
enable-bpf-masquerade                          true
enable-ipv4-masquerade                         true

$ c0 status --verbose | grep Masquerading
Masquerading:           BPF   [ens5]   192.168.0.0/16 [IPv4: Enabled, IPv6: Disabled]

# Configure the eBPF-based ip-masq-agent
# https://docs.cilium.io/en/stable/network/concepts/masquerading/
$ helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values --set ipMasqAgent.enabled=true

#
$ cilium config view | grep -i masq
enable-bpf-masquerade                             true
enable-ip-masq-agent                              true
enable-ipv4-masquerade                            true
enable-ipv6-masquerade                            true
enable-masquerade-to-route-source                 false
.

$ export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-s  -o jsonpath='{.items[0].metadata.name}')
$ alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
$ c0 status --verbose | grep Masquerading
Masquerading:           BPF (ip-masq-agent)   [ens5]   192.168.0.0/16 [IPv4: Enabled, IPv6: Disabled]

$ kubectl get cm -n kube-system cilium-config -o yaml  | grep ip-masq
  enable-ip-masq-agent: "true"
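
With enable-ip-masq-agent=true, the non-masquerade CIDRs are normally fed in via a ConfigMap; a sketch based on the Cilium masquerading docs (the 10.0.0.0/8 value is only an example, and the exact ConfigMap key/namespace should be checked against your version):

# Sketch: the agent reads a ConfigMap named ip-masq-agent (key "config")
$ cat <<EOF > ipmasq-config.yaml
nonMasqueradeCIDRs:
  - 10.0.0.0/8
masqLinkLocal: false
EOF
$ kubectl create configmap ip-masq-agent -n kube-system --from-file=config=ipmasq-config.yaml
$ c0 bpf ipmasq list   # the configured CIDRs should show up in the cilium_ipmasq_v4 map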
  4. Checking basic Cilium information
# cilium pod names
$ export CILIUMPOD0=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-s  -o jsonpath='{.items[0].metadata.name}')
$ export CILIUMPOD1=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w1 -o jsonpath='{.items[0].metadata.name}')
$ export CILIUMPOD2=$(kubectl get -l k8s-app=cilium pods -n kube-system --field-selector spec.nodeName=k8s-w2 -o jsonpath='{.items[0].metadata.name}')

# Set up aliases
$ alias c0="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- cilium"
$ alias c1="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- cilium"
$ alias c2="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- cilium"

$ alias c0bpf="kubectl exec -it $CILIUMPOD0 -n kube-system -c cilium-agent -- bpftool"
$ alias c1bpf="kubectl exec -it $CILIUMPOD1 -n kube-system -c cilium-agent -- bpftool"
$ alias c2bpf="kubectl exec -it $CILIUMPOD2 -n kube-system -c cilium-agent -- bpftool"

# Access the Hubble UI in a browser
$ kubectl patch -n kube-system svc hubble-ui -p '{"spec": {"type": "NodePort"}}'
$ HubbleUiNodePort=$(kubectl get svc -n kube-system hubble-ui -o jsonpath={.spec.ports[0].nodePort})
$ echo -e "Hubble UI URL = http://$(curl -s ipinfo.io/ip):$HubbleUiNodePort"
Hubble UI URL = http://43.203.253.27:30691
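
Alternatively, if exposing a NodePort is not desirable, the cilium CLI can port-forward the UI locally (behaviour may vary slightly by CLI version):

# Opens a local port-forward to hubble-ui (default http://localhost:12000)
$ cilium hubble ui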

# Frequently used commands
$ helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values --set <key>=<value>
$ kubetail -n kube-system -l k8s-app=cilium --since 12h
[cilium-qnsm6 cilium-agent] time="2024-10-23T10:06:35Z" level=debug msg="Greeting successful" host="http://192.168.10.101:4240" ipAddr=192.168.10.101 nodeName=k8s-w1 path="Via L3" rtt="274.529µs" subsys=health-server
[cilium-qnsm6 cilium-agent] time="2024-10-23T10:06:35Z" level=debug msg="Greeting host" host="http://172.16.0.232:4240" ipAddr=172.16.0.232 nodeName=k8s-s path="Via L3" subsys=health-server
[cilium-qnsm6 cilium-agent] time="2024-10-23T10:06:35Z" level=debug msg="probe successful" ipAddr=192.168.10.101 nodeName=k8s-w1 rtt="278.033µs" subsys=health-server
[cilium-qnsm6 cilium-agent] time="2024-10-23T10:06:35Z" level=debug msg="probe successful" ipAddr=172.16.2.56 nodeName=k8s-w1 rtt="299.41µs" subsys=health-server
[cilium-qnsm6 cilium-agent] time="2024-10-23T10:06:35Z" level=debug msg="probe successful" ipAddr=172.16.0.232 nodeName=k8s-s rtt="317.627µs" subsys=health-server
...
$ kubetail -n kube-system -l k8s-app=cilium-envoy --since 12h
[cilium-envoy-24szd] [2024-10-23 09:26:38.183][7][info][main] [external/envoy/source/server/server.cc:932] all clusters initialized. initializing init manager
[cilium-envoy-24szd] [2024-10-23 09:26:38.189][7][info][config] [external/envoy/source/common/listener_manager/listener_manager_impl.cc:926] all dependencies initialized. starting workers
[cilium-envoy-24szd] [2024-10-23 09:26:39.299][7][info][upstream] [external/envoy/source/common/upstream/cds_api_helper.cc:32] cds: add 0 cluster(s), remove 6 cluster(s)
[cilium-envoy-24szd] [2024-10-23 09:26:39.299][7][info][upstream] [external/envoy/source/common/upstream/cds_api_helper.cc:71] cds: added/updated 0 cluster(s), skipped 0 unmodified cluster(s)
[cilium-envoy-24szd] [2024-10-23 09:40:58.180][7][warning][config] [external/envoy/source/extensions/config_subscription/grpc/grpc_stream.h:155] StreamClusters gRPC config stream to xds-grpc-cilium closed: 13,
[cilium-envoy-24szd] [2024-10-23 09:40:58.181][7][warning][config] [external/envoy/source/extensions/config_subscription/grpc/grpc_stream.h:155] StreamListeners gRPC config stream to xds-grpc-cilium closed: 13,
[cilium-envoy-24szd] [2024-10-23 09:41:38.188][7][info][main] [external/envoy/source/server/drain_manager_impl.cc:208] shutting down parent after drai
  • Frequently used commands
# Check the cilium pods
$ kubectl get pod -n kube-system -l k8s-app=cilium -owide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
cilium-j2lqr   1/1     Running   0          26m   192.168.10.102   k8s-w2   <none>           <none>
cilium-qnsm6   1/1     Running   0          27m   192.168.10.101   k8s-w1   <none>           <none>
cilium-qqrvj   1/1     Running   0          27m   192.168.10.10    k8s-s    <none>           <none>

# Restart the cilium pods
$ kubectl -n kube-system rollout restart ds/cilium

# View the cilium configuration
$ cilium config view
agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
arping-refresh-period                             30s
auto-direct-node-routes                           true
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
bpf-lb-external-clusterip                         false
bpf-lb-map-max                                    65536
bpf-lb-sock                                       false
bpf-lb-sock-terminate-pod-connections             false
bpf-map-dynamic-size-ratio                        0.0025

# Check cilium status from inside a cilium pod
$ c0 status --verbose
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.30 (v1.30.6) [linux/amd64]
Kubernetes APIs:        ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   True   [ens5   192.168.10.10 fe80::47:deff:febc:3511 (Direct Routing)]
Host firewall:          Disabled
SRv6:                   Disabled
CNI Chaining:           none
...
# Check cilium endpoints
$ kubectl get ciliumendpoints -A
NAMESPACE     NAME                           SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
kube-system   coredns-55cb58b774-s7d25       21890               ready            172.16.1.22
kube-system   coredns-55cb58b774-v4gj8       21890               ready            172.16.1.20
kube-system   hubble-relay-88f7f89d4-ncvqw   64108               ready            172.16.1.142
kube-system   hubble-ui-59bb4cb67b-kvf88     54953               ready            172.16.1.218

$ c0 endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                   IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
1969       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                           ready
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers
                                                           reserved:host
2335       Disabled           Disabled          4          reserved:health                                                      172.16.0.232   ready

$ c0 bpf endpoint list
IP ADDRESS        LOCAL ENDPOINT INFO
192.168.10.10:0   (localhost)
172.16.0.232:0    id=2335  sec_id=4     flags=0x0000 ifindex=10  mac=1E:5E:65:7A:AA:5C nodemac=DE:07:06:36:98:5C
172.16.0.193:0    (localhost)

$ c0 map get cilium_lxc
Key              Value                                                                                            State   Error
172.16.0.232:0   id=2335  sec_id=4     flags=0x0000 ifindex=10  mac=1E:5E:65:7A:AA:5C nodemac=DE:07:06:36:98:5C   sync

$ c0 ip list
IP                  IDENTITY                                                                     SOURCE
0.0.0.0/0           reserved:world
172.16.0.193/32     reserved:host
                    reserved:kube-apiserver
172.16.0.232/32     reserved:health
172.16.1.20/32      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   custom-resource
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=coredns
                    k8s:io.kubernetes.pod.namespace=kube-system
                    k8s:k8s-app=kube-dns

# Manage the IPCache mappings for IP/CIDR <-> Identity
$ c0 bpf ipcache list
IP PREFIX/ADDRESS   IDENTITY
172.16.0.232/32     identity=4 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
172.16.1.20/32      identity=21890 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>
172.16.1.22/32      identity=21890 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>
172.16.1.218/32     identity=54953 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>
0.0.0.0/0           identity=2 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
172.16.0.193/32     identity=1 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
...
# Check the Service/NAT lists
$ c0 service list
ID   Frontend           Service Type   Backend
1    10.10.0.1:443      ClusterIP      1 => 192.168.10.10:6443 (active)
2    10.10.92.207:443   ClusterIP      1 => 192.168.10.10:4244 (active)
3    10.10.212.194:80   ClusterIP      1 => 172.16.1.142:4245 (active)
4    10.10.238.206:80   ClusterIP      1 => 172.16.1.218:8081 (active)
5    10.10.0.10:53      ClusterIP      1 => 172.16.1.20:53 (active)

$ c0 bpf lb list
SERVICE ADDRESS        BACKEND ADDRESS (REVNAT_ID) (SLOT)
10.10.0.10:53 (1)      172.16.1.20:53 (5) (1)
10.10.0.10:9153 (2)    172.16.1.22:9153 (6) (2)
10.10.0.10:53 (2)      172.16.1.22:53 (5) (2)
10.10.238.206:80 (1)   172.16.1.218:8081 (4) (1)

$ c0 bpf lb list --revnat
ID   BACKEND ADDRESS (REVNAT_ID) (SLOT)
5    10.10.0.10:53
2    10.10.92.207:443
6    10.10.0.10:9153

$ c0 bpf nat list
UDP IN 169.254.169.123:123 -> 192.168.10.10:42447 XLATE_DST 192.168.10.10:42447 Created=136sec ago NeedsCT=1
TCP IN 192.168.10.101:10250 -> 192.168.10.10:33332 XLATE_DST 192.168.10.10:33332 Created=424sec ago NeedsCT=1
TCP IN 172.16.0.232:4240 -> 192.168.10.10:41694 XLATE_DST 192.168.10.10:41694 Created=258sec ago NeedsCT=1
UDP IN 175.195.167.194:123 -> 192.168.10.10:58753 XLATE_DST 192.168.10.10:58753 Created=143sec ago NeedsCT=1
UDP IN 169.254.169.123:123 -> 192.168.10.10:52821 XLATE_DST 192.168.10.10:52821 Created=39sec ago NeedsCT=1
UDP IN 169.254.169.123:123 -> 192.168.10.10:53341 XLATE_DST 192.168.10.10:53341 Created=168sec ago NeedsCT=1

# List all open BPF maps
$ c0 map list
Name                       Num entries   Num errors   Cache enabled
cilium_runtime_config      256           0            true
cilium_lb4_source_range    0             0            true
cilium_policy_02335        3             0            true
cilium_lb_affinity_match   0             0            true
cilium_policy_01969        2             0            true
cilium_ipcache             14            0            true

$ c0 map list --verbose
## Map: cilium_runtime_config
Key             Value              State   Error
UTimeOffset     3378272083984376
AgentLiveness   3094775365454
Unknown         0
Unknown         0
Unknown         0
...
## Map: cilium_lb4_source_range
Cache is empty

## Map: cilium_policy_02335
Key              Value       State   Error
Ingress: 0 ANY   0 73 5982
Egress: 0 ANY    0 0 0
Ingress: 1 ANY   0 47 4203

# List contents of a policy BPF map : Dump all policy maps
$ c0 bpf policy get --all

Endpoint ID: 1969
Path: /sys/fs/bpf/tc/globals/cilium_policy_01969

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    0       0         0
Allow    Egress      reserved:unknown              ANY          NONE         disabled    0       0         0

$ c0 bpf policy get --all -n

Endpoint ID: 1969
Path: /sys/fs/bpf/tc/globals/cilium_policy_01969

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     0          ANY          NONE         disabled    0       0         0
Allow    Egress      0          ANY          NONE         disabled    0       0         0

Endpoint ID: 2335
Path: /sys/fs/bpf/tc/globals/cilium_policy_02335

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     0          ANY          NONE         disabled    6964    85        0
Allow    Ingress     1          ANY          NONE         disabled    4570    51        0
Allow    Egress      0          ANY          NONE         disabled    0       0         0

# cilium monitor
$ c0 monitor -v
Listening for events on 4 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
CPU 01: [pre-xlate-rev] cgroup_id: 6945 sock_cookie: 15270, dst [127.0.0.1]:40008 tcp
CPU 02: [pre-xlate-rev] cgroup_id: 7024 sock_cookie: 11206, dst [127.0.0.1]:10257 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 6945 sock_cookie: 15270, dst [127.0.0.1]:40008 tcp
CPU 01: [pre-xlate-rev] cgroup_id: 6945 sock_cookie: 15270, dst [127.0.0.1]:40008 tcp
CPU 03: [pre-xlate-rev] cgroup_id: 6945 sock_cookie: 15270, dst [127.0.0.1]:40008 tcp
CPU 03: [pre-xlate-rev] cgroup_id: 6945 sock_cookie: 15270, dst [127.0.0.1]:40008 tcp
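
cilium monitor can also be filtered by event type, which is handy when hunting for drops (a sketch using documented monitor types):

# Only show dropped packets, or only policy verdict events
$ c0 monitor --type drop
$ c0 monitor --type policy-verdict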
  • Basic information checks
# Check the cilium version
$ cilium version
cilium-cli: v0.16.19 compiled with go1.23.1 on linux/amd64
cilium image (default): v1.16.2
cilium image (stable): v1.16.3
cilium image (running): 1.16.3

# Check cilium status
$ cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy       Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment             hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 3
                       cilium-envoy       Running: 3
                       cilium-operator    Running: 1
                       hubble-relay       Running: 1
                       hubble-ui          Running: 1

# Check for kube-proxy pods >> there are none!
$ kubectl get pod -A

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   cilium-envoy-24szd                 1/1     Running   0          51m
kube-system   cilium-envoy-p8w5r                 1/1     Running   0          51m
kube-system   cilium-envoy-w62pn                 1/1     Running   0          51m
kube-system   cilium-kqclm                       1/1     Running   0          8m41s
kube-system   cilium-operator-76bb588dbc-n2v2v   1/1     Running   0          51m
kube-system   cilium-smkn7                       1/1     Running   0          8m22s
kube-system   cilium-vxs6t                       1/1     Running   0          8m41s
kube-system   coredns-55cb58b774-s7d25           1/1     Running   0          53m
kube-system   coredns-55cb58b774-v4gj8           1/1     Running   0          53m
kube-system   etcd-k8s-s                         1/1     Running   0          53m
kube-system   hubble-relay-88f7f89d4-ncvqw       1/1     Running   0          51m
kube-system   hubble-ui-59bb4cb67b-kvf88         2/2     Running   0          51m
kube-system   kube-apiserver-k8s-s               1/1     Running   0          53m
kube-system   kube-controller-manager-k8s-s      1/1     Running   0          53m
kube-system   kube-scheduler-k8s-s               1/1     Running   0          53m

# View the cilium configuration
$ cilium config view
agent-not-ready-taint-key                         node.cilium.io/agent-not-ready
arping-refresh-period                             30s
auto-direct-node-routes                           true
bpf-events-drop-enabled                           true
bpf-events-policy-verdict-enabled                 true
bpf-events-trace-enabled                          true
bpf-lb-acceleration                               disabled
bpf-lb-external-clusterip                         false
bpf-lb-map-max                                    65536
bpf-lb-sock                                       false
bpf-lb-sock-terminate-pod-connections             false
bpf-map-dynamic-size-ratio                        0.0025
bpf-policy-map-max                                16384
bpf-root                                          /sys/fs/bpf
cgroup-root                                       /run/cilium/cgroupv2

# Check ciliumnodes (cn)
$ kubectl get cn
NAME     CILIUMINTERNALIP   INTERNALIP       AGE
k8s-s    172.16.0.193       192.168.10.10    51m
k8s-w1   172.16.2.238       192.168.10.101   51m
k8s-w2   172.16.1.48        192.168.10.102   51m

$ kubectl get cn k8s-s -o yaml
apiVersion: cilium.io/v2
kind: CiliumNode
metadata:
  creationTimestamp: "2024-10-23T09:26:26Z"
  generation: 1
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: k8s-s
    kubernetes.io/os: linux
    node-role.kubernetes.io/control-plane: ""
    node.kubernetes.io/exclude-from-external-load-balancers: ""

# Check each node's pod CIDR
$ kubectl get ciliumnodes -o yaml | grep podCIDRs -A1
      podCIDRs:
      - 172.16.0.0/24
--
      podCIDRs:
      - 172.16.2.0/24
--
      podCIDRs:
      - 172.16.1.0/24

# Check the cilium pods
$ kubectl get pod -n kube-system -l k8s-app=cilium -owide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
cilium-kqclm   1/1     Running   0          13m   192.168.10.10    k8s-s    <none>           <none>
cilium-smkn7   1/1     Running   0          13m   192.168.10.102   k8s-w2   <none>           <none>
cilium-vxs6t   1/1     Running   0          13m   192.168.10.101   k8s-w1   <none>           <none>

# Check cilium endpoints
$ kubectl get ciliumendpoints.cilium.io -A
NAMESPACE     NAME                           SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
kube-system   coredns-55cb58b774-s7d25       21890               ready            172.16.1.22
kube-system   coredns-55cb58b774-v4gj8       21890               ready            172.16.1.20
kube-system   hubble-relay-88f7f89d4-ncvqw   64108               ready            172.16.1.142
kube-system   hubble-ui-59bb4cb67b-kvf88     54953               ready            172.16.1.218

--------------------------------------------
# cilium CLI help
$ c0 help
CLI for interacting with the local Cilium Agent

Usage:
  cilium-dbg [command]

Available Commands:

# Node list
$ c0 node list
Name     IPv4 Address     Endpoint CIDR   IPv6 Address   Endpoint CIDR   Source
k8s-s    192.168.10.10    172.16.0.0/24                                  local
k8s-w1   192.168.10.101   172.16.2.0/24                                  custom-resource
k8s-w2   192.168.10.102   172.16.1.0/24                                  custom-resource

# Local endpoint list on this node: nodemac is the MAC of the host-side interface of the pod's veth pair (see the check after the output below).
$ c0 bpf endpoint list
IP ADDRESS        LOCAL ENDPOINT INFO
192.168.10.10:0   (localhost)
172.16.0.232:0    id=2335  sec_id=4     flags=0x0000 ifindex=10  mac=1E:5E:65:7A:AA:5C nodemac=DE:07:06:36:98:5C
172.16.0.193:0    (localhost)
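
The nodemac value can be cross-checked on the node itself: the host side of each pod's veth pair shows up as an lxc* interface (a sketch; interface names differ per pod):

# On the node: host-side veth peers created by Cilium, with their MAC addresses
$ ip -c link show | grep -A1 lxc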

# Connection tracking tables - List connection tracking entries
$ c0 bpf ct list global
ICMP OUT 192.168.10.10:40200 -> 192.168.10.101:0 expires=3316 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=3256 TxFlagsSeen=0x00 LastTxReport=3256 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0
UDP OUT 192.168.10.10:60726 -> 169.254.169.123:123 expires=3560 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=3500 TxFlagsSeen=0x00 LastTxReport=3500 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0
UDP OUT 192.168.10.10:55677 -> 192.168.0.2:53 expires=3340 Packets=0 Bytes=0 RxFlagsSeen=0x00 LastRxReport=3280 TxFlagsSeen=0x00 LastTxReport=3280 Flags=0x0000 [ ] RevNAT=0 SourceSecurityID=0 IfIndex=0 BackendID=0

# Flush all NAT mapping entries
$ c0 bpf nat flush
Flushed 132 entries from /sys/fs/bpf/tc/globals/cilium_snat_v4_external

# List all NAT mapping entries
$ c0 bpf nat list

# Check the service list; Frontend is the Service IP, Backend is the Pod (endpoint) IP.
$ c0 service list
ID   Frontend           Service Type   Backend
1    10.10.0.1:443      ClusterIP      1 => 192.168.10.10:6443 (active)
2    10.10.92.207:443   ClusterIP      1 => 192.168.10.10:4244 (active)
3    10.10.212.194:80   ClusterIP      1 => 172.16.1.142:4245 (active)
4    10.10.238.206:80   ClusterIP      1 => 172.16.1.218:8081 (active)
5    10.10.0.10:53      ClusterIP      1 => 172.16.1.20:53 (active)
                                       2 => 172.16.1.22:53 (active)

# List load-balancing configuration
$ c0 bpf lb list
SERVICE ADDRESS        BACKEND ADDRESS (REVNAT_ID) (SLOT)
10.10.238.206:80 (0)   0.0.0.0:0 (4) (0) [ClusterIP, non-routable]
10.10.92.207:443 (1)   192.168.10.10:4244 (2) (1)
10.10.0.10:9153 (1)    172.16.1.20:9153 (6) (1)
10.10.238.206:80 (1)   172.16.1.218:8081 (4) (1)
10.10.0.10:9153 (0)    0.0.0.0:0 (6) (0) [ClusterIP, non-routable]
...

# List reverse NAT entries
$ c0 bpf lb list --revnat
ID   BACKEND ADDRESS (REVNAT_ID) (SLOT)
1    10.10.0.1:443
4    10.10.238.206:80
5    10.10.0.10:53

# List all open BPF maps
$ c0 map list
Name                       Num entries   Num errors   Cache enabled
cilium_ipcache             14            0            true
cilium_lb4_reverse_sk      0             0            true
cilium_lb4_backends_v3     2             0            true
cilium_lb4_reverse_nat     6             0            true
cilium_lb_affinity_match   0             0            true
cilium_policy_01969        2             0            true
cilium_lxc                 1             0            true
cilium_lb4_services_v2     14            0            true
cilium_ipmasq_v4           0             0            true

$ c0 map get cilium_lxc
Key              Value                                                                                            State   Error
172.16.0.232:0   id=2335  sec_id=4     flags=0x0000 ifindex=10  mac=1E:5E:65:7A:AA:5C nodemac=DE:07:06:36:98:5C   sync

$ c0 map get cilium_ipcache
Key                 Value                                                                     State   Error
192.168.10.10/32    identity=1 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>              sync
172.16.1.20/32      identity=21890 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>   sync
172.16.1.198/32     identity=4 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>       sync
0.0.0.0/0           identity=2 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>              sync
172.16.0.193/32     identity=1 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>              sync
172.16.1.22/32      identity=21890 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>   sync
172.16.2.238/32     identity=6 encryptkey=0 tunnelendpoint=192.168.10.101, flags=<none>       sync
192.168.10.101/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>              sync
192.168.10.102/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>              sync
172.16.1.142/32     identity=64108 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>   sync
172.16.1.218/32     identity=54953 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>   sync
172.16.2.56/32      identity=4 encryptkey=0 tunnelendpoint=192.168.10.101, flags=<none>       sync
172.16.0.232/32     identity=4 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>              sync
172.16.1.48/32      identity=6 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>       sync

# cilium monitor
$ c0 monitor -v --type l7
Listening for events on 4 CPUs with 64x4096 of shared memory
Press Ctrl-C to quit
CPU 02: [pre-xlate-rev] cgroup_id: 6866 sock_cookie: 11976, dst [192.168.10.10]:52006 tcp
CPU 02: [pre-xlate-rev] cgroup_id: 6866 sock_cookie: 11976, dst [192.168.10.10]:52006 tcp
CPU 02: [pre-xlate-rev] cgroup_id: 6866 sock_cookie: 11976, dst [192.168.10.10]:52006 tcp
CPU 03: [pre-xlate-rev] cgroup_id: 7024 sock_cookie: 3693, dst [192.168.10.10]:6443 tcp
CPU 03: [pre-xlate-rev] cgroup_id: 6866 sock_cookie: 11976, dst [192.168.10.10]:52006 tcp
CPU 03: [pre-xlate-rev] cgroup_id: 6866 sock_cookie: 11976, dst [192.168.10.10]:52006 tcp
CPU 02: [pre-xlate-rev] cgroup_id: 7024 sock_cookie: 11977, dst [127.0.0.1]:2381 tcp

# Cilium will automatically mount cgroup v2 filesystem required to attach BPF cgroup programs by default at the path /run/cilium/cgroupv2
$ mount | grep cilium
none on /run/cilium/cgroupv2 type cgroup2 (rw,relatime)

$ tree /run/cilium/cgroupv2/ -L 1
/run/cilium/cgroupv2/
├── cgroup.controllers
├── cgroup.max.depth
├── cgroup.max.descendants
├── cgroup.pressure
├── cgroup.procs
├── cgroup.stat
├── cgroup.subtree_control
├── cgroup.threads
├── cpu.pressure
├── cpu.stat
├── cpu.stat.local
├── cpuset.cpus.effective

# Check the CNI plugin
$ tree /etc/cni/net.d/
/etc/cni/net.d/
└── 05-cilium.conflist
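
The conflist itself can be inspected to confirm that cilium-cni is the configured plugin (assuming jq is installed):

$ cat /etc/cni/net.d/05-cilium.conflist | jq '.plugins[].type'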

# Manage IP addresses and associated information - IP List
$ c0 ip list
IP                  IDENTITY                                                                     SOURCE
0.0.0.0/0           reserved:world
172.16.0.193/32     reserved:host
                    reserved:kube-apiserver
172.16.0.232/32     reserved:health
172.16.1.20/32      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   custom-resource
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=coredns
                    k8s:io.kubernetes.pod.namespace=kube-system
                    k8s:k8s-app=kube-dns

# IDENTITY: 1 (host), 2 (world), 4 (health), 6 (remote-node); each pod appears to get its own individual ID.
$ c0 ip list -n
IP                  IDENTITY   SOURCE
0.0.0.0/0           2
172.16.0.193/32     1
172.16.0.232/32     4
172.16.1.20/32      21890      custom-resource
172.16.1.22/32      21890      custom-resource
172.16.1.48/32      6
172.16.1.142/32     64108      custom-resource
172.16.1.198/32     4
172.16.1.218/32     54953      custom-resource
172.16.2.56/32      4
172.16.2.238/32     6
192.168.10.10/32    1
192.168.10.101/32   6
192.168.10.102/32   6

# Show bpf filesystem mount details
$ c0 bpf fs show
MountID:          1322
ParentID:         1311
Mounted State:    true
MountPoint:       /sys/fs/bpf
MountOptions:     rw,nosuid,nodev,noexec,relatime
OptionFields:     [master:11]
FilesystemType:   bpf
MountSource:      bpf
SuperOptions:     rw,mode=700

# Check the bpf mount directory
$ tree /sys/fs/bpf
/sys/fs/bpf
├── cilium
│   ├── devices
│   │   ├── cilium_host
│   │   │   └── links
│   │   │       ├── cil_from_host
│   │   │       └── cil_to_host
│   │   ├── cilium_net
│   │   │   └── links
│   │   │       └── cil_to_host
│   │   └── ens5
│   │       └── links
│   │           ├── cil_from_netdev
│   │           └── cil_to_netdev
...
# List contents of a policy BPF map : Dump all policy maps
$ c0 bpf policy get --all
Endpoint ID: 1969
Path: /sys/fs/bpf/tc/globals/cilium_policy_01969

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    0       0         0
Allow    Egress      reserved:unknown              ANY          NONE         disabled    0       0         0

Endpoint ID: 2335
Path: /sys/fs/bpf/tc/globals/cilium_policy_02335

POLICY   DIRECTION   LABELS (source:key[=value])   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     reserved:unknown              ANY          NONE         disabled    26034   325       0
Allow    Ingress     reserved:host                 ANY          NONE         disabled    18459   209       0
                     reserved:kube-apiserver
Allow    Egress      reserved:unknown              ANY          NONE         disabled    0       0         0

$ c0 bpf policy get --all -n
Endpoint ID: 1969
Path: /sys/fs/bpf/tc/globals/cilium_policy_01969

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     0          ANY          NONE         disabled    0       0         0
Allow    Egress      0          ANY          NONE         disabled    0       0         0

Endpoint ID: 2335
Path: /sys/fs/bpf/tc/globals/cilium_policy_02335

POLICY   DIRECTION   IDENTITY   PORT/PROTO   PROXY PORT   AUTH TYPE   BYTES   PACKETS   PREFIX
Allow    Ingress     0          ANY          NONE         disabled    26298   329       0
Allow    Ingress     1          ANY          NONE         disabled    18591   211       0
Allow    Egress      0          ANY          NONE         disabled    0       0         0

# BPF datapath traffic metrics
$ c0 bpf metrics list
REASON                                    DIRECTION   PACKETS   BYTES      LINE   FILE
Interface                                 INGRESS     54576     60225274   1132   bpf_host.c
Interface                                 INGRESS     744       59283      2306   bpf_lxc.c
Success                                   EGRESS      19        1330       1713   bpf_host.c
Success                                   EGRESS      456       39591      1309   bpf_lxc.c
Success                                   EGRESS      51480     25364588   235    trace.h
Success                                   EGRESS      799       61918      1258   bpf_lxc.c
Success                                   EGRESS      82        3556       1598   bpf_host.c
Success                                   INGRESS     1525      126861     235    trace.h
Success                                   INGRESS     594       52423      2111   bpf_lxc.c
Success                                   INGRESS     931       74438      86     l3.h
Unsupported L3 protocol                   INGRESS     20        1400       2389   bpf_lxc.c
Unsupported protocol for NAT masquerade   EGRESS      6         564        3192   nodeport.h

# Manage the IPCache mappings for IP/CIDR <-> Identity
$ c0 bpf ipcache list
IP PREFIX/ADDRESS   IDENTITY
172.16.1.218/32     identity=54953 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>
192.168.10.10/32    identity=1 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
192.168.10.101/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
172.16.0.232/32     identity=4 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
172.16.1.22/32      identity=21890 encryptkey=0 tunnelendpoint=192.168.10.102, flags=<none>
172.16.2.56/32      identity=4 encryptkey=0 tunnelendpoint=192.168.10.101, flags=<none>
172.16.2.238/32     identity=6 encryptkey=0 tunnelendpoint=192.168.10.101, flags=<none>
192.168.10.102/32   identity=6 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>
0.0.0.0/0           identity=2 encryptkey=0 tunnelendpoint=0.0.0.0, flags=<none>

# Manage compiled BPF template objects
$ c0 bpf sha list
Datapath SHA                                                       Endpoint(s)
db7b209ea985a270a43402220ed3606963497108fa331adff4e19903e811241d   2335
f6761dd1721e69ff57d549a26da90ca45df842a30f3402de89774b989419ad60   1969

# Get datapath SHA header
$ c0 bpf sha get db7b209ea985a270a43402220ed3606963497108fa331adff4e19903e811241d
#include "lib/utils.h"

DEFINE_IPV6(LXC_IP, 0x20, 0x1, 0xdb, 0x8, 0xb, 0xad, 0xca, 0xfe, 0x60, 0xd, 0xbe, 0xe2, 0xb, 0xad, 0xca, 0xfe);
#define LXC_IP_V
DEFINE_U32(LXC_IPV4, 0x030200c0);    /* 50462912 */
#define LXC_IPV4 fetch_u32(LXC_IPV4)
DEFINE_U16(LXC_ID, 0xffff);    /* 65535 */
#define LXC_ID fetch_u16(LXC_ID)
DEFINE_MAC(THIS_INTERFACE_MAC, 0x2, 0x0, 0x60, 0xd, 0xf0, 0xd);
#define THIS_INTERFACE_MAC fetch_mac(THIS_INTERFACE_MAC)
DEFINE_U32(THIS_INTERFACE_IFINDEX, 0xffffffff);    /* 4294967295 */
#define THIS_INTERFACE_IFINDEX fetch_u32(THIS_INTERFACE_IFINDEX)
...

# Retrieve information about an identity
$ c0 identity list
ID      LABELS
1       reserved:host
        reserved:kube-apiserver
2       reserved:world
3       reserved:unmanaged
4       reserved:health
5       reserved:init
6       reserved:remote-node
7       reserved:kube-apiserver
        reserved:remote-node
8       reserved:ingress
9       reserved:world-ipv4
10      reserved:world-ipv6
21890   k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system
        k8s:io.cilium.k8s.policy.cluster=default
        k8s:io.cilium.k8s.policy.serviceaccount=coredns
        k8s:io.kubernetes.pod.namespace=kube-system

# IDs by endpoint
$ c0 identity list --endpoints
ID   LABELS                                                        REFCOUNT
1    k8s:node-role.kubernetes.io/control-plane                     1
     k8s:node.kubernetes.io/exclude-from-external-load-balancers
     reserved:host
4    reserved:health                                               1
  1. Verifying Pod-to-Pod communication across nodes
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: netpod
  labels:
    app: netpod
spec:
  nodeName: k8s-s
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Pod
metadata:
  name: webpod1
  labels:
    app: webpod
spec:
  nodeName: k8s-w1
  containers:
  - name: container
    image: traefik/whoami
  terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Pod
metadata:
  name: webpod2
  labels:
    app: webpod
spec:
  nodeName: k8s-w2
  containers:
  - name: container
    image: traefik/whoami
  terminationGracePeriodSeconds: 0
EOF
pod/netpod created
pod/webpod1 created
pod/webpod2 created

# Verify
$ kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
netpod    1/1     Running   0          23s   172.16.0.140   k8s-s    <none>           <none>
webpod1   1/1     Running   0          23s   172.16.2.158   k8s-w1   <none>           <none>
webpod2   1/1     Running   0          22s   172.16.1.98    k8s-w2   <none>           <none>

$ c0 status --verbose | grep Allocated -A5
Allocated addresses:
  172.16.0.140 (default/netpod)
  172.16.0.193 (router)
  172.16.0.232 (health)
IPv4 BIG TCP:           Disabled
IPv6 BIG TCP:           Disabled

$ kubectl get ciliumendpoints
NAME      SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
netpod    6704                ready            172.16.0.140
webpod1   38058               ready            172.16.2.158
webpod2   38058               ready            172.16.1.98

$ kubectl get ciliumendpoints -A
NAMESPACE     NAME                           SECURITY IDENTITY   ENDPOINT STATE   IPV4           IPV6
default       netpod                         6704                ready            172.16.0.140
default       webpod1                        38058               ready            172.16.2.158
default       webpod2                        38058               ready            172.16.1.98
kube-system   coredns-55cb58b774-s7d25       21890               ready            172.16.1.22
kube-system   coredns-55cb58b774-v4gj8       21890               ready            172.16.1.20
kube-system   hubble-relay-88f7f89d4-ncvqw   64108               ready            172.16.1.142
kube-system   hubble-ui-59bb4cb67b-kvf88     54953               ready            172.16.1.218

$ c0 endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
59         Disabled           Disabled          6704       k8s:app=netpod                                                                  172.16.0.140   ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
1969       Disabled           Disabled          1          k8s:node-role.kubernetes.io/control-plane                                                      ready
                                                           k8s:node.kubernetes.io/exclude-from-external-load-balancers
                                                           reserved:host
2335       Disabled           Disabled          4          reserved:health

$ c0 bpf endpoint list
IP ADDRESS        LOCAL ENDPOINT INFO
192.168.10.10:0   (localhost)
172.16.0.232:0    id=2335  sec_id=4     flags=0x0000 ifindex=10  mac=1E:5E:65:7A:AA:5C nodemac=DE:07:06:36:98:5C
172.16.0.193:0    (localhost)
172.16.0.140:0    id=59    sec_id=6704  flags=0x0000 ifindex=12  mac=52:4F:4C:2B:11:31 nodemac=BA:DE:4D:6E:08:94

$ c0 map get cilium_lxc
Key              Value                                                                                            State   Error
172.16.0.232:0   id=2335  sec_id=4     flags=0x0000 ifindex=10  mac=1E:5E:65:7A:AA:5C nodemac=DE:07:06:36:98:5C   sync
172.16.0.140:0   id=59    sec_id=6704  flags=0x0000 ifindex=12  mac=52:4F:4C:2B:11:31 nodemac=BA:DE:4D:6E:08:94   sync

$ c0 ip list
IP                  IDENTITY                                                                     SOURCE
0.0.0.0/0           reserved:world
172.16.0.140/32     k8s:app=netpod                                                               custom-resource
                    k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                    k8s:io.cilium.k8s.policy.cluster=default
                    k8s:io.cilium.k8s.policy.serviceaccount=default
                    k8s:io.kubernetes.pod.namespace=default
  • Setting pod variables
    • For the additional tests, define the following variables and aliases; a quick connectivity check using them follows the alias block below.
# Test pod IPs
NETPODIP=$(kubectl get pods netpod -o jsonpath='{.status.podIP}')
WEBPOD1IP=$(kubectl get pods webpod1 -o jsonpath='{.status.podIP}')
WEBPOD2IP=$(kubectl get pods webpod2 -o jsonpath='{.status.podIP}')

# Define shortcut aliases
alias p0="kubectl exec -it netpod  -- "
alias p1="kubectl exec -it webpod1 -- "
alias p2="kubectl exec -it webpod2 -- "
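
# A quick cross-node connectivity check using the variables and aliases above. A minimal sketch:
# netshoot provides ping and curl, and the whoami pods answer HTTP on port 80.
$ p0 ping -c 2 $WEBPOD1IP
$ p0 curl -s $WEBPOD1IP
$ p0 curl -s $WEBPOD2IP
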
  1. Verifying Service communication
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: svc
spec:
  ports:
    - name: svc-webport
      port: 80
      targetPort: 80
  selector:
    app: webpod
  type: ClusterIP
EOF
service/svc created

# Verify the Service was created
$ kubectl get svc,ep svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/svc   ClusterIP   10.10.207.230   <none>        80/TCP    19s

NAME            ENDPOINTS                        AGE
endpoints/svc   172.16.1.98:80,172.16.2.158:80   19s

# No KUBE-SVC iptables rules are created on the node anymore!
$ iptables-save | grep KUBE-SVC
$ iptables-save | grep CILIUM
:CILIUM_POST_mangle - [0:0]
:CILIUM_PRE_mangle - [0:0]
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle ! -o lo -m socket --transparent -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 34213 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 34213 --on-ip 127.0.0.1 --tproxy-mark 0x200/0xffffffff
...

# Store the Service IP in a variable
$ SVCIP=$(kubectl get svc svc -o jsonpath='{.spec.clusterIP}')

# Generate traffic from the pod to the Service (ClusterIP)
$ kubectl exec netpod -- curl -s $SVCIP
Hostname: webpod2
IP: 127.0.0.1
IP: ::1
IP: 172.16.1.98
IP: fe80::2433:a3ff:fea1:94e3
RemoteAddr: 172.16.0.140:40934
GET / HTTP/1.1
Host: 10.10.207.230
User-Agent: curl/8.7.1
Accept: */*

$ kubectl exec netpod -- curl -s $SVCIP | grep Hostname
Hostname: webpod1

# Generate continuous traffic
$ SVCIP=$(kubectl get svc svc -o jsonpath='{.spec.clusterIP}')
$ while true; do kubectl exec netpod -- curl -s $SVCIP | grep Hostname;echo "-----";sleep 1;done
Hostname: webpod1
-----
Hostname: webpod1
-----
Hostname: webpod1
-----
Hostname: webpod1
-----
Hostname: webpod2

# Capture with tcpdump inside the pod while it accesses the SVC (ClusterIP) >> even though this is a capture from inside the pod, the Service IP (10.10.207.230) never appears; only the DNAT-ed web-pod IP shows up. Magic!
$ kubectl exec netpod -- tcpdump -enni any -q
10:40:04.028223 eth0  Out ifindex 11 52:4f:4c:2b:11:31 172.16.0.140.42498 > 172.16.1.98.80: tcp 0
10:40:05.210559 eth0  Out ifindex 11 52:4f:4c:2b:11:31 172.16.0.140.42512 > 172.16.1.98.80: tcp 0
10:40:05.211208 eth0  In  ifindex 11 ba:de:4d:6e:08:94 172.16.1.98.80 > 172.16.0.140.42512: tcp 0
10:40:05.211265 eth0  Out ifindex 11 52:4f:4c:2b:11:31 172.16.0.140.42512 > 172.16.1.98.80: tcp 0
10:40:05.211620 eth0  Out ifindex 11 52:4f:4c:2b:11:31 172.16.0.140.42512 > 172.16.1.98.80: tcp 76
10:40:05.212000 eth0  In  ifindex 11 ba:de:4d:6e:08:94 172.16.1.98.80 > 172.16.0.140.42512: tcp 0
10:40:05.212339 eth0  In  ifindex 11 ba:de:4d:6e:08:94 172.16.1.98.80 > 172.16.0.140.42512: tcp 311...
...

$ kubectl exec netpod -- sh -c "ngrep -tW byline -d eth0 '' 'tcp port 80'"
interface: eth0 (172.16.0.140/255.255.255.255)
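
# Why the ClusterIP never appears on the wire: with kubeProxyReplacement the service address is
# translated at the socket layer (at connect() time), before any packet leaves the pod.
# A hedged way to confirm that socket-level LB is active; the section name/format may differ by version:
$ c0 status --verbose | grep -A8 "KubeProxyReplacement Details"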

# Check service information
$ c0 service list
ID   Frontend           Service Type   Backend
1    10.10.0.1:443      ClusterIP      1 => 192.168.10.10:6443 (active)
2    10.10.92.207:443   ClusterIP      1 => 192.168.10.10:4244 (active)
3    10.10.212.194:80   ClusterIP      1 => 172.16.1.142:4245 (active)
4    10.10.238.206:80   ClusterIP      1 => 172.16.1.218:8081 (active)
5    10.10.0.10:53      ClusterIP      1 => 172.16.1.20:53 (active)
                                       2 => 172.16.1.22:53 (active)

$ c0 bpf lb list
SERVICE ADDRESS        BACKEND ADDRESS (REVNAT_ID) (SLOT)
10.10.0.10:53 (2)      172.16.1.22:53 (5) (2)
10.10.238.206:80 (0)   0.0.0.0:0 (4) (0) [ClusterIP, non-routable]
10.10.207.230:80 (2)   172.16.1.98:80 (7) (2)
10.10.212.194:80 (0)   0.0.0.0:0 (3) (0) [ClusterIP, non-routable]
10.10.238.206:80 (1)   172.16.1.218:8081 (4) (1)
10.10.92.207:443 (0)   0.0.0.0:0 (2) (0) [ClusterIP, InternalLocal, non-routable]
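
# The (REVNAT_ID) column ties each backend to a reverse-NAT entry used to rewrite reply traffic back to
# the service address. Those entries can be listed as well; a sketch, assuming the `--revnat` flag exists
# in this CLI version:
$ c0 bpf lb list --revnat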
  • System call tracing with strace
$ kubectl exec netpod -- strace -c curl -s $SVCIP
Hostname: webpod1
IP: 127.0.0.1
IP: ::1
IP: 172.16.2.158
IP: fe80::444c:61ff:fe74:4bc9
RemoteAddr: 172.16.0.140:50064
GET / HTTP/1.1
Host: 10.10.207.230
User-Agent: curl/8.7.1
Accept: */*

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 22.83    0.001399          17        79           mmap
 14.34    0.000879          15        56        32 open
 12.30    0.000754          24        31           lseek
  9.38    0.000575          21        27           close
  9.24    0.000566          18        31           rt_sigaction
  6.36    0.000390          14        27           munmap
  5.52    0.000338          24        14           mprotect
  5.25    0.000322          14        23           fcntl
  3.35    0.000205          17        12           rt_sigprocmask

$ kubectl exec netpod -- strace -s 65535 -f -tt curl -s $SVCIP
10:43:36.182458 execve("/usr/bin/curl", ["curl", "-s", "10.10.207.230"], 0x7ffd68956250 /* 11 vars */) = 0
10:43:36.183273 arch_prctl(ARCH_SET_FS, 0x763aa7c9ab28) = 0
10:43:36.184236 set_tid_address(0x763aa7c9af90) = 233
10:43:36.184646 brk(NULL)               = 0x55b2b9130000
10:43:36.184785 brk(0x55b2b9132000)     = 0x55b2b9132000
10:43:36.185224 mmap(0x55b2b9130000, 4096, PROT_NONE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x55b2b9130000

$ kubectl exec netpod -- strace -e trace=connect curl -s $SVCIP
connect(5, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.10.207.230")}, 16) = -1 EINPROGRESS (Operation in progress)
Hostname: webpod2
IP: 127.0.0.1
IP: ::1
IP: 172.16.1.98
IP: fe80::2433:a3ff:fea1:94e3
RemoteAddr: 172.16.0.140:56904
GET / HTTP/1.1
Host: 10.10.207.230
User-Agent: curl/8.7.1
Accept: */*

+++ exited with 0 +++  

$ kubectl exec netpod -- strace -e trace=getsockname curl -s $SVCIP
getsockname(5, {sa_family=AF_INET, sin_port=htons(49438), sin_addr=inet_addr("172.16.0.140")}, [128 => 16]) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(49438), sin_addr=inet_addr("172.16.0.140")}, [128 => 16]) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(49438), sin_addr=inet_addr("172.16.0.140")}, [128 => 16]) = 0
getsockname(5, {sa_family=AF_INET, sin_port=htons(49438), sin_addr=inet_addr("172.16.0.140")}, [128 => 16]) = 0
Hostname: webpod2
IP: 127.0.0.1
IP: ::1
IP: 172.16.1.98
IP: fe80::2433:a3ff:fea1:94e3
RemoteAddr: 172.16.0.140:49438
GET / HTTP/1.1
Host: 10.10.207.230
User-Agent: curl/8.7.1
Accept: */*

+++ exited with 0 +++
  1. Running Prometheus & Grafana
  • With Prometheus you can easily collect metrics and visualize them in Grafana.
# Deploy
$ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.16.3/examples/kubernetes/addons/prometheus/monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
configmap/grafana-cilium-dashboard created
configmap/grafana-cilium-operator-dashboard created
configmap/grafana-hubble-dashboard created
configmap/grafana-hubble-l7-http-metrics-by-workload created
configmap/prometheus created
clusterrole.rbac.authorization.k8s.io/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
service/grafana created
service/prometheus created
deployment.apps/grafana created
deployment.apps/prometheus created

$ kubectl get all -n cilium-monitoring
NAME                              READY   STATUS              RESTARTS   AGE
pod/grafana-65d4578dc4-qbrvd      0/1     Running             0          17s
pod/prometheus-7cc8784659-b8wmz   0/1     ContainerCreating   0          17s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/grafana      ClusterIP   10.10.229.131   <none>        3000/TCP   17s
service/prometheus   ClusterIP   10.10.119.132   <none>        9090/TCP   17s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana      0/1     1            0           17s
deployment.apps/prometheus   0/1     1            0           17s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-65d4578dc4      1         1         0       17s
replicaset.apps/prometheus-7cc8784659   1         1         0       17s

# Check the pods and services
$ kubectl get pod,svc,ep -o wide -n cilium-monitoring
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
pod/grafana-65d4578dc4-qbrvd      1/1     Running   0          27s   172.16.2.199   k8s-w1   <none>           <none>
pod/prometheus-7cc8784659-b8wmz   1/1     Running   0          27s   172.16.2.20    k8s-w1   <none>           <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/grafana      ClusterIP   10.10.229.131   <none>        3000/TCP   27s   app=grafana
service/prometheus   ClusterIP   10.10.119.132   <none>        9090/TCP   27s   app=prometheus

NAME                   ENDPOINTS           AGE
endpoints/grafana      172.16.2.199:3000   27s
endpoints/prometheus   172.16.2.20:9090    27s

# Switch the services to NodePort
$ kubectl patch svc grafana -n cilium-monitoring -p '{"spec": {"type": "NodePort"}}'
service/grafana patched

$ kubectl patch svc prometheus -n cilium-monitoring -p '{"spec": {"type": "NodePort"}}'
service/prometheus patched

# Grafana web access
$ GPT=$(kubectl get svc -n cilium-monitoring grafana -o jsonpath={.spec.ports[0].nodePort})
$ echo -e "Grafana URL = http://$(curl -s ipinfo.io/ip):$GPT"
Grafana URL = http://43.203.253.27:30672

# Check the Prometheus web access info
$ PPT=$(kubectl get svc -n cilium-monitoring prometheus -o jsonpath={.spec.ports[0].nodePort})
$ echo -e "Prometheus URL = http://$(curl -s ipinfo.io/ip):$PPT"
Prometheus URL = http://43.203.253.27:32157
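
# The Helm values above also exposed agent and Hubble metrics. Assuming the default ports (9962 for the
# cilium agent, 9965 for Hubble metrics; both reachable on the node IP since the agent runs with
# hostNetwork), a quick scrape check could look like this. A sketch, not verified output:
$ curl -s 192.168.10.10:9962/metrics | grep -c "^cilium_"
$ curl -s 192.168.10.10:9965/metrics | grep hubble_flows | head -3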
  1. Bandwidth Manager
  • Provides the ability to manage and limit network bandwidth inside the Kubernetes cluster; the limit is declared per pod via the kubernetes.io/egress-bandwidth annotation (see the sketch right after this list).
  • This makes it possible to manage the cluster's network resources efficiently.
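  • A minimal sketch of that per-pod declaration: since the limit is just an annotation, it could also be set on an already-running pod (here <pod-name> is a placeholder; the full manifest used in this lab follows further below).
$ kubectl annotate pod <pod-name> kubernetes.io/egress-bandwidth=10M --overwrite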
# Check the tc qdisc on the interface
$ tc qdisc show dev ens5
qdisc mq 0: root
qdisc fq_codel 0: parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc fq_codel 0: parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc fq_codel 0: parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64

# Enable it
$ helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values --set bandwidthManager.enabled=true
Release "cilium" has been upgraded. Happy Helming!
NAME: cilium
LAST DEPLOYED: Wed Oct 23 19:51:14 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 5
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble Relay and Hubble UI.

Your release version is 1.16.3.

For any further help, visit https://docs.cilium.io/en/v1.16/gettinghelp

# Confirm it was applied
$ cilium config view | grep bandwidth
enable-bandwidth-manager                       true

# Check which interface the egress bandwidth limitation operates on
$ c0 status | grep  BandwidthManager
BandwidthManager:        EDT with BPF [CUBIC] [ens5]

# Check the tc qdisc on the interface again: compared to before enabling, fq replaces fq_codel and quite a few options are added
$ tc qdisc
qdisc noqueue 0: dev lo root refcnt 2
qdisc mq 8002: dev ens5 root
qdisc fq 8005: dev ens5 parent 8002:2 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop
qdisc fq 8003: dev ens5 parent 8002:4 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop
qdisc fq 8004: dev ens5 parent 8002:3 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop
qdisc fq 8006: dev ens5 parent 8002:1 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop
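
# While the netperf tests below are running, the fq statistics (e.g. throttled packet counters) can be
# watched as well; a sketch using plain tc:
$ tc -s qdisc show dev ens5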

# Create server/client pods to generate test traffic
$ cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    # Limits egress bandwidth to 10Mbit/s.
    kubernetes.io/egress-bandwidth: "10M"
  labels:
    # This pod will act as server.
    app.kubernetes.io/name: netperf-server
  name: netperf-server
spec:
  containers:
  - name: netperf
    image: cilium/netperf
    ports:
    - containerPort: 12865
---
apiVersion: v1
kind: Pod
metadata:
  # This Pod will act as client.
  name: netperf-client
spec:
  affinity:
    # Prevents the client from being scheduled to the
    # same node as the server.
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - netperf-server
        topologyKey: kubernetes.io/hostname
  containers:
  - name: netperf
    args:
    - sleep
    - infinity
    image: cilium/netperf
EOF
pod/netperf-server created
pod/netperf-client created

$ kubectl describe pod netperf-server | grep Annotations:
Annotations:      kubernetes.io/egress-bandwidth: 10M

$ c1 bpf bandwidth list
IDENTITY   EGRESS BANDWIDTH (BitsPerSec)
1418       10M

$ c2 bpf bandwidth list
No entries found.

$ c1 endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
238        Disabled           Disabled          38058      k8s:app=webpod                                                                            172.16.2.158   ready
                                                           k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=default
                                                           k8s:io.kubernetes.pod.namespace=default
345        Disabled           Disabled          1          reserved:host

$ c2 endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6   IPv4           STATUS
           ENFORCEMENT        ENFORCEMENT
220        Disabled           Disabled          21890      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.16.1.22    ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns
486        Disabled           Disabled          21890      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system          172.16.1.20    ready
                                                           k8s:io.cilium.k8s.policy.cluster=default
                                                           k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                           k8s:io.kubernetes.pod.namespace=kube-system
                                                           k8s:k8s-app=kube-dns


$ NETPERF_SERVER_IP=$(kubectl get pod netperf-server -o jsonpath='{.status.podIP}')
$ kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "${NETPERF_SERVER_IP}"
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.2.154 (172.16.) port 0 AF_INET

$ kubectl get pod netperf-server -o json | sed -e 's|10M|5M|g' | kubectl apply -f -
pod/netperf-server configured

$ c1 bpf bandwidth list
IDENTITY   EGRESS BANDWIDTH (BitsPerSec)
1418       5M

$ c2 bpf bandwidth list
No entries found.

$ kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "${NETPERF_SERVER_IP}"
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.2.154 (172.16.) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    10.01      4.73

$ kubectl get pod netperf-server -o json | sed -e 's|5M|20M|g' | kubectl apply -f -
pod/netperf-server configured

$ kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "${NETPERF_SERVER_IP}"
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 172.16.2.154 (172.16.) port 0 AF_INET
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    10.01      19.80

$ tc qdisc show dev ens5
qdisc mq 8002: root
qdisc fq 8005: parent 8002:2 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop
qdisc fq 8003: parent 8002:4 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop
qdisc fq 8004: parent 8002:3 limit 10000p flow_limit 100p buckets 32768 orphan_mask 1023 quantum 18030b initial_quantum 90150b low_rate_threshold 550Kbit refill_delay 40ms timer_slack 10us horizon 2s horizon_drop

$ kubectl delete pod netperf-client netperf-server
  1. LoadBalancer IP Address Management (LB IPAM)
  • A feature for managing the IP addresses of LoadBalancer Services in a Kubernetes cluster: using eBPF it provides native load balancing without kube-proxy, and it handles the assignment of external and internal IP addresses efficiently.
# Configure BGP
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 192.168.10.254
        peer-asn: 64513
        my-asn: 64512
    address-pools:
      - name: default
        protocol: bgp
        avoid-buggy-ips: true
        addresses:
          - 172.20.1.0/24
EOF

# Verify the ConfigMap
$ kubectl get cm -n kube-system bgp-config
NAME         DATA   AGE
bgp-config   1      3m1s

# Restart the cilium pods >> re-set the shortcut aliases afterwards
$ kubectl -n kube-system rollout restart ds/cilium
daemonset.apps/cilium restarted

# Verify the settings
$ cilium config view | grep bgp
bgp-announce-lb-ip                             true
bgp-announce-pod-cidr                          true

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    svc: test-lb
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      svc: test-lb
  template:
    metadata:
      labels:
        svc: test-lb
    spec:
      containers:
      - name: web
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
EOF
service/test-lb created
deployment.apps/nginx created
# Verify the Service
$ kubectl get svc,ep test-lb
NAME              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/test-lb   LoadBalancer   10.10.63.18   <pending>     80:30607/TCP   26s

NAME                ENDPOINTS         AGE
endpoints/test-lb   172.16.2.174:80   26s
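
# Once the BGP announcement settles and an address from the 172.20.1.0/24 pool is assigned, the external
# IP can be read directly. A sketch; the actual address depends on the pool allocation:
$ kubectl get svc test-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'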

# [k8s-rtr] Check ECMP routing toward the Service (LB, VIP)
root@k8s-rtr:~# ip -c route
default via 192.168.10.1 dev ens5 proto dhcp src 192.168.10.10 metric 100
172.16.0.140 dev lxc2e4992b6ca2a proto kernel scope link
172.16.0.232 dev lxc_health proto kernel scope link
172.16.1.0/24 via 192.168.10.102 dev ens5 proto kernel
172.16.2.0/24 via 192.168.10.101 dev ens5 proto kernel
192.168.0.2 via 192.168.10.1 dev ens5 proto dhcp src 192.168.10.10 metric 100
192.168.10.0/24 dev ens5 proto kernel scope link src 192.168.10.10 metric 100
192.168.10.1 dev ens5 proto dhcp scope link src 192.168.10.10 metric 100
...

# From k8s-rtr and k8s-pc, verify access to the Service (LB) IP
$ curl -s 172.20.1.1 | grep -o "<title>.*</title>"
<title>Welcome to nginx!</title>
  1. Closing thoughts…
  • Looking into Cilium, I could really feel the power of an eBPF-based networking solution. What impressed me most is how efficiently cluster networking can be managed through network performance optimization and fine-grained security policies.
  • Istio, which I explored in a study group, and Consul, which I have used at work, focus on connecting and securing applications through a service mesh and through service discovery with a key-value store, respectively. Cilium, by contrast, controls Pod-to-Pod communication at the network layer with much finer granularity, and by leveraging eBPF it delivers networking that is more efficient in terms of performance and scalability.
  • Through this study I came to understand the new possibilities Cilium offers more deeply, and I look forward to opportunities to use it in upcoming projects.
