Installing Kubernetes + containerd (CentOS 8)

kmaster · 2022. 5. 9. 12:18

 

Kubernetes 1.24 was released recently. The headline change in 1.24 is the removal of dockershim.

For the full list of changes in Kubernetes 1.24, see:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md

 


 

Let's install Kubernetes 1.24 with containerd as the container runtime. The OS is CentOS 8.

 

[Environment]

Role         hostname
master-node  master
worker-node  node1
worker-node  node2

 

Installation

 

[All nodes]

 

1. Disable SELinux

# dnf -y upgrade
# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

 

2. Turn off swap

# swapoff -a
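Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entries in /etc/fstab should be commented out as well. The sed below is a sketch, demonstrated on a scratch copy at /tmp/fstab.demo (the contents are made up for the demo); on a real node, point the same sed at /etc/fstab with sudo.

```shell
# Build a scratch copy of a typical fstab with one swap entry.
printf '%s\n' \
  '/dev/mapper/cl-root /    xfs  defaults 0 0' \
  '/dev/mapper/cl-swap none swap defaults 0 0' > /tmp/fstab.demo

# Prepend "#" to any line whose fields include "swap"
# (idempotent: an already-commented line keeps a single "#").
sed -i '/[[:space:]]swap[[:space:]]/ s/^#*/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After the sed, only the swap line is commented out; the root filesystem entry is untouched.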

 

3. Kernel modules and sysctl parameters

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Setting the required sysctl parameters here makes them persist across reboots.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

 

4. Install kubelet, kubeadm, and kubectl

# cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

# dnf -y install kubelet kubeadm kubectl --disableexcludes=kubernetes epel-release
# sudo systemctl enable --now kubelet

 

5. Install the container runtime (containerd)

# yum install yum-utils -y
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install containerd.io -y

 

6. Configure containerd

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

 

Set the cgroup driver for the runc runtime to systemd:

# vi /etc/containerd/config.toml
..
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
..
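Instead of editing with vi, the same change can be made non-interactively with sed. A sketch, demonstrated on a scratch copy; on the node, run the sed against /etc/containerd/config.toml (with sudo) after generating the default config as above.

```shell
# Build a scratch copy containing the relevant fragment of the default config.
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = false' > /tmp/containerd-config.demo

# Flip SystemdCgroup from false to true.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.demo
grep SystemdCgroup /tmp/containerd-config.demo
```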

 

7. Restart containerd

# sudo systemctl enable containerd
# sudo systemctl restart containerd

 

[Master]

 

1. Firewall settings (a production machine should open the ports below, but since this is a test server, firewalld is simply disabled)

# firewall-cmd --add-port={80,443,6443,2379,2380,10250,10251,10252,30000-32767}/tcp --permanent
# firewall-cmd --reload

 

To disable firewalld:

# sudo systemctl stop firewalld
# sudo systemctl disable firewalld

 

2. kubeadm init

# kubeadm config print init-defaults > kubeadm-init.yaml

 

Edit the generated kubeadm-init.yaml for your environment (the advertiseAddress and the node name/hostname must also be changed):

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.60.200.121
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

 

Append a KubeletConfiguration so the kubelet also uses the systemd cgroup driver:

# cat << EOF >> kubeadm-init.yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

 

Now start the Kubernetes installation.

# kubeadm init --config=kubeadm-init.yaml

 

Set up the kubeconfig file as follows:

# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if running as root:

# export KUBECONFIG=/etc/kubernetes/admin.conf

 

3. Install the CNI (Calico)

wget https://docs.projectcalico.org/manifests/calico.yaml --no-check-certificate

After downloading, check the pod IP range and adjust it for your environment:

            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
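For example, uncommented and left at Calico's default pod CIDR (replace the value if your cluster uses a different pod network range):

```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
```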

 

After editing, apply calico.yaml:

# kubectl apply -f calico.yaml

 

[Node]

 

1. Firewall settings (a production machine should open the ports below, but since this is a test server, firewalld is simply disabled)

# firewall-cmd --add-port={80,443,10250,30000-32767}/tcp --permanent
# firewall-cmd --reload

 

To disable firewalld:

# sudo systemctl stop firewalld
# sudo systemctl disable firewalld

 

2. join

 

Now run the join command that kubeadm init printed on the master. (If the token has expired, a fresh join command can be generated on the master with kubeadm token create --print-join-command.)

kubeadm join 10.60.200.121:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:b6542a7fadc7c6171096ac222fdbb3b586f7af2f8ccd40212ea7d

 

Once the join completes, check the cluster state from the master:

# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   71s   v1.24.0
node1    Ready    <none>          31s   v1.24.0
node2    Ready    <none>          29s   v1.24.0


# kubectl get pod --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-77484fbbb5-p5jj7   1/1     Running   0          64s
kube-system   calico-node-n575z                          1/1     Running   0          44s
kube-system   calico-node-tpxks                          1/1     Running   0          46s
kube-system   calico-node-vzsx8                          1/1     Running   0          64s
kube-system   coredns-6d4b75cb6d-d7zxz                   1/1     Running   0          77s
kube-system   coredns-6d4b75cb6d-rw4mf                   1/1     Running   0          77s
kube-system   etcd-master                                1/1     Running   0          83s
kube-system   kube-apiserver-master                      1/1     Running   0          84s
kube-system   kube-controller-manager-master             1/1     Running   0          83s
kube-system   kube-proxy-hdtg9                           1/1     Running   0          78s
kube-system   kube-proxy-k2bbj                           1/1     Running   0          46s
kube-system   kube-proxy-pxm6n                           1/1     Running   0          44s
kube-system   kube-scheduler-master                      1/1     Running   0          82s

 

 
