Container series, part 3: installing a Kubernetes cluster with containerd as the runtime

In this exercise we install a Kubernetes cluster using containerd, rather than Docker, as the container runtime.

This lab uses two machines; the topology is shown below:

[Topology diagram: two machines, vms20 and vms21]

System setup

On all machines, turn off swap, disable SELinux, synchronize /etc/hosts, configure the yum repositories, and configure firewalld; setting firewalld's default zone to trusted is sufficient.

[root@vms2X ~]# swapoff -a
[root@vms2X ~]# sed -i '/swap/s/^/#/' /etc/fstab
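The sed rule above comments out every fstab line that mentions swap, so swap stays off after a reboot. A minimal sketch of the same rule run against a throwaway copy (the sample fstab entries below are made up for illustration):

```shell
# Demonstrate the sed rule on a temp copy instead of the real /etc/fstab.
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
# Prefix '#' to any line containing "swap", exactly as done above
sed -i '/swap/s/^/#/' "$tmpfstab"
cat "$tmpfstab"
```

The root filesystem line is left untouched; only the swap line gains the leading `#`.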

[root@vms2X ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.26.20 vms20.rhce.cc vms20
192.168.26.21 vms21.rhce.cc vms21
[root@vms2X ~]#

Installing and configuring containerd

Install containerd on all machines:

[root@vms2X ~]# yum install containerd.io cri-tools -y
... lots of output ...
Dependency Installed:
...
selinux-policy.noarch 0:3.13.1-268.el7_9.2 selinux-policy-targeted.noarch 0:3.13.1-268.el7_9.2
Complete!
[root@vms2X ~]#

Edit containerd's configuration file, /etc/containerd/config.toml, so that it contains the following:

[root@vms2X ~]# cat > /etc/containerd/config.toml <<EOF
disabled_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.registry.mirrors."docker.io"]
endpoint = ["https://frz7i079.mirror.aliyuncs.com"]
[plugins.cri]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
EOF
[root@vms2X ~]#
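In this config, `disabled_plugins` turns off containerd's restart plugin, `shim_debug` enables debug output for the shim, the `mirrors` entry routes docker.io pulls through an Aliyun mirror, and `sandbox_image` replaces the default pause image with one fetched from an Aliyun registry. A quick sanity check of the file's contents, sketched here against a temp copy rather than the real /etc/containerd/config.toml:

```shell
# Write the same TOML to a temp file and confirm the key settings landed.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
disabled_plugins = ["restart"]
[plugins.linux]
shim_debug = true
[plugins.cri.registry.mirrors."docker.io"]
endpoint = ["https://frz7i079.mirror.aliyuncs.com"]
[plugins.cri]
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.2"
EOF
# The mirror endpoint and sandbox image are the two settings most easily mistyped
grep -q 'mirror.aliyuncs.com' "$cfg" && grep -q 'pause:3.2' "$cfg" && echo "config looks complete"
```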

Start containerd and enable it at boot:

[root@vms2X ~]# systemctl enable containerd ; systemctl start containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
[root@vms2X ~]# crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
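The `crictl config` command records the endpoint in /etc/crictl.yaml so that later crictl calls talk to containerd's socket. The resulting file is equivalent to writing it by hand, sketched here against a temp path instead of /etc/crictl.yaml:

```shell
# Manual equivalent of `crictl config runtime-endpoint ...`,
# written to a temp path here rather than /etc/crictl.yaml.
crictl_conf=$(mktemp)
cat > "$crictl_conf" <<'EOF'
runtime-endpoint: unix:///var/run/containerd/containerd.sock
EOF
cat "$crictl_conf"
```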

Setting kernel parameters

On all machines, adjust the kernel parameters and load the required modules.

[root@vms2X ~]# cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
[root@vms2X ~]#
[root@vms2X ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@vms2X ~]#
[root@vms2X ~]# modprobe overlay
[root@vms2X ~]# modprobe br_netfilter
[root@vms2X ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@vms2X ~]#
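The two bridge-nf-call settings make bridged container traffic visible to iptables, which kube-proxy relies on, and ip_forward lets the node route packets between pods. A small sketch that parses a k8s.conf-style fragment (a temp copy here, not the real file) and flags any parameter not set to 1:

```shell
# Parse a copy of the sysctl fragment and fail if any value differs from 1.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Split on " = " and collect any parameter whose value is not 1
bad=$(awk -F' *= *' '$2 != 1 {print $1}' "$conf")
[ -z "$bad" ] && echo "all three parameters are 1"
```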

Installing the Kubernetes packages

Install the Kubernetes packages on all machines.

[root@vms2X ~]# yum install -y kubelet-1.20.1-0 kubeadm-1.20.1-0 kubectl-1.20.1-0 --disableexcludes=kubernetes
[root@vms2X ~]#

Start kubelet and enable it at boot. (Until kubeadm init or join runs, kubelet has no configuration and will restart in a loop; that is expected at this stage.)

[root@vms2X ~]# systemctl restart kubelet ; systemctl enable kubelet
[root@vms2X ~]#

Initializing the Kubernetes cluster

Initialize the cluster on the master, vms20:

[root@vms20 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.20.1 --pod-network-cidr=10.244.0.0/16
... lots of output ...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

... output ...
kubeadm join 192.168.26.20:6443 --token j9chck.b6zc1kbf7u4sagrh \
    --discovery-token-ca-cert-hash sha256:b2370e51e156f34012841c0d74815a9e33bbf11acb92949335822788814a7153
[root@vms20 ~]#
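The --pod-network-cidr flag fixes the address range pods will be assigned, and the network plugin's pool must agree with it. Whether an address falls inside 10.244.0.0/16 can be checked with plain shell arithmetic (illustration only; the sample pod IP below is made up):

```shell
# Convert dotted-quad IPs to integers and test membership in 10.244.0.0/16.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
base=$(ip_to_int 10.244.0.0)
mask=$(( (0xFFFFFFFF << (32 - 16)) & 0xFFFFFFFF ))   # /16 netmask
pod=$(ip_to_int 10.244.1.17)                         # hypothetical pod IP
in_range=$(( (pod & mask) == base ))
echo "in_range=$in_range"
```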

Create the kubeconfig file as instructed:

[root@vms20 ~]# mkdir -p $HOME/.kube
[root@vms20 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@vms20 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@vms20 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@vms20 ~]# kubectl get nodes -o wide
NAME            STATUS     ROLES                  ...   VERSION   CONTAINER-RUNTIME
vms20.rhce.cc   NotReady   control-plane,master   ...   v1.20.1   containerd://1.4.3
[root@vms20 ~]#

Join vms21.rhce.cc to the cluster:

[root@vms21 ~]# kubeadm join 192.168.26.20:6443 --token j9chck.b6zc1kbf7u4sagrh --discovery-token-ca-cert-hash sha256:b2370e51e156f34012841c0d74815a9e33bbf11acb92949335822788814a7153
[preflight] Running pre-flight checks
... lots of output ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@vms21 ~]#

Switch back to vms20:

[root@vms20 ~]# kubectl get nodes -o wide
NAME            STATUS     ROLES                  ...   VERSION   CONTAINER-RUNTIME
vms20.rhce.cc   NotReady   control-plane,master   ...   v1.20.1   containerd://1.4.3
vms21.rhce.cc   NotReady   <none>                 ...   v1.20.1   containerd://1.4.3
[root@vms20 ~]#

Here we can see that vms21 has joined the cluster, but both machines show NotReady because no pod network is installed yet, so the next step is to install the Calico network.
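When scripting an install like this, the NotReady column can also be checked mechanically; here awk scans a hard-coded copy of the `kubectl get nodes` output shown above (sketch only):

```shell
# Count NotReady nodes in sample `kubectl get nodes` output (hard-coded).
output='NAME            STATUS     ROLES                  VERSION
vms20.rhce.cc   NotReady   control-plane,master   v1.20.1
vms21.rhce.cc   NotReady   <none>                 v1.20.1'
# Skip the header row, match rows whose STATUS column is NotReady
notready=$(printf '%s\n' "$output" | awk 'NR > 1 && $2 == "NotReady"' | wc -l)
echo "$notready node(s) NotReady"
```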

Download the images Calico needs on both machines ahead of time, then import them:

[root@vms2X ~]# ctr -n k8s.io i import calico.tar
unpacking docker.io/calico/cni:v3.14.2
... lots of output ...
(sha256:6d6f0bf3ea4194c71f1b35a7ca97974c6f4783406552dc15ebc6c2b1047a5efe)...done
[root@vms2X ~]#

Deploy the Calico network on the master:

[root@vms20 ~]# kubectl apply -f calico.yaml
... lots of output ...
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@vms20 ~]#
[root@vms20 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
vms20.rhce.cc   Ready    control-plane,master   15m     v1.20.1
vms21.rhce.cc   Ready    <none>                 8m50s   v1.20.1
[root@vms20 ~]#

With that, the entire Kubernetes cluster is deployed.
