Upgrading Kubernetes from 1.23.2 to 1.24.1

The previous post covered how to migrate the container runtime from docker to containerd (see "Migrating a Kubernetes environment from docker to containerd").
This post walks through upgrading the cluster from 1.23.2 to 1.24.1 after that migration.

1. Preparation

Update the cri-socket annotation on both nodes. This step really should have been done right after the runtime migration.

[root@vms61 ~]# kubectl edit nodes vms61.rhce.cc
Change the annotation from:
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
to:
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock
[root@vms61 ~]# kubectl edit nodes vms62.rhce.cc
Make the same change for vms62, then restart kubelet on both machines:
[root@vms6X ~]#  systemctl restart kubelet
[root@vms6X ~]#
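
If you prefer not to edit the Node objects interactively, the same annotation can also be set in one step with kubectl annotate (a sketch; run it for each node and make sure the socket path matches your containerd setup):

[root@vms61 ~]# kubectl annotate node vms61.rhce.cc \
  kubeadm.alpha.kubernetes.io/cri-socket=/var/run/containerd/containerd.sock --overwrite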

If the annotation is not updated, the upgrade fails with an error like the one below, because kubeadm still tries to pull the images through the old dockershim socket recorded in the annotation.

    [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1: output: 
    ...output... error: exit status 1
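
Before continuing, it is worth confirming that both nodes now report containerd as their runtime; in the wide node listing the CONTAINER-RUNTIME column should show containerd:// rather than docker:// for each node:

[root@vms61 ~]# kubectl get nodes -o wide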

2. Upgrading the master

1. Install kubeadm 1.24.1 first

[root@vms61 ~]# yum install -y kubeadm-1.24.1-0 --disableexcludes=kubernetes
    ...output...
[root@vms61 ~]# 
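
Optionally, verify the installed kubeadm version and preview the upgrade before applying it; kubeadm upgrade plan lists the versions the cluster can be upgraded to and any components that need attention:

[root@vms61 ~]# kubeadm version
[root@vms61 ~]# kubeadm upgrade plan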

2. Drain node vms61

[root@vms61 ~]# kubectl drain vms61.rhce.cc --ignore-daemonsets
node/vms61.rhce.cc cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-lf6t2, kube-system/kube-proxy-mhr5g
evicting pod kube-system/coredns-6d8c4cb4d-f2vq4
evicting pod kube-system/calico-kube-controllers-78d6f96c7b-wfx5b
evicting pod kube-system/coredns-6d8c4cb4d-clll2
pod/calico-kube-controllers-78d6f96c7b-wfx5b evicted
pod/coredns-6d8c4cb4d-clll2 evicted
pod/coredns-6d8c4cb4d-f2vq4 evicted
node/vms61.rhce.cc drained
[root@vms61 ~]#
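
After the drain, vms61 is cordoned and should show Ready,SchedulingDisabled until it is uncordoned later:

[root@vms61 ~]# kubectl get nodes vms61.rhce.cc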

3. Upgrade the control-plane components

[root@vms61 ~]# kubeadm upgrade apply v1.24.1
    ...output...
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@vms61 ~]# 
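
To double-check that the control-plane static pods are now running 1.24.1 images, you can list their images (a sketch; the custom-columns expression simply prints the first container image of each kube-system pod):

[root@vms61 ~]# kubectl -n kube-system get pods -o custom-columns='NAME:.metadata.name,IMAGE:.spec.containers[0].image' | grep kube-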

4. Upgrade kubelet and kubectl to 1.24.1

[root@vms61 ~]# yum install -y kubelet-1.24.1-0 kubectl-1.24.1-0 --disableexcludes=kubernetes
[root@vms61 ~]#

Update the kubelet arguments. The file currently contains:

[root@vms61 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5"
[root@vms61 ~]#

Change it to:

[root@vms61 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5"
[root@vms61 ~]#

That is, remove the --network-plugin=cni flag; kubelet 1.24 no longer recognizes this dockershim-era flag and will fail to start if it is left in place.
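
Instead of editing the file by hand, the flag can also be stripped with a quick one-liner (a sketch; keep a backup copy in case the substitution does not match your file exactly):

[root@vms61 ~]# cp /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/kubeadm-flags.env.bak
[root@vms61 ~]# sed -i 's/--network-plugin=cni //' /var/lib/kubelet/kubeadm-flags.env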

5. Restart kubelet

[root@vms61 ~]# systemctl daemon-reload;  systemctl restart kubelet
[root@vms61 ~]#
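
If kubelet fails to come back up, its status and recent logs will usually point to a leftover flag or another configuration problem:

[root@vms61 ~]# systemctl is-active kubelet
[root@vms61 ~]# journalctl -u kubelet -n 30 --no-pager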

6. Uncordon vms61

[root@vms61 ~]# kubectl uncordon vms61.rhce.cc 
node/vms61.rhce.cc uncordoned
[root@vms61 ~]#

Check the node status.

[root@vms61 ~]# kubectl get nodes 
NAME            STATUS   ROLES           AGE    VERSION
vms61.rhce.cc   Ready    control-plane   235d   v1.24.1
vms62.rhce.cc   Ready    <none>          235d   v1.23.2
[root@vms61 ~]#

3. Upgrading the worker

1. Install kubeadm 1.24.1 on the worker.

[root@vms62 ~]# yum install -y kubeadm-1.24.1-0 --disableexcludes=kubernetes
[root@vms62 ~]# 

2. On the master (vms61), drain node vms62

[root@vms61 ~]#  kubectl drain vms62.rhce.cc --ignore-daemonsets
node/vms62.rhce.cc cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-hth7g, kube-system/kube-proxy-zdhtx
evicting pod kube-system/coredns-6d8c4cb4d-rzqfm
evicting pod default/web1-665f6b46cb-jgzp8
evicting pod kube-system/calico-kube-controllers-78d6f96c7b-8nfvs
evicting pod kube-system/coredns-6d8c4cb4d-mnjwm
evicting pod default/web1-665f6b46cb-fl9b4
pod/web1-665f6b46cb-jgzp8 evicted
pod/web1-665f6b46cb-fl9b4 evicted
pod/calico-kube-controllers-78d6f96c7b-8nfvs evicted
pod/coredns-6d8c4cb4d-rzqfm evicted
pod/coredns-6d8c4cb4d-mnjwm evicted
node/vms62.rhce.cc drained
[root@vms61 ~]#

3. Switch to vms62 and upgrade the kubelet configuration

[root@vms62 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@vms62 ~]#

Update the kubelet environment file, changing it from:

[root@vms62 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5"
[root@vms62 ~]#

to:

[root@vms62 ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5"
[root@vms62 ~]#

Again, the --network-plugin=cni flag is removed here.
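The cp/sed one-liner shown in the master section can be used here as well instead of editing the file manually.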

4. Upgrade kubelet and kubectl

[root@vms62 ~]#  yum install -y kubelet-1.24.1-0 kubectl-1.24.1-0 --disableexcludes=kubernetes
[root@vms62 ~]# 

5. Restart kubelet

[root@vms62 ~]# systemctl daemon-reload;  systemctl restart kubelet

6. Uncordon vms62

Switch back to vms61 and uncordon vms62:

[root@vms61 ~]# kubectl uncordon vms62.rhce.cc 
node/vms62.rhce.cc uncordoned
[root@vms61 ~]# 

4. Verification

Check the node status:

[root@vms61 ~]# kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
vms61.rhce.cc   Ready    control-plane   235d   v1.24.1
vms62.rhce.cc   Ready    <none>          235d   v1.24.1
[root@vms61 ~]# 

Check the pods:

[root@vms61 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
web1-665f6b46cb-g9kcl   1/1     Running   0          4m46s
web1-665f6b46cb-q8db5   1/1     Running   0          4m47s
[root@vms61 ~]# 
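
It is also worth confirming that the system pods came back cleanly on both nodes after the upgrade:

[root@vms61 ~]# kubectl get pods -n kube-system -o wide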
