Installing Kubernetes 1.23.2 from Binaries

See also: Installing Kubernetes v1.24.1 from binaries

Lab environment

Two machines, vms41 and vms42
OS: CentOS 7.4
vms41 is the master, vms42 is the worker

1. Basic setup

1. Base system configuration
Run the following commands on all nodes (the vms4X prompt below stands for both machines).
Docker is installed manually.

[root@vms4X ~]# sed -i -e '/gpgcheck=1/cgpgcheck=0' -e '/repo_gpgcheck=1/crepo_gpgcheck=0' /etc/yum.repos.d/k8s.repo
[root@vms4X ~]# yum install docker-ce -y

[root@vms4X ~]# systemctl enable docker --now
[root@vms4X ~]#  cat > /etc/docker/daemon.json <<EOF
{
   "registry-mirrors": ["https://frz7i079.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@vms4X ~]# 
[root@vms4X ~]#  systemctl daemon-reload ; systemctl restart docker

[root@vms4X ~]#  cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@vms4X ~]# 
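
The settings in /etc/sysctl.d/k8s.conf are only picked up at boot; to apply them immediately, you can run the following on every node (optional; the bridge-nf-call keys require the br_netfilter module, which Docker normally loads):

sysctl --system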

If you are using containerd instead, the following additional steps are required.
--- The commands below are only needed for containerd ---

cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

modprobe overlay ; modprobe br_netfilter

-------- The commands above are not needed for Docker. ------

[root@vms4X ~]#  swapoff -a ; sed -i '/swap/d' /etc/fstab
2. Install the cfssl tools

The cfssl binaries have already been downloaded into /usr/local/bin; rename them and make them executable.
[root@vms41 ~]# cd /usr/local/bin/
[root@vms41 bin]# ls
cfssl-certinfo_linux-amd64  cfssljson_linux-amd64  cfssl_linux-amd64
[root@vms41 bin]# 
[root@vms41 bin]# 
[root@vms41 bin]# for i in * ; do n=${i%_*} ; mv $i $n; done ; chmod +x *
[root@vms41 bin]# 
[root@vms41 bin]# ls
cfssl  cfssl-certinfo  cfssljson
[root@vms41 bin]# cd
[root@vms41 ~]#

2. Creating the certificates for the Kubernetes components

Kubernetes involves a lot of TLS authentication, so many certificates have to be created; here is an overview first.
These steps are performed on vms41 (the master).
The certificates are generated in /xx, then copied into /etc/kubernetes/pki.

[root@vms41 ~]# mkdir -p /xx  /etc/kubernetes/pki
[root@vms41 ~]#
[root@vms41 ~]# cd /xx
[root@vms41 xx]#

The /xx directory acts as the production site: all certificates are generated there and then copied to their final directories.

If certificate-based authentication is unfamiliar, read https://www.rhce.cc/3625.html first.

1. Set up the CA

Step 1: Generate the CA config and edit it to the following.

[root@vms41 xx]# cfssl print-defaults config > ca-config.json
[root@vms41 xx]#
[root@vms41 xx]# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "1680h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

[root@vms41 xx]#

Step 2: Generate the CA certificate signing request (CSR) file and edit it to the following.

[root@vms41 xx]# cfssl print-defaults csr > ca-csr.json
[root@vms41 xx]# cat ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Jiangsu",
            "L": "Xuzhou",
            "O": "kubernetes",
            "OU": "system"
        }
    ]
}

[root@vms41 xx]#

Step 3: Generate the self-signed CA certificate

[root@vms41 xx]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    ...output...
[root@vms41 xx]# 

The last argument of the command above (after cfssljson -bare) is arbitrary; it only sets the prefix of the generated files. Here it is ca, so the files start with ca: ca.pem is the certificate and ca-key.pem is the private key. The later steps follow the same pattern.

[root@vms41 xx]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
[root@vms41 xx]#

[root@vms41 xx]# ls ca*.pem
ca-key.pem  ca.pem
[root@vms41 xx]#
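
If you want to double-check the result, openssl can print the subject and validity of the new CA certificate (an optional check):

openssl x509 -in ca.pem -noout -subject -issuer -dates

For a self-signed CA the subject and issuer lines should be identical (CN=kubernetes here).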

2. front-proxy-ca

This creates the certificates needed by the Aggregation Layer. If the environment does not use the aggregation layer, this step and the next one (creating the front-proxy-client certificate) can be skipped; if needed, follow the steps below. This exercise does not use the aggregation layer.

Copy the existing CA config file and CA CSR file.

[root@vms41 xx]# cp ca-config.json front-proxy-ca-config.json
[root@vms41 xx]# cp ca-csr.json front-proxy-ca-csr.json
[root@vms41 xx]#

Generate the self-signed front-proxy-ca certificate.

[root@vms41 xx]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca
    ...output...
[root@vms41 xx]#

[root@vms41 xx]# ls -1 front*.pem
front-proxy-ca-key.pem
front-proxy-ca.pem
[root@vms41 xx]#

3. front-proxy-client: the aggregation layer client certificate

As above, if the Aggregation Layer is not used, the following commands are unnecessary; otherwise proceed as shown. This exercise does not use it.

[root@vms41 xx]# cfssl print-defaults csr > front-proxy-client-csr.json
[root@vms41 xx]#
[root@vms41 xx]# cat front-proxy-client-csr.json
{
  "CN": "front-proxy-client",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "kubernetes",
      "OU": "system"
    }
  ]
}

[root@vms41 xx]#

Issue the certificate needed by the aggregation layer client.

[root@vms41 xx]# cfssl gencert -ca=front-proxy-ca.pem -ca-key=front-proxy-ca-key.pem -config=front-proxy-ca-config.json -profile=www front-proxy-client-csr.json | cfssljson  -bare front-proxy-client
    ...output...
[root@vms41 xx]# 

[root@vms41 xx]# ls -1 front-proxy-client*.pem
front-proxy-client-key.pem
front-proxy-client.pem
[root@vms41 xx]# 

4. etcd server certificate

This step generates the etcd server certificate. etcd and the apiserver authenticate each other with mutual TLS (mTLS), so the etcd server needs a certificate and private key. First create the etcd CSR file.

[root@vms41 xx]# cfssl print-defaults csr > etcd-csr.json
[root@vms41 xx]#

Edit it to the following.

[root@vms41 xx]# cat etcd-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
   "hosts": [
     "127.0.0.1",
     "192.168.26.41"
    ],
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "kubernetes",
      "OU": "system"
    }
  ]
}

[root@vms41 xx]#

Issue the etcd server certificate.

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www etcd-csr.json | cfssljson  -bare etcd
    ...output...
[root@vms41 xx]#

[root@vms41 xx]# ls etcd*.pem
etcd-key.pem  etcd.pem
[root@vms41 xx]#

5. etcd client certificate

This step generates the etcd client certificate, i.e. the certificate the apiserver presents to etcd when connecting to it.
First generate the CSR file.

[root@vms41 xx]# cfssl print-defaults csr > etcd-client-csr.json
[root@vms41 xx]# cat etcd-client-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
   "hosts": [
     "127.0.0.1",
     "192.168.26.41"
    ],
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "kubernetes",
      "OU": "system"
    }
  ]
}
[root@vms41 xx]# 

Issue the certificate

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www etcd-client-csr.json | cfssljson  -bare etcd-client
    ...output...
[root@vms41 xx]# 
[root@vms41 xx]# ls -1 etcd-client*.pem
etcd-client-key.pem
etcd-client.pem
[root@vms41 xx]#

6. apiserver

The other Kubernetes components perform mutual TLS (mTLS) with the apiserver, so the apiserver needs its own certificate. Generate the apiserver CSR file below.

[root@vms41 xx]# cfssl print-defaults csr > apiserver-csr.json
[root@vms41 xx]#

Change it to the following.

[root@vms41 xx]# cat apiserver-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
   "hosts": [
     "127.0.0.1",
     "192.168.26.41",
     "192.168.26.42",
     "10.96.0.1",
     "kubernetes",
     "kubernetes.default",
     "kubernetes.default.svc",
     "kubernetes.default.svc.cluster",
     "kubernetes.default.svc.cluster.local"
    ],
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "kubernetes",
      "OU": "system"
    }
  ]
}

[root@vms41 xx]#

Because this certificate will be used by the Kubernetes master, the hosts list must contain the IPs of all master nodes, plus the first IP of the service network (10.96.0.1 here).
Generate the certificate.

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www apiserver-csr.json | cfssljson -bare apiserver
    ...output...
[root@vms41 xx]#
[root@vms41 xx]# ls apiserver*.pem -1
apiserver-key.pem
apiserver.pem
[root@vms41 xx]#
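
A missing hosts entry is a common cause of x509 errors later on, so it is worth verifying which SANs actually ended up in the certificate (optional check):

openssl x509 -in apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'

The output should list every IP and DNS name from the hosts array above, including 10.96.0.1.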

7. controller-manager

The controller-manager performs mTLS with the apiserver; generate its CSR file.

[root@vms41 xx]# cfssl print-defaults csr > controller-manager-csr.json
[root@vms41 xx]#
[root@vms41 xx]# cat controller-manager-csr.json
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
   "hosts": [
     "127.0.0.1",
     "192.168.26.41"
    ],
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}

[root@vms41 xx]#

The hosts list contains the IPs of all kube-controller-manager nodes. CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants this user the permissions kube-controller-manager needs. The component certificates below follow the same logic.
Issue the certificate.

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www controller-manager-csr.json | cfssljson -bare controller-manager
    ...output...
[root@vms41 xx]#
[root@vms41 xx]# ls -1 control*.pem
controller-manager-key.pem
controller-manager.pem
[root@vms41 xx]#
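
Since RBAC takes the user name from the certificate's CN and the group from O, the identity baked into the certificate can be confirmed with (optional check):

openssl x509 -in controller-manager.pem -noout -subject

The subject line should show both O and CN as system:kube-controller-manager.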

8. kube-scheduler

The kube-scheduler performs mTLS with the apiserver; generate its CSR file.

[root@vms41 xx]# cfssl print-defaults csr > scheduler-csr.json
[root@vms41 xx]#
[root@vms41 xx]# cat scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
   "hosts": [
     "127.0.0.1",
     "192.168.26.41"
    ],
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}

[root@vms41 xx]#

The built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.
Generate the certificate.

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www scheduler-csr.json | cfssljson -bare scheduler
[root@vms41 xx]#

9. kube-proxy

kube-proxy performs mTLS with the apiserver; generate its CSR file.

[root@vms41 xx]# cfssl print-defaults csr > proxy-csr.json
[root@vms41 xx]# cat proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "kubernetes",
      "OU": "system"
    }
  ]
}
[root@vms41 xx]#

Issue the certificate.

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www proxy-csr.json | cfssljson -bare proxy
    ...output...
[root@vms41 xx]#

[root@vms41 xx]# ls -1 proxy*.pem
proxy-key.pem
proxy.pem
[root@vms41 xx]#

10. Certificate for the admin user

The administrator also has to authenticate before running kubectl commands, so a kubeconfig file is needed too.
First generate the CSR file and edit it to the following.

[root@vms41 xx]# cfssl print-defaults csr > admin-csr.json
[root@vms41 xx]# cat admin-csr.json
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Jiangsu",
      "L": "Xuzhou",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}
[root@vms41 xx]#

Issue the certificate. (O is set to system:masters; members of that group are fully authorized through the built-in cluster-admin ClusterRoleBinding.)

[root@vms41 xx]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www admin-csr.json | cfssljson -bare admin
    ...output...
[root@vms41 xx]# 

[root@vms41 xx]# ls -1 admin*.pem
admin-key.pem
admin.pem
[root@vms41 xx]#

11. Copy the certificates to /etc/kubernetes/pki

Copy all the certificates generated above (the .pem files) into /etc/kubernetes/pki.

[root@vms41 xx]# cp *.pem /etc/kubernetes/pki/
[root@vms41 xx]# ls /etc/kubernetes/pki/
admin-key.pem      ca.pem                      etcd-key.pem                front-proxy-client.pem
admin.pem          controller-manager-key.pem  etcd.pem                    proxy-key.pem
apiserver-key.pem  controller-manager.pem      front-proxy-ca-key.pem      proxy.pem
apiserver.pem      etcd-client-key.pem         front-proxy-ca.pem          scheduler-key.pem
ca-key.pem         etcd-client.pem             front-proxy-client-key.pem  scheduler.pem
[root@vms41 xx]#

3. Installing and configuring etcd

1. Install etcd and create the etcd service file

Download the latest etcd binary release from the link below.
https://github.com/etcd-io/etcd/releases/download/v3.5.4/etcd-v3.5.4-linux-amd64.tar.gz
Then extract it and copy the executables into /usr/local/bin.

[root@vms41 ~]# tar zxf etcd-v3.5.4-linux-amd64.tar.gz 
[root@vms41 ~]#
[root@vms41 ~]# cp etcd-v3.5.4-linux-amd64/etcd* /usr/local/bin/
[root@vms41 ~]#

Create the following two directories.

[root@vms41 ~]# mkdir /etc/etcd  /var/lib/etcd
[root@vms41 ~]#

Note that the /var/lib/etcd directory must be created, otherwise etcd will not start and reports the error
code=exited, status=200/CHDIR
in its service status.

Create the etcd configuration file /etc/etcd/etcd.conf with the following content.

[root@vms41 ~]# cat /etc/etcd/etcd.conf
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.26.41:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.26.41:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd1"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.26.41:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.26.41:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.26.41:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@vms41 ~]# 

Create the etcd systemd service file with the following content.

[root@vms41 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
  --cert-file=/etc/kubernetes/pki/etcd.pem \
  --key-file=/etc/kubernetes/pki/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-cert-file=/etc/kubernetes/pki/etcd.pem \
  --peer-key-file=/etc/kubernetes/pki/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@vms41 ~]#

2. Start etcd

Start etcd on vms41 and enable it at boot.

[root@vms41 ~]# systemctl enable etcd --now
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service...
[root@vms41 ~]#

[root@vms41 ~]# systemctl is-active etcd
active
[root@vms41 ~]#
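
The etcd-client certificate created earlier can be used for an optional health check over TLS:

ETCDCTL_API=3 etcdctl --endpoints=https://192.168.26.41:2379 \
  --cacert=/etc/kubernetes/pki/ca.pem \
  --cert=/etc/kubernetes/pki/etcd-client.pem \
  --key=/etc/kubernetes/pki/etcd-client-key.pem \
  endpoint health

A healthy member prints a line like "https://192.168.26.41:2379 is healthy".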

4. Configuring the Kubernetes components

Download the Kubernetes server binary package for the desired version; 1.23.2 is installed here.
https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
After downloading, extract the archive and copy the executables into place.

[root@vms41 ~]# tar zxf kubernetes-server-linux-amd64.tar.gz 
[root@vms41 ~]#
[root@vms41 ~]# cd kubernetes/server/bin/
[root@vms41 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubelet kubectl  kube-proxy  /usr/bin/
[root@vms41 bin]# scp kubelet kube-proxy 192.168.26.42:/usr/bin/
root@192.168.26.42's password:  
[root@vms41 bin]# cd
[root@vms41 ~]#

1. Install and configure the apiserver

kubelet bootstrap authentication will be configured later (the kubelet creates a CSR automatically at startup), which requires token authentication to be enabled on the apiserver. So first generate a random value on the master to serve as the token.

[root@vms41 ~]# openssl rand -hex 10
6440328e1b3a1f4873dc
[root@vms41 ~]#

Write this token into a file, here /etc/kubernetes/bb.csv.

[root@vms41 ~]# cat /etc/kubernetes/bb.csv
6440328e1b3a1f4873dc,kubelet-bootstrap,10001,"system:node-bootstrapper"
[root@vms41 ~]# 

The second column defines a user named kubelet-bootstrap; this user will be granted permissions later when the kubelet is configured.
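
The format of each line is token,user,uid,"group". As a sketch, the file can also be generated in one step (the token value will of course differ from the one above):

echo "$(openssl rand -hex 10),kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /etc/kubernetes/bb.csv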

1. Create the apiserver service file

Create the apiserver systemd service file; path and content are as follows.

[root@vms41 ~]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=192.168.26.41  \
      --secure-port=6443 \
      --insecure-port=0  \
      --token-auth-file=/etc/kubernetes/bb.csv  \
      --advertise-address=192.168.26.41 \
      --service-cluster-ip-range=10.96.0.0/16  \
      --service-node-port-range=30000-60000  \
      --etcd-servers=https://192.168.26.41:2379 \
      --etcd-cafile=/etc/kubernetes/pki/ca.pem  \
      --etcd-certfile=/etc/kubernetes/pki/etcd.pem  \
      --etcd-keyfile=/etc/kubernetes/pki/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/ca-key.pem  \
      --service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true
      #--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      #--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      #--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      #--requestheader-allowed-names=aggregator  \
      #--requestheader-group-headers=X-Remote-Group  \
      #--requestheader-extra-headers-prefix=X-Remote-Extra-  \
      #--requestheader-username-headers=X-Remote-User  

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
[root@vms41 ~]#

The commented-out options configure the aggregation layer; it is not enabled in this environment, so they stay commented. If you do configure the aggregation layer, remove the leading #.
The roles of a few options:

--v: log verbosity
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: the bootstrap token file

2. Start the apiserver

[root@vms41 ~]# systemctl enable  kube-apiserver --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service...
[root@vms41 ~]#
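
At this point the apiserver should answer over TLS. kubectl is not configured yet, but the admin certificate copied to /etc/kubernetes/pki earlier allows a raw check with curl (optional):

curl --cacert /etc/kubernetes/pki/ca.pem \
  --cert /etc/kubernetes/pki/admin.pem \
  --key /etc/kubernetes/pki/admin-key.pem \
  https://192.168.26.41:6443/healthz

It should print ok.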

2. Install and configure the controller-manager

1. Create the controller-manager service

Create the controller-manager systemd service file; path and content are as follows.

[root@vms41 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-controller-manager \
      --v=2 \
      --logtostderr=true \
      --bind-address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \
      --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=10.244.0.0/16 \
      --node-cidr-mask-size=24
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
[root@vms41 ~]#

2. Create the kubeconfig file for the controller-manager

The controller-manager authenticates to the apiserver with a kubeconfig file, i.e. the controller-manager's private key and certificate plus the CA certificate are embedded in one kubeconfig. Create the controller-manager kubeconfig kube-controller-manager.kubeconfig below; it is created in /etc/kubernetes/pki and then moved into /etc/kubernetes.

[root@vms41 ~]# cd /etc/kubernetes/pki/
[root@vms41 pki]#
# Set the cluster information
[root@vms41 pki]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.26.41:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
[root@vms41 pki]# 
# The next command sets the user entry; the user name is system:kube-controller-manager, the CN specified earlier in controller-manager-csr.json.
[root@vms41 pki]# kubectl config set-credentials system:kube-controller-manager --client-certificate=controller-manager.pem --client-key=controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.
[root@vms41 pki]#
# Set the context.
[root@vms41 pki]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.
[root@vms41 pki]# 
# Set the default context
[root@vms41 pki]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
[root@vms41 pki]# 
[root@vms41 pki]# mv kube-controller-manager.kubeconfig /etc/kubernetes/
[root@vms41 pki]#

Creating the kubeconfig for the other components below works the same way, so it will not be explained again.

3. Start the controller-manager service

Start the controller-manager and enable it at boot.
[root@vms41 ~]# systemctl daemon-reload
[root@vms41 ~]# systemctl enable kube-controller-manager --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service ...
[root@vms41 ~]# systemctl is-active kube-controller-manager
active
[root@vms41 ~]#

3. Install and configure the scheduler

1. Create the kube-scheduler service

Create the kube-scheduler systemd service file; path and content are as follows.

[root@vms41 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-scheduler \
      --v=2 \
      --logtostderr=true \
      --address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
[root@vms41 ~]#

2. Create the kubeconfig file for the kube-scheduler

The kube-scheduler authenticates to the apiserver with a kubeconfig file, i.e. the kube-scheduler's private key and certificate plus the CA certificate are embedded in one kubeconfig. Create the kube-scheduler kubeconfig kube-scheduler.kubeconfig below; it is created in /etc/kubernetes/pki and then moved into /etc/kubernetes.

[root@vms41 ~]# cd /etc/kubernetes/pki/
[root@vms41 pki]# 
[root@vms41 pki]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.26.41:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.
[root@vms41 pki]# kubectl config set-credentials system:kube-scheduler --client-certificate=scheduler.pem --client-key=scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.
[root@vms41 pki]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.
[root@vms41 pki]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".
[root@vms41 pki]# 
[root@vms41 pki]# mv kube-scheduler.kubeconfig /etc/kubernetes/
[root@vms41 pki]# cd
[root@vms41 ~]#

This is the same procedure as for the controller-manager kubeconfig, so no further explanation here.

3. Start the kube-scheduler service

Start the kube-scheduler and enable it at boot.

[root@vms41 ~]# systemctl enable kube-scheduler --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service....
[root@vms41 ~]# systemctl is-active kube-scheduler
active
[root@vms41 ~]#

4. Create the admin kubeconfig file

Create the administrator's kubeconfig and finally copy it to ~/.kube/config to act as the default kubeconfig file.

[root@vms41 ~]# cd /etc/kubernetes/pki/
[root@vms41 pki]# 
[root@vms41 pki]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.26.41:6443 --kubeconfig=admin.conf
Cluster "kubernetes" set.
[root@vms41 pki]# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=admin.conf
User "admin" set.
[root@vms41 pki]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=admin.conf
Context "kubernetes" created.
[root@vms41 pki]# kubectl config use-context kubernetes --kubeconfig=admin.conf
Switched to context "kubernetes".
[root@vms41 pki]# cp admin.conf /etc/kubernetes/
[root@vms41 pki]# mkdir  ~/.kube
[root@vms41 pki]# cp admin.conf ~/.kube/config
[root@vms41 pki]# cd
[root@vms41 ~]# 
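
kubectl now authenticates as admin by default, so a quick sanity check is possible (optional):

kubectl cluster-info
kubectl get --raw /readyz

The second command should print ok once the apiserver considers itself ready.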

5. kubelet

Grant the kubelet-bootstrap user permission to create the CSRs needed for kubelet TLS bootstrapping.

[root@vms41 ~]# kubectl create clusterrolebinding kubelet-bootstrap1 --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap1 created
[root@vms41 ~]# 

This user name is the one specified in the token file /etc/kubernetes/bb.csv used when configuring the apiserver.

[root@vms41 ~]# cat /etc/kubernetes/bb.csv 
6440328e1b3a1f4873dc,kubelet-bootstrap,10001,"system:node-bootstrapper"
[root@vms41 ~]#

Create the kubelet bootstrap kubeconfig file.

[root@vms41 ~]# cd /etc/kubernetes/pki/
[root@vms41 pki]# 
[root@vms41 pki]# #kubelet-bootstrap.conf 
[root@vms41 pki]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.26.41:6443 --kubeconfig=kubelet-bootstrap.conf 
Cluster "kubernetes" set.
[root@vms41 pki]# kubectl config set-credentials kubelet-bootstrap --token=6440328e1b3a1f4873dc  --kubeconfig=kubelet-bootstrap.conf 
User "kubelet-bootstrap" set.
[root@vms41 pki]# kubectl config set-context kubernetes --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.conf 
Context "kubernetes" created.
[root@vms41 pki]# kubectl config use-context kubernetes --kubeconfig=kubelet-bootstrap.conf 
Switched to context "kubernetes".
[root@vms41 pki]# 
[root@vms41 pki]# mv kubelet-bootstrap.conf  /etc/kubernetes/
[root@vms41 pki]# cd
[root@vms41 ~]#

1. Create the kubelet service file

The certificates issued to the kubelet will later be stored in /var/lib/kubelet/pki, so create this directory first.

[root@vms4X ~]# mkdir -p /var/lib/kubelet/pki

Create the service file; path and content are as follows.

[root@vms41 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.conf  \
  --cert-dir=/var/lib/kubelet/pki \
  --hostname-override=vms41.rhce.cc \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet-config.yaml \
  --network-plugin=cni \
  --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6 \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
[root@vms41 ~]#

The pause image version is pinned here as well, so pull the pause image on all nodes.

[root@vms4X ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.6
    ...output...
[root@vms4X ~]# 

2. Create the kubelet config file

Create the configuration file the kubelet uses; its path must match the one specified in the kubelet service file. The content is as follows.

[root@vms41 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250 
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
[root@vms41 ~]# 

The clusterDNS IP is set to 10.96.0.10 here.

3. Sync the files to the other worker nodes

Copy the files the kubelet needs to all workers.
Copy the service file

[root@vms41 ~]# scp /usr/lib/systemd/system/kubelet.service 192.168.26.42:/usr/lib/systemd/system/kubelet.service
root@192.168.26.42's password:
kubelet.service           100%  684   425.3KB/s   00:00
[root@vms41 ~]#

Copy the CA certificate

[root@vms41 ~]# scp /etc/kubernetes/pki/ca.pem 192.168.26.42:/etc/kubernetes/pki/
root@192.168.26.42's password:
ca.pem            100% 1375     1.1MB/s   00:00
[root@vms41 ~]#

Copy the bootstrap kubeconfig and the kubelet config file

[root@vms41 ~]# scp /etc/kubernetes/kubelet-bootstrap.conf /etc/kubernetes/kubelet-config.yaml 192.168.26.42:/etc/kubernetes/
root@192.168.26.42's password:
kubelet-bootstrap.conf       100% 4091     1.3MB/s   00:00
kubelet-config.yaml          100%  555   277.3KB/s   00:00
[root@vms41 ~]#

On the worker, change the hostname-override parameter in the service file so that it matches the node's own hostname.

[root@vms42 ~]# sed  -i 's/vms41/vms42/' /usr/lib/systemd/system/kubelet.service
[root@vms42 ~]# mkdir -p /etc/kubernetes/pki
[root@vms42 ~]#

4. Start the kubelet

Start the kubelet and enable it at boot.

[root@vms4X ~]# systemctl daemon-reload
[root@vms4X ~]# systemctl enable kubelet --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service ...
[root@vms4X ~]# 

Switch to the master (vms41) and check the certificate signing requests.

[root@vms41 ~]# kubectl get csr
NAME          AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-xx   6s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
node-csr-yy   6s    kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Pending
[root@vms41 ~]# 

The real CSR names are rather long; they are abbreviated to xx and yy here.
Approve the certificates.

[root@vms41 ~]# kubectl certificate approve node-csr-xx  
certificatesigningrequest.certificates.k8s.io/node-csr-xx approved
[root@vms41 ~]# kubectl certificate approve node-csr-yy
certificatesigningrequest.certificates.k8s.io/node-csr-yy approved
[root@vms41 ~]#

Check the CSRs again.

[root@vms41 ~]# kubectl get csr
NAME          AGE   SIGNERNAME                                    REQUESTOR           REQUESTEDDURATION   CONDITION
node-csr-xx   42s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved,Issued
node-csr-yy   42s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   <none>              Approved,Issued
[root@vms41 ~]#

Then check the nodes.

[root@vms41 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
vms41.rhce.cc   NotReady   <none>   3m4s    v1.23.2
vms42.rhce.cc   NotReady   <none>   3m10s   v1.23.2
[root@vms41 ~]#

The certificates here were approved manually; approval can also be automated, which a follow-up article will cover in detail.
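
As a sketch of what automatic approval looks like (not used in this exercise): Kubernetes ships ClusterRoles whose holders get their node CSRs approved automatically by the controller-manager, so binding the nodeclient role to the kubelet-bootstrap user is enough for the initial certificates. The binding name auto-approve-csrs below is arbitrary:

kubectl create clusterrolebinding auto-approve-csrs \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap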

6. Install kube-proxy

1. Create the kubeconfig file for kube-proxy

[root@vms41 ~]# cd /etc/kubernetes/pki/
[root@vms41 pki]#
[root@vms41 pki]# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.26.41:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@vms41 pki]# kubectl config set-credentials kube-proxy --client-certificate=proxy.pem --client-key=proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@vms41 pki]# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@vms41 pki]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
[root@vms41 pki]# 
[root@vms41 pki]# mv kube-proxy.kubeconfig /etc/kubernetes/
[root@vms41 pki]# cd
[root@vms41 ~]#

2. Create the kube-proxy config file

[root@vms41 ~]# cat /etc/kubernetes/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16 
kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:10249
mode: "ipvs"
[root@vms41 ~]#
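
mode: "ipvs" only takes effect when the IPVS kernel modules are loaded; otherwise kube-proxy falls back to iptables. A sketch for loading them on every node (module names assume the stock CentOS 7 3.10 kernel, where the conntrack module is nf_conntrack_ipv4; on kernels 4.19+ it is nf_conntrack):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done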

3. Create the service file

[root@vms41 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@vms41 ~]# 

4. Sync the config files

[root@vms41 ~]# scp /etc/kubernetes/{kube-proxy.yaml,kube-proxy.kubeconfig} 192.168.26.42:/etc/kubernetes/
root@192.168.26.42's password: 
kube-proxy.yaml                 100%  240   221.3KB/s   00:00
kube-proxy.kubeconfig            100%  240   221.3KB/s   00:00      
[root@vms41 ~]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.26.42:/usr/lib/systemd/system/
root@192.168.26.42's password: 
kube-proxy.service                                                                             100%  430   308.8KB/s   00:00    
[root@vms41 ~]#

5. Start kube-proxy

Create the working directory on each node

[root@vms4X ~]# mkdir -p /var/lib/kube-proxy
[root@vms4X ~]# systemctl daemon-reload 
[root@vms4X ~]# systemctl enable kube-proxy --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service...
[root@vms4X ~]#
[root@vms4X ~]# systemctl is-active kube-proxy.service 
active
[root@vms4X ~]#

7. Create the required permissions

1. Write rbac.yaml

The apiserver accesses the kubelet API (for kubectl exec/logs and node metrics) using apiserver.pem as its client certificate, whose CN is kubernetes, and the kubelet authorizes those requests through its Webhook authorization mode. The ClusterRole and ClusterRoleBinding below grant the user kubernetes the required access to the nodes/* subresources.

[root@vms41 ~]# cat rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kubernetes-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kubernetes
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubernetes-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
[root@vms41 ~]#

2. Create the role and the binding

Apply the manifest to create the ClusterRole and grant the permissions.

[root@vms41 ~]# kubectl apply -f rbac.yaml 
clusterrole.rbac.authorization.k8s.io/system:kubernetes-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kubernetes created
[root@vms41 ~]#

5. Installing Calico

On the master, fetch the manifest:
wget https://docs.projectcalico.org/manifests/calico.yaml
Edit calico.yaml in two places:
1. Search for CALICO_IPV4POOL_CIDR and set it to the pod network CIDR, 10.244.0.0/16, matching the --cluster-cidr used above. (screenshot omitted)
2. Search for IP_AUTODETECTION_METHOD and set interface to your own NIC. (screenshot omitted)
Start installing Calico.

[root@vms41 ~]# kubectl apply -f calico.yaml
[root@vms41 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
vms41.rhce.cc   Ready    <none>   34m   v1.23.2
vms42.rhce.cc   Ready    <none>   34m   v1.23.2
[root@vms41 ~]#

[root@vms41 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-54cd668bd-t7cv5   1/1     Running   0          35s
calico-node-9t79n                         1/1     Running   0          35s
calico-node-gz65c                         1/1     Running   0          35s
[root@vms41 ~]#

6. Deploying CoreDNS

1. Download and edit the manifest

Download the CoreDNS manifest template:
https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed
or copy the raw yaml from:
https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
Then edit it:
1. Search for arpa (around line 62) and change CLUSTER_DOMAIN REVERSE_CIDRS to cluster.local in-addr.arpa ip6.arpa.
2. Fill in the remaining placeholders the same way; in particular, set CLUSTER_DNS_IP to 10.96.0.10 so the service IP matches the kubelet's clusterDNS. (screenshots omitted)

2. Install CoreDNS

[root@vms41 ~]# kubectl apply -f coredns.yaml 
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@vms41 ~]# 

[root@vms41 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-54cd668bd-t7cv5   1/1     Running   0          10m
calico-node-9t79n                         1/1     Running   0          10m
calico-node-gz65c                         1/1     Running   0          10m
coredns-799bc9dbc6-7k25v                  1/1     Running   0          2s
[root@vms41 ~]# 

[root@vms41 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   44s
[root@vms41 ~]#
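
To confirm that cluster DNS resolves service names, a throwaway pod can be used (optional; busybox:1.28 is suggested because nslookup is broken in many newer busybox images):

kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default

It should resolve kubernetes.default to 10.96.0.1, the apiserver's service IP.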

7. Testing

[root@vms41 ~]# kubectl run pod1 --image=nginx --image-pull-policy=IfNotPresent --dry-run=client -o yaml > pod1.yaml
[root@vms41 ~]# kubectl apply -f pod1.yaml 
pod/pod1 created
[root@vms41 ~]# sed 's/pod1/pod2/' pod1.yaml | kubectl apply -f -
pod/pod2 created
[root@vms41 ~]# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
pod1   1/1     Running   0          29s
pod2   1/1     Running   0          5s
[root@vms41 ~]#
[root@vms41 ~]# kubectl expose --name=svc1 pod pod1 --port=80
service/svc1 exposed
[root@vms41 ~]#

[root@vms41 ~]# kubectl exec -it pod1 -- sh -c "echo 111 > /usr/share/nginx/html/index.html"
[root@vms41 ~]#

[root@vms41 ~]# kubectl exec -it pod2 -- bash
root@pod2:/# curl svc1
111
root@pod2:/# exit
exit
[root@vms41 ~]#
