0. Environment

  • Linux: Ubuntu 18.04.5 LTS
  • Docker: 20.10.1
  • Golang: go version go1.15.4 linux/amd64
  • Kubernetes: v1.18.8

The environment above is assumed to be already in place.

1. Building Kubernetes from source

git clone https://github.com.cnpmjs.org/kubernetes/kubernetes  (mirror for faster cloning from mainland China)
cd kubernetes
git checkout v1.18.8

Set the environment variables and start the build:

KUBE_BUILD_PLATFORMS=linux/amd64 make all GOFLAGS=-v GOGCFLAGS="-N -l"

Notes:

  • KUBE_BUILD_PLATFORMS=linux/amd64 sets the target build platform to linux/amd64.
  • make all builds all components in the local environment.
  • GOFLAGS=-v is a build flag that enables verbose output.
  • GOGCFLAGS="-N -l" is a build flag that disables compiler optimizations and inlining, making the resulting binaries easier to debug.

To build only a single component, such as kubelet, run make WHAT=cmd/kubelet.

After the build completes, the _output directory is generated:

$ ls _output/bin -Ll
total 1251976
-rwxr-xr-x 1   44486656 Jan  2 11:15 apiextensions-apiserver
-rwxr-xr-x 1    6234112 Jan  2 11:08 conversion-gen
-rwxr-xr-x 1    6221824 Jan  2 11:08 deepcopy-gen
-rwxr-xr-x 1    6193152 Jan  2 11:08 defaulter-gen
-rwxr-xr-x 1  126819312 Jan  2 11:15 e2e_node.test
-rwxr-xr-x 1  114370736 Jan  2 11:15 e2e.test
-rwxr-xr-x 1   40521728 Jan  2 11:15 gendocs
-rwxr-xr-x 1  139782856 Jan  2 11:15 genkubedocs
-rwxr-xr-x 1  145989896 Jan  2 11:15 genman
-rwxr-xr-x 1    6705152 Jan  2 11:15 genswaggertypedocs
-rwxr-xr-x 1   40521728 Jan  2 11:15 genyaml
-rwxr-xr-x 1    7675904 Jan  2 11:15 ginkgo
-rwxr-xr-x 1    3671443 Jan  2 11:08 go2make
-rwxr-xr-x 1    2023424 Jan  2 11:09 go-bindata
-rwxr-xr-x 1    1941504 Jan  2 11:15 go-runner
-rwxr-xr-x 1   37036032 Jan  2 11:15 kubeadm
-rwxr-xr-x 1  109809664 Jan  2 11:15 kube-apiserver
-rwxr-xr-x 1  102002688 Jan  2 11:15 kube-controller-manager
-rwxr-xr-x 1   41078784 Jan  2 11:15 kubectl
-rwxr-xr-x 1  104114408 Jan  2 11:15 kubelet
-rwxr-xr-x 1  102382824 Jan  2 11:15 kubemark
-rwxr-xr-x 1   35663872 Jan  2 11:15 kube-proxy
-rwxr-xr-x 1   39657472 Jan  2 11:15 kube-scheduler
-rwxr-xr-x 1    5091328 Jan  2 11:15 linkcheck
-rwxr-xr-x 1    1634304 Jan  2 11:15 mounter
-rwxr-xr-x 1   10342400 Jan  2 11:09 openapi-gen

2. Cluster layout

The cluster uses the smallest possible installation: the master components and etcd run directly on the Ubuntu host at 192.168.122.1, while the worker node is a KVM virtual machine running CentOS 8 at 192.168.122.26.

| Role | IP | Components |
| --- | --- | --- |
| k8s-master | 192.168.122.1 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| k8s-node1 | 192.168.122.26 | kubelet, kube-proxy, docker |

2.1 OS initialization

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to the cluster plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.122.1 k8s-master
192.168.122.26 k8s-node1
EOF

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

3. Deploying etcd

3.1 Preparing the cfssl certificate tool

cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl. Install it on the master node:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

3.2 Generating certificates

  1. Self-signed certificate authority (CA)

    Create a working directory:

    mkdir -p ~/TLS/{etcd,k8s}
    cd ~/TLS/etcd
    

    Self-sign the CA:

    cat > ca-config.json << EOF
    {
    "signing": {
        "default": {
        "expiry": "87600h"
        },
        "profiles": {
        "www": {
            "expiry": "87600h",
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
        }
        }
    }
    }
    EOF
    
    cat > ca-csr.json << EOF
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    

    Generate the CA certificate:

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    ls *pem
    ca-key.pem  ca.pem
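
If cfssl is not available, a comparable self-signed CA can be produced with openssl alone. The sketch below mirrors the cfssl step above; the output file names match cfssl's (ca.pem / ca-key.pem), and the subject fields are an assumed mapping from ca-csr.json:

```shell
# Sketch: self-signed CA roughly equivalent to the cfssl step above,
# using only openssl (assumed >= 1.1.1). Output names mirror cfssl's.
set -e
cd "$(mktemp -d)"
# 2048-bit RSA key and a ~10-year self-signed CA certificate
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca-key.pem -out ca.pem -days 3650 \
  -subj "/C=CN/ST=Beijing/L=Beijing/CN=etcd CA"
# confirm the subject matches what ca-csr.json specified
openssl x509 -noout -subject -in ca.pem
```

This is only a cross-check sketch; the cfssl-generated files are what the rest of the tutorial uses.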
    
  2. Issue the etcd HTTPS certificate with the self-signed CA. Create the certificate request file:

     cat > server-csr.json << EOF
     {
         "CN": "etcd",
         "hosts": [
         "192.168.122.1"
         ],
         "key": {
             "algo": "rsa",
             "size": 2048
         },
         "names": [
             {
                 "C": "CN",
                 "L": "BeiJing",
                 "ST": "BeiJing"
             }
         ]
     }
     EOF
    
  • Note: the hosts field above must list the internal cluster-communication IPs of every etcd node; not one may be missing. To simplify future expansion, you can also list a few spare IPs. Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

ls server*pem
server-key.pem  server.pem
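
A member IP missing from the certificate's SAN list is a frequent cause of etcd TLS failures, so it is worth checking which IPs actually ended up in server.pem. The sketch generates a stand-in certificate (via openssl's -addext, assumed available in openssl 1.1.1+) so the check is reproducible on its own; against the real file you would run only the last command on /opt/etcd/ssl/server.pem:

```shell
# Sketch: inspect the Subject Alternative Names of a server certificate.
set -e
cd "$(mktemp -d)"
# stand-in certificate carrying the same SAN as server-csr.json
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out server.pem \
  -days 365 -subj "/CN=etcd" \
  -addext "subjectAltName=IP:192.168.122.1"
# every etcd member IP must appear in this output
openssl x509 -noout -ext subjectAltName -in server.pem
```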

3.3 Building etcd

Download the source: git clone https://github.com/etcd-io/etcd.git

cd etcd && make

The build produces two binaries, etcd and etcdctl, in the bin directory.

  1. Create the installation directories and copy the binaries

    mkdir /opt/etcd/{bin,cfg,ssl} -p
    cp bin/etcd bin/etcdctl /opt/etcd/bin/
    
  2. Create the etcd configuration file

    cat > /opt/etcd/cfg/etcd.conf << EOF
    #[Member]
     ETCD_NAME="etcd-1"
     ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
     ETCD_LISTEN_PEER_URLS="https://192.168.122.1:2380"
     ETCD_LISTEN_CLIENT_URLS="https://192.168.122.1:2379"
    
     #[Clustering]
     ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.122.1:2380"
     ETCD_ADVERTISE_CLIENT_URLS="https://192.168.122.1:2379"
     ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.122.1:2380"                                                                                                                                                       
     ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
     ETCD_INITIAL_CLUSTER_STATE="new"
     EOF
    
    • ETCD_NAME: node name, unique within the cluster
    • ETCD_DATA_DIR: data directory
    • ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) traffic
    • ETCD_LISTEN_CLIENT_URLS: listen address for client traffic
    • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
    • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
    • ETCD_INITIAL_CLUSTER: cluster member addresses
    • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
    • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one
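Because this setup inlines the options in the systemd unit instead of pointing at the conf file, the ETCD_* settings have to be converted into the matching --flags by hand: each key, lowercased with underscores turned into dashes and the ETCD_ prefix dropped, is the flag name. As a sketch (the helper name and sample conf are illustrative), the conversion can be scripted:

```shell
# Sketch: turn ETCD_* env-style settings into the equivalent etcd flags.
# Helper name and the sample conf below are illustrative.
etcd_conf_to_flags() {
  sed -n 's/^ *ETCD_\([A-Z_]*\)="\(.*\)"$/\1 \2/p' "$1" |
  while read -r key value; do
    # ETCD_DATA_DIR -> --data-dir, ETCD_NAME -> --name, etc.
    flag=$(printf '%s' "$key" | tr 'A-Z_' 'a-z-')
    printf -- '--%s=%s\n' "$flag" "$value"
  done
}

conf=$(mktemp)
cat > "$conf" << 'EOF'
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
EOF
etcd_conf_to_flags "$conf"
# prints:
# --name=etcd-1
# --data-dir=/var/lib/etcd/default.etcd
```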
  3. Manage etcd with systemd

     cat > /lib/systemd/system/etcd.service << 'EOF'
     [Unit]
     Description=Etcd Server
     After=network.target
     After=network-online.target
     Wants=network-online.target
    
     [Service]
     Type=notify
     #EnvironmentFile=/opt/etcd/cfg/etcd.conf  # not supported on Ubuntu
     ExecStart=/opt/etcd/bin/etcd \
     --cert-file=/opt/etcd/ssl/server.pem \
     --key-file=/opt/etcd/ssl/server-key.pem \
     --peer-cert-file=/opt/etcd/ssl/server.pem \
     --peer-key-file=/opt/etcd/ssl/server-key.pem \
     --trusted-ca-file=/opt/etcd/ssl/ca.pem \
     --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
     --name=etcd-master \
     --data-dir=/var/lib/etcd/default.etcd \
     --listen-peer-urls=https://192.168.122.1:2380 \
     --listen-client-urls=https://192.168.122.1:2379 \
     --initial-advertise-peer-urls=https://192.168.122.1:2380 \
     --advertise-client-urls=https://192.168.122.1:2379 \
     --initial-cluster=etcd-master=https://192.168.122.1:2380 \
     --initial-cluster-token=etcd-cluster \
     --initial-cluster-state=new \
     --logger=zap
     Restart=on-failure
     LimitNOFILE=65536
    
     [Install]
     WantedBy=multi-user.target
     EOF
    
    • On CentOS the path is /usr/lib/systemd/system/etcd.service.
    • Ubuntu does not support EnvironmentFile (in this setup), so the contents of /opt/etcd/cfg/etcd.conf are inlined here instead; the later service units do the same.
  4. Copy the freshly generated certificates to the paths referenced in the configuration:

    cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

  5. Start etcd and enable it at boot

    systemctl daemon-reload
    systemctl start etcd
    systemctl enable etcd
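
etcd can take a moment to open its client port after systemctl start, so a health check run immediately afterwards may report a connection refused even though everything is fine. A small wait loop avoids that; the helper name is illustrative, and the function relies on bash's /dev/tcp pseudo-device:

```shell
# Sketch: wait until HOST:PORT accepts TCP connections, or give up.
wait_for_port() {  # usage: wait_for_port HOST PORT TIMEOUT_SECONDS
  local host=$1 port=$2 deadline=$(( $(date +%s) + $3 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # bash's /dev/tcp pseudo-device; the redirect only succeeds
    # once something is listening on the port
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# e.g.: wait_for_port 192.168.122.1 2379 30 && echo "etcd is up"
```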
    
  6. Check cluster health

    sudo ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.122.1:2379" endpoint health
    
    https://192.168.122.1:2379 is healthy: successfully committed proposal: took = 82.809416ms
    

4. Deploying the master node

4.1 Generating the kube-apiserver certificate

  1. Self-signed certificate authority (CA)
    cd ~/TLS/k8s
    cat > ca-config.json << EOF
    {
    "signing": {
        "default": {
        "expiry": "87600h"
        },
        "profiles": {
        "kubernetes": {
            "expiry": "87600h",
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
        }
        }
    }
    }
    EOF
    cat > ca-csr.json << EOF
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
  • The CA certificate from section 3 could also be reused. Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
ls *pem
ca-key.pem  ca.pem
  2. Issue the kube-apiserver HTTPS certificate with the self-signed CA. Create the certificate request file:
    cd ~/TLS/k8s
    cat > server-csr.json << EOF
    {
        "CN": "kubernetes",
        "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.122.1",
        "192.168.122.26",
        "192.168.122.27",
        "192.168.122.28",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
  • Note: the hosts field above must include the IPs of all masters, load balancers, and VIPs; not one may be missing. To simplify future expansion, you can also list a few spare IPs. Generate the certificate:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    
    ls server*pem
    server-key.pem  server.pem
    

4.2 Copying the built Kubernetes binaries to /opt

cd kubernetes/_output/local/bin/linux/amd64
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} 
cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin
cp kubectl /usr/bin/

4.3 Deploying kube-apiserver

  1. Create the configuration file
    cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
    KUBE_APISERVER_OPTS="--logtostderr=false \\
     --v=2 \\
     --log-dir=/opt/kubernetes/logs \\
     --etcd-servers=https://192.168.122.1:2379 \\
     --bind-address=192.168.122.1 \\
     --secure-port=6443 \\
     --advertise-address=192.168.122.1 \\
     --allow-privileged=true \\
     --service-cluster-ip-range=10.0.0.0/24 \\
     --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
     --authorization-mode=RBAC,Node \\
     --enable-bootstrap-token-auth=true \\
     --token-auth-file=/opt/kubernetes/cfg/token.csv \\
     --service-node-port-range=30000-32767 \\
     --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \\
     --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \\
     --tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
     --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
     --client-ca-file=/opt/kubernetes/ssl/ca.pem \\
     --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
     --etcd-cafile=/opt/etcd/ssl/ca.pem \\
     --etcd-certfile=/opt/etcd/ssl/server.pem \\
     --etcd-keyfile=/opt/etcd/ssl/server-key.pem \\
     --audit-log-maxage=30 \\
     --audit-log-maxbackup=3 \\
     --audit-log-maxsize=100 \\
     --audit-log-path=/opt/kubernetes/logs/k8s-audit.log"
    EOF
    
  • Note: in the \\ pairs above, the first backslash is an escape and the second is the line continuation; the escape is needed so the heredoc preserves the continuation in the written file. Selected options:
  • --logtostderr: log to stderr (false means log to files under --log-dir)
  • --v: log verbosity level
  • --log-dir: log directory
  • --etcd-servers: etcd cluster addresses
  • --bind-address: listen address
  • --secure-port: HTTPS port
  • --advertise-address: advertised cluster address
  • --allow-privileged: allow privileged containers
  • --service-cluster-ip-range: Service virtual IP range
  • --enable-admission-plugins: admission control plugins
  • --authorization-mode: authorization modes; enables RBAC and Node self-management
  • --enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
  • --token-auth-file: bootstrap token file
  • --service-node-port-range: default port range for NodePort Services
  • --kubelet-client-xxx: client certificate for apiserver access to the kubelet
  • --tls-xxx-file: apiserver HTTPS certificates
  • --etcd-xxxfile: certificates for connecting to the etcd cluster
  • --audit-log-xxx: audit logging
  2. Copy the certificates generated earlier to the paths referenced in the configuration:

     cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/
  3. Enable the TLS bootstrapping mechanism. For a detailed introduction, see: TLS bootstrapping

TLS bootstrapping: once the apiserver enables TLS authentication, the kubelet and kube-proxy on each node must use valid CA-signed certificates to communicate with kube-apiserver. With many nodes, issuing these client certificates by hand is a lot of work and also complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping, which issues client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically. This approach is strongly recommended on nodes; currently it is used mainly for the kubelet, while kube-proxy still receives a certificate we issue directly.

Create the token file:

cat > /opt/kubernetes/cfg/token.csv << EOF
0b47c417547d9a4f8adb91aeebfb0f8c,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF

Format: token,username,UID,group

The token can also be generated yourself and substituted:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
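
Putting the two pieces together, generating a token and writing it in the token.csv format described above can be scripted. A sketch (a temp file stands in here for the real path /opt/kubernetes/cfg/token.csv):

```shell
# Sketch: generate a random bootstrap token and write a token.csv line.
# The temp file stands in for /opt/kubernetes/cfg/token.csv.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')

tokfile=$(mktemp)
# format: token,username,UID,group
cat > "$tokfile" << EOF
$TOKEN,kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
cat "$tokfile"
```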

  4. Manage kube-apiserver with systemd
     cat > /lib/systemd/system/kube-apiserver.service << 'EOF'
     [Unit]
     Description=Kubernetes API Server
     Documentation=https://github.com/kubernetes/kubernetes

     [Service]
     # EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf  # not supported on Ubuntu
     ExecStart=/opt/kubernetes/bin/kube-apiserver \
     --logtostderr=false \
     --v=2 \
     --log-dir=/opt/kubernetes/logs \
     --etcd-servers=https://192.168.122.1:2379 \
     --bind-address=192.168.122.1 \
     --secure-port=6443 \
     --advertise-address=192.168.122.1 \
     --allow-privileged=true \
     --service-cluster-ip-range=10.0.0.0/24 \
     --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
     --authorization-mode=RBAC,Node \
     --enable-bootstrap-token-auth=true \
     --token-auth-file=/opt/kubernetes/cfg/token.csv \
     --service-node-port-range=30000-32767 \
     --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
     --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
     --tls-cert-file=/opt/kubernetes/ssl/server.pem \
     --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
     --client-ca-file=/opt/kubernetes/ssl/ca.pem \
     --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
     --etcd-cafile=/opt/etcd/ssl/ca.pem \
     --etcd-certfile=/opt/etcd/ssl/server.pem \
     --etcd-keyfile=/opt/etcd/ssl/server-key.pem \
     --audit-log-maxage=30 \
     --audit-log-maxbackup=3 \
     --audit-log-maxsize=100 \
     --audit-log-path=/opt/kubernetes/logs/k8s-audit.log

     Restart=on-failure

     [Install]
     WantedBy=multi-user.target
     EOF

  5. Start and enable at boot:

     systemctl daemon-reload
     systemctl start kube-apiserver
     systemctl enable kube-apiserver

4.4 Deploying kube-controller-manager

  1. Create the configuration file
  cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
  KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
  --v=2 \\
  --log-dir=/opt/kubernetes/logs \\
  --leader-elect=true \\
  --master=127.0.0.1:8080 \\
  --bind-address=127.0.0.1 \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --service-cluster-ip-range=10.0.0.0/24 \\
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
  --experimental-cluster-signing-duration=87600h0m0s"
  EOF
  • --master: connect to the apiserver through the local insecure port 8080.
  • --leader-elect: automatic leader election when multiple instances run (HA)
  • --cluster-signing-cert-file / --cluster-signing-key-file: the CA that automatically issues kubelet certificates; must match the apiserver's
  2. Manage controller-manager with systemd
 cat > /lib/systemd/system/kube-controller-manager.service << EOF
 [Unit]
 Description=Kubernetes Controller Manager
 Documentation=https://github.com/kubernetes/kubernetes

 [Service]
 #EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
 ExecStart=/opt/kubernetes/bin/kube-controller-manager \\
     --logtostderr=false \\
     --v=2 \\
     --log-dir=/opt/kubernetes/logs \\
     --leader-elect=true \\
     --master=127.0.0.1:8080 \\
     --bind-address=127.0.0.1 \\
     --allocate-node-cidrs=true \\
     --cluster-cidr=10.244.0.0/16 \\
     --service-cluster-ip-range=10.0.0.0/24 \\
     --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
     --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
     --root-ca-file=/opt/kubernetes/ssl/ca.pem \\
     --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
     --experimental-cluster-signing-duration=87600h0m0s
 Restart=on-failure

 [Install]
 WantedBy=multi-user.target
 EOF
  3. Start and enable at boot:

     systemctl daemon-reload
     systemctl start kube-controller-manager
     systemctl enable kube-controller-manager

4.5 Deploying kube-scheduler

  1. Create the configuration file
     cat > /opt/kubernetes/cfg/kube-scheduler.conf << EOF
     KUBE_SCHEDULER_OPTS="--logtostderr=false \\
     --v=2 \\
     --log-dir=/opt/kubernetes/logs \\
     --leader-elect \\
     --master=127.0.0.1:8080 \\
     --bind-address=127.0.0.1"
     EOF
    
  • --master: connect to the apiserver through the local insecure port 8080.
  • --leader-elect: automatic leader election when multiple instances run (HA)
  2. Manage the scheduler with systemd
    cat > /lib/systemd/system/kube-scheduler.service << EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf
    ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
  3. Start and enable at boot
    systemctl daemon-reload
    systemctl start kube-scheduler
    systemctl enable kube-scheduler
    
  4. Check cluster status. With all components started successfully, use kubectl to view the current state of the cluster components:
    kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}  
    
    The output above shows that the master components are running normally.

5. Deploying the worker node

The steps below are still performed on the master node, which also serves as a worker node.

5.1 Creating working directories and copying binaries

Create the working directories on every worker node:

mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

Copy from the master node:

cd kubernetes/_output/local/bin/linux/amd64
cp kubelet kube-proxy /opt/kubernetes/bin  # local copy

5.2 Deploying kubelet

  1. Create the configuration file
    cat > /opt/kubernetes/cfg/kubelet.conf << EOF
    KUBELET_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --hostname-override=k8s-master \\
    --network-plugin=cni \\
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
    --config=/opt/kubernetes/cfg/kubelet-config.yml \\
    --cert-dir=/opt/kubernetes/ssl \\
    --pod-infra-container-image=lizhenliang/pause-amd64:3.0"
    EOF
    
  • --hostname-override: display name, unique within the cluster
  • --network-plugin: enable CNI
  • --kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
  • --bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
  • --config: configuration parameter file
  • --cert-dir: directory for generated kubelet certificates
  • --pod-infra-container-image: image for the container that manages the Pod network
  2. Create the parameter file

    cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS:
    - 10.0.0.2
    clusterDomain: cluster.local
    failSwapOn: false
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /opt/kubernetes/ssl/ca.pem
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    maxOpenFiles: 1000000
    maxPods: 110
    EOF
    
  3. Generate the bootstrap.kubeconfig file

     KUBE_APISERVER="https://192.168.122.1:6443" # apiserver IP:PORT
     TOKEN="0b47c417547d9a4f8adb91aeebfb0f8c" # must match token.csv

     # generate the kubelet bootstrap kubeconfig file
     kubectl config set-cluster kubernetes \
     --certificate-authority=/opt/kubernetes/ssl/ca.pem \
     --embed-certs=true \
     --server=${KUBE_APISERVER} \
     --kubeconfig=bootstrap.kubeconfig
     kubectl config set-credentials "kubelet-bootstrap" \
     --token=${TOKEN} \
     --kubeconfig=bootstrap.kubeconfig
     kubectl config set-context default \
     --cluster=kubernetes \
     --user="kubelet-bootstrap" \
     --kubeconfig=bootstrap.kubeconfig
     kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    

    Copy it to the configuration path:

    cp bootstrap.kubeconfig /opt/kubernetes/cfg

  4. Manage kubelet with systemd

    cat > /lib/systemd/system/kubelet.service << 'EOF'
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service

    [Service]
    #EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
    ExecStart=/opt/kubernetes/bin/kubelet --logtostderr=false \
    --v=2 \
    --log-dir=/opt/kubernetes/logs \
    --hostname-override=k8s-master \
    --network-plugin=cni \
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --config=/opt/kubernetes/cfg/kubelet-config.yml \
    --cert-dir=/opt/kubernetes/ssl \
    --pod-infra-container-image=lizhenliang/pause-amd64:3.0

    Restart=on-failure
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF
    
  5. Start and enable at boot

    systemctl daemon-reload
    systemctl start kubelet
    systemctl enable kubelet
    

5.3 Approving the kubelet certificate request and joining the cluster

```
# view pending kubelet certificate requests
kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-sfjaslfdajsfiwl-sdhfakwis   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

# approve the request
kubectl certificate approve node-csr-sfjaslfdajsfiwl-sdhfakwis

# list nodes
kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   <none>   7s    v1.18.8
```
  • Note: the node shows NotReady because the network plugin has not been deployed yet.
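With several nodes bootstrapping at once, the Pending requests can be filtered and approved in bulk. The sketch below demonstrates the filter on captured sample output, so it runs without a cluster; against a live cluster you would use the commented one-liner instead:

```shell
# Sketch: extract Pending CSR names from `kubectl get csr` output.
# Sample output stands in for the live command here.
csr_output='NAME           AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-aaa   6m3s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
node-csr-bbb   2m1s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued'

# live version:
#   kubectl get csr | awk 'NR>1 && $NF=="Pending" {print $1}' \
#     | xargs -r -n1 kubectl certificate approve
echo "$csr_output" | awk 'NR>1 && $NF=="Pending" {print $1}'
# prints: node-csr-aaa
```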

5.4 Deploying kube-proxy

  1. Create the configuration file
    cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
    KUBE_PROXY_OPTS="--logtostderr=false \\
    --v=2 \\
    --log-dir=/opt/kubernetes/logs \\
    --config=/opt/kubernetes/cfg/kube-proxy-config.yml"
    EOF
    
  2. Create the parameter file
    cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    metricsBindAddress: 0.0.0.0:10249
    clientConnection:
      kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
    hostnameOverride: k8s-master
    clusterCIDR: 10.0.0.0/24
    EOF
    
  3. Generate the kube-proxy.kubeconfig file. First generate the kube-proxy certificate:
     # switch to the working directory
     cd ~/TLS/k8s
    
     # create the certificate request file
     cat > kube-proxy-csr.json << EOF
     {
     "CN": "system:kube-proxy",
     "hosts": [],
     "key": {
         "algo": "rsa",
         "size": 2048
     },
     "names": [
         {
         "C": "CN",
         "L": "BeiJing",
         "ST": "BeiJing",
         "O": "k8s",
         "OU": "System"
         }
     ]
     }
     EOF
    
     # generate the certificate
     cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    
     ls kube-proxy*pem
     kube-proxy-key.pem  kube-proxy.pem
    

Generate the kubeconfig file:

```
KUBE_APISERVER="https://192.168.122.1:6443"

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```

Copy it to the path specified in the configuration:

cp kube-proxy.kubeconfig /opt/kubernetes/cfg/

  4. Manage kube-proxy with systemd
    cat > /lib/systemd/system/kube-proxy.service << 'EOF'
     [Unit]
     Description=Kubernetes Proxy
     After=network.target
    
     [Service]
     #EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
     ExecStart=/opt/kubernetes/bin/kube-proxy --logtostderr=false \
     --v=2 \
     --log-dir=/opt/kubernetes/logs \
     --config=/opt/kubernetes/cfg/kube-proxy-config.yml
    
     Restart=on-failure
     LimitNOFILE=65536
    
     [Install]
     WantedBy=multi-user.target
     EOF
    
  5. Start and enable at boot
    systemctl daemon-reload
    systemctl start kube-proxy
    systemctl enable kube-proxy
    

5.5 Deploying the CNI network

First prepare the CNI plugin binaries:

Download: https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz

Unpack the archive into the default working directory:

```
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
```

Deploy the CNI network:

```
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/xxx:v0.12.0-amd64#g" kube-flannel.yml
```

  • Replace xxx with a mirror image found on Docker Hub; the default image registry is unreachable (from mainland China).
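
Before applying the manifest it is worth confirming that the substitution took effect. The sketch below runs the same sed expression against a sample manifest line; the replacement repository lizhenliang/flannel is only an illustration, so use whatever mirror you actually picked:

```shell
# Sketch: dry-run of the image substitution on a sample manifest line.
# "lizhenliang/flannel" is an illustrative mirror name.
line='        image: quay.io/coreos/flannel:v0.12.0-amd64'
echo "$line" | sed -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0-amd64#g"
```

Against the real file, `grep image: kube-flannel.yml` after the in-place sed gives the same confirmation.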

```
kubectl apply -f kube-flannel.yml

kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
```

5.6 Authorizing apiserver access to the kubelet

```
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
```

5.7 Adding a new worker node

  1. Copy the deployed node files to the new node. On the master, copy the worker-node files over:

    scp -r /opt/kubernetes root@192.168.122.26:/opt/

    scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.122.26:/usr/lib/systemd/system

    scp -r /opt/cni/ root@192.168.122.26:/opt/

    scp /opt/kubernetes/ssl/ca.pem root@192.168.122.26:/opt/kubernetes/ssl

  2. Remove the kubelet certificate and kubeconfig files:

    rm /opt/kubernetes/cfg/kubelet.kubeconfig
    rm -f /opt/kubernetes/ssl/kubelet*

    Note: these files are generated automatically once the certificate request is approved, and they differ for each node, so they must be deleted and regenerated.

  3. Change the hostname:

    vi /opt/kubernetes/cfg/kubelet.conf
    --hostname-override=k8s-node1

    vi /opt/kubernetes/cfg/kube-proxy-config.yml
    hostnameOverride: k8s-node1

  4. Start and enable at boot:

    systemctl daemon-reload
    systemctl start kubelet
    systemctl enable kubelet
    systemctl start kube-proxy
    systemctl enable kube-proxy

  5. On the master, approve the new node's kubelet certificate request:

    kubectl get csr
    NAME   AGE   SIGNERNAME   REQUESTOR   CONDITION

    kubectl certificate approve node-csr-sfasflafas-sfaklfl

  6. Check node status:

    kubectl get node

6. Deploying CoreDNS

CoreDNS provides name resolution for Services inside the cluster.

kubectl apply -f coredns.yaml

kubectl get pods -n kube-system 
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s
kube-flannel-ds-amd64-2pc95   1/1     Running   0          38m
kube-flannel-ds-amd64-7qhdx   1/1     Running   0          15m
kube-flannel-ds-amd64-99cr8   1/1     Running   0          26m

DNS resolution test:

kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
#If you don't see a command prompt, try pressing enter.

/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

Resolution works.