1. Environment Preparation
ip | type | docker | os | k8s version |
---|---|---|---|---|
172.21.17.4 | master, etcd | | CentOS Linux release 7.4.1708 | v1.13.3 |
172.21.16.230 | master, etcd | | CentOS Linux release 7.4.1708 | |
172.21.16.240 | master, etcd | | CentOS Linux release 7.4.1708 | |
172.21.16.244 | node, flanneld, haproxy+keepalived | 18.06.2-ce | CentOS Linux release 7.4.1708 | |
172.21.16.248 | node, flanneld, haproxy+keepalived | 18.06.2-ce | CentOS Linux release 7.4.1708 | |
172.21.16.45 | vip | | CentOS Linux release 7.4.1708 | |
2. Deploying the etcd Cluster
A healthy etcd cluster is a prerequisite for running Kubernetes, so deploy etcd before the rest of the cluster. Set up the CA certificates and install the CFSSL certificate management tool by downloading the binary packages directly.
2.1 Download cfssl
```bash
# curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
```
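The snippet above is truncated to its first line. Since cfssljson is piped into by the gencert commands below, a plausible full set of download steps (an assumption, not the author's original commands) is:

```bash
# Download cfssl and cfssljson, make them executable, and put them on the PATH
curl -o cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl cfssljson && mv cfssl cfssljson /usr/bin/
```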
2.2 Create the etcd certificates
etcd-ca-csr.json:

```bash
# mkdir etcd_ssl && cd etcd_ssl
# cat etcd-ca-csr.json
{
  "CN": "etcd-ca",
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "Beijing",
      "ST": "Beijing",
      "C": "CN"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
```

etcd-gencert.json:
```bash
# cat etcd-gencert.json
{
  "signing": {
    "default": {
      "usages": [
        "signing",
        "key encipherment",
        "server auth",
        "client auth"
      ],
      "expiry": "87600h"
    }
  }
}
```

etcd-csr.json:
```bash
# cat etcd-csr.json
{
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "O": "etcd",
      "OU": "etcd Security",
      "L": "Beijing",
      "ST": "Beijing",
      "C": "CN"
    }
  ],
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "172.21.17.4",
    "172.21.16.231",
    "172.21.16.240"
  ]
}
```

Next, generate the certificates:
```bash
# cfssl gencert --initca=true etcd-ca-csr.json | cfssljson --bare etcd-ca
# cfssl gencert --ca etcd-ca.pem --ca-key etcd-ca-key.pem --config etcd-gencert.json etcd-csr.json | cfssljson --bare etcd
# mkdir -p /etc/etcd/ssl && mkdir -p /var/lib/etcd
# cp *.pem /etc/etcd/ssl
# ls /etc/etcd/ssl/
etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem
# scp -r /etc/etcd k8s-master-02:/etc
# scp -r /etc/etcd k8s-master-03:/etc
```
2.3 Configure etcd
2.3.1 Download etcd
```bash
# wget https://github.com/etcd-io/etcd/releases/download/v3.3.15/etcd-v3.3.15-linux-amd64.tar.gz
```
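The unpacking step is not shown in the source. Assuming the standard tarball layout, copying the binaries looks roughly like this:

```bash
# Unpack the release and copy the etcd binaries onto the PATH
tar -xzf etcd-v3.3.15-linux-amd64.tar.gz
cp etcd-v3.3.15-linux-amd64/{etcd,etcdctl} /usr/bin/
```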
2.3.2 Create the etcd systemd unit file
etcd 3.3.15 (the latest release at the time of writing) is used here. Installation is simply a matter of copying the binaries and a systemd service configuration, but pay attention to user and permission handling; the scripts and configuration below are based on the etcd RPM package.
2.3.3 Configure etcd.conf
- k8s-master-01
```bash
# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.21.17.4:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.17.4:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.17.4:2380"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.17.4:2380,etcd2=https://172.21.16.231:2380,etcd3=https://172.21.16.240:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.17.4:2379"
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_AUTO_TLS="true"
```

- k8s-master-02

```bash
# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.21.16.231:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.16.231:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.16.231:2380"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.17.4:2380,etcd2=https://172.21.16.231:2380,etcd3=https://172.21.16.240:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.16.231:2379"
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_AUTO_TLS="true"
```

- k8s-master-03

```bash
# cat /etc/etcd/etcd.conf
# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_SNAPSHOT_COUNT="100"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://172.21.16.240:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.21.16.240:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
ETCD_MAX_WALS="5"
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.21.16.240:2380"
ETCD_INITIAL_CLUSTER="etcd1=https://172.21.17.4:2380,etcd2=https://172.21.16.231:2380,etcd3=https://172.21.16.240:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.21.16.240:2379"
# [security]
ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/etcd-ca.pem"
ETCD_PEER_AUTO_TLS="true"
```
2.3.4 Configure the etcd systemd unit
```bash
# cat /lib/systemd/system/etcd.service
```
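The unit file contents are not included in the source. A minimal sketch modeled on the etcd RPM packaging (an assumption, not the author's exact file) would be:

```ini
[Unit]
Description=Etcd Server
After=network.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
# All ETCD_* variables are read from the etcd.conf files shown above
EnvironmentFile=-/etc/etcd/etcd.conf
User=etcd
ExecStart=/usr/bin/etcd
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target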
2.3.5 etcd user and permissions
```bash
# groupadd -r etcd
```
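Only the first command survives in the source; a plausible completion, following the RPM conventions referenced above (assumed, not the author's exact commands), is:

```bash
# Create the etcd system user, hand over the data and cert directories, then start the service
groupadd -r etcd
useradd -r -g etcd -d /var/lib/etcd -s /sbin/nologin -c "etcd user" etcd
chown -R etcd:etcd /var/lib/etcd /etc/etcd/ssl
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
```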
2.3.6 Verify etcd
Because etcd is secured with TLS certificates, etcdctl commands must pass the certificate files.
List the members:
```bash
# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/etcd/ssl/etcd-ca.pem member list
93c04a995ff8aa8: name=etcd3 peerURLs=https://172.21.16.240:2380 clientURLs=https://172.21.16.240:2379 isLeader=false
7cc4daf6e4db3a8a: name=etcd2 peerURLs=https://172.21.16.231:2380 clientURLs=https://172.21.16.231:2379 isLeader=false
ec7ea930930d012e: name=etcd1 peerURLs=https://172.21.17.4:2380 clientURLs=https://172.21.17.4:2379 isLeader=true
```

Check the cluster health:
```bash
# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/etcd/ssl/etcd-ca.pem cluster-health
member 93c04a995ff8aa8 is healthy: got healthy result from https://172.21.16.240:2379
member 7cc4daf6e4db3a8a is healthy: got healthy result from https://172.21.16.231:2379
member ec7ea930930d012e is healthy: got healthy result from https://172.21.17.4:2379
cluster is healthy
```
3. Deploying Kubernetes
3.1 Introduction
Recent releases move ever closer to a fully TLS + RBAC secured setup, so this installation enables most of that configuration: kube-controller-manager and kube-scheduler no longer connect to kube-apiserver's unauthenticated 8080 port, anonymous access to the kubelet API endpoints is disabled, RBAC authorization is enabled, and so on. To support this authentication, the following certificates must be signed.
3.2 Create the CA
3.2.1 Create the CA configuration files
kubernetes-ca-csr.json (the cluster root CA certificate):

```bash
# mkdir ssl && cd ssl/
# cat kubernetes-ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "kubernetes",
"OU": "System"
}
],
"ca": {
"expiry": "87600h"
}
}
```

- "CN": Common Name. kube-apiserver extracts this field from the certificate as the requesting user's name; browsers use it to verify a site's legitimacy.
- "O": Organization. kube-apiserver extracts this field as the group the requesting user belongs to.
kubernetes-gencert.json

The signing profile used when generating the other certificates:

```bash
# cat kubernetes-gencert.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
```

kube-apiserver-csr.json

The certificate for the apiserver's TLS-authenticated port:

```bash
# cat kube-apiserver-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"10.254.0.1",
"localhost",
"172.21.16.45",
"*.master.kubernetes.node",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "kubernetes",
"OU": "System"
}
]
}
```

- 172.21.16.45: the VIP address.
- If the hosts field is not empty, it must list every IP or domain name authorized to use the certificate, including the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.254.0.1.
kube-controller-manager-csr.json
The certificate the controller manager uses to connect to the apiserver; its own secure port 10257 also serves this certificate:

```bash
# cat kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"hosts": [
"127.0.0.1",
"localhost",
"*.master.kubernetes.node"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-controller-manager",
"OU": "System"
}
]
}
```

kube-scheduler-csr.json

The certificate the scheduler uses to connect to the apiserver; its own secure port 10259 also serves this certificate:

```bash
# cat kube-scheduler-csr.json
{
"CN": "system:kube-scheduler",
"hosts": [
"127.0.0.1",
"localhost",
"*.master.kubernetes.node"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-scheduler",
"OU": "System"
}
]
}
```

kube-proxy-csr.json

The certificate the kube-proxy component uses to connect to the apiserver:

```bash
# cat kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kube-proxy",
"OU": "System"
}
]
}
```

kubelet-api-admin-csr.json

The certificate the apiserver uses to connect back to the kubelet on port 10250 (for example when running kubectl logs):

```bash
# cat kubelet-api-admin-csr.json
{
"CN": "system:kubelet-api-admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:kubelet-api-admin",
"OU": "System"
}
]
}
```

admin-csr.json

The certificate the cluster administrator (kubectl) uses to connect to the apiserver:

```bash
# cat admin-csr.json
{
"CN": "system:masters",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "BeiJing",
"L": "BeiJing",
"O": "system:masters",
"OU": "System"
}
]
}
```

Note: the CN and O fields in these certificates are special; most of them start with system: so that they match Kubernetes' built-in RBAC rules. See the upstream RBAC documentation for details.
3.3 Generate the certificates
```bash
# cfssl gencert --initca=true kubernetes-ca-csr.json | cfssljson --bare kubernetes-ca
```
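The source only shows the CA generation. Judging by the list of .pem files in section 3.4, the remaining certificates are presumably generated with the kubernetes profile, roughly as follows (a sketch, not the author's exact commands):

```bash
# Sign every CSR defined above with the cluster CA, using the kubernetes signing profile
for name in kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet-api-admin admin; do
  cfssl gencert --ca kubernetes-ca.pem --ca-key kubernetes-ca-key.pem \
    --config kubernetes-gencert.json --profile kubernetes "${name}-csr.json" | cfssljson --bare "${name}"
done
```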
3.4 Distribute the certificates
Copy the generated certificates and private keys (the .pem files) to every machine; the Kubernetes components use these TLS certificates to encrypt their communication.
1) The generated CA certificates and key files are:
- admin-key.pem
- admin.pem
- kube-apiserver-key.pem
- kube-apiserver.pem
- kube-controller-manager-key.pem
- kube-controller-manager.pem
- kubelet-api-admin-key.pem
- kubelet-api-admin.pem
- kube-proxy-key.pem
- kube-proxy.pem
- kubernetes-ca-key.pem
- kubernetes-ca.pem
- kube-scheduler-key.pem
- kube-scheduler.pem
2) Copy the certificates
Copy to the master nodes:
```bash
# mkdir -p /etc/kubernetes/ssl
# cp *.pem /etc/kubernetes/ssl/
# scp -r /etc/kubernetes k8s-master-02:/etc
# scp -r /etc/kubernetes k8s-master-03:/etc
# scp -r /etc/kubernetes node-01:/etc
# scp -r /etc/kubernetes node-02:/etc
```

Create the working directories:

```bash
# mkdir -p /var/log/kube-audit && mkdir -p /var/lib/kubelet && mkdir -p /usr/libexec
```
4. Create the kubeconfig Files
kubelet, kube-proxy, and the other components on the Node machines must authenticate and be authorized when communicating with the kube-apiserver on the masters. Since Kubernetes 1.4, the kube-apiserver supports TLS Bootstrapping, which issues client TLS certificates on behalf of clients so that a certificate does not have to be generated for each one by hand; at present this feature only issues certificates for the kubelet.
4.1 Files to generate
- bootstrap.kubeconfig: configuration used by the kubelet during the TLS Bootstrap phase
- kube-controller-manager.kubeconfig: configuration needed for the controller manager's secure port and RBAC authentication
- kube-scheduler.kubeconfig: configuration needed for the scheduler's secure port and RBAC authentication
- kube-proxy.kubeconfig: configuration the proxy component uses to connect to the apiserver
- audit-policy.yaml: apiserver audit log policy file
- bootstrap.secret.yaml: the kubelet TLS Bootstrap phase uses a Bootstrap Token, which must be created in advance
4.2 Create the kubelet bootstrapping kubeconfig
Before doing this, download the Kubernetes binary release and copy the relevant binaries into /usr/bin. Download the release:
```bash
# wget https://dl.k8s.io/v1.13.3/kubernetes-server-linux-amd64.tar.gz
```
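The unpacking step is not shown; the server binaries live under kubernetes/server/bin in this tarball, so the mv command below assumes something like:

```bash
# Unpack the release and change into the directory holding the server binaries
tar -xzf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
```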
- Copy on the master nodes
```bash
# mv apiextensions-apiserver cloud-controller-manager hyperkube kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubectl kubelet mounter kubeadm /usr/bin/ && cd && rm -rf kubernetes kubernetes-server-linux-amd64.tar.gz
```
4.2.1 Generate the bootstrap prerequisites
- master-01

The kubeconfig files generated below all point at the encrypted 6443 port; this variable sets the apiserver address written into them, here the master VIP 172.21.16.45:6443.

```bash
# export KUBE_APISERVER="https://172.21.16.45:6443"
```
- Generate the Bootstrap Token

```bash
# BOOTSTRAP_TOKEN_ID=$(head -c 6 /dev/urandom | md5sum | head -c 6)
# BOOTSTRAP_TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
# BOOTSTRAP_TOKEN="${BOOTSTRAP_TOKEN_ID}.${BOOTSTRAP_TOKEN_SECRET}"
# echo "Bootstrap Token: ${BOOTSTRAP_TOKEN}"
```
4.2.2 Generate the kubelet TLS bootstrap configuration
```bash
# kubectl config set-cluster kubernetes \
```
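Only the first line of the command survives. The usual sequence for building bootstrap.kubeconfig (file names taken from the sections above; the exact flags are an assumption, not the author's original) looks like:

```bash
# Cluster entry: the CA and the apiserver VIP address
kubectl config set-cluster kubernetes \
  --certificate-authority=kubernetes-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# User entry: authenticate with the bootstrap token generated above
kubectl config set-credentials "system:bootstrap:${BOOTSTRAP_TOKEN_ID}" \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Context tying the two together
kubectl config set-context default \
  --cluster=kubernetes \
  --user="system:bootstrap:${BOOTSTRAP_TOKEN_ID}" \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```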
4.2.3 Generate the kube-controller-manager configuration
```bash
# kubectl config set-cluster kubernetes \
```
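Again only the first line remains. A sketch of the typical certificate-based kubeconfig, using the kube-controller-manager certificate generated earlier (flags are assumptions), is shown below; the kube-scheduler and kube-proxy kubeconfigs in 4.2.4 and 4.2.5 follow the same pattern with their respective certificates.

```bash
kubectl config set-cluster kubernetes \
  --certificate-authority=kubernetes-ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-controller-manager.kubeconfig
# Authenticate with the client certificate whose CN is system:kube-controller-manager
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
```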
4.2.4 Generate the kube-scheduler configuration
```bash
# kubectl config set-cluster kubernetes \
```
4.2.5 Generate the kube-proxy configuration
```bash
# kubectl config set-cluster kubernetes \
```
4.2.6 Generate the apiserver audit policy file
```bash
# cat >> audit-policy.yaml <<EOF
```
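The heredoc body is missing from the source. A minimal policy that logs every request at the Metadata level (a common choice in comparable guides, assumed here) would be:

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Log request metadata (user, timestamp, resource, verb) for all requests
  - level: Metadata
```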
4.2.7 Generate the TLS bootstrap token secret manifest
```bash
# cat >> bootstrap.secret.yaml <<EOF
```
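The manifest body is also missing. The standard shape of a bootstrap token Secret (field names follow the upstream convention; the token values come from the variables generated in 4.2.1) is roughly:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # The name must be bootstrap-token-<token id>
  name: bootstrap-token-${BOOTSTRAP_TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token"
  token-id: ${BOOTSTRAP_TOKEN_ID}
  token-secret: ${BOOTSTRAP_TOKEN_SECRET}
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
```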
4.3 Copy the files
Copy the files just generated into the /etc/kubernetes directory.
```bash
# master nodes
```
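Only the leading comment survives; a plausible completion (assumed, not the author's exact commands) is:

```bash
# Master nodes: install all kubeconfigs plus the audit policy and bootstrap secret
cp *.kubeconfig audit-policy.yaml bootstrap.secret.yaml /etc/kubernetes/
scp /etc/kubernetes/{*.kubeconfig,audit-policy.yaml,bootstrap.secret.yaml} k8s-master-02:/etc/kubernetes/
scp /etc/kubernetes/{*.kubeconfig,audit-policy.yaml,bootstrap.secret.yaml} k8s-master-03:/etc/kubernetes/
# Node machines typically only need the bootstrap and kube-proxy kubeconfigs
scp /etc/kubernetes/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node-01:/etc/kubernetes/
scp /etc/kubernetes/{bootstrap.kubeconfig,kube-proxy.kubeconfig} node-02:/etc/kubernetes/
```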
4.4 Set up ipvs and its dependencies
The kube-proxy components in this version all use ipvs for load balancing, so for kube-proxy to work correctly the ipvs sysctl settings and related dependencies must be prepared in advance (on every node).
```bash
# cat >> /etc/sysctl.conf <<EOF
```
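The heredoc body is not included; commonly used values in comparable setups (assumed, adjust to your environment) are:

```bash
cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply the settings immediately
sysctl -p
```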
For details on enabling ipvs in Kubernetes, see the official documentation and references.
```bash
# yum -y install ipvsadm
```
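Besides installing ipvsadm, the ipvs kernel modules usually need to be loaded on every node; a common approach (assumed, not shown in the source) is:

```bash
# Load the ipvs modules now and on every boot
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Verify the modules are present
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```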
5. Configure and Start kube-apiserver
5.1 Create the unit file
- kube-apiserver.service
```bash
# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
```
5.2 apiserver configuration file
```bash
# cat /etc/kubernetes/apiserver
```
- --client-ca-file: the client CA
- --endpoint-reconciler-type: the master endpoint reconciliation strategy
- --kubelet-client-certificate, --kubelet-client-key: the certificate the master uses to connect back to the kubelet
- --service-account-key-file: the key used to verify service account token signatures
- --tls-cert-file, --tls-private-key-file: the certificate for the apiserver's 6443 port

See the upstream documentation for the full parameter reference.
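The /etc/kubernetes/apiserver file itself is truncated above. A partial sketch illustrating the flags just described, using the environment variable names referenced by the unit file (addresses, CIDRs, and file names are assumptions based on earlier sections, not the author's exact file):

```bash
# /etc/kubernetes/apiserver (excerpt, illustrative only)
KUBE_API_ADDRESS="--advertise-address=172.21.17.4 --bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=https://172.21.17.4:2379,https://172.21.16.231:2379,https://172.21.16.240:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
  --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/kubelet-api-admin.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/kubelet-api-admin-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \
  --etcd-certfile=/etc/etcd/ssl/etcd.pem \
  --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
  --authorization-mode=Node,RBAC \
  --enable-bootstrap-token-auth=true \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --audit-log-path=/var/log/kube-audit/audit.log"
```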
5.2.1 Start kube-apiserver
```bash
# systemctl daemon-reload
```
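The snippet is truncated; the remaining commands presumably follow the usual systemd pattern (the controller manager and scheduler start steps below work the same way):

```bash
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
# Confirm the service came up cleanly
systemctl status kube-apiserver
```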
5.3 Configure kube-controller-manager
Create the kube-controller-manager service configuration files.
5.3.1 kube-controller-manager unit file
```bash
# cat /usr/lib/systemd/system/kube-controller-manager.service
```
5.3.2 controller-manager configuration file
```bash
# cat /etc/kubernetes/controller-manager
```
The controller manager binds its insecure port 10252 to 127.0.0.1 so that kubectl get cs returns correct results, and binds its secure port 10257 to 0.0.0.0 so it can be reached by other components. Because the controller manager now connects to the apiserver's authenticated 6443 port, the --use-service-account-credentials option is needed so that it creates a separate service account per controller (the default system:kube-controller-manager user does not have sufficient permissions). A partial configuration sketch follows.
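A partial sketch of /etc/kubernetes/controller-manager reflecting those settings (the variable name and flag values are assumptions, not the author's exact file):

```bash
# /etc/kubernetes/controller-manager (excerpt, illustrative only)
KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1 \
  --port=10252 \
  --bind-address=0.0.0.0 \
  --secure-port=10257 \
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
  --use-service-account-credentials=true \
  --cluster-signing-cert-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/ssl/kubernetes-ca-key.pem \
  --root-ca-file=/etc/kubernetes/ssl/kubernetes-ca.pem \
  --service-account-private-key-file=/etc/kubernetes/ssl/kubernetes-ca-key.pem \
  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem"
```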
```bash
# kubectl get componentstatuses
```
5.3.3 Start kube-controller-manager
```bash
# systemctl daemon-reload
```
5.4 Configure kube-scheduler
Create the kube-scheduler service configuration files.
5.4.1 kube-scheduler unit file
```bash
# cat /lib/systemd/system/kube-scheduler.service
```
5.4.2 scheduler configuration file
```bash
# cat /etc/kubernetes/scheduler
```
Like the controller manager, the scheduler binds its insecure port to localhost and exposes its secure port externally; a partial sketch follows.
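A partial sketch of /etc/kubernetes/scheduler along the same lines (the variable name and values are assumptions, not the author's exact file):

```bash
# /etc/kubernetes/scheduler (excerpt, illustrative only)
KUBE_SCHEDULER_ARGS="--address=127.0.0.1 \
  --port=10251 \
  --bind-address=0.0.0.0 \
  --secure-port=10259 \
  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig"
```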
5.4.3 Start kube-scheduler
```bash
# systemctl daemon-reload
```
5.5 Verify the master nodes
```bash
# kubectl get componentstatuses
```
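With all three control-plane components running, the output typically looks similar to the following (the exact etcd entries depend on the cluster; shown here as an expected-output illustration, not the author's captured output):

```bash
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
```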
At this point the master node deployment is complete.
For Kubernetes high availability, HAProxy is used to proxy the apiservers; see the HAProxy installation guide.