Quickly Deploy a Highly Available Kubernetes Cluster with Vultr

A while ago I signed up for a Vultr account and joined a promotion: deposit $10, get $100 of credit. The $100 expires after 30 days, and with the deadline approaching I decided to use it to build a Kubernetes cluster before it ran out. Docker's desktop client already ships with a single-node Kubernetes, but building a highly available cluster from scratch is considerably harder, hence this post. One benefit of using an overseas VPS is that we do not have to worry about network restrictions; if that is not an option, the Kubeasz tool makes deploying inside China quite convenient. Also note that this post applies only to Vultr, since it relies on some Vultr-specific features, though other cloud providers offer similar tooling.

This setup is intended for experiments only and must not be used as a production environment. A production deployment would additionally need to avoid the following:

  1. To simplify the process, every node uses its public IP and has the firewall disabled.
  2. The kube-apiserver load balancer uses a fixed IP, while the official tutorial recommends a DNS name (see the sketch below).
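
For reference, with a DNS name in front of the load balancer, initialization would look like the following sketch; k8s-api.example.com is a hypothetical domain standing in for the fixed IP used in this post:

# Point a DNS A record at the load balancer, then init against the name;
# repointing the record later lets you replace the load balancer without reissuing certs.
kubeadm init --control-plane-endpoint "k8s-api.example.com:6443" --upload-certs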

Preparation

Cluster Topology

Before building anything, let's plan the cluster's final topology: six nodes in total (1 load balancer node + 3 control plane nodes + 2 worker nodes).
(Figure: k8s-cluster-topo — the cluster topology)

Node Initialization

Adding the Cluster Nodes

To simplify the process, installing the container runtime (Docker) together with kubeadm, kubectl, and kubelet is bundled into a single script that runs automatically once each node finishes provisioning.

# (Install Docker CE)
## Set up the repository
### Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2

## Add the Docker repository
yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE
yum update -y && yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11

## Create /etc/docker
mkdir /etc/docker

# Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker

# Install kubeadm, kubelet, and kubectl

## Disable swap
sed -i '/swap/d' /etc/fstab
swapoff -a

## Disable the firewall
systemctl disable firewalld
systemctl stop firewalld

## Update iptables settings
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

## Configure the package repo and install
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet
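
Once a node boots you can confirm the script did its job; these quick checks are my own addition, not part of the original script:

# Docker should report the systemd cgroup driver, and kubelet should be enabled
docker info | grep -i 'cgroup driver'
kubeadm version -o short
systemctl is-enabled kubelet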

The detailed steps are as follows:

  1. In the Vultr control panel, click "+" and choose Deploy New Server.

  2. Choose Server: keep the default, Cloud Compute.

  3. Server Location: pick Japan or South Korea (be sure to pick the same region as the load balancer).

  4. Server Type: choose CentOS 7 x86 Without SELinux.

  5. Server Size: choose at least "2 CPU / 4G Memory".

  6. Startup Script: add a new script, pasting in the script above, then select it; here it is named k8s-node-init.

  7. SSH Keys: select your own public key for convenient login.

  8. Server Hostname & Label: name them however you like. Clicking the "+" in the lower left corner increases the node count.

  9. Click Deploy.

Adding the Load Balancer

  1. Switch to the Load Balancers tab and click "+" to add a load balancer.
  2. Choose the same region as the nodes above.
  3. Configure the load balancer's forwarding rules and health check, as shown in the k8s-api-lb figure.
  4. Note down the load balancer's IP and configured port, for example 141.164.49.152:43.

Deploying the Cluster with kubeadm

Initializing the Master Node

1. In the load balancer panel, click Attach and select the k8s-master-01 node so the load balancer proxies it (the other master nodes cannot be attached yet).
2. Log in to k8s-master-01 and run:

[root@k8s-master-01 ~]# kubeadm init --control-plane-endpoint "141.164.49.152:43" --upload-certs

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 141.164.49.152:43 --token 3il8rw.12u0wd0gdalsw3g3 \
--discovery-token-ca-cert-hash sha256:aae75ca90283a58164e05e1625716b272c4a6ecb16a32ce9cf65a5175861e25d \
--control-plane --certificate-key fc3ddd0dd2b92f692fdc0b11a00930c32bab0c91bcf1fa277f52f9e7c2f5f590

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!

As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 141.164.49.152:43 --token 3il8rw.12u0wd0gdalsw3g3 \
--discovery-token-ca-cert-hash sha256:aae75ca90283a58164e05e1625716b272c4a6ecb16a32ce9cf65a5175861e25d
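
The join token above expires after 24 hours and the uploaded certificate key after two. If they lapse before all nodes have joined, they can be regenerated; this is standard kubeadm usage, not shown in the original:

# Print a fresh worker join command, creating a new token
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs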

Joining the Other Nodes to the Cluster

1. Log in to k8s-master-02 and then k8s-master-03, and on each run:

kubeadm join 141.164.49.152:43 --token 3il8rw.12u0wd0gdalsw3g3 --discovery-token-ca-cert-hash sha256:aae75ca90283a58164e05e1625716b272c4a6ecb16a32ce9cf65a5175861e25d --control-plane --certificate-key fc3ddd0dd2b92f692fdc0b11a00930c32bab0c91bcf1fa277f52f9e7c2f5f590

2. Log in to k8s-worker-01 and then k8s-worker-02, and on each run:

kubeadm join 141.164.49.152:43 --token 3il8rw.12u0wd0gdalsw3g3 --discovery-token-ca-cert-hash sha256:aae75ca90283a58164e05e1625716b272c4a6ecb16a32ce9cf65a5175861e25d

3. Update the load balancer configuration so that k8s-master-02 and k8s-master-03 are attached as well.
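
All three apiservers now sit behind the load balancer. As a quick sanity check (my own addition; on a default kubeadm cluster the /version endpoint is readable anonymously), each request should come back with the apiserver's version:

curl -k https://141.164.49.152:43/version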

Adding a Working User

1. Log in to any master node, create a kube user, and grant it sudo privileges:

useradd kube
echo "kube ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2. Switch to the kube user and initialize the kubectl configuration:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
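
With the config in place, kubectl should reach the apiserver through the load balancer; this check is my own addition:

# Should report the control plane at https://141.164.49.152:43
kubectl cluster-info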

3. Install the CNI plugin (Weave Net):

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
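
Weave Net runs as a DaemonSet in kube-system; assuming the DaemonSet keeps its upstream name weave-net, you can wait for it to settle before checking the nodes:

# Blocks until a weave-net pod is ready on every node
kubectl -n kube-system rollout status daemonset/weave-net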

4. Run kubectl get nodes to check the nodes:

[kube@k8s-master-01 root]$ kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   44m     v1.18.4
k8s-master-02   Ready    master   8m24s   v1.18.4
k8s-master-03   Ready    master   6m29s   v1.18.4
k8s-worker-01   Ready    <none>   118s    v1.18.4
k8s-worker-02   Ready    <none>   68s     v1.18.4

Run kubectl get pods -A to check the pods:

[kube@k8s-master-01 root]$ kubectl get pods -A
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-lknqr                1/1     Running   0          8m46s
kube-system   coredns-66bff467f8-nn8kq                1/1     Running   0          8m46s
kube-system   etcd-k8s-master-01                      1/1     Running   0          8m47s
kube-system   etcd-k8s-master-02                      1/1     Running   0          7m26s
kube-system   etcd-k8s-master-03                      1/1     Running   0          5m37s
kube-system   kube-apiserver-k8s-master-01            1/1     Running   0          8m47s
kube-system   kube-apiserver-k8s-master-02            1/1     Running   0          7m26s
kube-system   kube-apiserver-k8s-master-03            1/1     Running   0          5m38s
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   1          8m47s
kube-system   kube-controller-manager-k8s-master-02   1/1     Running   0          7m25s
kube-system   kube-controller-manager-k8s-master-03   1/1     Running   0          5m38s
kube-system   kube-proxy-7fnxm                        1/1     Running   0          4m58s
kube-system   kube-proxy-chgxr                        1/1     Running   0          5m38s
kube-system   kube-proxy-jttkl                        1/1     Running   0          7m27s
kube-system   kube-proxy-rsj9h                        1/1     Running   0          4m39s
kube-system   kube-proxy-w6cbl                        1/1     Running   0          8m45s
kube-system   kube-scheduler-k8s-master-01            1/1     Running   1          8m47s
kube-system   kube-scheduler-k8s-master-02            1/1     Running   0          7m25s
kube-system   kube-scheduler-k8s-master-03            1/1     Running   0          5m38s
kube-system   weave-net-4mlp2                         2/2     Running   0          66s
kube-system   weave-net-4z7sz                         2/2     Running   0          66s
kube-system   weave-net-6f2f8                         2/2     Running   0          66s
kube-system   weave-net-8hkkr                         2/2     Running   0          66s
kube-system   weave-net-djz7j                         2/2     Running   0          66s

Installing the Dashboard

1. As the kube user, run:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
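
The manifest creates a kubernetes-dashboard namespace containing a Deployment of the same name; waiting on the rollout (a check of my own, not in the original) confirms the install finished:

kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard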

2. Add a ServiceAccount and bind it to a role:

cat > ~/sample-user.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

kubectl apply -f ~/sample-user.yaml

3. Generate a login token:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
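
On this v1.18 cluster the ServiceAccount's token Secret is created automatically. On Kubernetes 1.24 and later it no longer is; there you would request a token explicitly instead (noted here for readers on newer versions):

kubectl -n kubernetes-dashboard create token admin-user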

4. Expose the service. Here we simply edit it, changing type: ClusterIP to type: NodePort, then save.

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
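
An equivalent, non-interactive way to make the same change is a patch; this is an alternative sketch, not what the original does:

kubectl -n kubernetes-dashboard patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'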

5. Look up the assigned port, visit https://your-ip:your-port, and enter the token generated in step 3:

kubectl -n kubernetes-dashboard get service kubernetes-dashboard
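
To extract just the NodePort (assuming the service exposes a single port, as the recommended manifest defines):

kubectl -n kubernetes-dashboard get service kubernetes-dashboard \
  -o jsonpath='{.spec.ports[0].nodePort}'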

6. Grant the dashboard component its permissions. On first entering the dashboard, a warning notification appears in the upper right corner: serviceaccounts is forbidden: User "system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard" cannot list resource "serviceaccounts" in API group "" in the namespace "default". It appears that starting with dashboard 2.0, these permissions must be granted to the dashboard externally:

cat > ~/dashboard-auth.yaml <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

  # Other resources
  - apiGroups: [""]
    resources: ["nodes", "namespaces", "pods", "serviceaccounts", "services", "configmaps", "endpoints", "persistentvolumeclaims", "replicationcontrollers", "replicationcontrollers/scale", "persistentvolumes", "bindings", "events", "limitranges", "namespaces/status", "pods/log", "pods/status", "replicationcontrollers/status", "resourcequotas", "resourcequotas/status"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["apps"]
    resources: ["daemonsets", "deployments", "deployments/scale", "replicasets", "replicasets/scale", "statefulsets"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["autoscaling"]
    resources: ["horizontalpodautoscalers"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["batch"]
    resources: ["cronjobs", "jobs"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["extensions"]
    resources: ["daemonsets", "deployments", "deployments/scale", "networkpolicies", "replicasets", "replicasets/scale", "replicationcontrollers/scale"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "networkpolicies"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "volumeattachments"]
    verbs: ["get", "list", "watch"]

  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["clusterrolebindings", "clusterroles", "roles", "rolebindings"]
    verbs: ["get", "list", "watch"]
EOF

kubectl apply -f ~/dashboard-auth.yaml

7. The final dashboard UI is shown in the k8s-init-dashboard figure.

References

Kubernetes.io
Vultr Docs
Kubernetes Dashboard