Before you begin

To follow this guide, you will need:

  • One or more machines running a deb/rpm-compatible Linux distribution, e.g. Ubuntu or CentOS.
  • 2 GB or more of RAM per machine; with less, your applications will be constrained.
  • At least 2 CPUs on the machine used as the control-plane node.
  • Full network connectivity between all machines in the cluster. A public or private network is fine.

Opening ports

  • If you are deploying on virtual machines, disabling the system firewall is sufficient.
  • If you are deploying on cloud servers, open the following ports in your server's security group policy.

Ports required by Kubernetes

See the official Kubernetes documentation.

Control plane

Protocol  Direction  Port Range   Purpose                  Used By
TCP       Inbound    6443         Kubernetes API server    All
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        Kubelet API              Self, control plane
TCP       Inbound    10259        kube-scheduler           Self
TCP       Inbound    10257        kube-controller-manager  Self

Although the etcd ports are listed in the control-plane section, you can also host your own etcd cluster externally or use custom ports.

Worker nodes

Protocol  Direction  Port Range    Purpose            Used By
TCP       Inbound    10250         Kubelet API        Self, control plane
TCP       Inbound    30000-32767   NodePort Services  All
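If you keep a firewall or security group in place, it helps to verify reachability of the ports above before continuing. This is a small sketch of my own (not from the official docs): `check_port` probes a TCP port using bash's built-in /dev/tcp pseudo-device, so it needs no nc or nmap; `k8s-master` is the hostname assumed from the hosts file used later in this guide.

```shell
#!/bin/bash
# check_port HOST PORT: report whether a TCP port accepts connections.
# Uses bash's /dev/tcp pseudo-device; the timeout keeps probes short.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "</dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port open"
  else
    echo "$host:$port closed"
  fi
}

# Probe the control-plane ports from the table above
for port in 6443 2379 2380 10250 10259 10257; do
  check_port k8s-master "$port"
done
```

A "closed" result means either the service is not running yet or a firewall/security-group rule is blocking the port.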

Ports required by the Calico network plugin

See the official Calico documentation.

Network requirements

Make sure your hosts and firewalls allow the necessary traffic for your configuration.

Configuration  Host(s)  Connection type  Port/protocol
BGP            All      Bidirectional    TCP 179

Basic commands

View pod details

kubectl describe pod xxx

Steps required on every server

Upgrade the system kernel

CentOS 7

cat > start.sh << EOF
#!/bin/bash

# Upgrade the kernel
rpm -Uvh https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Install required tools
yum install -y nano
yum install -y wget
grub2-set-default 0
reboot
EOF

Make it executable and run it:

chmod +x start.sh && ./start.sh

CentOS 8

cat > start.sh << EOF
#!/bin/bash

# Install required tools
yum install -y nano
yum install -y wget

# Upgrade the kernel
rpm -Uvh http://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
grub2-set-default 0
reboot
EOF

Make it executable and run it:

chmod +x start.sh && ./start.sh
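After the reboot (on either CentOS version), confirm which kernel you are actually running. The setup scripts further down branch on the kernel version, and a plain string comparison misorders dotted versions (it ranks 4.9 above 4.19), so this sketch compares with sort -V instead; `version_ge` is a helper name of my own, not a standard command.

```shell
#!/bin/bash
# version_ge A B: succeed when version A >= version B.
# sort -V orders dotted version strings numerically, so the smaller
# of the two versions always sorts first.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r | cut -d- -f1)
if version_ge "$kernel" 4.19; then
  echo "kernel $kernel: use the merged nf_conntrack module"
else
  echo "kernel $kernel: use the legacy nf_conntrack_ipv4 module"
fi
```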

Edit the hosts file

Adjust to match your actual server environment:

cat >> /etc/hosts << EOF
192.168.3.200 k8s-master
192.168.3.201 k8s-node-1
192.168.3.210 k8s-nfs
192.168.3.211 k8s-harbor
EOF
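A quick way to sanity-check the entries afterwards: `missing_hosts` below (a throwaway helper of mine, not a standard tool) walks a hosts-format file and prints any name the system resolver cannot resolve.

```shell
#!/bin/bash
# missing_hosts FILE: print every hostname listed in a hosts-format
# file that the system resolver (getent) fails to resolve.
missing_hosts() {
  awk '!/^#/ && NF > 1 { for (i = 2; i <= NF; i++) print $i }' "$1" |
  while read -r name; do
    getent hosts "$name" > /dev/null || echo "$name"
  done
}

# No output means every name resolves
missing_hosts /etc/hosts
```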

Node-specific steps

master

Create master.sh:

nano master.sh

Add the following content:

#!/bin/bash

startTime=`date +%Y%m%d_%H:%M:%S`
startTime_s=`date +%s`

ping -c2 k8s-master
ping -c2 k8s-node-1
ping -c2 k8s-node-2

#yum -y install ntp
#systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
swapoff -a
sed -i 's/.*swap.*/#&/g' /etc/fstab

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# centos7
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8 (the official CentOS 8 repos are offline; switch to the centos-vault repo instead)
#wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
yum clean all
yum makecache
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y

# Sync the time once an hour: append a job to root's crontab and restart the cron service
cat << EOF >> /var/spool/cron/root
0 */1 * * * /usr/sbin/ntpdate time1.tencentyun.com > /dev/null 2>&1
EOF
EOF
systemctl restart crond

# Let iptables see bridged traffic
# Without this change, the next step will fail
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# Non-persistent alternative:
#modprobe br_netfilter
# ip_forward enables packet forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
#sudo sysctl --system
sysctl -p /etc/sysctl.d/k8s.conf

# IPVS: transport-layer load balancing
# Scales better than iptables for large clusters: richer LB algorithms, plus health checks and connection retries
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
EOF

kernel_version=$(uname -r | cut -d- -f1)
echo $kernel_version

# nf_conntrack_ipv4 was merged into nf_conntrack in kernel 4.19; use sort -V
# for a real version comparison (string comparison misorders e.g. 4.9 vs 4.19)
if [ "$(printf '%s\n' 4.19 "$kernel_version" | sort -V | head -n1)" = "4.19" ]
then
modprobe -- nf_conntrack
else
modprobe -- nf_conntrack_ipv4
fi

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y containerd.io-1.6.7-3.1.el7
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
# Configure sysctl parameters; these persist across reboots
#cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
#net.bridge.bridge-nf-call-iptables = 1
#net.ipv4.ip_forward = 1
#net.bridge.bridge-nf-call-ip6tables = 1
#EOF

cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
#sudo sysctl --system
systemctl daemon-reload
systemctl enable containerd --now
systemctl restart containerd
#systemctl status containerd

mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sed -ri 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -ri 's#k8s.gcr.io\/pause:3.6#registry.aliyuncs.com\/google_containers\/pause:3.7#' /etc/containerd/config.toml
sed -ri 's#https:\/\/registry-1.docker.io#https:\/\/registry.aliyuncs.com#' /etc/containerd/config.toml
sed -ri 's#net.ipv4.ip_forward = 0#net.ipv4.ip_forward = 1#' /etc/sysctl.d/99-sysctl.conf
sudo sysctl --system
systemctl daemon-reload
sudo systemctl enable containerd && systemctl restart containerd
#sudo systemctl restart containerd

echo 1 > /proc/sys/net/ipv4/ip_forward

yum -y install kubeadm-1.24.3 kubelet-1.24.3 kubectl-1.24.3 --disableexcludes=kubernetes
systemctl enable --now kubelet

crictl config runtime-endpoint /run/containerd/containerd.sock

kubeadm config images list
kubeadm config print init-defaults > default-init.yaml

#kubeadm config images list --config kubeadm-init.yaml
#kubeadm config images pull --config kubeadm-init.yaml
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.3
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.3
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.3
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6


ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.3 k8s.gcr.io/kube-apiserver:v1.24.3
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.3 k8s.gcr.io/kube-controller-manager:v1.24.3
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 k8s.gcr.io/pause:3.7
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.3-0 k8s.gcr.io/etcd:3.5.3-0
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6

ctr -n k8s.io i ls -q
crictl images
crictl ps -a

local_ip=$(ip addr | awk '/^[0-9]+: / {}; /inet.*global.*eth/ {print gensub(/(.*)\/(.*)/, "\\1", "g", $2)}')
echo $local_ip

cat <<EOF | sudo tee init.conf
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: $local_ip
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    volume-plugin-dir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: $(hostname)
  taints:
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.24.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
scheduler: {}

EOF

kubeadm init --config init.conf > kubeadm-init.log

rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
#source /etc/profile

curl https://docs.projectcalico.org/manifests/calico.yaml -O
#kubectl apply -f calico.yaml

kubectl get node -owide

# Configure the Calico network plugin (uncomment the lines below to patch and apply calico.yaml)
# sed -i 's/# - name: CALICO_IPV4POOL_CIDR/- name: CALICO_IPV4POOL_CIDR/g' calico.yaml
# sed -i 's@# value: "192.168.0.0/16"@ value: "10.244.0.0/16"@g' calico.yaml
# sed -i '/k8s,bgp/a\ - name: IP_AUTODETECTION_METHOD\n value: "interface=eth0"' calico.yaml
# kubectl apply -f calico.yaml

endTime=`date +%Y%m%d_%H:%M:%S`
endTime_s=`date +%s`
sumTime=$(( endTime_s - startTime_s ))

echo "$startTime ---> $endTime" "Total:$sumTime seconds"

Make it executable and run it:

chmod +x master.sh && ./master.sh

node

Create node.sh:

nano node.sh

Add the following content:

#!/bin/bash

startTime=`date +%Y%m%d_%H:%M:%S`
startTime_s=`date +%s`

ping -c2 k8s-master
ping -c2 k8s-node-1
ping -c2 k8s-node-2

#yum -y install ntp
#systemctl start ntpd && systemctl enable ntpd && systemctl status ntpd
systemctl stop firewalld && systemctl disable firewalld && systemctl status firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
swapoff -a
sed -i 's/.*swap.*/#&/g' /etc/fstab

mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# centos7
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8 (the official CentOS 8 repos are offline; switch to the centos-vault repo instead)
#wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
yum clean all
yum makecache
yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y

# Sync the time once an hour: append a job to root's crontab and restart the cron service
cat << EOF >> /var/spool/cron/root
0 */1 * * * /usr/sbin/ntpdate time1.tencentyun.com > /dev/null 2>&1
EOF
systemctl restart crond

# Let iptables see bridged traffic
# Without this change, the next step will fail
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# Non-persistent alternative:
#modprobe br_netfilter
# ip_forward enables packet forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
#sudo sysctl --system
sysctl -p /etc/sysctl.d/k8s.conf

# IPVS: transport-layer load balancing
# Scales better than iptables for large clusters: richer LB algorithms, plus health checks and connection retries
yum -y install ipset ipvsadm

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
EOF

kernel_version=$(uname -r | cut -d- -f1)
echo $kernel_version

# nf_conntrack_ipv4 was merged into nf_conntrack in kernel 4.19; use sort -V
# for a real version comparison (string comparison misorders e.g. 4.9 vs 4.19)
if [ "$(printf '%s\n' 4.19 "$kernel_version" | sort -V | head -n1)" = "4.19" ]
then
modprobe -- nf_conntrack
else
modprobe -- nf_conntrack_ipv4
fi

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install -y containerd.io-1.6.7-3.1.el7
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter
# Configure sysctl parameters; these persist across reboots

cat <<EOF | sudo tee /etc/sysctl.d/99-sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
#sudo sysctl --system
systemctl daemon-reload
systemctl enable containerd --now
systemctl restart containerd

mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sed -ri 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
sed -ri 's#k8s.gcr.io\/pause:3.6#registry.aliyuncs.com\/google_containers\/pause:3.7#' /etc/containerd/config.toml
sed -ri 's#https:\/\/registry-1.docker.io#https:\/\/registry.aliyuncs.com#' /etc/containerd/config.toml
sed -ri 's#net.ipv4.ip_forward = 0#net.ipv4.ip_forward = 1#' /etc/sysctl.d/99-sysctl.conf
systemctl daemon-reload
systemctl enable containerd --now
systemctl restart containerd

#yum list kubeadm --showduplicates | sort -r
yum -y install kubeadm-1.24.3-0 kubelet-1.24.3-0 kubectl-1.24.3-0 --disableexcludes=kubernetes
#yum -y install kubeadm kubelet kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet


crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.3
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7

ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3
ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.7 k8s.gcr.io/pause:3.7

ctr -n k8s.io i ls -q
crictl images
crictl ps -a


endTime=`date +%Y%m%d_%H:%M:%S`
endTime_s=`date +%s`
sumTime=$(( endTime_s - startTime_s ))

echo "$startTime ---> $endTime" "Total:$sumTime seconds"

Make it executable and run it:

chmod +x node.sh && ./node.sh

Install the Calico network plugin

1. Edit the network configuration file, in the same directory as master.sh:

nano calico.yaml

2. Search for CALICO_IPV4POOL_CIDR to jump to the right place, and change it to your pod subnet:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

Note: in a complex network environment (multi-NIC machines where the two NICs cannot ping each other), you can pin Calico to a specific interface to avoid connectivity problems.

Search for k8s,bgp to jump to the right place, add the following after it, and change the value after the equals sign to your NIC name, e.g. ens33 or eth0:

- name: IP_AUTODETECTION_METHOD
  value: "interface=eth0"

3. After saving, apply the configuration:

kubectl apply -f calico.yaml

4. Wait for all pods to reach Running; the nodes will then become Ready:

watch kubectl get po -A

5. Once everything is Ready, check the node status; all nodes should now be Ready:

kubectl get nodes -o wide

Join the worker nodes to the cluster

1. In the installation log file (xxx.log) on the master, find the join command:

kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

2. The token is valid for 24 hours; generate a new one on the master with:

kubeadm token create --print-join-command

Deploy the Dashboard (run on the master only)

The official Kubernetes web UI.

1. Deploy:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

2. Run the following and change type: ClusterIP to type: NodePort:

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

3. Check the port:

kubectl get svc -A | grep kubernetes-dashboard

4. Define a user for obtaining a token:

cat > admin-user.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

5. Create the user:

kubectl apply -f admin-user.yaml

6. Generate a token:

kubectl -n kubernetes-dashboard create token admin-user

7. Check the port:

kubectl get svc -A | grep kubernetes-dashboard

8. Adjust the session timeout
Run the following to enter edit mode:

kubectl -n kubernetes-dashboard edit deployment kubernetes-dashboard

This opens the Deployment's configuration in your default text editor.
Find the args: section and add --token-ttl=3600. It should look like this:

containers:
- args:
  - --auto-generate-certificates
  - --namespace=kubernetes-dashboard
  - --token-ttl=3600

Save and close the file; Kubernetes will roll out the updated Deployment automatically.
This sets the session timeout to one hour (3600 seconds). Note that a longer session timeout carries a higher security risk, so make sure 3600 seconds fits your security policy.

Remove a node

1. List all nodes:

kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   109m    v1.24.3
k8s-node-1   Ready    <none>          52m     v1.24.3
k8s-node-2   Ready    <none>          3m52s   v1.24.3

2. Drain the pods off the node (this deletes local data; migrate any locally persisted data manually first):

kubectl drain k8s-node-1 --force --ignore-daemonsets --delete-emptydir-data

3. Once the drain completes, delete the node:

kubectl delete node k8s-node-1

4. On the removed node, run:

kubeadm reset

5. Rejoin the node

Same as "Join the worker nodes to the cluster" above.