Update 'MD/基于kubeadm部署kubernetes集群.md'

Commit 4e72f1da1b (parent e370ca7282) by diandian, 2023-10-29 14:43:56 +08:00
1 changed file with 229 additions and 217 deletions
<h1><center>Deploying a Kubernetes Cluster with kubeadm</center></h1>

Author: 行癫 (unauthorized copying prohibited)

------

## Part 1: Environment Preparation

Four servers: one master and three nodes. The master node must have at least 2 CPU cores.

| Node name | IP address |
| :------: | :--------: |
|  master  | 10.0.0.220 |
|  node-1  | 10.0.0.221 |
|  node-2  | 10.0.0.222 |
|  node-3  | 10.0.0.223 |
#### 1. Disable the firewall and SELinux on all servers

```shell
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
[root@localhost ~]# swapoff -a                          # disable swap for the current boot
[root@localhost ~]# sed -i 's/.*swap.*/#&/' /etc/fstab  # disable swap permanently
Note:
    Disable the swap partition on every server.
    Run on all nodes.
```
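
The fstab edit above can be sanity-checked without touching the real `/etc/fstab`. The sketch below applies the same `sed` expression to a scratch copy (the device names are illustrative, not from the original document):

```shell
# Demonstrate the swap comment-out on a scratch copy of fstab.
tmpfstab=$(mktemp)
cat > "$tmpfstab" <<'EOF'
/dev/mapper/cl-root /     xfs  defaults 0 0
/dev/mapper/cl-swap swap  swap defaults 0 0
EOF
# Same expression as above: comment out every line that mentions swap.
sed -i 's/.*swap.*/#&/' "$tmpfstab"
grep '^#' "$tmpfstab"   # shows the swap line, now commented out
```

The root filesystem line is untouched because only lines containing `swap` match the pattern.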
#### 2. Make sure the yum repositories are usable

```shell
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache fast
Note:
    Use a domestic (China-based) yum mirror.
    Run on all nodes.
```
#### 3. Set the hostnames

```shell
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# hostnamectl set-hostname node-1
[root@localhost ~]# hostnamectl set-hostname node-2
[root@localhost ~]# hostnamectl set-hostname node-3
Note:
    Run the matching command on each node.
```
#### 4. Add local name resolution

```shell
[root@master ~]# cat >> /etc/hosts <<eof
10.0.0.220 master
10.0.0.221 node-1
10.0.0.222 node-2
10.0.0.223 node-3
eof
Note:
    Run on all nodes.
```
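
Keeping the node list in one place avoids typos when the same entries are appended on four machines. A small sketch (the `nodes` variable and temp file are illustrative; on a real node the output is appended to `/etc/hosts`):

```shell
# Build the host entries from a single name:ip list.
hostsfile=$(mktemp)
nodes="master:10.0.0.220 node-1:10.0.0.221 node-2:10.0.0.222 node-3:10.0.0.223"
for entry in $nodes; do
  name=${entry%%:*}   # part before the colon
  ip=${entry##*:}     # part after the colon
  printf '%s %s\n' "$ip" "$name" >> "$hostsfile"
done
cat "$hostsfile"
```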
#### 5. Install the container runtime

```shell
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@master ~]# yum -y install docker-ce
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
[root@master ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]# systemctl restart docker

Without the systemd cgroup driver, kubeadm init later fails with:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
(See https://blog.csdn.net/qq_43762191/article/details/125567365)
Note:
    Run on all nodes.
```
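
A malformed `daemon.json` also prevents Docker from restarting, so it is worth validating before the `systemctl restart docker`. A minimal sketch, writing the same config to a scratch path for illustration (the real target is `/etc/docker/daemon.json`):

```shell
# Write the daemon.json shown above and check that it parses as JSON.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# json.tool exits non-zero on a syntax error, so a typo is caught here
# instead of at docker restart time.
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json is valid JSON"
```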

#### 6. Install kubeadm and kubelet

```shell
[root@master ~]# cat >> /etc/yum.repos.d/kubernetes.repo <<eof
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
eof
[root@master ~]# yum -y install kubeadm kubelet kubectl ipvsadm
Note:
    Run on all nodes.
    This installs the latest version; a specific version can be pinned instead, e.g. kubeadm-1.19.4.
```

#### 7. Configure the kubelet cgroup driver

```shell
[root@master ~]# cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF
Note:
    The kubelet driver must match the one configured for Docker above (systemd).
    Newer releases name the pause image k8s.gcr.io/pause:3.6.
```

#### 8. Load the kernel module

```shell
[root@master ~]# modprobe br_netfilter
Note:
    Run on all nodes.
```

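`modprobe` does not survive a reboot; a drop-in under `/etc/modules-load.d/` makes the module load persistent. A sketch writing to a temp directory for illustration (the real path would be `/etc/modules-load.d/k8s.conf`, an assumed but conventional filename):

```shell
# Persist the br_netfilter module load across reboots.
dropin_dir=$(mktemp -d)
echo 'br_netfilter' > "$dropin_dir/k8s.conf"
cat "$dropin_dir/k8s.conf"
```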
#### 9. Adjust kernel parameters

```shell
[root@master ~]# cat >> /etc/sysctl.conf <<eof
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
eof
[root@master ~]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
Note:
    Run on all nodes.
```
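
Appending to `/etc/sysctl.conf` with `cat >>` duplicates the entries if the step is ever re-run. A drop-in file is idempotent; the sketch below uses a scratch path for illustration (on a real node the target would be `/etc/sysctl.d/k8s.conf`, an assumed filename, applied with `sysctl --system`):

```shell
# Same three parameters as a self-contained sysctl drop-in file.
sysctl_file=$(mktemp)
cat > "$sysctl_file" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
# Re-running this overwrites the file instead of appending duplicates.
grep -c '=' "$sysctl_file"   # three settings present
```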
## Part 2: Deploying Kubernetes

#### 1. Download the images

```shell
https://www.xingdiancloud.cn/index.php/s/6GyinxZwSRemHPz
Note:
    After downloading, upload the images to all nodes.
```
#### 2. Import the images

```shell
[root@master ~]# cat image_load.sh
#!/bin/bash
image_path=`pwd`
for i in `ls "${image_path}" | grep tar`
do
        docker load < $i
done
[root@master ~]# bash image_load.sh
Note:
    Run on all nodes.
```
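
A variant of `image_load.sh` using a glob instead of parsing `ls` handles filenames with spaces, and a dry-run switch lets the commands be inspected first. A sketch with illustrative paths (set `DRY_RUN=` empty on a real node to actually run `docker load`):

```shell
# Dry-run image loader: prints each docker load command instead of running it.
image_dir=$(mktemp -d)
touch "$image_dir/kube-apiserver.tar" "$image_dir/pause.tar" "$image_dir/notes.txt"
DRY_RUN=echo   # set DRY_RUN= (empty) to execute for real
for tarball in "$image_dir"/*.tar; do
  $DRY_RUN docker load -i "$tarball"   # only .tar files match the glob
done
```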
#### 3. Initialize the master node

```shell
[root@master ~]# kubeadm init --kubernetes-version=1.23.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.0.220

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.220:6443 --token mzrm3c.u9mpt80rddmjvd3g \
        --discovery-token-ca-cert-hash sha256:fec53dfeacc5187d3f0e3998d65bd3e303fa64acd5156192240728567659bf4a
```

#### 4. Install the pod network plugin

```shell
[root@master ~]# wget http://www.xingdiancloud.cn:92/index.php/s/3Ad7aTxqPPja24M/download/flannel.yaml
[root@master ~]# kubectl create -f flannel.yaml
```

#### 5. Join the nodes as workers

```shell
[root@node-1 ~]# kubeadm join 10.0.0.220:6443 --token mzrm3c.u9mpt80rddmjvd3g --discovery-token-ca-cert-hash sha256:fec53dfeacc5187d3f0e3998d65bd3e303fa64acd5156192240728567659bf4a
Note:
    This uses the token generated by kubeadm init on the master.
    The token expires over time and must be regenerated (covered in a later installment).
    If the join command was not recorded, it can be regenerated with:
    kubeadm token create --print-join-command --ttl=0
```
#### 6. Check the cluster status from the master node

```shell
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   26m     v1.23.1
node-1   Ready    <none>                 4m45s   v1.23.1
node-2   Ready    <none>                 4m40s   v1.23.1
node-3   Ready    <none>                 4m46s   v1.23.1
```
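
The STATUS column is what matters after joining: any node still pulling images or waiting on the network plugin shows `NotReady`. A quick check can be scripted by counting non-Ready rows; the sketch below parses a hard-coded sample (illustrative data, with one node deliberately not ready), while on a live cluster the same `awk` would be fed from `kubectl get nodes --no-headers`:

```shell
# Count nodes whose second column (STATUS) is not "Ready".
sample='master Ready control-plane,master 26m v1.23.1
node-1 Ready <none> 4m45s v1.23.1
node-2 NotReady <none> 4m40s v1.23.1'
not_ready=$(printf '%s\n' "$sample" | awk '$2 != "Ready" {n++} END {print n+0}')
echo "nodes not ready: $not_ready"
```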