Building a k8s Cluster (kubeadm Method)

Preface

This guide uses the kubeadm approach; of course, if you are an expert you can show off with the Hard Way instead.

I. Version Alignment

Note: in practice you will run into compatibility problems between k8s and Docker versions, so pay close attention to versions. The versions in this document have been verified hands-on; other combinations you will have to explore yourself. If your environment allows, use the versions below and you will see the rainbow.

  • Docker 18.09.0

  • kubeadm-1.14.0-0

  • kubelet-1.14.0-0

  • kubectl-1.14.0-0

    • k8s.gcr.io/kube-apiserver:v1.14.0

    • k8s.gcr.io/kube-controller-manager:v1.14.0

    • k8s.gcr.io/kube-scheduler:v1.14.0

    • k8s.gcr.io/kube-proxy:v1.14.0

    • k8s.gcr.io/pause:3.1

    • k8s.gcr.io/etcd:3.3.10

    • k8s.gcr.io/coredns:1.3.1

  • calico:v3.9
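Since the whole guide hinges on these exact versions, a quick check on each node helps catch drift early. A minimal sketch (the `expect_version` helper is our own illustration, not part of any toolchain):

```shell
# Compare an installed version string against the pinned one.
# expect_version is an illustrative helper, not a standard tool.
expect_version() {
  # $1 component name, $2 expected version, $3 detected version
  if [ "$2" = "$3" ]; then
    echo "$1 OK ($3)"
  else
    echo "$1 MISMATCH: want $2, got $3"
  fi
}
# Only query tools that are actually installed on this node.
if command -v docker >/dev/null 2>&1; then
  expect_version docker 18.09.0 "$(docker version --format '{{.Server.Version}}')"
fi
if command -v kubeadm >/dev/null 2>&1; then
  expect_version kubeadm v1.14.0 "$(kubeadm version -o short)"
fi
```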

     

     

II. Installing Docker (choose your version deliberately)

1. Installing Docker on Ubuntu (all nodes)

#1 Remove any pre-existing Docker packages
sudo apt-get remove docker docker-engine docker.io containerd runc
[If the removal leaves remnants behind, see this cleanup guide](https://www.cnblogs.com/shmily3929/p/12085163.html)

#2 Update the package index
sudo apt-get update

#3 Install packages that let apt use HTTPS repositories
sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release

#4 Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

#5 Add the apt repository (pick the block matching your CPU architecture)

>>>x86_64/amd64<<<
echo \
 "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
 
>>>armhf<<<
echo \
 "deb [arch=armhf signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
 
>>>arm64<<<
echo \
 "deb [arch=arm64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
 $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
 
#6 List the docker-ce versions available to install
apt-cache madison docker-ce

#7 Install the chosen version (omitting the version string installs the latest)
sudo apt-get install docker-ce=<VERSION_STRING> docker-ce-cli=<VERSION_STRING> containerd.io
# This guide uses 18.09.0:
sudo apt-get install docker-ce=5:18.09.0~3-0~ubuntu-bionic docker-ce-cli=5:18.09.0~3-0~ubuntu-bionic containerd.io

#8 Verify the installation
docker --version
# Docker version 18.09.0, build 4d60db4  <- indicates success

2. Installing Docker on CentOS (all nodes)

#1 Remove any pre-existing Docker packages
sudo yum remove docker \
                docker-client \
                docker-client-latest \
                docker-common \
                docker-latest \
                docker-latest-logrotate \
                docker-logrotate \
                docker-engine
                 
#2 Install yum utilities
sudo yum install -y yum-utils

#3 Set up the repository
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
   
#4 List the versions available to install
yum list docker-ce --showduplicates | sort -r

#5 Install the chosen version (omitting the version string installs the latest)
sudo yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
# Install a specific version by its fully qualified package name: the package
# name (docker-ce) plus the version string (second column of the list above),
# from the first colon (:) up to the first hyphen, joined by a hyphen (-).
# For example: docker-ce-18.09.0.
sudo yum install docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io

#6 Start Docker, and enable it so it survives a reboot
sudo systemctl start docker
sudo systemctl enable docker

#7 Verify the installation
docker --version
# Docker version 18.09.0, build 4d60db4  <- indicates success

 

 

III. The 7 Required k8s Images (all nodes)

#!/bin/bash
KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-proxy:${KUBE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-scheduler:${KUBE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-controller-manager:${KUBE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-apiserver:${KUBE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/pause:${KUBE_PAUSE_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/etcd:${ETCD_VERSION}
docker pull registry.cn-hangzhou.aliyuncs.com/dsz-docker/coredns:${CORE_DNS_VERSION}

docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-proxy:${KUBE_VERSION} k8s.gcr.io/kube-proxy:${KUBE_VERSION}
docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-scheduler:${KUBE_VERSION} k8s.gcr.io/kube-scheduler:${KUBE_VERSION}
docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-controller-manager:${KUBE_VERSION} k8s.gcr.io/kube-controller-manager:${KUBE_VERSION}
docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/kube-apiserver:${KUBE_VERSION} k8s.gcr.io/kube-apiserver:${KUBE_VERSION}
docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/pause:${KUBE_PAUSE_VERSION} k8s.gcr.io/pause:${KUBE_PAUSE_VERSION}
docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/etcd:${ETCD_VERSION} k8s.gcr.io/etcd:${ETCD_VERSION}
docker tag registry.cn-hangzhou.aliyuncs.com/dsz-docker/coredns:${CORE_DNS_VERSION} k8s.gcr.io/coredns:${CORE_DNS_VERSION}

Note: all of the above images have been mirrored to the dsz-docker Alibaba Cloud registry for fast pulls from inside China.
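The fourteen pull/tag lines in the script follow a single pattern, so they can also be generated with a loop. A sketch that only prints the commands, so you can inspect the list before piping it to `sh`:

```shell
# Print the docker pull/tag commands for all seven images.
MIRROR=registry.cn-hangzhou.aliyuncs.com/dsz-docker
TARGET=k8s.gcr.io
gen_pull_tag() {
  for img in kube-proxy:v1.14.0 kube-scheduler:v1.14.0 \
             kube-controller-manager:v1.14.0 kube-apiserver:v1.14.0 \
             pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    echo "docker pull ${MIRROR}/${img}"
    echo "docker tag ${MIRROR}/${img} ${TARGET}/${img}"
  done
}
gen_pull_tag          # review the output first
# gen_pull_tag | sh   # then execute it
```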

 

IV. Cluster Setup Groundwork

1. Ubuntu (all nodes)

#1 Hostname mappings
cat /etc/hosts
# Add these entries on every machine in the cluster
192.168.10.33 master
192.168.10.34 node01
#2 Set the kernel parameters k8s needs
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
#3 Script the IPVS kernel modules
mkdir -p /etc/sysconfig/modules  # this directory does not exist by default on Ubuntu
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
#4 Make the module script executable, run it, and confirm the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
#5 Load the bridge netfilter module
modprobe br_netfilter
#6 Apply the sysctl settings
sysctl -p /etc/sysctl.d/k8s.conf
#7 Turn off swap
swapoff -a
#8 Comment the swap entry out of /etc/fstab so it stays off after reboot
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

2. CentOS (all nodes)

#1 Master
# Set the master's hostname, then update the hosts file
sudo hostnamectl set-hostname master

#2 A worker
# Set worker node01's hostname, then update the hosts file
sudo hostnamectl set-hostname node01

#3 On both machines
vi /etc/hosts
192.168.10.33 master
192.168.10.34 node01

#4 Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
#5 Put SELinux in permissive mode
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
#6 Turn off swap, and comment it out of /etc/fstab so it stays off
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
#7 Flush iptables and set the FORWARD policy to ACCEPT
iptables -F && iptables -X && iptables \
   -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
#8 Set kernel parameters
# ====================================================================================
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
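After running the steps above it is worth confirming they actually took effect. A small verification sketch (`check_sysctl` is our own helper, and the bridge keys only appear once the br_netfilter module is loaded):

```shell
# Read a key under /proc/sys and compare it with the expected value.
# check_sysctl is an illustrative helper, not a standard tool.
check_sysctl() {
  # $1 key path relative to /proc/sys, $2 expected value
  if [ -r "/proc/sys/$1" ]; then
    v=$(cat "/proc/sys/$1")
    if [ "$v" = "$2" ]; then echo "$1 = $v (ok)"; else echo "$1 = $v (want $2)"; fi
  else
    echo "$1 not present (is the br_netfilter module loaded?)"
  fi
}
check_sysctl net/bridge/bridge-nf-call-iptables 1
check_sysctl net/bridge/bridge-nf-call-ip6tables 1
# After swapoff -a this should list no swap devices:
swapon --summary 2>/dev/null || true
```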

 

 

V. Install kubeadm, kubelet, and kubectl

1. Ubuntu (all nodes)

#1 Add the Kubernetes apt source (Aliyun mirror)
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

#2 Add the repository GPG key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

#3 Update the package index
apt-get update

#4 Remove any previously installed versions
apt-get remove -y kubelet kubeadm kubectl

#5 Install the pinned versions
apt-get install -y kubernetes-cni=0.7.5-00 kubeadm=1.14.0-00 kubelet=1.14.0-00 kubectl=1.14.0-00 --allow-downgrades

#6 Enable kubelet at boot
systemctl enable kubelet
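Since this guide depends on these exact versions, it may be worth holding the packages so a routine `apt-get upgrade` cannot move them. A guarded sketch (`hold_k8s_pkgs` is our own wrapper; run it as root on each Ubuntu node):

```shell
# Pin the k8s packages at their installed versions on Debian/Ubuntu.
hold_k8s_pkgs() {
  if command -v apt-mark >/dev/null 2>&1; then
    apt-mark hold kubelet kubeadm kubectl kubernetes-cni
  else
    echo "apt-mark not found (not a Debian/Ubuntu host?)"
  fi
}
# On each node, run: hold_k8s_pkgs
```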

2. CentOS (all nodes)

#1 Add the Kubernetes yum repo (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
      http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#2 Install kubeadm, kubelet, and kubectl
yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0

#3 Make Docker and kubelet use the same cgroup driver
# docker
vi /etc/docker/daemon.json  # create the file if it doesn't exist; if it does, add this key (mind the JSON commas)
# ====================================================================================
{
"exec-opts": ["native.cgroupdriver=systemd"]
}

#4 Restart Docker -- this step is mandatory
systemctl restart docker

#5 Point kubelet at the same (systemd) cgroup driver; this is a no-op if the flag is absent, as in the stock 1.14 unit file
sed -i "s/cgroup-driver=cgroupfs/cgroup-driver=systemd/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

#6 Enable and start kubelet -- this step is mandatory
systemctl enable kubelet && systemctl start kubelet

 

VI. Initialize the Master with kubeadm init (master node, any OS)

#1 Run on the master node
kubeadm reset  # wipe any previous cluster state
kubeadm init --kubernetes-version=1.14.0 \
   --apiserver-advertise-address=10.13.11.21 \
   --pod-network-cidr=10.244.0.0/16  # initialize the master
# Note: --apiserver-advertise-address must be this master's own IP; adjust it
# (and the join command below) to your environment.
   
## Note: save the kubeadm join line printed at the end.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
 ## remember to run these three commands
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# Save this command somewhere safe; worker nodes need it to join the cluster
kubeadm join 10.13.11.21:6443 --token fag134.3wot9edrvs82vh6d \
   --discovery-token-ca-cert-hash sha256:1df02a06552c02ba0e28e00c80a50e9ff40da81a4cdd53c136a16d3c0233f450
   
 
#2 Run the three commands from the output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
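As a side note, if you administer the cluster as root, kubeadm's documentation also offers a one-line alternative to the three copy commands above:

```shell
# Root-only alternative: point kubectl straight at the admin kubeconfig.
export KUBECONFIG=/etc/kubernetes/admin.conf
```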

#3 View the pods
# After a short wait you will see etcd, controller-manager, scheduler, etc. running as pods
# Note: coredns stays Pending until a network plugin is installed
kubectl get pods -n kube-system      # pods in the kube-system namespace
kubectl get pods --all-namespaces    # all pods

# =======================================================================================
NAME                       READY   STATUS   RESTARTS   AGE
coredns-fb8b8dccf-f7g6g     0/1     Pending   0         7m30s
coredns-fb8b8dccf-hx765     0/1     Pending   0         7m30s
etcd-m                      1/1     Running   0         6m30s
kube-apiserver-m            1/1     Running   0         6m36s
kube-controller-manager-m   1/1     Running   0         6m42s
kube-proxy-w9m72            1/1     Running   0         7m30s
kube-scheduler-m            1/1     Running   0         6m24s
# =======================================================================================
#4 Health check
curl -k https://localhost:6443/healthz

[root@master-kubeadm-k8s ~]# curl -k https://localhost:6443/healthz
ok

#5 Pull the network-plugin (Calico) images
docker pull calico/cni:v3.9.3
docker pull calico/pod2daemon-flexvol:v3.9.3
docker pull calico/node:v3.9.3
docker pull calico/kube-controllers:v3.9.3

#6 Download the calico.yaml manifest
wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
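One gotcha: the init command above used `--pod-network-cidr=10.244.0.0/16`, while the stock v3.9 manifest's `CALICO_IPV4POOL_CIDR` defaults to `192.168.0.0/16`. A sketch to align them before applying, assuming the default CIDR appears verbatim in the file (`set_calico_cidr` is our own helper):

```shell
# Replace the default Calico pool CIDR with the one given to kubeadm init.
set_calico_cidr() {
  # $1: manifest file, $2: desired pod CIDR
  sed -i "s#192.168.0.0/16#$2#g" "$1"
}
# After downloading the manifest:
[ -f calico.yaml ] && set_calico_cidr calico.yaml 10.244.0.0/16 || true
```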

#7 Deploy Calico
kubectl apply -f calico.yaml

#8 Watch until the Calico pods are Running
kubectl get pods --all-namespaces -w

 

VII. The JOIN Step (on each worker node, any OS)

kubeadm join 10.13.11.21:6443 --token fag134.3wot9edrvs82vh6d \
   --discovery-token-ca-cert-hash sha256:1df02a06552c02ba0e28e00c80a50e9ff40da81a4cdd53c136a16d3c0233f450
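Join tokens expire (after 24 hours by default), so if the saved command above has gone stale you can mint a fresh one on the master. A guarded sketch (`print_join` is our own wrapper around the standard kubeadm subcommand):

```shell
# Print a fresh, complete join command (run on the master).
print_join() {
  if command -v kubeadm >/dev/null 2>&1; then
    kubeadm token create --print-join-command
  else
    echo "kubeadm not installed on this machine"
  fi
}
print_join
```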

 

 

VIII. Verify the Nodes Joined

root@master-k8s:/# kubectl get nodes
# Once Calico is up, both master and node01 should report STATUS "Ready"
