Kubernetes from 0 to 1 -- Part 1: Deploying a k8s Cluster with kubeadm

Preface

I have been operating Kubernetes for quite a while now and have long wanted to write a complete walkthrough of deploying a k8s cluster, both to document my own learning and to mark my progress. Laziness kept getting in the way, but I am finally getting started, and I hope this article proves useful and gives you some ideas.

Architecture

First, a look at the basic architecture:
(architecture diagram)

Environment

Name      IP               Spec
k8s-m     192.168.238.146  2 CPU, 4 GB RAM
k8s-n01   192.168.238.147  2 CPU, 2 GB RAM
k8s-n02   192.168.238.148  2 CPU, 2 GB RAM

Deployment

Preparation

Run the following on all hosts.

  1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
  2. Disable SELinux
setenforce 0                                                           # takes effect immediately
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # permanent, after reboot
  3. Disable swap
swapoff -a                              # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab     # permanent
  4. Set the hostname (run the matching command on the corresponding host)
hostnamectl set-hostname k8s-m
hostnamectl set-hostname k8s-n01
hostnamectl set-hostname k8s-n02
  5. Add hostname-to-IP mappings on every host (make sure they match the hostnames set above)
cat >> /etc/hosts << EOF
192.168.238.146  k8s-m
192.168.238.147  k8s-n01
192.168.238.148  k8s-n02
EOF
  6. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
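
On a freshly installed host the br_netfilter kernel module may not be loaded yet, in which case the two bridge sysctls above do not exist. A minimal sketch of loading it, assuming a CentOS 7 host (not part of the original steps):

modprobe br_netfilter                                           # load the bridge netfilter module now
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf     # load it again after reboots
lsmod | grep br_netfilter                                       # verify before re-running sysctl --system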

Install Docker/kubeadm/kubelet

Run the following on all nodes.

  1. Install Docker (see the optional cgroup driver note at the end of this section)
yum install -y wget && wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum -y install docker-ce-19.03.9
systemctl enable docker && systemctl start docker
  2. Install kubeadm, kubelet, and kubectl

Add the yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the packages

yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0

Enable kubelet to start on boot

systemctl enable kubelet
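
One optional extra step, not part of the original walkthrough: the kubeadm documentation recommends running Docker with the systemd cgroup driver so that kubelet and Docker manage cgroups through the same system. A sketch, done before kubeadm init so kubeadm can pick the driver up:

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup      # should now report: Cgroup Driver: systemd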

Deploy Kubernetes

Run the following on the master.

kubeadm init \
--apiserver-advertise-address=192.168.238.146 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

When kubeadm prints "Your Kubernetes control-plane has initialized successfully!", the control plane is up. Note that --pod-network-cidr=10.244.0.0/16 matches the default network in the flannel manifest used later. Continue with the follow-up steps:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
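
If you run everything as root, an equivalent shortcut widely used with kubeadm clusters is to point kubectl at the admin kubeconfig directly:

export KUBECONFIG=/etc/kubernetes/admin.conf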

Add the worker nodes by running the join command printed by kubeadm init on each node:

kubeadm join 192.168.238.146:6443 --token niulk2.21qj3qsxm0xkdjxy \
    --discovery-token-ca-cert-hash sha256:8aeb688e1572d72c76c57faba593d7e8bb3f30e0e0ada900ae9f0e872d0bf038
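
The token and CA hash above are specific to this cluster, and bootstrap tokens expire after 24 hours by default. If needed, a fresh join command can be printed on the master with:

kubeadm token create --print-join-command    # generates a new token and prints the full join command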

Check the nodes

kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
k8s-m     NotReady   master   3m42s   v1.19.0
k8s-n01   NotReady   <none>   69s     v1.19.0
k8s-n02   NotReady   <none>   39s     v1.19.0

Install the flannel network add-on

The nodes will stay NotReady until a pod network add-on is installed. Manifest: https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Downloading it may require a proxy to reach raw.githubusercontent.com; use whatever method works for you.
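
A sketch of downloading the manifest and saving it under the filename used in the next step (assuming the host can reach GitHub directly or through a proxy):

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yaml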

Install flannel

kubectl apply -f kube-flannel.yaml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Check the flannel pod status

kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
...
kube-flannel-ds-5dn86           1/1     Running   0          57s
kube-flannel-ds-ddwjx           1/1     Running   0          57s
kube-flannel-ds-zjv2n           1/1     Running   0          57s
...

Check the node status again; the nodes are now Ready:

kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
k8s-m     Ready    master   12m     v1.19.0
k8s-n01   Ready    <none>   9m52s   v1.19.0
k8s-n02   Ready    <none>   9m22s   v1.19.0

Check the component status (cs); controller-manager and scheduler report Unhealthy:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

Fix this by editing the corresponding static pod manifests:

cd /etc/kubernetes/manifests/

vi kube-controller-manager.yaml
    # - --port=0   <- comment this line out

vi kube-scheduler.yaml
    # - --port=0   <- comment this line out
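
If you prefer a non-interactive edit, roughly the same change can be made with sed (a sketch; check both manifests afterwards):

sed -i 's/- --port=0/# - --port=0/' kube-controller-manager.yaml kube-scheduler.yaml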

After restarting kubelet, check again:

systemctl restart kubelet

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Deploy the Dashboard

Manifest: https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

GitHub: https://github.com/kubernetes/dashboard
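
A sketch of fetching the manifest under the local filename used below (k8s-dashboard.yaml is simply the name this walkthrough saves it as):

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml -O k8s-dashboard.yaml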

Edit the downloaded yaml. Because of current environment constraints, the Dashboard service is exposed via NodePort for now:

vi k8s-dashboard.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32080
  selector:
    k8s-app: kubernetes-dashboard
...

Deploy

kubectl apply -f k8s-dashboard.yaml

kubectl get pods --all-namespaces
...
kubernetes-dashboard   dashboard-metrics-scraper-85cff7954d-xqff6   1/1     Running   0          3m11s
kubernetes-dashboard   kubernetes-dashboard-658485d5c7-kjxjt        1/1     Running   0          3m11s

kubectl get svc --all-namespaces
...
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.1.136.196   <none>        8000/TCP                 3m54s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.1.92.105    <none>        443:32080/TCP            3m54s

Open https://192.168.238.146:32080/#/login in a browser.
(screenshot: Dashboard login page)

Configure logging in with a kubeconfig

# Create a dashboard admin service account
kubectl create serviceaccount dashboard-admin -n kube-system

# Bind it to the cluster-admin role (a more restricted role could be used instead)
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

# Get the token
kubectl get secret -n kube-system | grep dashboard-admin
kubectl describe secret dashboard-admin-token-hbsbk -n kube-system
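
The secret name (dashboard-admin-token-hbsbk here) is generated randomly. A sketch of printing the token without looking the name up by hand, assuming the service account's first secret is its token secret (which holds on 1.19):

kubectl -n kube-system get secret $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d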

At this point you can log in with the token, but I want to log in with a kubeconfig instead. Here is how to generate one:

DASH_TOKEN=$(kubectl get secret dashboard-admin-token-hbsbk -n kube-system -o jsonpath={.data.token} | base64 -d)
kubectl config set-cluster kubernetes --server=https://192.168.238.146:6443 --kubeconfig=/root/dashboard-admin.conf
kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/root/dashboard-admin.conf
kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
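
As a quick sanity check of the generated file (hypothetical usage; --insecure-skip-tls-verify is needed because the kubeconfig above does not embed the cluster CA):

kubectl --kubeconfig=/root/dashboard-admin.conf --insecure-skip-tls-verify get nodes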

Download the dashboard-admin.conf file just generated to your local machine, choose it on the login page, and click Sign in.
(screenshot: kubeconfig file login)

After a successful login:
(screenshot: Dashboard after login)

Completing the installation above is only the first step of a long march; stay tuned for the chapters that follow.
