Enterprise Practice (19): Quickly Deploying a Single-Node K8s Cluster from Scratch with Kubeadm

(Figure: Kubernetes architecture diagram)

Basic Concepts

1. Cluster
  A collection of compute, storage, and network resources; Kubernetes uses these resources to run container-based applications.

2. Master
  The Master is the brain of the Cluster, responsible for scheduling (deciding where applications run). For high availability there are usually multiple Masters.

3. Node
  Nodes run the actual containers. A Node is managed by the Master: it monitors and reports container status, and manages container lifecycles according to the Master's instructions.

4. Pod
  The smallest unit of work in Kubernetes. Each Pod contains one or more containers, and the containers in a Pod are scheduled by the Master onto a Node as a single unit.

(1) Why introduce the Pod?

First, easier management:

 Some containers are naturally tightly coupled and need to work together. The Pod provides a higher-level abstraction than the container, wrapping them into one deployment unit. Kubernetes schedules, scales, shares resources, and manages lifecycles with the Pod as the smallest unit.

e.g. Good fit: File Puller & Web Server => need to be deployed together
 Poor fit: Tomcat & MySQL => do not need to be deployed together

Second, shared resources and communication:

 All containers in a Pod share the same network namespace, i.e. the same IP and port space, so they can talk to each other directly over localhost. They can also share storage (essentially by mounting a Volume into every container in the Pod).

(2) How are Pods used?

 Running a single container: one-container-per-Pod is the most common model in Kubernetes. Even in this case, Kubernetes manages the Pod, not the individual container.

 Running multiple containers: very tightly coupled containers are deployed into one Pod so they can share resources directly, as in the sketch below.
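 To make the multi-container case concrete, here is a minimal sketch (the pod name, images, and volume name are illustrative, not from this deployment): two containers in one Pod share an emptyDir volume, and could equally reach each other over localhost:

cat > pod-sidecar-demo.yml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-data              # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web-server               # serves the files written by the sidecar
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-puller           # sidecar that produces the content
    image: busybox
    command: ["/bin/sh", "-c", "echo hello from the sidecar > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
EOF
kubectl apply -f pod-sidecar-demo.yml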

5. Controller
  Kubernetes does not create Pods directly; it manages Pods through Controllers. To cover different business scenarios, Kubernetes provides several kinds of Controller:

(1) Deployment

 The most common Controller. A Deployment manages multiple replicas of a Pod and ensures the Pods run in the desired state (see the sketch after this list).

(2) ReplicaSet

 Implements multi-replica management of Pods. Using a Deployment automatically creates a ReplicaSet; in other words, a Deployment manages its Pod replicas through a ReplicaSet, and you normally do not use ReplicaSets directly.

(3) DaemonSet

 Used when each Node should run at most one replica of a Pod. DaemonSets are typically used to run daemons (background services).

(4) StatefulSet

 Guarantees that each Pod replica keeps the same name throughout its entire lifecycle, which the other Controllers do not provide. (Without a StatefulSet, when a Pod fails and must be deleted and restarted, its name changes.)

(5) Job

 Used for applications that run to completion and are then removed, whereas Pods under the other Controllers typically run continuously.
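 As a minimal sketch of the most common Controller (names and replica count are illustrative): this Deployment keeps three replicas of an nginx Pod running and recreates any replica that fails:

cat > nginx-deployment.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                      # desired number of Pod replicas
  selector:
    matchLabels:
      app: nginx
  template:                        # Pod template stamped out for every replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl apply -f nginx-deployment.yml
kubectl get replicaset             # shows the ReplicaSet the Deployment created automatically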

6. Service
  A Service is how Kubernetes defines the way the outside world accesses a particular Pod or group of Pods. Each Service has its own IP and port, and it load-balances traffic across the Pods behind it.

 If running Pods is the job Kubernetes hands to Controllers, then providing access to Pods is the job it hands to Services. A minimal sketch follows.
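 A minimal Service sketch (assuming Pods labeled app=nginx, as in the Deployment sketch above): the Service gives those Pods one stable virtual IP and load-balances requests across them:

cat > nginx-service.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx                     # send traffic to Pods carrying this label
  ports:
  - port: 80                       # port exposed by the Service
    targetPort: 80                 # port the containers listen on
EOF
kubectl apply -f nginx-service.yml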

7. Namespace
  A Namespace logically partitions a physical Cluster into multiple virtual Clusters; each virtual Cluster is a Namespace, and resources in different Namespaces are completely isolated from each other (see the command sketch after the list below).

Kubernetes automatically creates two Namespaces:

(1) default: resources created without specifying a Namespace go here
(2) kube-system: system resources created by Kubernetes itself go here
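A quick sketch of working with Namespaces (the dev namespace is illustrative):

kubectl get namespaces                                  # list all namespaces
kubectl create namespace dev                            # create a new namespace
kubectl create deployment nginx --image=nginx -n dev    # create a resource inside it
kubectl get pods -n kube-system                         # list pods in a specific namespace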

Three Ways to Deploy Kubernetes

(1) Minikube

 Minikube is a tool that quickly runs a single-node Kubernetes locally. It is meant for users trying out Kubernetes or doing day-to-day development, and must not be used in production.

(2) Kubeadm

 Kubeadm is a tool from the official Kubernetes community that simplifies and speeds up cluster deployment. It was designed to give new users an easy way to start trying Kubernetes.

(3) Binary packages

 Besides the two options above, you can download the official binary packages and deploy each component by hand to form a cluster. This is still the approach most widely used in enterprise production environments, but it demands more from the Kubernetes administrators.

This article uses Kubeadm to build the cluster, in preparation for deploying an ASP.NET Core application cluster in a later article.

Environment:

Master test2 192.168.2.129

Node node1 192.168.2.135

CentOS 7.5

2+ CPU cores, 2+ GB RAM

Environment preparation:

Perform all of the following steps on both the Master and the Node hosts.

1. Edit /etc/hosts to add hostname-to-IP mappings

[root@test2 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.135 node1
192.168.2.129 test2

Verify:

[root@test2 ~]# ping -c 2 node1
PING node1 (192.168.2.135) 56(84) bytes of data.
64 bytes from node1 (192.168.2.135): icmp_seq=1 ttl=64 time=0.644 ms
64 bytes from node1 (192.168.2.135): icmp_seq=2 ttl=64 time=0.370 ms
...

[root@node1 ~]# ping -c 2 test2
PING test2 (192.168.2.129) 56(84) bytes of data.
64 bytes from test2 (192.168.2.129): icmp_seq=1 ttl=64 time=0.492 ms
64 bytes from test2 (192.168.2.129): icmp_seq=2 ttl=64 time=9.45 ms
...

2. Flush iptables rules and permanently disable the firewall and SELinux

[root@test2 ~]# iptables -F

[root@test2 ~]# systemctl stop firewalld

[root@test2 ~]# systemctl disable firewalld

[root@test2 ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

[root@test2 ~]# setenforce 0

[root@test2 ~]# getenforce
Disabled

3. Synchronize the system time

If the system clocks on the hosts disagree, nodes can fail to join the cluster.

[root@test2 ~]# yum -y install ntp
Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7                                ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2

Complete!

[root@test2 ~]# ntpdate cn.pool.ntp.org
17 Jul 14:46:45 ntpdate[61183]: adjust time server 84.16.73.33 offset -0.030998 sec

4. Disable the swap partition

Edit /etc/fstab and comment out the swap line, because Kubernetes does not support running with swap enabled.

[root@test2 ~]# swapoff -a   // disable immediately

[root@test2 ~]# vim /etc/fstab    // disable permanently
...
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
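A quick way to confirm swap is really off (not part of the original transcript):

free -h    # the Swap: line should read 0B after swapoff -a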

5. Pass bridged IPv4 traffic to the iptables chains

[root@test2 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF

[root@test2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
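If the two bridge keys do not show up when sysctl --system runs, the br_netfilter kernel module is probably not loaded. This step was not needed in the transcript above, but it is a common gotcha on minimal installs:

modprobe br_netfilter          # load the module so the bridge sysctls exist
lsmod | grep br_netfilter      # confirm it is loaded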

6. Ensure every node has a unique hostname, MAC address, and product_uuid

Master host

[root@test2 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:1a:8b:61 brd ff:ff:ff:ff:ff:ff
3: ens37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:1a:8b:6b brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:49:66:18:1b brd ff:ff:ff:ff:ff:ff

[root@test2 ~]# cat /sys/class/dmi/id/product_uuid
C2344D56-4199-0E76-6398-22536B1A8B61

node1 host

[root@node1 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:48:a1:5b brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:ef:0e:58:c3 brd ff:ff:ff:ff:ff:ff

[root@node1 ~]# cat /sys/class/dmi/id/product_uuid
F4504D56-B069-0BCD-F156-DB54D848A15B

Install Docker

Perform this on both the Master and the Node.

The relationship between Docker and Kubernetes:
See: Docker installation and deployment in detail

Note:

 At the time of writing, kubeadm supports at most docker-ce 18.06, so that specific version must be installed.

[root@test2 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 

[root@test2 ~]# yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

[root@test2 ~]# yum -y install docker-ce-18.06.ce   // kubeadm currently supports at most docker-ce 18.06, so pin this version

[root@test2 ~]# systemctl enable docker && systemctl start docker

Installing and Deploying the Kubernetes Cluster with Kubeadm

 The idea behind Kubeadm is simple: containerize most of the components and run them as static Pods, which greatly simplifies cluster configuration and certificate handling, making it as easy as possible to deploy a production-ready Kubernetes cluster. A Kubeadm deployment actually installs three components, kubeadm, kubelet, and kubectl:

  • kubeadm: the command that bootstraps the cluster
  • kubelet: the agent that runs workloads on every node in the cluster
  • kubectl: the command-line management tool

Perform this on both the Master and the Node.

1. Add the Alibaba Cloud Kubernetes yum repository

[root@test2 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
> [kunbernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

2. Install the kubeadm, kubelet, and kubectl components

This deployment uses Kubernetes v1.13.3.

[root@test2 ~]# yum -y install kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3

Installation error:

 A missing kubernetes-cni dependency.

[root@test2 ~]# yum -y install kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3
...
--> Finished Dependency Resolution
Error: Package: kubelet-1.13.3-0.x86_64 (kunbernetes)
          Requires: kubernetes-cni = 0.6.0
          Available: kubernetes-cni-0.3.0.1-0.07a8a2.x86_64 (kunbernetes)
              kubernetes-cni = 0.3.0.1-0.07a8a2
          Available: kubernetes-cni-0.5.1-0.x86_64 (kunbernetes)
              kubernetes-cni = 0.5.1-0
          Available: kubernetes-cni-0.5.1-1.x86_64 (kunbernetes)
              kubernetes-cni = 0.5.1-1
          Available: kubernetes-cni-0.6.0-0.x86_64 (kunbernetes)
              kubernetes-cni = 0.6.0-0
          Available: kubernetes-cni-0.7.5-0.x86_64 (kunbernetes)
              kubernetes-cni = 0.7.5-0
          Installing: kubernetes-cni-0.8.6-0.x86_64 (kunbernetes)
              kubernetes-cni = 0.8.6-0
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

Solution:

Install the matching kubernetes-cni version explicitly.

[root@test2 ~]# yum -y install kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 kubernetes-cni-0.6.0
...
Running transaction
  Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                              1/10
  Installing : socat-1.7.3.2-2.el7.x86_64                                                                                             2/10
  Installing : cri-tools-1.13.0-0.x86_64                                                                                              3/10
  Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                4/10
  Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                              5/10
  Installing : conntrack-tools-1.4.4-7.el7.x86_64                                                                                     6/10
  Installing : kubernetes-cni-0.6.0-0.x86_64                                                                                          7/10
  Installing : kubelet-1.13.3-0.x86_64                                                                                                8/10
  Installing : kubectl-1.13.3-0.x86_64                                                                                                9/10
  Installing : kubeadm-1.13.3-0.x86_64                                                                                               10/10
  Verifying  : kubectl-1.13.3-0.x86_64                                                                                                1/10
  Verifying  : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                              2/10
  Verifying  : conntrack-tools-1.4.4-7.el7.x86_64                                                                                     3/10
  Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                4/10
  Verifying  : cri-tools-1.13.0-0.x86_64                                                                                              5/10
  Verifying  : kubelet-1.13.3-0.x86_64                                                                                                6/10
  Verifying  : kubeadm-1.13.3-0.x86_64                                                                                                7/10
  Verifying  : kubernetes-cni-0.6.0-0.x86_64                                                                                          8/10
  Verifying  : socat-1.7.3.2-2.el7.x86_64                                                                                             9/10
  Verifying  : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                             10/10

Installed:
  kubeadm.x86_64 0:1.13.3-0        kubectl.x86_64 0:1.13.3-0        kubelet.x86_64 0:1.13.3-0        kubernetes-cni.x86_64 0:0.6.0-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7           cri-tools.x86_64 0:1.13.0-0                  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7    libnetfilter_queue.x86_64 0:1.0.2-2.el7_2    socat.x86_64 0:1.7.3.2-2.el7

Complete!

[root@test2 ~]# systemctl enable kubelet && systemctl start kubelet

3. Check the kubeadm and kubelet versions

[root@test2 ~]# kubelet --version
Kubernetes v1.13.3

[root@test2 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Deploy the Kubernetes Master

The following steps are performed only on the master host.

Kubeadm tool documentation

How kubeadm init works
 kubeadm init bootstraps a Kubernetes control-plane node by executing the following steps.

 1. Runs a series of pre-flight checks to validate the system state before making changes. Some checks only trigger warnings; others are treated as errors and cause kubeadm to exit unless the problem is fixed or the user specifies --ignore-preflight-errors=.

 2. Generates a self-signed CA certificate (or uses an existing one, if provided) to establish an identity for every component in the cluster. If the user has supplied their own CA certificate and/or key in the certificate directory configured via --cert-dir (default /etc/kubernetes/pki), this step is skipped, as described in the "Using custom certificates" documentation. If --apiserver-cert-extra-sans is specified, the API server certificate gets additional SAN entries, lowercased if necessary.

 3. Writes kubeconfig files into /etc/kubernetes/ for the kubelet, controller manager, and scheduler to use when connecting to the API server, each with its own identity, plus a standalone kubeconfig named admin.conf for administrative use.

 4. Generates static Pod manifests for the API server, controller manager, and scheduler. If no external etcd service is provided, an additional static Pod manifest is generated for etcd.

1. Initialize the Kubernetes Master

Note:
 The default image registry k8s.gcr.io is unreachable from mainland China, so the command below points at the Alibaba Cloud mirror (registry.aliyuncs.com/google_containers). Officially, the server should have at least 2 CPUs and 2 GB of RAM.

--apiserver-advertise-address string

 The IP address the API server advertises it is listening on. If not set, the default network interface is used.

--image-repository string

 The container registry to pull control-plane images from; defaults to "k8s.gcr.io".

--kubernetes-version string

 A specific Kubernetes version for the control plane; defaults to "stable-1".

--service-cidr string

 An alternative IP address range for service virtual IPs; defaults to "10.96.0.0/12".

--pod-network-cidr string

 The IP address range for the pod network. If set, the control plane automatically allocates CIDRs to every node.

[root@test2 ~]# kubeadm init \
--apiserver-advertise-address=192.168.2.129 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.13.3 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [test2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.2.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [test2 localhost] and IPs [192.168.2.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [test2 localhost] and IPs [192.168.2.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 74.020718 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "test2" as an annotation
[mark-control-plane] Marking the node test2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node test2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: si974n.v8615659h9x6x4xe
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.2.129:6443 --token si974n.v8615659h9x6x4xe --discovery-token-ca-cert-hash sha256:d6d1c5d0290ee0217d14d7d6bdea23b1fc911186e0ea94847a1f52d8ed32761d

 Save the full kubeadm join command above; it is needed later when nodes join the cluster, and it contains the token.

[root@test2 ~]# mkdir -p $HOME/.kube

[root@test2 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@test2 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

[root@test2 ~]# kubectl get nodes    // check cluster node status
NAME    STATUS     ROLES    AGE     VERSION
test2   NotReady   master   2d15h   v1.13.3

2. Install a Pod network add-on (CNI)

If wget fails with "Unable to establish SSL connection", add the --no-check-certificate option.

[root@test2 ~]# mkdir /root/k8s

[root@test2 ~]# wget -P k8s/ https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

[root@test2 ~]# sed -i 's/quay.io\/coreos/registry.cn-beijing.aliyuncs.com\/imcto/g' k8s/kube-flannel.yml

[root@test2 ~]# cat k8s/kube-flannel.yml
...
      - name: install-cni
        image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
...

[root@test2 ~]# kubectl apply -f k8s/kube-flannel.yml    // install the pod network add-on
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds-amd64 configured
daemonset.apps/kube-flannel-ds-arm64 configured
daemonset.apps/kube-flannel-ds-arm configured
daemonset.apps/kube-flannel-ds-ppc64le configured
daemonset.apps/kube-flannel-ds-s390x configured

 Then verify with the command below: if every pod is Running, all is well; any pod in another state, such as Pending or ImagePullBackOff, is not ready yet.

Troubleshooting: resolving network add-on pods stuck in Pending during Kubernetes cluster deployment

[root@test2 ~]# kubectl get pods -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-78d4cf999f-ffgpb        1/1     Running   0          2d1h
coredns-78d4cf999f-z5hxw        1/1     Running   0          2d1h
etcd-test2                      1/1     Running   1          2d1h
kube-apiserver-test2            1/1     Running   1          2d1h
kube-controller-manager-test2   1/1     Running   2          2d1h
kube-flannel-ds-amd64-khq7g     1/1     Running   0          115m
kube-flannel-ds-amd64-mqfsn     1/1     Running   0          115m
kube-proxy-m4j8z                1/1     Running   0          2d
kube-proxy-w44gf                1/1     Running   1          2d1h
kube-scheduler-test2            1/1     Running   3          2d1h

 If a Pod is not Running, you can look up the specific cause with the following command; for example, to inspect the pod kube-flannel-ds-amd64-8bmbm:

kubectl describe pod kube-flannel-ds-amd64-8bmbm -n kube-system

 Now, when we look at the master node again, its status has changed from NotReady to Ready:

[root@test2 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
test2   Ready    master   2d1h   v1.13.3

 At this point, the Master node is deployed successfully.

Join the Kubernetes Node

Use the full kubeadm join command printed at the end of the Master initialization to join the node to the cluster. If more than 24 hours have passed since then, a new token must be generated.

1. Generate a new token on the Master

Tokens are valid for 24 hours by default; once expired, a token can no longer be used, and nodes joining later need a fresh one.

[root@test2 ~]# kubeadm token create
c4jjui.bpppj490ggpnmi3u

[root@test2 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
c4jjui.bpppj490ggpnmi3u   22h       2020-07-21T14:37:12+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Get the SHA-256 hash of the CA certificate:

[root@test2 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt|openssl rsa -pubin -outform der 2>/dev/null|openssl dgst -sha256 -hex|awk '{print $NF}'
c1df6d1ad77fbc0cbdf2bb3dccd5d87eac41b936a5f3fb944f2c14b79af4de55
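As a shortcut, kubeadm can print a ready-to-run join command (token plus CA-cert hash) in one step, which avoids the manual openssl pipeline above; recent kubeadm releases, including 1.13, support this flag:

kubeadm token create --print-join-command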

2. Join the node to the cluster

[root@node1 ~]# kubeadm join 192.168.2.129:6443 --token c4jjui.bpppj490ggpnmi3u --discovery-token-ca-cert-hash sha256:c1df6d1ad77fbc0cbdf2bb3dccd5d87eac41b936a5f3fb944f2c14b79af4de55
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.8. Latest validated version: 18.06
[discovery] Trying to connect to API Server "192.168.2.129:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.129:6443"
[discovery] Requesting info from "https://192.168.2.129:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.129:6443"
[discovery] Successfully established connection with API Server "192.168.2.129:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

3. Check cluster status on the Master

[root@test2 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE    VERSION
node1   Ready    <none>   2d     v1.13.3
test2   Ready    master   2d1h   v1.13.3

 Both nodes in the cluster now show Ready. If a node is not Ready, check which Pods are not running properly (kubectl get pod --all-namespaces), then use kubectl describe pod <pod-name> -n kube-system to find and fix the cause.

Test the Kubernetes Cluster

Create a Pod

[root@test2 ~]# kubectl create deployment nginx --image=nginx   // create a Deployment from the nginx image
deployment.apps/nginx created

[root@test2 ~]# kubectl expose deployment nginx --port=80 --type=NodePort     // expose the port
service/nginx exposed

[root@test2 ~]# kubectl get pod,svc    // check
NAME                       READY   STATUS    RESTARTS   AGE
pod/nginx-5c7588df-b6spv   1/1     Running   0          9m13s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        2d2h
service/nginx        NodePort    10.1.77.34   <none>        80:31018/TCP   7m22s

 For more detail, such as which Node a pod is scheduled on, use kubectl get pods,svc -o wide.

[root@test2 ~]# kubectl get pods,svc -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
pod/nginx-5c7588df-b6spv   1/1     Running   0          13m   10.244.1.2   node1   <none>           <none>
// the pod was scheduled onto node1
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        2d2h   <none>
service/nginx        NodePort    10.1.77.34   <none>        80:31018/TCP   11m    app=nginx

 Because this is a NodePort Service, the exposed port is chosen at random from the 30000-32767 range. We can access it directly from a browser using any node's IP address plus that port, for example 192.168.2.135:31018.
 If the page loads, congratulations: your single-node K8s cluster is up and running. You can also check from the command line, as shown below.
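A command-line alternative to the browser test (31018 is the NodePort assigned in this run; yours may differ):

curl -I http://192.168.2.135:31018    # expect an HTTP/1.1 200 OK response from nginx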

Check the containers running on the node

[root@node1 ~]# docker ps
CONTAINER ID        IMAGE                                               COMMAND                  CREATED             STATUS              PORTS               NAMES
2027502b5687        03ad33ab3dd7                                        "/opt/bin/flanneld -…"   50 minutes ago      Up 50 minutes                           k8s_kube-flannel_kube-flannel-ds-amd64-rbqd5_kube-system_10f8bb64-cbfa-11ea-8327-000c291a8b61_0
0d51e0ad053f        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 50 minutes ago      Up 50 minutes                           k8s_POD_kube-flannel-ds-amd64-rbqd5_kube-system_10f8bb64-cbfa-11ea-8327-000c291a8b61_0
457dd6f03fb7        98db19758ad4                                        "/usr/local/bin/kube…"   About an hour ago   Up About an hour                        k8s_kube-proxy_kube-proxy-lhb64_kube-system_ac3ef77c-cbf8-11ea-8327-000c291a8b61_0
13af5ed61085        registry.aliyuncs.com/google_containers/pause:3.1   "/pause"                 About an hour ago   Up About an hour                        k8s_POD_kube-proxy-lhb64_kube-system_ac3ef77c-cbf8-11ea-8327-000c291a8b61_0
