9.1 Kubernetes Cluster Deployment


Installation

Step 1: Make sure the lab environment is clean
Remove the old Swarm cluster so it does not interfere with Kubernetes

[root@server62 ~]# docker stack rm myservice 
Removing service myservice_mysvc
Removing service myservice_visualizer
Removing network myservice_default
[root@server62 ~]# docker stack rm portainer 
Removing service portainer_agent
Removing service portainer_portainer
Removing network portainer_agent_network
[root@server62 ~]# docker stack ls
NAME                SERVICES            ORCHESTRATOR

All three nodes leave the Swarm cluster

[root@server62 ~]# docker swarm leave --force 
Node left the swarm.
[root@server63 ~]# docker swarm leave
Node left the swarm.
[root@server64 ~]# docker swarm leave 
Node left the swarm.

Clean up unneeded resources on each of the three nodes

[root@server62 ~]# docker container prune 
[root@server62 ~]# docker network prune
[root@server62 ~]# docker volume prune 
WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
portainer_portainer_data

Total reclaimed space: 148.3kB 
[root@server63 ~]# docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
[root@server63 ~]# docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
docker_gwbridge
[root@server64 ~]# docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
[root@server64 ~]# docker network prune
WARNING! This will remove all custom networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
docker_gwbridge
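
The three prune commands can also be combined into one step. A sketch using docker system prune (note that, unlike the separate commands above, this additionally removes dangling images and the build cache, so only use it if that is acceptable):

[root@server63 ~]# docker system prune --volumes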

Remove Docker Machine's control over the nodes

[root@server61 ~]# docker-machine rm server62
About to remove server62
WARNING: This action will delete both local reference and remote instance.
Are you sure? (y/n): y
Successfully removed server62
[root@server61 ~]# docker-machine rm server63
About to remove server63
WARNING: This action will delete both local reference and remote instance.
Are you sure? (y/n): y 
Successfully removed server63
[root@server61 ~]# docker-machine rm server64
About to remove server64
WARNING: This action will delete both local reference and remote instance.
Are you sure? (y/n): y
Successfully removed server64
[root@server61 ~]# docker-machine ls
NAME   ACTIVE   DRIVER   STATE   URL   SWARM   DOCKER   ERRORS

Delete the 10-machine.conf file left over from the earlier Docker Machine deployment of Docker

[root@server63 ~]# rm -f /etc/systemd/system/docker.service.d/10-machine.conf 
[root@server63 ~]# systemctl daemon-reload
[root@server63 ~]# systemctl restart docker.service 

Step 2: Configure the Docker installation to use systemd as the cgroup driver (the driver kubelet expects)

[root@server62 ~]# cd /etc/docker/
[root@server62 docker]# ls
ca.pem  certs.d  daemon.json  key.json  server-key.pem  server.pem
[root@server62 docker]# vim daemon.json 
{
        "registry-mirrors": ["https://reg.westos.org"]
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "100m"
        },
        "storage-driver": "overlay2"
}
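
A syntax error in daemon.json (for example a missing comma between keys) will prevent the Docker daemon from starting, so it is worth validating the file before restarting. A quick sketch, assuming the Python interpreter shipped with RHEL 7 is available:

[root@server62 docker]# python -m json.tool /etc/docker/daemon.json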

Remember to delete the 10-machine.conf file left over from the earlier Docker Machine deployment

[root@server62 docker]# mv /etc/systemd/system/docker.service.d/10-machine.conf /mnt
[root@server62 docker]# systemctl daemon-reload
[root@server62 docker]# systemctl restart docker.service 

Verify that the new cgroup driver has taken effect

[root@server62 docker]# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 9
 Server Version: 19.03.15
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd			// the cgroup driver is now systemd
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 3.10.0-957.el7.x86_64
 Operating System: Red Hat Enterprise Linux Server 7.6 (Maipo)
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 991MiB
 Name: server62
 ID: VTF6:GFRR:JEPV:RSDH:EZ3U:JTH2:KIKW:YZ4M:ITF4:QJWZ:I7BR:3BTX
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://reg.westos.org/
 Live Restore Enabled: false

server63 and server64 will be the worker nodes.
The steps are the same as on server62.

[root@server62 docker]# pwd
/etc/docker
[root@server62 docker]# scp daemon.json server63:/etc/docker  
[root@server62 docker]# scp daemon.json server64:/etc/docker

server63

[root@server63 ~]# rm -f /etc/systemd/system/docker.service.d/10-machine.conf 
[root@server63 ~]# cd /etc/docker/
[root@server63 docker]# cat daemon.json 
{
	"registry-mirrors": ["https://reg.westos.org"],
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
 	"log-opts": {
		"max-size": "100m"
	},
	"storage-driver": "overlay2"
}
[root@server63 docker]# systemctl daemon-reload 
[root@server63 docker]# systemctl restart docker.service 

server64

[root@server64 ~]# rm -f /etc/systemd/system/docker.service.d/10-machine.conf
[root@server64 ~]# cat /etc/docker/daemon.json 
{
	"registry-mirrors": ["https://reg.westos.org"],
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
 	"log-opts": {
		"max-size": "100m"
	},
	"storage-driver": "overlay2"
}
[root@server64 ~]# systemctl daemon-reload
[root@server64 ~]# systemctl restart docker.service 

Step 3: Disable swap on all three nodes

[root@server62 docker]# swapon -s
Filename				Type		Size	Used	Priority
/dev/dm-1                              	partition	2097148	3080	-2
[root@server62 docker]# swapoff -a
[root@server62 docker]# vim /etc/fstab 
##### Comment out the swap partition
#
# /etc/fstab
# Created by anaconda on Tue Apr 20 20:15:58 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=b5aa29c5-8423-493f-a101-f7e006e143b6 /boot                   xfs     defaults        0 0
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server63 docker]# swapoff -a
[root@server63 docker]# vim /etc/fstab 
[root@server64 ~]# swapoff -a
[root@server64 ~]# vim /etc/fstab 
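
Commenting out the swap line in /etc/fstab can also be scripted rather than edited with vim. A sketch, assuming the swap entry is the only line in the file containing the word swap:

[root@server63 ~]# sed -i '/swap/s/^/#/' /etc/fstab
[root@server64 ~]# sed -i '/swap/s/^/#/' /etc/fstab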

Step 4: Point the package repository at the Aliyun mirror on all three nodes and install the Kubernetes packages

[root@server62 docker]# cd /etc/yum.repos.d/
[root@server62 yum.repos.d]# vim k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
[root@server62 yum.repos.d]# yum list kubeadm
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
docker                                                                                    | 3.0 kB  00:00:00     
kubernetes                                                                                | 1.4 kB  00:00:00     
rhel7.6                                                                                   | 4.3 kB  00:00:00     
kubernetes/primary                                                                        |  90 kB  00:00:00     
kubernetes                                                                                               666/666
Available Packages
kubeadm.x86_64                                        1.21.1-0                                         kubernetes

Install the packages

[root@server62 yum.repos.d]# yum install -y kubeadm kubelet kubectl
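
The repo above has gpgcheck disabled and always tracks the newest release, so yum installs whatever kubeadm version is current (1.21.1-0 at the time of writing). If a specific version is required, the packages can be pinned instead; a sketch using the same 1.21.1-0 build:

[root@server62 yum.repos.d]# yum install -y kubeadm-1.21.1-0 kubelet-1.21.1-0 kubectl-1.21.1-0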

The steps on server63 and server64 are the same as on server62

[root@server62 yum.repos.d]# pwd
/etc/yum.repos.d
[root@server62 yum.repos.d]# scp k8s.repo server63:/etc/yum.repos.d  
[root@server62 yum.repos.d]# scp k8s.repo server64:/etc/yum.repos.d

server63

[root@server63 docker]# yum install -y kubeadm kubelet kubectl

server64

[root@server64 docker]# yum install -y kubeadm kubelet kubectl

Step 5: Enable docker and kubelet to start on boot on all three nodes
server62

[root@server62 yum.repos.d]# systemctl enable --now kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@server62 yum.repos.d]# systemctl enable docker

server63

[root@server63 docker]# systemctl enable --now kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@server63 yum.repos.d]# systemctl enable docker

server64

[root@server64 ~]# systemctl enable --now kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@server64 yum.repos.d]# systemctl enable docker

Inspect kubeadm's default init configuration

[root@server62 yum.repos.d]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
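
Instead of passing flags to kubeadm init (as done below), the same settings can be written into a config file derived from these defaults. A minimal sketch matching the values used in this deployment (the file name kubeadm-init.yaml is just an example):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
imageRepository: reg.westos.org/k8s
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12

It would then be applied with kubeadm init --config kubeadm-init.yaml.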

List the required images

[root@server62 yum.repos.d]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.1
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.1
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.1
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.1
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@server62 yum.repos.d]# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.0
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0

Pull the images

Step 1: Push the images to the private registry first

Upload the required images to the Harbor registry

[root@server61 ~]# ll k8s-v1.21.tar 
-rw-r--r-- 1 root root 712433664 Jun  6 13:53 k8s-v1.21.tar
[root@server61 ~]# docker load -i k8s-v1.21.tar 

Create a public project named k8s in the Harbor registry

[root@server61 ~]# docker images | grep k8s
reg.westos.org/k8s/kube-apiserver            v1.21.0                          4d217480042e        8 weeks ago         126MB
reg.westos.org/k8s/kube-proxy                v1.21.0                          38ddd85fe90e        8 weeks ago         122MB
reg.westos.org/k8s/kube-controller-manager   v1.21.0                          09708983cc37        8 weeks ago         120MB
reg.westos.org/k8s/kube-scheduler            v1.21.0                          62ad3129eca8        8 weeks ago         50.6MB
reg.westos.org/k8s/pause                     3.4.1                            0f8457a4c2ec        4 months ago        683kB
reg.westos.org/k8s/coredns/coredns           v1.8.0                           296a6d5035e2        7 months ago        42.5MB
reg.westos.org/k8s/etcd                      3.4.13-0                         0369cf4303ff        9 months ago        253MB

Push the images

[root@server61 ~]# docker images | grep k8s | awk '{system("docker push "$1":"$2"")}'
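
The awk/system one-liner above simply pushes every local image whose repository name contains k8s. An equivalent, more explicit sketch with a plain shell loop:

[root@server61 ~]# for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^reg.westos.org/k8s/'); do docker push $img; done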

Verify that kubeadm will pull its images from the private registry
List the images as resolved against the private registry

[root@server62 ~]# kubeadm config images list --image-repository reg.westos.org/k8s --kubernetes-version v1.21.0
reg.westos.org/k8s/kube-apiserver:v1.21.0
reg.westos.org/k8s/kube-controller-manager:v1.21.0
reg.westos.org/k8s/kube-scheduler:v1.21.0
reg.westos.org/k8s/kube-proxy:v1.21.0
reg.westos.org/k8s/pause:3.4.1
reg.westos.org/k8s/etcd:3.4.13-0
reg.westos.org/k8s/coredns/coredns:v1.8.0

Pulling the images shows that they all come from the private registry.
The configuration works.

[root@server62 ~]# kubeadm config images pull --image-repository reg.westos.org/k8s --kubernetes-version v1.21.0
[config/images] Pulled reg.westos.org/k8s/kube-apiserver:v1.21.0
[config/images] Pulled reg.westos.org/k8s/kube-controller-manager:v1.21.0
[config/images] Pulled reg.westos.org/k8s/kube-scheduler:v1.21.0
[config/images] Pulled reg.westos.org/k8s/kube-proxy:v1.21.0
[config/images] Pulled reg.westos.org/k8s/pause:3.4.1
[config/images] Pulled reg.westos.org/k8s/etcd:3.4.13-0
[config/images] Pulled reg.westos.org/k8s/coredns/coredns:v1.8.0

Initialize the control-plane node

Step 1: server62 will be the control-plane node

[root@server62 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1998         141        1531          16         326        1688
Swap:             0           0           0
[root@server62 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository reg.westos.org/k8s --kubernetes-version v1.21.0
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local server62] and IPs [10.96.0.1 172.25.21.62]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost server62] and IPs [172.25.21.62 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost server62] and IPs [172.25.21.62 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 49.502270 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node server62 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node server62 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ctqx60.1073pf5extimi7vh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.25.21.62:6443 --token ctqx60.1073pf5extimi7vh \
	--discovery-token-ca-cert-hash sha256:e99d31c7c1d875a539c3107302eb035dd17bc099dcb5c59fbd5be0f514626ba9 

At this point the cluster contains only server62

[root@server62 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@server62 ~]# kubectl get node
NAME       STATUS     ROLES                  AGE     VERSION
server62   NotReady   control-plane,master   2m38s   v1.21.1
[root@server62 ~]# kubectl -n kube-system get pod
NAME                               READY   STATUS    RESTARTS   AGE
coredns-85ffb569d4-22fct           0/1     Pending   0          2m47s
coredns-85ffb569d4-rkdjg           0/1     Pending   0          2m47s
etcd-server62                      1/1     Running   0          2m56s
kube-apiserver-server62            1/1     Running   0          2m56s
kube-controller-manager-server62   1/1     Running   0          2m56s
kube-proxy-vhv99                   1/1     Running   0          2m48s
kube-scheduler-server62            1/1     Running   0          2m56s
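
Note that the export KUBECONFIG above only lasts for the current shell. To keep using kubectl as root after logging out, one option (an alternative to the ~/.kube/config setup suggested by the init output) is to persist the variable:

[root@server62 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile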

Step 2: Download and edit the flannel manifest
Change the image pull location (in two places)

[root@server62 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@server62 ~]# vim kube-flannel.yml 

      containers:
      - name: kube-flannel
        image: flannel:v0.14.0
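
Besides the image location, it is worth confirming that the Pod network in the manifest's net-conf.json matches the --pod-network-cidr passed to kubeadm init (10.244.0.0/16, which is also flannel's default). The relevant part of kube-flannel.yml looks roughly like this:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }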

Step 3: Pull the flannel image and push it to the private registry

[root@server61 ~]# docker pull quay.io/coreos/flannel:v0.14.0
[root@server61 ~]# docker tag quay.io/coreos/flannel:v0.14.0 reg.westos.org/library/flannel:v0.14.0
[root@server61 ~]# docker push reg.westos.org/library/flannel:v0.14.0 
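
Because the manifest now references the bare image name flannel:v0.14.0, the nodes resolve it as docker.io/library/flannel:v0.14.0 and fetch it through the registry mirror configured in daemon.json, which is why the image is pushed into the library project here. A quick way to check the resolution from any node (assuming the mirror is reachable):

[root@server63 ~]# docker pull flannel:v0.14.0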

Enable kubectl tab completion

[root@server62 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

Apply the flannel manifest

[root@server62 ~]# kubectl apply -f kube-flannel.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

All control-plane components are now deployed

[root@server62 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-85ffb569d4-22fct           1/1     Running   0          21m
coredns-85ffb569d4-rkdjg           1/1     Running   0          21m
etcd-server62                      1/1     Running   0          21m
kube-apiserver-server62            1/1     Running   0          21m
kube-controller-manager-server62   1/1     Running   0          21m
kube-flannel-ds-7p7nn              1/1     Running   0          86s
kube-proxy-vhv99                   1/1     Running   0          21m
kube-scheduler-server62            1/1     Running   0          21m

Add the worker nodes

Step 1: Run the join command on each worker
server63

[root@server63 docker]# kubeadm join 172.25.21.62:6443 --token ctqx60.1073pf5extimi7vh \
> --discovery-token-ca-cert-hash sha256:e99d31c7c1d875a539c3107302eb035dd17bc099dcb5c59fbd5be0f514626ba9
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

server64

[root@server64 ~]# kubeadm join 172.25.21.62:6443 --token ctqx60.1073pf5extimi7vh \
> --discovery-token-ca-cert-hash sha256:e99d31c7c1d875a539c3107302eb035dd17bc099dcb5c59fbd5be0f514626ba9
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Step 2: Check the node status of the cluster

[root@server62 ~]# kubectl get node
NAME       STATUS     ROLES                  AGE   VERSION
server62   Ready      control-plane,master   24m   v1.21.1
server63   NotReady   <none>                 60s   v1.21.1
server64   NotReady   <none>                 36s   v1.21.1
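
The two new nodes show NotReady until the flannel and kube-proxy pods have started on them; they switch to Ready shortly afterwards. If the bootstrap token has expired by the time a node joins (its TTL is 24h per the defaults printed earlier), a fresh join command can be generated on the control plane:

[root@server62 ~]# kubeadm token create --print-join-command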