Kubernetes Cluster Setup: CNI - Flanneld Deployment

All packages used in this series are the latest or latest-stable releases at the time of writing. For the download links, reply with 【K8s实战】 in the official WeChat account.

Flannel is an overlay network tool from CoreOS that solves cross-host communication for Docker clusters. Its main idea: reserve a large network segment up front, give each host a slice of it, and assign every container a distinct IP from that slice. All containers then behave as if they were on one flat, directly connected network, while the underlying traffic is encapsulated and forwarded over UDP, VXLAN, or another backend.

Architecture overview


Flannel's UDP backend encapsulates packets on port 8285 by default, while the VXLAN backend uses port 8472.
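If firewalld or another host firewall is running on the nodes, these UDP ports must be reachable between them before cross-node traffic will flow. A minimal sketch, assuming firewalld is in use (many lab setups simply disable the host firewall instead):

[root@master-01 ~]# firewall-cmd --permanent --add-port=8472/udp    # VXLAN backend, used in this guide
[root@master-01 ~]# firewall-cmd --permanent --add-port=8285/udp    # only needed if the udp backend is used
[root@master-01 ~]# firewall-cmd --reload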


Kubernetes has many CNI (Container Network Interface) plugins, such as Flannel and Calico. We currently use Flannel in production and its stability has been acceptable, so this post covers only Flannel; Calico may be covered in a later post.

The etcd and Docker deployments were covered in the previous two posts, so they are not repeated here.

Flanneld deployment


Because flanneld relies on etcd to keep the subnets assigned to different nodes from overlapping, the first step is to write the IP range that flannel nodes will use into etcd.

[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

Note: flanneld's default backend type is udp; it is changed to vxlan here because vxlan performs better than udp.
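To double-check what was written, the key can be read back with the same etcdctl options; the output should echo the JSON above:

[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}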

1. Unpack flanneld

[root@master-01 ~]# tar xf flannel-v0.11.0-linux-amd64.tar.gz
[root@master-01 ~]# mv flanneld mk-docker-opts.sh /usr/bin/
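As a quick sanity check you can ask the binary for its version; it should simply print the release number (v0.11.0 for this tarball):

[root@master-01 ~]# flanneld --version
v0.11.0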

2. Configure flanneld

[root@master-01 ~]# cat /etc/kubernetes/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem -etcd-prefix=/coreos.com/network"

3. Configure the flanneld systemd unit

[root@master-01 ~]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/flanneld
ExecStart=/usr/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Notes: the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; docker reads the environment variables in that file at startup to configure the docker0 bridge. flanneld talks to the other nodes over the interface of the system default route; on hosts with multiple interfaces (e.g. an internal and a public one) the interface can be pinned with the -iface option, as shown below. flanneld must run as root.
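For example, on a multi-homed host you could pin flanneld to the internal interface by appending -iface to FLANNEL_OPTIONS in /etc/kubernetes/flanneld (ens33 is the interface used on these nodes; adjust to your environment):

FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem -etcd-prefix=/coreos.com/network -iface=ens33"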

4. Configure Docker startup parameters

After the installation on each node, the last step is to change Docker's startup parameters so that it uses the subnet allocated by flannel for IP assignment and network communication.

Modify docker's systemd unit so that it picks up the parameters generated by flannel at startup, as follows:

[root@master-01 ~]# cat /usr/lib/systemd/system/docker.service | grep -v "^#"
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

5. Distribute the configuration and binaries

Now copy the configuration files and binaries to the other nodes; an equivalent loop form is sketched after the individual commands below.

# binaries and script
[root@master-01 ~]# scp /usr/bin/flanneld /usr/bin/mk-docker-opts.sh 192.168.209.131:/usr/bin/
[root@master-01 ~]# scp /usr/bin/flanneld /usr/bin/mk-docker-opts.sh 192.168.209.132:/usr/bin/
[root@master-01 ~]# scp /usr/bin/flanneld /usr/bin/mk-docker-opts.sh 192.168.209.133:/usr/bin/
# configuration file
[root@master-01 ~]# scp /etc/kubernetes/flanneld 192.168.209.131:/etc/kubernetes/
[root@master-01 ~]# scp /etc/kubernetes/flanneld 192.168.209.132:/etc/kubernetes/
[root@master-01 ~]# scp /etc/kubernetes/flanneld 192.168.209.133:/etc/kubernetes/
# systemd units
[root@master-01 ~]# scp /usr/lib/systemd/system/flanneld.service /usr/lib/systemd/system/docker.service 192.168.209.131:/usr/lib/systemd/system/
[root@master-01 ~]# scp /usr/lib/systemd/system/flanneld.service /usr/lib/systemd/system/docker.service 192.168.209.132:/usr/lib/systemd/system/
[root@master-01 ~]# scp /usr/lib/systemd/system/flanneld.service /usr/lib/systemd/system/docker.service 192.168.209.133:/usr/lib/systemd/system/
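The same distribution can be done with a small loop over the node IPs; a minimal sketch, assuming the same target paths on every node and passwordless SSH as root:

[root@master-01 ~]# for node in 192.168.209.131 192.168.209.132 192.168.209.133; do
>   scp /usr/bin/flanneld /usr/bin/mk-docker-opts.sh ${node}:/usr/bin/
>   scp /etc/kubernetes/flanneld ${node}:/etc/kubernetes/
>   scp /usr/lib/systemd/system/flanneld.service /usr/lib/systemd/system/docker.service ${node}:/usr/lib/systemd/system/
> done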

6. Start the services

Start the services on each node in turn. Note the ordering: start flanneld first, then restart docker, otherwise the docker0 bridge will not be reconfigured with the flannel subnet.

[root@master-01 ~]# systemctl daemon-reload
[root@master-01 ~]# systemctl start flanneld
[root@master-01 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@master-01 ~]# systemctl restart docker

When the flannel service starts it mainly does the following: it fetches the network configuration from etcd, carves out a subnet for the node and registers it in etcd (which guarantees that the flanneld subnets of different nodes never overlap), and records the subnet information in /run/flannel/subnet.env.
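Once flanneld is up you can inspect that file. On master-01 it should contain roughly the following (the exact variables depend on the mk-docker-opts.sh version, and the --bip value differs per node):

[root@master-01 ~]# cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.17.58.1/24 --ip-masq=false --mtu=1450"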

7. Verify the deployment

# master-01
[root@master-01 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:72:13:72 brd ff:ff:ff:ff:ff:ff
    inet 192.168.209.130/24 brd 192.168.209.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5bba:68b5:e255:a636/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::23a0:4782:75a7:43b/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::46a2:a117:34d9:db1a/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:3e:8f:c7:39 brd ff:ff:ff:ff:ff:ff
    inet 172.17.58.1/24 brd 172.17.58.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 7e:2b:b9:3c:ce:36 brd ff:ff:ff:ff:ff:ff
    inet 172.17.58.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::7c2b:b9ff:fe3c:ce36/64 scope link
       valid_lft forever preferred_lft forever

# master-02
[root@master-02 ~]# ip addr
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ac:58:9d brd ff:ff:ff:ff:ff:ff
    inet 192.168.209.131/24 brd 192.168.209.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::5bba:68b5:e255:a636/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::23a0:4782:75a7:43b/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::46a2:a117:34d9:db1a/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:32:c7:25:ca brd ff:ff:ff:ff:ff:ff
    inet 172.17.44.1/24 brd 172.17.44.255 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 62:c3:35:ec:17:54 brd ff:ff:ff:ff:ff:ff
    inet 172.17.44.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::60c3:35ff:feec:1754/64 scope link
       valid_lft forever preferred_lft forever

Make sure that docker0 and flannel.1 are in the same subnet on each node.

Test cross-node connectivity by pinging the docker0 IPs of the other nodes from master-01:

[root@master-01 ~]# ping 172.17.44.1
PING 172.17.44.1 (172.17.44.1) 56(84) bytes of data.
64 bytes from 172.17.44.1: icmp_seq=1 ttl=64 time=2.03 ms
^C
--- 172.17.44.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.441/1.236/2.032/0.796 ms
[root@master-01 ~]# ping 172.17.79.1
PING 172.17.79.1 (172.17.79.1) 56(84) bytes of data.
64 bytes from 172.17.79.1: icmp_seq=1 ttl=64 time=0.919 ms
^C
--- 172.17.79.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.919/0.919/0.919/0.000 ms
[root@master-01 ~]# ping 172.17.47.1
PING 172.17.47.1 (172.17.47.1) 56(84) bytes of data.
64 bytes from 172.17.47.1: icmp_seq=1 ttl=64 time=0.369 ms
^C
--- 172.17.47.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms

If the pings succeed, Flannel is deployed successfully. If not, check the logs with journalctl -u flanneld or tailf /var/log/messages.
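Another useful check when connectivity fails is whether flanneld has installed the routes and VXLAN forwarding entries for the other nodes. On master-01 the routing table should show one entry per remote subnet via flannel.1, roughly like this (the subnets match the etcd listing below):

[root@master-01 ~]# ip route | grep flannel.1
172.17.44.0/24 via 172.17.44.0 dev flannel.1 onlink
172.17.47.0/24 via 172.17.47.0 dev flannel.1 onlink
172.17.79.0/24 via 172.17.79.0 dev flannel.1 onlink
[root@master-01 ~]# bridge fdb show dev flannel.1

The fdb output should list each remote node's VtepMAC with that node's host IP as the dst; both values can be compared against what flanneld registered in etcd.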

Finally, let's look at the subnet information stored in etcd:

[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.47.0-24
/coreos.com/network/subnets/172.17.58.0-24
/coreos.com/network/subnets/172.17.44.0-24
/coreos.com/network/subnets/172.17.79.0-24
[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" get /coreos.com/network/subnets/172.17.47.0-24
{"PublicIP":"192.168.209.133","BackendType":"vxlan","BackendData":{"VtepMAC":"8a:b1:a7:26:fd:33"}}
[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" get /coreos.com/network/subnets/172.17.58.0-24
{"PublicIP":"192.168.209.130","BackendType":"vxlan","BackendData":{"VtepMAC":"7e:2b:b9:3c:ce:36"}}
[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" get /coreos.com/network/subnets/172.17.44.0-24
{"PublicIP":"192.168.209.131","BackendType":"vxlan","BackendData":{"VtepMAC":"62:c3:35:ec:17:54"}}
[root@master-01 ~]# etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/server.pem --key-file=/etc/etcd/ssl/server-key.pem --endpoints="https://192.168.209.130:2379,https://192.168.209.131:2379,https://192.168.209.132:2379" get /coreos.com/network/subnets/172.17.79.0-24
{"PublicIP":"192.168.209.132","BackendType":"vxlan","BackendData":{"VtepMAC":"02:a0:14:9f:ff:fe"}}

That completes the flanneld deployment. The next post covers the deployment of the master components. Stay tuned, and thanks for reading!

END

If you found this article useful, please share it. Your support is my biggest encouragement.

