Kuma learning, part 2: installing on CentOS

In the previous post we ran Kuma with minikube; the following walks through installing and running it directly on CentOS.

Environment preparation

Download the package

wget https://kong.bintray.com/kuma/kuma-0.1.1-centos.tar.gz

Unpack and configure environment variables

tar xvzf kuma-0.1.1-centos.tar.gz
export PATH=$PATH:$PWD/bin
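
The export above only lasts for the current shell session. A small optional sketch, assuming a bash login shell and that the archive was unpacked in the current directory, to persist the change and confirm the binaries are reachable:

# Persist the PATH change for future sessions (adjust the path to where you unpacked the archive)
echo "export PATH=\$PATH:$PWD/bin" >> ~/.bashrc

# Confirm the Kuma binaries are now on the PATH
which kuma-cp kuma-dp kumactl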

Run

Start the control plane

kuma-cp run

Output

2019-09-11T13:21:20.138+0800 INFO Skipping reading config from file
2019-09-11T13:21:20.232+0800 INFO bootstrap.auto-configure auto-generated TLS certificate for SDS server {"crtFile": "/tmp/782349258.crt", "keyFile": "/tmp/072995489.key"}
2019-09-11T13:21:20.234+0800 INFO kuma-cp.run starting Control Plane
2019-09-11T13:21:20.235+0800 INFO api-server starting {"port": ":5681"}
2019-09-11T13:21:20.235+0800 INFO Creating default mesh from the settings{"mesh": {"mtls":{"ca":{"Type":{"Builtin":{}}}},"tracing":{"Type":null},"logging":{"accessLogs":{}}}}
2019-09-11T13:21:20.236+0800 INFO sds-server.grpc starting {"port": 5677, "tls": true}
2019-09-11T13:21:20.236+0800 INFO xds-server.grpc starting {"port": 5678}
2019-09-11T13:21:20.236+0800 INFO xds-server.diagnostics starting {"port": 5680}
2019-09-11T13:21:20.236+0800 INFO bootstrap-server starting {"port": 5682}
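
From the startup log above, the control plane listens on several ports: the API server on 5681, the SDS server on 5677, the xDS server on 5678, diagnostics on 5680, and the bootstrap server on 5682. A quick sanity check from another terminal (the exact response body depends on the Kuma version, so treat this only as a reachability test):

# The API server port (5681) comes from the api-server line in the log above
curl http://localhost:5681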
Start the data plane
  • Expose a test service
 kuma-tcp-echo -port 9000

Output

kuma-tcp-echo -port 9000
2019/09/11 13:23:55 Kuma TCP Echo - Listening to connections on port 9000

Access it:

curl http://localhost:9000
  • Apply the Dataplane networking configuration (the interface format is broken down after the snippet)
echo "type: Dataplane
mesh: default
name: dp-echo-1
networking:
  inbound:
  - interface: 127.0.0.1:10000:9000
    tags:
      service: echo" | kumactl apply -f -
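
The inbound interface string packs three values into one field. The annotated sketch below reflects my reading of the IP:DATAPLANE_PORT:SERVICE_PORT layout, which matches the 'inbound:127.0.0.1:10000' listener that shows up in the data plane logs further down:

# interface: 127.0.0.1:10000:9000
#   127.0.0.1 - address the Envoy sidecar binds for inbound traffic
#   10000     - port the sidecar listens on
#   9000      - local service port the traffic is forwarded to (the kuma-tcp-echo started above)
# Verify the resource was stored by the control plane:
kumactl get dataplanes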
  • Start the data plane proxy
KUMA_CONTROL_PLANE_BOOTSTRAP_SERVER_URL=http://127.0.0.1:5682 \
KUMA_DATAPLANE_MESH=default \
KUMA_DATAPLANE_NAME=dp-echo-1 \
kuma-dp run
 

Data plane logs:

[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:242] initializing epoch 0 (hot restart version=11.104)
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:244] statically linked extensions:
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:246] access_loggers: envoy.file_access_log,envoy.http_grpc_access_log,envoy.tcp_grpc_access_log
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:249] filters.http: envoy.buffer,envoy.cors,envoy.csrf,envoy.ext_authz,envoy.fault,envoy.filters.http.dynamic_forward_proxy,envoy.filters.http.grpc_http1_reverse_bridge,envoy.filters.http.header_to_metadata,envoy.filters.http.jwt_authn,envoy.filters.http.original_src,envoy.filters.http.rbac,envoy.filters.http.tap,envoy.grpc_http1_bridge,envoy.grpc_json_transcoder,envoy.grpc_web,envoy.gzip,envoy.health_check,envoy.http_dynamo_filter,envoy.ip_tagging,envoy.lua,envoy.rate_limit,envoy.router,envoy.squash
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:252] filters.listener: envoy.listener.http_inspector,envoy.listener.original_dst,envoy.listener.original_src,envoy.listener.proxy_protocol,envoy.listener.tls_inspector
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:255] filters.network: envoy.client_ssl_auth,envoy.echo,envoy.ext_authz,envoy.filters.network.dubbo_proxy,envoy.filters.network.mysql_proxy,envoy.filters.network.rbac,envoy.filters.network.sni_cluster,envoy.filters.network.thrift_proxy,envoy.filters.network.zookeeper_proxy,envoy.http_connection_manager,envoy.mongo_proxy,envoy.ratelimit,envoy.redis_proxy,envoy.tcp_proxy
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:257] stat_sinks: envoy.dog_statsd,envoy.metrics_service,envoy.stat_sinks.hystrix,envoy.statsd
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:259] tracers: envoy.dynamic.ot,envoy.lightstep,envoy.tracers.datadog,envoy.tracers.opencensus,envoy.zipkin
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:262] transport_sockets.downstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:265] transport_sockets.upstream: envoy.transport_sockets.alts,envoy.transport_sockets.tap,raw_buffer,tls
[2019-09-11 13:28:36.652][14579][info][main] [external/envoy/source/server/server.cc:271] buffer implementation: new
[2019-09-11 13:28:36.656][14579][warning][main] [external/envoy/source/server/server.cc:337] No admin address given, so no admin HTTP server started.
[2019-09-11 13:28:36.656][14579][info][main] [external/envoy/source/server/server.cc:445] runtime: layers:
  - name: base
    static_layer:
      {}
  - name: admin
    admin_layer:
      {}
[2019-09-11 13:28:36.656][14579][info][config] [external/envoy/source/server/configuration_impl.cc:62] loading 0 static secret(s)
[2019-09-11 13:28:36.656][14579][info][config] [external/envoy/source/server/configuration_impl.cc:68] loading 1 cluster(s)
[2019-09-11 13:28:36.657][14579][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:157] cm init: initializing cds
[2019-09-11 13:28:36.657][14579][info][config] [external/envoy/source/server/configuration_impl.cc:72] loading 0 listener(s)
[2019-09-11 13:28:36.657][14579][info][config] [external/envoy/source/server/configuration_impl.cc:97] loading tracing configuration
[2019-09-11 13:28:36.657][14579][info][config] [external/envoy/source/server/configuration_impl.cc:117] loading stats sink configuration
[2019-09-11 13:28:36.657][14579][info][main] [external/envoy/source/server/server.cc:530] starting main dispatch loop
[2019-09-11 13:28:37.661][14579][info][upstream] [external/envoy/source/common/upstream/cds_api_impl.cc:63] cds: add 1 cluster(s), remove 1 cluster(s)
[2019-09-11 13:28:37.662][14579][info][upstream] [external/envoy/source/common/upstream/cluster_manager_impl.cc:161] cm init: all clusters initialized
[2019-09-11 13:28:37.662][14579][info][main] [external/envoy/source/server/server.cc:513] all clusters initialized. initializing init manager
[2019-09-11 13:28:37.663][14579][info][upstream] [external/envoy/source/server/lds_api.cc:59] lds: add/update listener 'inbound:127.0.0.1:10000'
[2019-09-11 13:28:37.663][14579][info][config] [external/envoy/source/server/listener_manager_impl.cc:789] all dependencies initialized. starting workers
   

Control plane logs:

2019-09-11T13:28:36.630+0800 INFO bootstrap-server Generating bootstrap config {"params": {"Id":"default.dp-echo-1.default","Service":"echo","AdminPort":0,"XdsHost":"127.0.0.1","XdsPort":5678}}
  • Access the service through the sidecar
curl http://127.0.0.1:10000
 

Output

GET / HTTP/1.1
User-Agent: curl/7.29.0
Host: 127.0.0.1:10000
Accept: */*
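
kuma-tcp-echo simply echoes back the request it receives, so the output above shows that a request sent to the sidecar port 10000 was forwarded to the service on port 9000. The two access paths side by side:

# Direct access to the test service, bypassing the sidecar
curl http://localhost:9000
# Access through the Envoy sidecar inbound listener defined in the Dataplane resource
curl http://127.0.0.1:10000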
  • Apply a policy
    The following enables mTLS for the default mesh (a verification sketch follows the command)
 
echo "type: Mesh
name: default
mtls:
  enabled: true 
  ca:
    builtin: {}" | kumactl apply -f -
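
After applying the policy, the change can be confirmed against the control plane. The kumactl command below is the same one used in the next section; the API path in the second command is an assumption based on the resource type, not something taken from the 0.1.1 docs:

# mTLS should now show as "on" for the default mesh
kumactl get meshes
# Optional: query the control plane API directly (the /meshes/default path is an assumption)
curl http://localhost:5681/meshes/default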

Configure kumactl control plane management and view service information

  • Add the control plane address
kumactl config control-planes add --name=dalong --address=http://127.0.0.1:5681

Output

kumactl config control-planes list
ACTIVE   NAME     ADDRESS
         local    http://localhost:5681
*        dalong   http://127.0.0.1:5681
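
kumactl keeps this configuration in a local file, so the added control plane survives new shell sessions. A quick way to inspect what was written; the default ~/.kumactl/config location is an assumption and may differ on your installation:

# Inspect the kumactl configuration file (default path is an assumption)
cat ~/.kumactl/config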
  • View meshes
kumactl get meshes
NAME      mTLS   DP ACCESS LOGS
default   on     off
  • View dataplanes
kumactl get dataplanes
MESH      NAME        TAGS
default   dp-echo-1   service=echo

References

https://kuma.io/docs/0.1.1/installation/centos/
