KingbaseES R6 primary/standby streaming replication cluster: creating a cascading replica (case study)

Case environment:

Database:
test=# select version();
                                                       version                                                  
------------------------------------------------------------------------------------------------------------------
 KingbaseES V008R006C003B0010 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
(1 row)

Case description:

`Build cascading replication on top of an existing one-primary, one-standby architecture.`

Case architecture:

(Architecture diagram: primary node248 → standby node249 → cascaded standby node243)

Procedure:

   1) Add the new host node and configure the database user, SSH trust, resource management, etc.
   2) On the new node, create the same cluster file storage directories as on the source node (see the sketch after this list).
   3) Copy the files from the source node to the new node (everything except the data directory; see section II).
   4) Run the clone on the new node, taking care to specify the upstream node.
   5) Start the database service on the new node.
   6) Register the new node as a standby.
   7) Check the cluster node status information.
   8) Run DML on the primary to test data synchronization.
   9) Test restarting the cluster.
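
A minimal shell sketch of the host preparation in steps 1) and 2). The new node address (192.168.7.243), OS user (kingbase) and cluster path are taken from this environment; the exact database-user and resource-management configuration depends on the deployment and is not shown:

# set up SSH trust between the existing nodes and the new node (run on each node, in both directions)
ssh-keygen -t rsa                      # skip if a key pair already exists
ssh-copy-id kingbase@192.168.7.243

# create the same cluster directory layout on the new node
ssh kingbase@192.168.7.243 "mkdir -p /home/kingbase/cluster/R6HA/KHA/kingbase"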

I. Check the existing cluster and streaming replication status

1) Check the cluster node status
[kingbase@node1 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string                                                                                                                                
----+---------+---------+-----------+----------+----------+----------+----------+-----------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

2) Check the primary-standby streaming replication status

test=# select * from sys_stat_replication;
  pid  | usesysid | usename | application_name |  client_addr  | client_hostname | client_port |         backend_start         | backend_xmin |   state   |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state |          reply_time
-------+----------+---------+------------------+---------------+-----------------+-------------+-------------------------------+--------------+-----------+------------+------------+------------+------------+-----------+-----------+------------+---------------+------------+-------------------------------
 11257 |    16384 | esrep   | node249          | 192.168.7.249 |                 |       56568 | 2021-03-01 14:41:41.062467+08 |              | streaming | 2/1301BEC8 | 2/1301BEC8 | 2/1301BEC8 | 2/1301BEC8 |           |           |            |             1 | quorum     | 2021-03-01 15:02:50.835989+08
(1 row)

=== First make sure the cluster and streaming replication status of the existing nodes is normal ===

II. Configure the cluster directory structure and related configuration files on the new node

1) Cluster directory structure

[kingbase@node3 bin]$ ls -lh /home/kingbase/cluster/R6HA/KHA/kingbase/
total 24M
drwxrwxr-x.  2 kingbase kingbase  16K Jun 24  2021 archive
drwxr-xr-x.  2 kingbase kingbase 4.0K Jun 24  2021 bin
drwxrwxr-x.  2 kingbase kingbase   72 Mar  1  2021 etc
-rw-rw-r--.  1 kingbase kingbase    4 Mar  1 12:07 hamgrd.pid
-rw-rw-r--.  1 kingbase kingbase 2.0M Mar  1  2021 hamgr.log
-rw-rw-r--.  1 kingbase kingbase  18K Jun 24  2021 hamgr.log-20210301
-rw-rw-r--.  1 kingbase kingbase 2.2K Jun 24  2021 hamgr.log-20210423
-rw-rw-r--.  1 kingbase kingbase 3.7K Jun 24  2021 hamgr.log-20210425
-rw-rw-r--.  1 kingbase kingbase  20M Mar  1  2021 kbha.log
-rw-rw-r--.  1 kingbase kingbase 989K Jun 24  2021 kbha.log-20210301
-rw-rw-r--.  1 kingbase kingbase 130K Jun 24  2021 kbha.log-20210423
-rw-rw-r--.  1 kingbase kingbase 741K Jun 24  2021 kbha.log-20210425
drwxrwxr-x.  5 kingbase kingbase 8.0K Jun 24  2021 lib
-rw-------.  1 kingbase kingbase  47K Mar  1 12:02 logfile
drwxrwxr-x.  7 kingbase kingbase 4.0K Jun 24  2021 share

=== The cluster directory layout on the new node is identical to the source node; everything except the data directory is copied over from the source node (see the copy sketch below) ===
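
For the copy itself, something along the lines of the following rsync, run on a source node, brings everything except data across. The paths, target address and exclusion are assumptions based on this environment:

rsync -a --exclude='data' /home/kingbase/cluster/R6HA/KHA/kingbase/ kingbase@192.168.7.243:/home/kingbase/cluster/R6HA/KHA/kingbase/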

2) Configure the repmgr.conf file

(Screenshot: repmgr.conf configuration for the new node)
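As a reference, the new node's repmgr.conf would typically contain entries like the sketch below. node_id, node_name and conninfo match the values that appear later in this case; the remaining parameters are illustrative and should be aligned with the repmgr.conf of the existing nodes (the KingbaseES build of repmgr may ship additional cluster-specific parameters):

node_id=3
node_name='node243'
conninfo='host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3'
data_directory='/home/kingbase/cluster/R6HA/KHA/kingbase/data'
pg_bindir='/home/kingbase/cluster/R6HA/KHA/kingbase/bin'
use_replication_slots=true
log_file='/home/kingbase/cluster/R6HA/KHA/kingbase/hamgr.log'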

III. Run the clone on the new node

=== Note: run the command with --upstream-node-id pointing at the upstream node ===

[kingbase@node3 bin]$ ./repmgr standby clone -h 192.168.7.249 -U esrep -d esrep --upstream-node-id=2
NOTICE: destination directory "/home/kingbase/cluster/R6HA/KHA/kingbase/data" provided
INFO: connecting to source node
DETAIL: connection string is: host=192.168.7.249 user=esrep dbname=esrep
DETAIL: current installation size is 512 MB
NOTICE: checking for available walsenders on the source node (2 required)
NOTICE: checking replication connections can be made to the source server (2 required)
INFO: creating directory "/home/kingbase/cluster/R6HA/KHA/kingbase/data"...
NOTICE: starting backup (using sys_basebackup)...
HINT: this may take some time; consider using the -c/--fast-checkpoint option
INFO: executing:  /home/kingbase/cluster/R6HA/KHA/kingbase/bin/sys_basebackup -l "repmgr base backup"  -D /home/kingbase/cluster/R6HA/KHA/kingbase/data -h 192.168.7.249 -p 54321 -U esrep -X stream -S repmgr_slot_3
NOTICE: creating replication slot "repmgr_slot_3" on upstream node 2
NOTICE: replication slot "repmgr_slot_3" deleted on source node
NOTICE: standby clone (using sys_basebackup) complete
NOTICE: you can now start your Kingbase server
HINT: for example: sys_ctl -D /home/kingbase/cluster/R6HA/KHA/kingbase/data start
HINT: after starting the server, you need to register this standby with "repmgr standby register"

IV. Start the database service and register the standby node

1) Start the database service on the new standby

[kingbase@node3 bin]$ ./sys_ctl start -D ../data
waiting for server to start....
2021-03-01 12:37:58.541 CST [4741] LOG:  sepapower extension initialized
2021-03-01 12:37:58.585 CST [4741] LOG:  starting KingbaseES V008R006C003B0010 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit
2021-03-01 12:37:58.585 CST [4741] LOG:  listening on IPv4 address "0.0.0.0", port 54321
2021-03-01 12:37:58.585 CST [4741] LOG:  listening on IPv6 address "::", port 54321
2021-03-01 12:37:58.595 CST [4741] LOG:  listening on Unix socket "/tmp/.s.KINGBASE.54321"
2021-03-01 12:37:58.640 CST [4741] LOG:  redirecting log output to logging collector process
2021-03-01 12:37:58.640 CST [4741] HINT:  Future log output will appear in directory "sys_log".
 done
server started

2) Register the standby node
=== Note: run the command with --upstream-node-id pointing at the upstream node ===

[kingbase@node3 bin]$ ./repmgr standby register --upstream-node-id=2 --force
INFO: connecting to local node "node243" (ID: 3)
INFO: connecting to primary database
WARNING: this node does not appear to be attached to upstream node "node249" (ID: 2)
INFO: standby registration complete
NOTICE: standby node "node243" (ID: 3) successfully registered

[kingbase@node3 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

3) Check the cluster node status

[kingbase@node3 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

=== Note: the registration information for node243 is not visible on the new node itself, which is not a normal state. Because repmgr keeps its node metadata in the repmgr.nodes table on the primary and a standby only receives it through streaming replication, this stale view is consistent with the replication problem diagnosed in section V ===

4) Check the cluster node status on the upstream nodes

[kingbase@node1 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 3  | node243 | standby |   running | node249  | default  | 100      | 5        | host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

[kingbase@node2 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 3  | node243 | standby |   running | node249  | default  | 100      | 5        | host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

=== Queried from the upstream nodes, the cluster node status information is normal ===

V. Fix the abnormal cluster status of the new node (node243)

1) Check the database log on the new node

node243:
......
2021-03-01 12:41:19.789 CST [4909] FATAL:  could not start WAL streaming: ERROR:  replication slot "repmgr_slot_3" does not exist
2021-03-01 12:41:24.789 CST [4910] FATAL:  could not start WAL streaming: ERROR:  replication slot "repmgr_slot_3" does not exist
2021-03-01 12:41:29.794 CST [4911] FATAL:  could not start WAL streaming: ERROR:  replication slot "repmgr_slot_3" does not exist

=== The log shows that node243 tries to open its streaming replication connection through the replication slot "repmgr_slot_3", but that slot does not exist; note that the clone output in section III already reported 'replication slot "repmgr_slot_3" deleted on source node' ===
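
To confirm which slot the new standby expects, one option (assuming the standard primary_slot_name setting is exposed, as in PostgreSQL-derived builds) is to query it directly on node243; it should report repmgr_slot_3 here:

test=# show primary_slot_name;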

2) Check the replication slot and streaming replication status on the upstream node (node249)

node249:

1. Check the replication slot information:

test=# select * from sys_replication_slots;
 slot_name | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
-----------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
(0 rows)

2. Check the streaming replication information:

test=# select * from sys_stat_replication;
 pid | usesysid | usename | application_name | client_addr | client_hostname | client_port | backend_start | backend_xmin | state | sent_lsn | write_lsn | flush_lsn | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state | reply_time
-----+----------+---------+------------------+-------------+-----------------+-------------+---------------+--------------+-------+----------+-----------+-----------+------------+-----------+-----------+------------+---------------+------------+------------
(0 rows)

=== As shown above, when queried on the upstream node node249, both the replication slot list and the streaming replication view are empty ===

3) Create the replication slot on node249

test=# select sys_create_physical_replication_slot('repmgr_slot_3');
 sys_create_physical_replication_slot
--------------------------------------
 (repmgr_slot_3,)
(1 row)

Check the replication slot:

test=# select * from sys_replication_slots;
   slot_name   | plugin | slot_type | datoid | database | temporary | active | active_pid | xmin | catalog_xmin | restart_lsn | confirmed_flush_lsn
---------------+--------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------
 repmgr_slot_3 |        | physical  |        |          | f         | t      |      18334 |      |              | 2/1301BEC8  |
(1 row)

Check streaming replication:

test=# select * from sys_stat_replication;
  pid  | usesysid | usename | application_name |  client_addr  | client_hostname | client_port |         backend_start         | backend_xmin |   state   |  sent_lsn  | write_lsn  | flush_lsn  | replay_lsn | write_lag | flush_lag | replay_lag | sync_priority | sync_state |          reply_time
-------+----------+---------+------------------+---------------+-----------------+-------------+-------------------------------+--------------+-----------+------------+------------+------------+------------+-----------+-----------+------------+---------------+------------+-------------------------------
 18334 |    16384 | esrep   | node243          | 192.168.7.243 |                 |       48912 | 2021-03-01 15:08:13.075586+08 |              | streaming | 2/1301BEC8 | 2/1301BEC8 | 2/1301BEC8 | 2/1301BEC8 |           |           |            |             0 | async      | 2021-03-01 12:47:40.421207+08
(1 row)

=== As shown above, once the replication slot is created, the streaming replication status is normal ===

4) Check the cluster node status on node243

[kingbase@node3 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 3  | node243 | standby |   running | node249  | default  | 100      | 5        | host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

=== As shown above, the status of all cluster nodes is now normal ===
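
As an extra cross-check of the cascading topology (a hypothetical query, not part of the original output): run on the primary node248, the query below should list only node249 as a directly attached standby, while node243 shows up only in node249's sys_stat_replication, as seen in step 3) above:

test=# select application_name, client_addr, state, sync_state from sys_stat_replication;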

VI. Test data synchronization over the cascading replication

node248:
prod=# create table t1 (id int);
CREATE TABLE
prod=# insert into t1 values (generate_series(10,1000,10));
INSERT 0 100
prod=# select * from t1 limit 10;
 id
-----
  10
  20
  30
  40
  50
  60
  70
  80
  90
 100
(10 rows)

node249:
prod=# \d t1;
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |

prod=# select * from t1 limit 10;
 id
-----
  10
  20
  30
  40
  50
  60
  70
  80
  90
 100
(10 rows)

node243:
prod=# select * from t1 limit 10;
 id
-----
  10
  20
  30
  40
  50
  60
  70
  80
  90
 100
(10 rows)

VII. Test restarting the cluster

1) Check the cluster node status

[kingbase@node3 bin]$ ./ksql -U esrep -d esrep
ksql (V8.0)
Type "help" for help.

esrep=# select * from repmgr.nodes;
 node_id | upstream_node_id | active | node_name |  type   | location | priority |                                                                     conninfo                                                                      | repluser |   slot_name   |                           config_file
---------+------------------+--------+-----------+---------+----------+----------+----------------------------------------------------------------------------------------------------------------------------------------------------+----------+---------------+-----------------------------------------------------------------
       1 |                  | t      | node248   | primary | default  |      100 | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3 | esrep    | repmgr_slot_1 | /home/kingbase/cluster/R6HA/KHA/kingbase/bin/../etc/repmgr.conf
       2 |                1 | t      | node249   | standby | default  |      100 | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3 | esrep    | repmgr_slot_2 | /home/kingbase/cluster/R6HA/KHA/kingbase/bin/../etc/repmgr.conf
       3 |                2 | t      | node243   | standby | default  |      100 | host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3 | esrep    | repmgr_slot_3 | /home/kingbase/cluster/R6HA/KHA/kingbase/bin/../etc/repmgr.conf
(3 rows)

2) Restart the cluster with sys_monitor.sh

[kingbase@node1 bin]$ ./sys_monitor.sh restart
2021-03-01 15:16:53 Ready to stop all DB ...
Service process "node_export" was killed at process 12387
Service process "postgres_ex" was killed at process 12388
Service process "node_export" was killed at process 12886
Service process "postgres_ex" was killed at process 12887
There is no service "node_export" running currently.
There is no service "postgres_ex" running currently.
2021-03-01 15:17:04 begin to stop repmgrd on "[192.168.7.248]".
2021-03-01 15:17:05 repmgrd on "[192.168.7.248]" stop success.
2021-03-01 15:17:05 begin to stop repmgrd on "[192.168.7.249]".
2021-03-01 15:17:06 repmgrd on "[192.168.7.249]" stop success.
2021-03-01 15:17:06 begin to stop repmgrd on "[192.168.7.243]".
2021-03-01 15:17:07 repmgrd on "[192.168.7.243]" already stopped.
2021-03-01 15:17:07 begin to stop DB on "[192.168.7.249]".
waiting for server to shut down.... done
server stopped
2021-03-01 15:17:08 DB on "[192.168.7.249]" stop success.
2021-03-01 15:17:08 begin to stop DB on "[192.168.7.243]".
waiting for server to shut down.... done
server stopped
2021-03-01 15:17:09 DB on "[192.168.7.243]" stop success.
2021-03-01 15:17:09 begin to stop DB on "[192.168.7.248]".
waiting for server to shut down..... done
server stopped
2021-03-01 15:17:11 DB on "[192.168.7.248]" stop success.
2021-03-01 15:17:11 Done.
2021-03-01 15:17:11 Ready to start all DB ...
2021-03-01 15:17:11 begin to start DB on "[192.168.7.248]".
waiting for server to start.... done
server started
2021-03-01 15:17:13 execute to start DB on "[192.168.7.248]" success, connect to check it.
2021-03-01 15:17:14 DB on "[192.168.7.248]" start success.
2021-03-01 15:17:14 Try to ping trusted_servers on host 192.168.7.248 ...
2021-03-01 15:17:16 Try to ping trusted_servers on host 192.168.7.249 ...
2021-03-01 15:17:19 Try to ping trusted_servers on host 192.168.7.243 ...
2021-03-01 15:17:22 begin to start DB on "[192.168.7.249]".
waiting for server to start.... done
server started
2021-03-01 15:17:24 execute to start DB on "[192.168.7.249]" success, connect to check it.
2021-03-01 15:17:25 DB on "[192.168.7.249]" start success.
2021-03-01 15:17:25 begin to start DB on "[192.168.7.243]".
waiting for server to start.... done
server started
2021-03-01 15:17:27 execute to start DB on "[192.168.7.243]" success, connect to check it.
2021-03-01 15:17:28 DB on "[192.168.7.243]" start success.
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 3  | node243 | standby |   running | node249  | default  | 100      | 5        | host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
2021-03-01 15:17:28 The primary DB is started.
WARNING: There are no 2 standbys in pg_stat_replication, please check all the standby servers replica from primary
2021-03-01 15:17:52 Success to load virtual ip [192.168.7.240/24] on primary host [192.168.7.248].
2021-03-01 15:17:52 Try to ping vip on host 192.168.7.248 ...
2021-03-01 15:17:55 Try to ping vip on host 192.168.7.249 ...
2021-03-01 15:17:57 Try to ping vip on host 192.168.7.243 ...
2021-03-01 15:18:00 begin to start repmgrd on "[192.168.7.248]".
[2021-03-01 15:18:01] [NOTICE] using provided configuration file "/home/kingbase/cluster/R6HA/KHA/kingbase/bin/../etc/repmgr.conf"
[2021-03-01 15:18:01] [NOTICE] redirecting logging output to "/home/kingbase/cluster/R6HA/KHA/kingbase/hamgr.log"
2021-03-01 15:18:01 repmgrd on "[192.168.7.248]" start success.
2021-03-01 15:18:01 begin to start repmgrd on "[192.168.7.249]".
[2021-03-01 15:17:38] [NOTICE] using provided configuration file "/home/kingbase/cluster/R6HA/KHA/kingbase/bin/../etc/repmgr.conf"
[2021-03-01 15:17:38] [NOTICE] redirecting logging output to "/home/kingbase/cluster/R6HA/KHA/kingbase/hamgr.log"
2021-03-01 15:18:02 repmgrd on "[192.168.7.249]" start success.
2021-03-01 15:18:02 begin to start repmgrd on "[192.168.7.243]".
[2021-03-01 12:56:57] [NOTICE] using provided configuration file "/home/kingbase/cluster/R6HA/KHA/kingbase/bin/../etc/repmgr.conf"
[2021-03-01 12:56:57] [NOTICE] redirecting logging output to "/home/kingbase/cluster/R6HA/KHA/kingbase/hamgr.log"
2021-03-01 15:18:04 repmgrd on "[192.168.7.243]" start success.
 ID | Name    | Role    | Status    | Upstream | repmgrd | PID   | Paused? | Upstream last seen
----+---------+---------+-----------+----------+---------+-------+---------+--------------------
 1  | node248 | primary | * running |          | running | 3420  | no      | n/a
 2  | node249 | standby |   running | node248  | running | 21209 | no      | 1 second(s) ago
 3  | node243 | standby |   running | node249  | running | 7376  | no      | 0 second(s) ago
2021-03-01 15:18:18 Done.

[kingbase@node1 bin]$ ./repmgr cluster show
 ID | Name    | Role    | Status    | Upstream | Location | Priority | Timeline | Connection string
----+---------+---------+-----------+----------+----------+----------+----------+---------------------------------------------------------------------------------------------------------------------------------------------------
 1  | node248 | primary | * running |          | default  | 100      | 5        | host=192.168.7.248 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 2  | node249 | standby |   running | node248  | default  | 100      | 5        | host=192.168.7.249 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3
 3  | node243 | standby |   running | node249  | default  | 100      | 5        | host=192.168.7.243 user=esrep dbname=esrep port=54321 connect_timeout=10 keepalives=1 keepalives_idle=10 keepalives_interval=1 keepalives_count=3

=== The startup output above shows that the one-click restart via sys_monitor.sh succeeded, and the new node is also managed as part of the cluster ===
