High Availability for MySQL with MMM


1. Introduction to MMM
MMM (Multi-Master Replication Manager for MySQL) is a scalable suite of Perl scripts for monitoring, failover and management of MySQL master-master replication setups (only one node is writable at any given time). MMM can also load-balance reads across the slaves, so it can be used to manage the virtual IPs of a group of replicating servers, and it ships with scripts for data backup and for resynchronizing nodes. MySQL itself provides no replication failover solution; MMM adds automatic failover and thereby high availability. Besides floating IPs, when the current master goes down MMM automatically repoints the backend slaves to replicate from the new master, so the replication configuration does not have to be changed by hand. It is one of the more mature solutions available.

Advantages: high availability, good scalability, and automatic failover. For master-master replication, only one master accepts writes at any moment, which keeps the data consistent. When the active master goes down, the other master takes over immediately and the slaves are switched automatically, with no manual intervention.
Disadvantages: the monitor node is a single point of failure (although it can be made highly available with keepalived or heartbeat); at least three nodes are required; read/write splitting is needed, which means the application in front has to implement read/write splitting logic. Under very read/write-intensive workloads MMM is not particularly stable and may suffer replication lag, lost updates during switchover, and similar problems. It is therefore not well suited to environments that demand very high data safety combined with heavy read and write traffic.
Use cases:
MMM suits deployments with heavy database traffic where read/write splitting is possible.
MMM's main functionality is provided by three scripts:
mmm_mond: the monitoring daemon that does all of the monitoring work and decides on removing nodes and moving roles (mmm_mond runs periodic health checks and, on failure, floats the write IP to the other master).
mmm_agentd: the agent daemon that runs on each MySQL server and offers a simple remote service interface to the monitor node.
mmm_control: a command-line tool for managing the mmm_mond process.
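Once the suite is running, day-to-day operation happens almost entirely through mmm_control; the subcommands used later in this article (plus move_role, which is also part of the tool) look like this:
mmm_control show                         # display the current cluster status
mmm_control checks all                   # run every health check on every host
mmm_control set_online master1           # bring a host that is AWAITING_RECOVERY back online
mmm_control move_role writer master2     # manually move the writer role (and its VIP) to another master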
For monitoring to work, the corresponding accounts must be created in MySQL: an mmm_monitor user and an mmm_agent user, plus an mmm_tools user if you want to use MMM's backup tools.
2. Deployment
(1) Environment
Operating system: CentOS 7.2 (64-bit)
Database: MySQL 5.7.13
Role              IP               Hostname  Server-id  Write VIP     Read VIP
monitor           192.168.10.100   monitor1  -          -             -
Master1           192.168.10.101   master1   1          192.168.10.2  -
Master2 (backup)  192.168.10.102   master2   2          -             192.168.10.3
Slave1            192.168.10.103   slave1    3          -             192.168.10.4
Slave2            192.168.10.104   slave2    4          -             192.168.10.5
(2) Preparation
1. Disable SELinux on all hosts.
2. Configure NTP on all hosts so their clocks stay synchronized.
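A minimal sketch for items 1 and 2, to be run on every host (the NTP server name below is just an example; use whatever time source fits your environment):
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # keep SELinux disabled after reboots
yum -y install ntp
ntpdate cn.pool.ntp.org                                          # one-off clock sync
systemctl enable ntpd && systemctl start ntpd                    # keep the clocks in sync afterwards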
3. On all hosts, add the following entries to /etc/hosts:
vim /etc/hosts
192.168.10.100 monitor1
192.168.10.101 master1
192.168.10.102 master2
192.168.10.103 slave1
192.168.10.104 slave2
4. On all hosts, install the perl, perl-devel, perl-CPAN, libart_lgpl.x86_64, rrdtool.x86_64 and rrdtool-perl.x86_64 packages:
yum -y install perl-* libart_lgpl.x86_64 rrdtool.x86_64 rrdtool-perl.x86_64
The packages are installed from the standard CentOS 7 online yum repositories.
5. On all hosts, install the required Perl modules:
cpan -i Algorithm::Diff Class::Singleton DBI DBD::mysql Log::Dispatch Log::Log4perl Mail::Send Net::Ping Proc::Daemon Time::HiRes Params::Validate Net::ARP
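A quick way to confirm the modules actually installed (it only prints the final message if every module loads):
perl -e 'use Algorithm::Diff; use DBI; use DBD::mysql; use Log::Dispatch; use Log::Log4perl; use Net::ARP; use Proc::Daemon; print "all modules OK\n";'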
(3) Basic MySQL configuration
1. Edit the configuration file /etc/my.cnf on every MySQL host and add the settings below; server-id must be different on every host. On the two masters, auto-increment-increment and auto-increment-offset are staggered so that both can generate auto-increment values without collisions.
On master1:
log-bin = mysql-bin
binlog_format = mixed
server-id = 1
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 1
On master2:
log-bin = mysql-bin
binlog_format = mixed
server-id = 2
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
log-slave-updates = 1
auto-increment-increment = 2
auto-increment-offset = 2
On slave1:
server-id = 3
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
On slave2:
server-id = 4
relay-log = relay-bin
relay-log-index = slave-relay-bin.index
read_only = 1
2. After modifying my.cnf, restart the MySQL service:
systemctl restart mysqld
The server_uuid of every MySQL host must also be unique. If the hosts were cloned from one image, change the value stored in /usr/local/mysql/data/auto.cnf.
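A simple way to do that on a cloned host: MySQL regenerates auto.cnf (and therefore server_uuid) on the next start if the file is missing, so:
systemctl stop mysqld
rm -f /usr/local/mysql/data/auto.cnf
systemctl start mysqld
mysql -uroot -p -e "select @@server_uuid;"    # verify the new value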
3. Open the MySQL port in the firewall on the four database hosts:
firewall-cmd --permanent --add-port=3306/tcp
firewall-cmd --reload
(4) Setting up replication
(master1 and master2 are configured as master-master; slave1 and slave2 are slaves of master1)
1. Create the replication account on master1 and master2:
mysql> grant replication slave on *.* to rep@'192.168.10.%' identified by '123456';
2. Make master2, slave1 and slave2 slaves of master1.
On master1, run show master status; to obtain the current binlog file name and position:
mysql> show master status;
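The output will look roughly like the following; the File and Position values (they will differ in your environment) are what goes into the CHANGE MASTER statement below:
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      452 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+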
On master2, slave1 and slave2, execute:
mysql> change master to master_host='192.168.10.101', master_port=3306, master_user='rep', master_password='123456', master_log_file='mysql-bin.000001', master_log_pos=452;
mysql> start slave;
3. Verify replication.
On master2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.10.101
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.10.101
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
On slave2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.10.101
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
4. Make master1 a slave of master2.
On master2, run show master status; to obtain the binlog file name and position:
mysql> show master status;
On master1, execute:
mysql> change master to master_host='192.168.10.102', master_port=3306, master_user='rep', master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=452;
mysql> start slave;
Verify replication.
On master1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.10.102
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 452
Relay_Log_File: relay-bin.000002
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
If Slave_IO_Running and Slave_SQL_Running are both Yes, replication is configured correctly.
(5) Creating the MMM accounts
1. Create the required accounts on master1.
Because replication is already running, the accounts only need to be created on master1; the other three servers receive them through replication.
Create the agent account:
mysql> grant super, replication client, process on *.* to 'mmm_agent'@'192.168.10.%' identified by '123456';
Create the monitor account:
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.10.%' identified by '123456';
2. To be sure, check on master2, slave1 and slave2 that both the monitor and the agent account exist:
mysql> select user,host from mysql.user where user in ('mmm_monitor','mmm_agent');
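Given the grants above, each of the three hosts should show both accounts:
+-------------+--------------+
| user        | host         |
+-------------+--------------+
| mmm_agent   | 192.168.10.% |
| mmm_monitor | 192.168.10.% |
+-------------+--------------+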
What the accounts are used for:
mmm_monitor: used by the monitor for health checks against the MySQL servers.
mmm_agent: used by the agent to toggle read-only mode, change the replication master, and so on.
(6) Installing mysql-mmm
1. Install the monitor on the monitor host (192.168.10.100):
wget /repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
chmod +x mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
2. Install the agent on the database servers (master1, master2, slave1, slave2):
wget /repo/pkgs/mysql-mmm/mysql-mmm-2.2.1.tar.gz/f5f8b48bdf89251d3183328f0249461e/mysql-mmm-2.2.1.tar.gz
chmod +x mysql-mmm-2.2.1.tar.gz
tar -zxf mysql-mmm-2.2.1.tar.gz
cd mysql-mmm-2.2.1
make install
(7) Configuring MMM
1. Edit the common configuration file; it must be identical on all five hosts.
vim /etc/mysql-mmm/mmm_common.conf
active_master_role writer
# The active master role. All DB hosts should have read_only enabled; the monitoring agent automatically turns read_only off on whichever host holds the writer role.

<host default>
    cluster_interface       eno16777736
    # network interface used by the cluster
    pid_path                /var/run/mmm_agentd.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        rep
    replication_password    123456
    agent_user              mmm_agent
    agent_password          123456
</host>

<host master1>
    # host section for master1
    ip      192.168.10.101
    mode    master
    # master means this host can take the writer role
    peer    master2
    # its master-master peer, i.e. the host name of master2
</host>

<host master2>
    # same idea as master1
    ip      192.168.10.102
    mode    master
    peer    master1
</host>

<host slave1>
    # host section for a slave; add one such section per slave
    ip      192.168.10.103
    mode    slave
    # slave means this host only serves reads
</host>

<host slave2>
    # same idea as slave1
    ip      192.168.10.104
    mode    slave
</host>

<role writer>
    # writer role
    hosts   master1, master2
    # hosts allowed to take the writer role; listing only one master avoids writer switches caused by network hiccups, but then no writer is available at all if that master fails and only reads can be served
    ips     192.168.10.2
    # the write VIP exposed to clients
    mode    exclusive
    # exclusive: only one host holds the role, i.e. only one write IP is ever active
</role>

<role reader>
    # reader role
    hosts   master2, slave1, slave2
    # hosts that serve reads (the active master could be listed here as well)
    ips     192.168.10.3, 192.168.10.4, 192.168.10.5
    # read VIPs; they are not mapped one-to-one to hosts, and the number of IPs need not equal the number of hosts -- with more IPs than hosts, one host simply gets two of them
    mode    balanced
    # balanced: the read VIPs are load-balanced across the hosts
</role>

Then copy this file, unchanged, to the other servers:
for host in master1 master2 slave1 slave2 ; do scp /etc/mysql-mmm/mmm_common.conf $host:/etc/mysql-mmm/ ; done
2. Agent configuration
Edit /etc/mysql-mmm/mmm_agent.conf on each of the four MySQL nodes:
vim /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this master1
Set the value of "this" to the local host name (master1 here; master2, slave1 or slave2 on the other nodes). Only the DB servers need this file; the monitor host does not.
3. Start the agent daemon (required on all four MySQL hosts).
Edit the mysql-mmm-agent init script and add the following line directly below #!/bin/sh (it is added so that the mysql-mmm-agent service can start correctly at boot):
vim /etc/init.d/mysql-mmm-agent
source /root/.bash_profile
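After the edit, the top of the init script looks like this (presumably root's profile is sourced so that the environment for the cpan-installed Perl modules is available when the service starts at boot):
#!/bin/sh
source /root/.bash_profile
# ... the rest of the original init script is unchanged ...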
Register it as a system service and enable it at boot:
chkconfig --add mysql-mmm-agent
chkconfig mysql-mmm-agent on
/etc/init.d/mysql-mmm-agent start
Daemon bin: '/usr/sbin/mmm_agentd'
Daemon pid: '/var/run/mmm_agentd.pid'
Starting MMM Agent daemon Ok
netstat -antp | grep mmm_agentd
tcp 0 0 192.168.10.101:9989 0.0.0.0:* LISTEN 9693/mmm_agentd
4. Open the agent port in the firewall on the four MySQL hosts:
firewall-cmd --permanent --add-port=9989/tcp
firewall-cmd --reload
5. Edit mmm_mon.conf on the monitor host:
vim /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf

<monitor>
    ip              127.0.0.1
    # For safety, listen on the local host only; mmm_mond listens on port 9988 by default.
    pid_path        /var/run/mmm_mond.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmm_mond.status
    ping_ips        192.168.10.101, 192.168.10.102, 192.168.10.103, 192.168.10.104
    # Addresses used to test network reachability; as long as one of them answers ping, the network is considered healthy. Do not list the monitor's own address here.
    auto_set_online 0
    # How long (seconds) before a recovered host is set online automatically; the default is 60, and 0 means bring it online immediately.
</monitor>

<check default>
    check_period    5
    # check interval, default 5s
    trap_period     10
    # a check has to keep failing for this many seconds before the node is considered failed, default 10s
    timeout         2
    # check timeout, default 2s
    restart_after   10000
    # restart the checker process after this many checks, default 10000
    max_backlog     86400
    # limit used by the rep_backlog check, default 60
</check>

<host default>
    monitor_user        mmm_monitor
    # account used to monitor the DB servers
    monitor_password    123456
    # its password
</host>

debug 0
# 0 = normal mode, 1 = debug mode
6. Start the monitor daemon.
Edit the mysql-mmm-monitor init script and, as with the agent, add the following line directly below #!/bin/sh:
vim /etc/init.d/mysql-mmm-monitor
source /root/.bash_profile
7. Register it as a system service and enable it at boot:
chkconfig --add mysql-mmm-monitor
chkconfig mysql-mmm-monitor on
/etc/init.d/mysql-mmm-monitor start
Daemon bin: '/usr/sbin/mmm_mond'
Daemon pid: '/var/run/mmm_mond.pid'
Starting MMM Monitor daemon: Ok
[root@monitor1 ~] netstat -anpt | grep 9988
tcp 0 0 127.0.0.1:9988 0.0.0.0:* LISTEN 8546/mmm_mond
Whenever a configuration file is modified, whether on a DB node or on the monitor, the agent and monitor daemons must be restarted. The MMM start order is: start the monitor first, then the agents.
Check the cluster status:
[root@monitor1 ~] mmm_control show
master1(192.168.10.101) master/ONLINE. Roles: writer(192.168.10.2)
master2(192.168.10.102) master/ONLINE. Roles: reader(192.168.10.4)
slave1(192.168.10.103) slave/ONLINE. Roles: reader(192.168.10.3)
slave2(192.168.10.104) slave/ONLINE. Roles: reader(192.168.10.5)
If a server's state is not ONLINE, it can be brought online with the following commands, for example:
[root@monitor1 ~] mmm_control set_online master1
[root@monitor1 ~] mmm_control set_online master2
[root@monitor1 ~] mmm_control set_online slave1
[root@monitor1 ~] mmm_control set_online slave2
As the output shows, the write VIP is on master1, and all slave nodes treat master1 as their master.
8. Check whether the VIPs are up.
On master1:
[root@master1 ~] ip addr show dev eno16777736
3: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:82:86:c8 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.101/24 brd 192.168.10.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.2/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe82:86c8/64 scope link
valid_lft forever preferred_lft forever
On master2:
[root@master2 ~] ip addr show dev eno16777736
3: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:6e:b1:23 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.102/24 brd 192.168.10.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.4/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe6e:b123/64 scope link
valid_lft forever preferred_lft forever
On slave1:
[root@slave1 ~] ip addr show dev eno16777736
3: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:d4:84:54 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.103/24 brd 192.168.10.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.3/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fed4:8454/64 scope link
valid_lft forever preferred_lft forever
On slave2:
[root@slave2 ~] ip addr show dev eno16777736
3: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:80:59 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.104/24 brd 192.168.10.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.5/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe29:8059/64 scope link
valid_lft forever preferred_lft forever
On master2, slave1 and slave2, check which master they point to:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.10.101
Master_User: rep
Master_Port: 3306
Connect_Retry: 60
(8) Testing MMM failover
Clients read and write through the VIPs; when a node fails, its VIP floats to another node, which then serves the traffic.
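A minimal way to exercise the VIPs, assuming a throw-away application account and test schema are created for the purpose (neither exists in the original setup; any SQL client would do). On master1, where the grant and schema replicate to every node:
mysql> create database if not exists test;
mysql> grant all on test.* to 'app'@'192.168.10.%' identified by '123456';
From any client machine, writes go through the writer VIP and reads through any reader VIP:
mysql -uapp -p123456 -h192.168.10.2 -e "create table if not exists test.t1 (id int primary key auto_increment, v varchar(20)); insert into test.t1 (v) values ('hello');"
mysql -uapp -p123456 -h192.168.10.3 -e "select * from test.t1;"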
First check the cluster status; everything is healthy:
[root@monitor1 ~] mmm_control show
master1(192.168.10.101) master/ONLINE. Roles: writer(192.168.10.2)
master2(192.168.10.102) master/ONLINE. Roles: reader(192.168.10.4)
slave1(192.168.10.103) slave/ONLINE. Roles: reader(192.168.10.3)
slave2(192.168.10.104) slave/ONLINE. Roles: reader(192.168.10.5)
Simulate a master1 outage by stopping its MySQL service manually, then watch the monitor log on monitor1:
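For example, on master1:
[root@master1 ~] systemctl stop mysqld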
[root@monitor1 ~] tail -f /var/log/mysql-mmm/mmm_mond.log
2017/03/31 15:43:00 FATAL State of host 'master1' changed from ONLINE to HARD_OFFLINE (ping: OK, mysql: not OK)
2017/03/31 15:43:00 INFO Removing all roles from host 'master1':
2017/03/31 15:43:00 INFO Removed role 'writer(192.168.10.2)' from host 'master1'
2017/03/31 15:43:00 INFO Orphaned role 'writer(192.168.10.2)' has been assigned to 'master2'
The log shows that after master1 went down, the writer role was automatically handed to master2.
Check the latest cluster status:
[root@monitor1 ~] mmm_control show
master1(192.168.10.101) master/HARD_OFFLINE. Roles:
master2(192.168.10.102) master/ONLINE. Roles: reader(192.168.10.4), writer(192.168.10.2)
slave1(192.168.10.103) slave/ONLINE. Roles: reader(192.168.10.3)
slave2(192.168.10.104) slave/ONLINE. Roles: reader(192.168.10.5)
The output shows that master1's state changed from ONLINE to HARD_OFFLINE and the write VIP moved to master2.
Check the health of all DB servers in the cluster:
[root@monitor1 ~] mmm_control checks all
master1 ping [last change: 2017/03/31 15:28:30] OK
master1 mysql [last change: 2017/03/31 15:43:00] ERROR: Connect error (host = 192.168.10.101:3306, user = mmm_monitor)! Can't connect to MySQL server on '192.168.10.101' (111)
master1 rep_threads [last change: 2017/03/31 15:28:30] OK
master1 rep_backlog [last change: 2017/03/31 15:28:30] OK: Backlog is null
slave1 ping [last change: 2017/03/31 15:28:30] OK
slave1 mysql [last change: 2017/03/31 15:28:30] OK
slave1 rep_threads [last change: 2017/03/31 15:28:30] OK
slave1 rep_backlog [last change: 2017/03/31 15:28:30] OK: Backlog is null
master2 ping [last change: 2017/03/31 15:28:30] OK
master2 mysql [last change: 2017/03/31 15:28:30] OK
master2 rep_threads [last change: 2017/03/31 15:28:30] OK
master2 rep_backlog [last change: 2017/03/31 15:28:30] OK: Backlog is null
slave2 ping [last change: 2017/03/31 15:28:30] OK
slave2 mysql [last change: 2017/03/31 15:28:30] OK
slave2 rep_threads [last change: 2017/03/31 15:28:30] OK
slave2 rep_backlog [last change: 2017/03/31 15:28:30] OK: Backlog is null
The checks show that master1 still answers ping; only its MySQL service is down.
Check the IP addresses on master2:
[root@master2 ~] ip addr show dev eno16777736
3: eno16777736: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:6e:b1:23 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.102/24 brd 192.168.10.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.4/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.2/32 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe6e:b123/64 scope link
valid_lft forever preferred_lft forever
On slave1:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host:192.168.10.102
Master_User: rep
Master_Port: 3306
On slave2:
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host:192.168.10.102
Master_User: rep
Master_Port: 3306
Start the MySQL service on master1 again and watch the monitor log; for master1 it shows the following:
[root@monitor1 ~] tail -f /var/log/mysql-mmm/mmm_mond.log
2017/03/31 15:54:46 INFO Check 'rep_backlog' on 'master1' is ok!
2017/03/31 15:54:46 INFO Check 'rep_threads' on 'master1' is ok!
2017/03/31 15:54:50 INFO Check 'mysql' on 'master1' is ok!
2017/03/31 15:54:53 FATAL State of host 'master1' changed from HARD_OFFLINE to AWAITING_RECOVERY
As shown above, master1's state changed from HARD_OFFLINE to AWAITING_RECOVERY.
Bring the server back online with:
[root@monitor1 ~] mmm_control set_online master1
Check the latest cluster status:
[root@monitor1 ~] mmm_control show
master1(192.168.10.101) master/ONLINE. Roles:
master2(192.168.10.102) master/ONLINE. Roles: reader(192.168.10.4), writer(192.168.10.2)
slave1(192.168.10.103) slave/ONLINE. Roles: reader(192.168.10.3)
slave2(192.168.10.104) slave/ONLINE. Roles: reader(192.168.10.5)
Note that the recovered master does not take the writer role back; it stays on standby until the currently active master fails.
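If you would rather hand the writer role back to master1 right away, mmm_control can move it manually (an optional step, not part of the original procedure):
[root@monitor1 ~] mmm_control move_role writer master1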
3. Summary
(1) If master2 (the standby master) fails, the cluster state is unaffected; its reader role is simply removed.
(2) If master1 (the active master) fails, master2 takes over the writer role, and slave1 and slave2 automatically change master to master2 and continue replicating from it.
(3) If master1 fails while master2's replication is still lagging behind it, master2 becomes writable anyway, and data consistency cannot be guaranteed at that point. Likewise, if master2, slave1 and slave2 are all lagging behind master1 when it fails, slave1 and slave2 first wait until they have caught up with the old master's data before being repointed to the new master (master2), and the consistency of the data synchronized at that moment still cannot be guaranteed.
(4) If you adopt the MMM high-availability architecture, the active and standby masters should have identical hardware; enabling semi-synchronous replication further improves safety, and MariaDB or MySQL 5.7 multi-threaded replication can be used to improve replication performance.
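A rough sketch of those two suggestions on MySQL 5.7 (run on every node, since each master is also a slave of the other; persist the same settings in my.cnf so they survive a restart, and stop the replication threads before changing the parallel-replication variables):
mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave; set global slave_parallel_type = 'LOGICAL_CLOCK'; set global slave_parallel_workers = 4; start slave;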