
How can MySQL store trillions of records?

For high-concurrency scenarios, the author has developed and open-sourced a simple, stable, and extensible delayed-message-queue framework with precise scheduled-task and delay-queue processing capabilities. In the half year since it was open-sourced, it has provided precise scheduling solutions for more than ten small and medium-sized enterprises and has withstood the test of production environments. To benefit more readers, here is the address of the open-source framework:

https://github.com/sunshinelyz/mykit-delay

PS: You are welcome to star the source code, and PRs of your brilliant code are welcome too.

Preface

The industry has some basic requirements for system high availability. Put simply, they can be summarized as follows.

  • There is no single point of failure in the system architecture.
  • Service availability is guaranteed to the greatest extent possible.

In general, the high availability of a system is measured in "nines": the percentage of total time during which the system can guarantee external service. For example, to achieve 99.99% high availability, the total downtime of the system over a whole year must not exceed about 52 minutes.
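As a quick sanity check of that 52-minute figure, here is a small shell snippet (assuming a 365-day year, i.e. 525,600 minutes):

for nines in 99.9 99.99 99.999; do
  awk -v a="$nines" 'BEGIN { printf "%s%%  ->  %.2f minutes of allowed downtime per year\n", a, 525600 * (100 - a) / 100 }'
done

For 99.99% this prints about 52.56 minutes, which matches the figure above.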

System high-availability architecture

Since we want to implement a highly available system architecture, what kind of architecture should we build? We can simplify the system architecture to the following figure.

Server planning

Because my machine resources are limited, I build the high-availability environment here on five servers; you can extend the environment to more servers by following this article, and the setup steps are the same.

Host name | IP Address      | Installed services
--------- | --------------- | -----------------------------------------------------
binghe151 | 192.168.175.151 | Mycat, Zookeeper, MySQL, HAProxy, Keepalived, Xinetd
binghe152 | 192.168.175.152 | Zookeeper, MySQL
binghe153 | 192.168.175.153 | Zookeeper, MySQL
binghe154 | 192.168.175.154 | Mycat, MySQL, HAProxy, Keepalived, Xinetd
binghe155 | 192.168.175.155 | MySQL

Note: HAProxy and Keepalived are best deployed on the same servers as Mycat.

Install MySQL

To install MySQL, you can follow the 【 Glacier Technology 】 WeChat official account and refer to the article 《MySQL — Compiling MySQL 8.x from source + upgrading gcc + upgrading cmake (fully tested)》.

Install the JDK

Because running Mycat and Zookeeper requires a JDK environment, we need to install the JDK on each server.

Here, I take the JDK installation on the binghe151 server as an example; the other servers are installed in exactly the same way. The installation steps are as follows.

(1) Download JDK 1.8 from the official JDK website. The download address for JDK 1.8 is: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.

Note: the JDK installation package version I downloaded is jdk-8u212-linux-x64.tar.gz. If the JDK version has been updated, you can download the corresponding version.

(2) Upload the downloaded jdk-8u212-linux-x64.tar.gz installation package to the /usr/local/src directory of the binghe151 server.

(3) Decompress the jdk-8u212-linux-x64.tar.gz file, as shown below.

tar -zxvf jdk-8u212-linux-x64.tar.gz

(4) Move the jdk1.8.0_212 directory to the /usr/local directory on the binghe151 server (the target must be /usr/local, since the JAVA_HOME configured below points there), as shown below.

mv jdk1.8.0_212/ /usr/local/

(5) Configure the JDK system environment variables, as shown below.

vim /etc/profile
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH

Make the system environment variables take effect, as shown below.

source /etc/profile

(6) Check the JDK version, as shown below.

[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

The output shows the correct JDK version information, which means the JDK was installed successfully.

Install Mycat

Download the Mycat 1.6.7.4 release, unzip it to the /usr/local/mycat directory on the server, and configure Mycat's system environment variables; then edit Mycat's configuration files. A sketch of the install steps is shown next, followed by the final configuration files.
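Below is a minimal sketch of those install steps, assuming the release tarball (here named Mycat-server-1.6.7.4-release-*-linux.tar.gz) has been uploaded to /usr/local/src and, as Mycat releases typically do, unpacks into a top-level mycat/ directory:

cd /usr/local/src
tar -zxvf Mycat-server-1.6.7.4-release-*-linux.tar.gz -C /usr/local/
# append Mycat's environment variables to /etc/profile
cat >> /etc/profile <<'EOF'
MYCAT_HOME=/usr/local/mycat
PATH=$MYCAT_HOME/bin:$PATH
export MYCAT_HOME PATH
EOF
source /etc/profile

With Mycat unpacked and on the PATH, the final configuration files are as follows.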

  • schema.xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">

 <schema name="shop" checkSQLschema="false" sqlMaxLimit="1000">
  <!--<table name="order_master" primaryKey="order_id" dataNode = "ordb"/>-->
  <table name="order_master" primaryKey="order_id" dataNode = "orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" autoIncrement="true">
   <childTable name="order_detail" primaryKey="order_detail_id" joinKey="order_id" parentKey="order_id" autoIncrement="true"/>
  </table>
  <table name="order_cart" primaryKey="cart_id" dataNode = "ordb"/>
  <table name="order_customer_addr" primaryKey="customer_addr_id" dataNode = "ordb"/>
  <table name="region_info" primaryKey="region_id" dataNode = "ordb,prodb,custdb" type="global"/>
  <table name="serial" primaryKey="id" dataNode = "ordb"/>
  <table name="shipping_info" primaryKey="ship_id" dataNode = "ordb"/>
  <table name="warehouse_info" primaryKey="w_id" dataNode = "ordb"/>
  <table name="warehouse_proudct" primaryKey="wp_id" dataNode = "ordb"/>
  
  <table name="product_brand_info" primaryKey="brand_id" dataNode = "prodb"/>
  <table name="product_category" primaryKey="category_id" dataNode = "prodb"/>
  <table name="product_comment" primaryKey="comment_id" dataNode = "prodb"/>
  <table name="product_info" primaryKey="product_id" dataNode = "prodb"/>
  <table name="product_pic_info" primaryKey="product_pic_id" dataNode = "prodb"/>
  <table name="product_supplier_info" primaryKey="supplier_id" dataNode = "prodb"/>
  
  <table name="customer_balance_log" primaryKey="balance_id" dataNode = "custdb"/>
  <table name="customer_inf" primaryKey="customer_inf_id" dataNode = "custdb"/>
  <table name="customer_level_inf" primaryKey="customer_level" dataNode = "custdb"/>
  <table name="customer_login" primaryKey="customer_id" dataNode = "custdb"/>
  <table name="customer_login_log" primaryKey="login_id" dataNode = "custdb"/>
  <table name="customer_point_log" primaryKey="point_id" dataNode = "custdb"/>
  
 </schema>
 
 <dataNode name="mycat" dataHost="binghe151" database="mycat" />
  
 <dataNode name="ordb" dataHost="binghe152" database="order_db" />
 <dataNode name="prodb" dataHost="binghe153" database="product_db" />
 <dataNode name="custdb" dataHost="binghe154" database="customer_db" />
 
 <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01" />
 <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02" />
 <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03" />
 <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04" />
 
 <dataHost name="binghe151" maxCon="1000" minCon="10" balance="1"
     writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
  <heartbeat>select user()</heartbeat>
  <writeHost host="binghe51" url="192.168.175.151:3306" user="mycat" password="mycat"/>
 </dataHost>
 
 <dataHost name="binghe152" maxCon="1000" minCon="10" balance="1"
     writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
  <heartbeat>select user()</heartbeat>
  <writeHost host="binghe52" url="192.168.175.152:3306" user="mycat" password="mycat"/>
 </dataHost>
 
 <dataHost name="binghe153" maxCon="1000" minCon="10" balance="1"
     writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
  <heartbeat>select user()</heartbeat>
  <writeHost host="binghe53" url="192.168.175.153:3306" user="mycat" password="mycat"/>
 </dataHost>
 
 <dataHost name="binghe154" maxCon="1000" minCon="10" balance="1"
     writeType="0" dbType="mysql" dbDriver="native" switchType="1"  slaveThreshold="100">
  <heartbeat>select user()</heartbeat>
  <writeHost host="binghe54" url="192.168.175.154:3306" user="mycat" password="mycat"/>
 </dataHost>
 
</mycat:schema>
  • server.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
 <system>
  <property name="useHandshakeV10">1</property>
        <property name="defaultSqlParser">druidparser</property>
  <property name="serverPort">3307</property>
  <property name="managerPort">3308</property>
  <property name="nonePasswordLogin">0</property>
  <property name="bindIp">0.0.0.0</property>
  <property name="charset">utf8mb4</property>
  <property name="frontWriteQueueSize">2048</property>
  <property name="txIsolation">2</property>
  <property name="processors">2</property>
  <property name="idleTimeout">1800000</property>
  <property name="sqlExecuteTimeout">300</property>
  <property name="useSqlStat">0</property>
  <property name="useGlobleTableCheck">0</property>
  <property name="sequenceHandlerType">1</property>
  <property name="defaultMaxLimit">1000</property>
  <property name="maxPacketSize">104857600</property>
  
  <property name="sqlInterceptor">
   io.mycat.server.interceptor.impl.StatisticsSqlInterceptor
  </property>
  <property name="sqlInterceptorType">
   UPDATE,DELETE,INSERT
  </property>
  <property name="sqlInterceptorFile">/tmp/sql.txt</property>
 </system>
 
 <firewall>
  <whitehost>
   <host user="mycat" host="192.168.175.151"></host>
  </whitehost>
  <blacklist check="true">
   <property name="noneBaseStatementAllow">true</property>
   <property name="deleteWhereNoneCheck">true</property>
  </blacklist>
 </firewall>
 
 <user name="mycat" defaultAccount="true">
  <property name="usingDecrypt">1</property>
  <property name="password">cTwf23RrpBCEmalp/nx0BAKenNhvNs2NSr9nYiMzHADeEDEfwVWlI6hBDccJjNBJqJxnunHFp5ae63PPnMfGYA==</property>
  <property name="schemas">shop</property>
 </user>

</mycat:server>
  • rule.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
 <tableRule name="order_master">
  <rule>
   <columns>customer_id</columns>
   <algorithm>mod-long</algorithm>
  </rule>
 </tableRule>
 
 <function name="mod-long" class="io.mycat.route.function.PartitionByMod">
  <property name="count">4</property>
 </function>
</mycat:rule>
  • sequence_db_conf.properties
#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat

The Mycat configuration above is for your reference only; you do not have to configure it exactly as I do, and you can adapt it to your own business needs. The focus of this article is building a highly available environment for Mycat.
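One easy-to-miss step: server.xml above sets sequenceHandlerType to 1 (database-based sequences), and sequence_db_conf.properties maps the sequences to the mycat dataNode (the mycat database on binghe151). The sequence table and stored functions therefore have to exist in that database; Mycat ships a dbseq.sql script in its conf directory for this purpose. A sketch, assuming the stock script is used:

# run on binghe151, against the mycat database that stores the sequences
mysql -uroot -p mycat < /usr/local/mycat/conf/dbseq.sql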

In MySQL, create the account that Mycat uses to connect to MySQL, as shown below.

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat'; 
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE  ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
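Incidentally, the <user> password in server.xml above is not plain text, because usingDecrypt is set to 1. Mycat provides a utility for producing that encrypted form; if memory serves it is invoked roughly as below, but treat the jar name, class name, and the 0:user:password argument format as assumptions to verify against your Mycat version:

# hypothetical invocation - verify the class name and argument format for your Mycat version
java -cp /usr/local/mycat/lib/Mycat-server-1.6.7.4-release.jar io.mycat.util.DecryptUtil 0:mycat:mycat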

Install the Zookeeper cluster

After installing and configuring the JDK, we need to build the Zookeeper cluster. According to the server plan, the Zookeeper cluster will be built on the three servers binghe151, binghe152, and binghe153.

1. Download Zookeeper

Download the Zookeeper installation package from the official Apache mirror. The download address is: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/. The details are shown in the following figure.

You can also execute the following command on the binghe151 server to download zookeeper-3.5.5 directly.

wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz

Executing the above command downloads the apache-zookeeper-3.5.5-bin.tar.gz installation package directly to the binghe151 server.

2. Install and configure Zookeeper

Note: steps (1), (2), and (3) are all performed on the binghe151 server.

(1) Decompress the Zookeeper installation package

Execute the following commands on the binghe151 server to unzip Zookeeper into the /usr/local/ directory and rename the Zookeeper directory to zookeeper-3.5.5.

tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz -C /usr/local/
mv /usr/local/apache-zookeeper-3.5.5-bin /usr/local/zookeeper-3.5.5

(2) Configure the Zookeeper system environment variables

Similarly, we need to configure the Zookeeper system environment variables in the /etc/profile file, as follows:

ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH

Combined with the JDK system environment variables configured earlier, the overall configuration in /etc/profile is as follows:

MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH

(3) Configure Zookeeper

First, rename the zoo_sample.cfg file under the ZOOKEEPER_HOME/conf directory (where ZOOKEEPER_HOME is the Zookeeper installation directory) to zoo.cfg. The specific commands are as follows:

cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg

Next, modify the zoo.cfg file; the modified content is as follows:

# tickTime: the basic time unit in milliseconds (heartbeat interval)
tickTime=2000
# initLimit/syncLimit: ticks allowed for followers to connect to / stay in sync with the leader
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
# clientPort: the port clients connect to
clientPort=2181
# server.<myid>=<host>:<quorum port>:<leader election port>
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888

Create the data and dataLog folders under the Zookeeper installation directory.

mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog

Switch to the newly created data directory and create the myid file, whose content is the single digit 1, as shown below:

cd /usr/local/zookeeper-3.5.5/data
echo "1" > myid

3. Copy Zookeeper and the system environment variable file to the other servers

Note: steps (1) and (2) are performed on the binghe151 server.

(1) Copy Zookeeper to the other servers

According to the server plan, we now copy Zookeeper to the binghe152 and binghe153 servers. The specific operations are as follows:

scp -r /usr/local/zookeeper-3.5.5/ binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5/ binghe153:/usr/local/

(2) Copy the system environment variable file to the other servers

According to the server plan, we now copy the system environment variable file /etc/profile to the binghe152 and binghe153 servers. The specific operations are as follows:

scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/

The above operations may prompt for a password; enter it as prompted.

4. Modify the myid file on the other servers

Change the content of the Zookeeper myid file on the binghe152 server to the digit 2, and change the content of the Zookeeper myid file on the binghe153 server to the digit 3, as follows:

Execute the following on the binghe152 server:

echo "2" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
2

Execute the following on the binghe153 server:

echo "3" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
3

5. Enable environment variables

Execute the following on binghe151, binghe152, and binghe153 respectively to make the system environment variables take effect.

source /etc/profile

6. Start the Zookeeper cluster

Execute the following on binghe151, binghe152, and binghe153 respectively to start the Zookeeper cluster.

zkServer.sh start

7. Check the startup status of the Zookeeper cluster

  • The binghe151 server
[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
  • The binghe152 server
[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
  • The binghe153 server
[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower

As you can see, the Zookeeper role on the binghe151 and binghe153 servers is follower, and the Zookeeper role on the binghe152 server is leader.

Initialize the Mycat configuration into the Zookeeper cluster

Note: the initialization of the data in Zookeeper is performed on the binghe151 server, because we have already installed Mycat on binghe151.

1. Check the initialization script

An init_zk_data.sh script file is provided in the bin directory under the Mycat installation directory, as shown below.

[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root   3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root   1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root  15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root   2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root   2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64

The init_zk_data.sh script is used to initialize Mycat's configuration in Zookeeper: it reads the configuration files under the zkconf directory inside Mycat's conf directory and initializes them into the Zookeeper cluster.

2. Copy the Mycat configuration files

First, let's look at the files in the conf directory under the Mycat installation directory, as shown below.

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

Next, copy the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files from the conf directory under the Mycat installation directory to the zkconf directory under conf, as shown below.

cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/

3. Write the Mycat configuration information to the Zookeeper cluster

Execute the init_zk_data.sh script file to initialize the configuration information into the Zookeeper cluster, as shown below.

[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh  
o2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
o2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
o2020-03-08 20:03:14 INFO Done

According to the above output, Mycat successfully wrote its initialization configuration information to Zookeeper.

4. Verify whether the Mycat configuration information was written to Zookeeper successfully

We can use Zookeeper's client command zkCli.sh to log in to Zookeeper and verify whether Mycat's configuration information was successfully written to Zookeeper.

First, log in to Zookeeper, as shown below.

[root@binghe151 ~]# zkCli.sh 
Connecting to localhost:2181
################### N lines of output omitted ######################
Welcome to ZooKeeper!

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] 

Next, view the mycat information on the Zookeeper command line, as shown below.

[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3] 

As you can see, there are six directories under /mycat/mycat-cluster-1. Next, look at the information in the schema directory, as shown below.

[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]

Next, let's look at the dataHost configuration, as shown below.

[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

The output above is rather messy, but you can see it is JSON-formatted information. We can format the output; the formatted result is as follows.

[
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe151",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe51",
                "url": "192.168.175.151:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe152",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe52",
                "url": "192.168.175.152:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe153",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe53",
                "url": "192.168.175.153:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe154",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe54",
                "url": "192.168.175.154:3306",
                "password": "root",
                "user": "root"
            }
        ]
    }
]

As you can see, the dataHost node information we configured in Mycat's schema.xml file was successfully written into Zookeeper.

To verify whether Mycat's configuration information has been synchronized to the other Zookeeper nodes, we can also log in to Zookeeper on the binghe152 and binghe153 servers and check whether the Mycat configuration information was written successfully.

  • The binghe152 server
[root@binghe152 ~]# zkCli.sh 
Connecting to localhost:2181
################# N lines of output omitted ################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

As you can see, Mycat's configuration information was successfully synchronized to Zookeeper on the binghe152 server.

  • The binghe153 server
[root@binghe153 ~]# zkCli.sh 
Connecting to localhost:2181
##################### N lines of output omitted #####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]

As you can see, Mycat's configuration information was successfully synchronized to Zookeeper on the binghe153 server.

Configure Mycat to start with Zookeeper support

1. Configure Mycat on the binghe151 server

On the binghe151 server, enter the conf directory under the Mycat installation directory and view the file information, as shown below.

[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload

As you can see, there is a myid.properties file in Mycat's conf directory. Next, edit this file with the vim editor, as shown below.

vim myid.properties 

The edited content of the myid.properties file is as follows.

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server  booster  ;   booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

Several important parameters are described below.

  • loadZk: indicates whether to load the configuration from Zookeeper. true: yes; false: no.
  • zkURL: the Zookeeper connection address; multiple Zookeeper addresses are separated by commas.
  • clusterId: the Id of the current Mycat cluster. This identifier must match the directory name under the /mycat directory in Zookeeper, as shown below.
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
  • myid: the id of the current Mycat node. Here I name it with the mycat_ prefix plus the last three digits of the IP address.
  • clusterSize: the number of Mycat nodes in the Mycat cluster. Since we deploy Mycat on the binghe151 and binghe154 nodes, the number of Mycat nodes is 2.
  • clusterNodes: all the Mycat nodes in the Mycat cluster, given as the node ids configured in myid, separated by commas. The nodes I configure here are: mycat_151,mycat_154.

2. Install a brand-new Mycat on binghe154

On binghe154, download and install the same version of Mycat as on binghe151, and unzip it to the /usr/local/mycat directory on the binghe154 server.

You can also simply execute the following command on the binghe151 server to copy Mycat's installation directory to the binghe154 server.

[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local

Note: don't forget to configure Mycat's system environment variables on the binghe154 server.

3. Modify the Mycat configuration on the binghe154 server

On the binghe154 server, modify the myid.properties file in the conf directory under the Mycat installation directory, as shown below.

vim /usr/local/mycat/conf/myid.properties

The modified content of the myid.properties file is as follows.

loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server  booster  ;   booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1

4. Restart Mycat

Restart Mycat on the binghe151 and binghe154 servers respectively, as shown below.

Note: restart Mycat on binghe151 first, then on binghe154.

  • The binghe151 server
[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
  • The binghe154 server
[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...

Check the Mycat startup logs on binghe151 and binghe154 respectively, as shown below.

STATUS | wrapper  | 2020/03/08 21:08:15 | <-- Wrapper Stopped
STATUS | wrapper  | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2020/03/08 21:08:15 | Launching a JVM...
INFO   | jvm 1    | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO   | jvm 1    | 2020/03/08 21:08:16 |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
INFO   | jvm 1    | 2020/03/08 21:08:16 | 
INFO   | jvm 1    | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log

From the log output, we can see that Mycat restarted successfully.

At this point, after restarting Mycat on the binghe151 server first and then restarting Mycat on the binghe154 server, we will find that the schema.xml, server.xml, rule.xml, and sequence_db_conf.properties files in the conf directory of Mycat on binghe154 are identical to the configuration files of Mycat on binghe151. This is the result of Mycat on binghe154 reading the configuration files from Zookeeper.

In the future, we only need to modify the relevant Mycat configuration in Zookeeper, and these configurations will be synchronized to Mycat automatically, which guarantees that the configuration of multiple Mycat nodes stays consistent.
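Based on the initialization steps above, that workflow is roughly: edit the copies under conf/zkconf and push them to Zookeeper again. A sketch (depending on the change, a Mycat restart may still be required):

cd /usr/local/mycat/conf/zkconf
vim schema.xml                          # edit the cluster-wide copy of the configuration
/usr/local/mycat/bin/init_zk_data.sh    # push the updated configuration to Zookeeper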

Configure the virtual IP

Configure the virtual IP on the binghe151 and binghe154 servers respectively, as shown below.

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

The effect after configuring the virtual IP is as follows, taking the binghe151 server as an example.

[root@binghe151 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45  
          inet addr:192.168.175.151  Bcast:192.168.175.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:25559422 (24.3 MiB)  TX bytes:55997016 (53.4 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45  
          inet addr:192.168.175.110  Bcast:192.168.175.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2934009 (2.7 MiB)  TX bytes:2934009 (2.7 MiB)

Note: a VIP added on the command line will disappear when the server restarts, so it is best to write the commands that create the VIP into a script file, for example /usr/local/script/vip.sh, as shown below.

mkdir /usr/local/script
vim /usr/local/script/vip.sh

The content of the file is as follows.

ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1

Next, add the /usr/local/script/vip.sh file to the server's startup items, as shown below.

echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
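One caveat: on systemd-based distributions such as CentOS 7, /etc/rc.d/rc.local is not executable by default, so the script would silently not run at boot. If that applies to your system, grant execute permission to both files:

chmod +x /usr/local/script/vip.sh
chmod +x /etc/rc.d/rc.local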

Configure IP forwarding

Configure the system kernel's IP forwarding capability on the binghe151 and binghe154 servers by editing the /etc/sysctl.conf file, as shown below.

vim /etc/sysctl.conf

Find the following line.

net.ipv4.ip_forward = 0

Change it to the following.

net.ipv4.ip_forward = 1

Save and exit the vim editor, and run the following command to make the configuration take effect.

sysctl -p
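To confirm the change took effect, you can read the value back:

cat /proc/sys/net/ipv4/ip_forward
# should print 1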

Install and configure the xinetd service

Before installing HAProxy, we need to install the xinetd service on the servers that will run HAProxy, namely binghe151 and binghe154, to open port 48700.

(1) Execute the following command on the server command line to install the xinetd service, as shown below.

yum install xinetd -y

(2) Edit the /etc/xinetd.conf file, as shown below.

vim /etc/xinetd.conf

Check whether the following configuration exists in the file.

includedir /etc/xinetd.d

If the above configuration does not exist in the /etc/xinetd.conf file, add it to the file; if it already exists, no modification is needed.

(3) Create the /etc/xinetd.d directory, as shown below.

mkdir /etc/xinetd.d

Note: if the /etc/xinetd.d directory already exists, creating it will report the following error.

mkdir: cannot create directory `/etc/xinetd.d': File exists

You can safely ignore this error message.

(4) Under the /etc/xinetd.d directory, add the configuration file mycat_status for the Mycat status detection service, as shown below.

touch /etc/xinetd.d/mycat_status

(5) Edit the mycat_status file, as shown below.

vim /etc/xinetd.d/mycat_status

The edited content of the mycat_status file is as follows.

service mycat_status
{
        flags           = REUSE
        socket_type     = stream
        port            = 48700
        wait            = no
        user            = root
        server          = /usr/local/bin/mycat_check.sh
        log_on_failure  += USERID
        disable         = no
}

Some of the xinetd configuration parameters are described below.

  • socket_type: how packets are handled; stream means TCP packets.
  • port: the port number the xinetd service listens on.
  • wait: no means no waiting, i.e. the service runs in a multithreaded way.
  • user: the user that runs the xinetd service.
  • server: the service script to start.
  • log_on_failure: the log content recorded on failure.
  • disable: to enable this xinetd service, this option must be set to no.

(6) Add the mycat_check.sh service script under the /usr/local/bin directory, as shown below.

touch /usr/local/bin/mycat_check.sh

(7) Edit the /usr/local/bin/mycat_check.sh file, as shown below.

vim /usr/local/bin/mycat_check.sh

The edited content of the file is as follows.

#!/bin/bash
# count how many lines of `mycat status` contain "not running"
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
    # Mycat is running: answer the health check with HTTP 200
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
else
    # Mycat is down: answer with HTTP 503 and try to start it
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /usr/local/mycat/bin/mycat start
fi

Grant executable permission to the mycat_check.sh file, as shown below.

chmod a+x /usr/local/bin/mycat_check.sh

(8) Edit the /etc/services file, as shown below.

vim /etc/services

Add the following at the end of the file.

mycat_status  48700/tcp        # mycat_status

The port number here must be the same as the port number configured in the /etc/xinetd.d/mycat_status file.

(9) Restart the xinetd service, as shown below.

service xinetd restart

(10) Check whether the mycat_status service started successfully, as shown below.

  • The binghe151 server
[root@binghe151 ~]# netstat -antup|grep 48700
tcp    0   0 :::48700          :::*      LISTEN   2776/xinetd
  • The binghe154 server
[root@binghe154 ~]# netstat -antup|grep 48700
tcp    0   0 :::48700          :::*      LISTEN   6654/xinetd

The results show that the mycat_status service on both servers started successfully.

At this point, the xinetd service has been installed and configured successfully, which means the Mycat status check service was installed successfully.
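Optionally, you can probe the status port by hand. Since mycat_check.sh writes only a bare HTTP status line, curl may complain about the truncated response, but the status line is all that HAProxy's health check (configured below) looks at:

curl -s -i http://192.168.175.151:48700/ | head -n 1
# expected output when Mycat is running: HTTP/1.1 200 OK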

Install and configure HAProxy

We install HAProxy directly on the binghe151 and binghe154 servers using the following command.

yum install haproxy -y

After installation, we need to configure HAProxy. HAProxy's configuration file directory is /etc/haproxy; let's look at the files in this directory, as shown below.

[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21  2016 haproxy.cfg

We can see there is a haproxy.cfg file in the /etc/haproxy/ directory. Next, we modify the haproxy.cfg file; the modified content is as follows.

global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

# HAProxy statistics page, available at http://<host>:48800/admin-status
listen  admin_status
      bind 0.0.0.0:48800
      stats uri /admin-status
      stats auth  admin:admin
# load-balanced Mycat service port (forwards to Mycat serverPort 3307)
listen    allmycat_service
      bind 0.0.0.0:3366
      mode tcp
      option tcplog
      # health check: probe the xinetd mycat_status service on port 48700
      option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
      balance    roundrobin
      server    mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
      server    mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3
# load-balanced Mycat management port (forwards to Mycat managerPort 3308)
listen    allmycat_admin
      bind 0.0.0.0:3377
      mode tcp
      option tcplog
      option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
      balance    roundrobin
      server    mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
      server    mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3

Next, start HAProxy on the binghe151 and binghe154 servers, as shown below.

haproxy -f /etc/haproxy/haproxy.cfg 
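Based on the admin_status section of the configuration above, you can also verify that HAProxy itself is up via its statistics page (credentials admin:admin, as configured):

curl -u admin:admin http://192.168.175.151:48800/admin-status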

Next, we use the mysql command to connect to Mycat through the virtual IP and the port that HAProxy listens on, as shown below.

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password  
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 

As you can see, the connection to Mycat succeeded.

Install Keepalived

1. Install and configure Keepalived

Enter the following command directly on the binghe151 and binghe154 servers to install Keepalived.

yum install keepalived -y

After successful installation, a keepalived directory is created under /etc. Next, we configure the keepalived.conf file in the /etc/keepalived directory, as shown below.

vim /etc/keepalived/keepalived.conf
  • binghe151 Server configuration
! Configuration File for keepalived
vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.175.110 dev eth0 scope global
    }
}
  • binghe154 Server configuration
! Configuration File for keepalived
vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.175.110 dev eth0 scope global
    }
}

2. Write the HAProxy check script

Next, we need to create the check_haproxy.sh script in the /etc/keepalived directory on both the binghe151 and binghe154 servers. The script content is as follows.

#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
# count running haproxy processes; if there are none, try to restart HAProxy
A=`ps -C haproxy --no-header |wc -l`
if [ $A -eq 0 ];then
   echo $STARTHAPROXY >> $LOGFILE
   $STARTHAPROXY >> $LOGFILE 2>&1
   sleep 5
fi
# if HAProxy is running (possibly after the restart above), report healthy;
# otherwise report failure so Keepalived does not favor this node
if [ `ps -C haproxy --no-header |wc -l` -eq 0 ];then
   exit 1
else
   exit 0
fi

Use the following command to grant executable permission to the check_haproxy.sh script.

chmod a+x /etc/keepalived/check_haproxy.sh 

3. Start Keepalived

Once the configuration is complete, we can start Keepalived. Start it on the binghe151 and binghe154 servers respectively, as shown below.

/etc/init.d/keepalived start

Check that Keepalived started successfully, as shown below.

  • The binghe151 server
[root@binghe151 ~]# ps -ef | grep keepalived
root       1221      1  0 20:06 ?        00:00:00 keepalived -D
root       1222   1221  0 20:06 ?        00:00:00 keepalived -D
root       1223   1221  0 20:06 ?        00:00:02 keepalived -D
root      93290   3787  0 21:42 pts/0    00:00:00 grep keepalived
  • The binghe154 server
[root@binghe154 ~]# ps -ef | grep keepalived
root       1224      1  0 20:06 ?        00:00:00 keepalived -D
root       1225   1224  0 20:06 ?        00:00:00 keepalived -D
root       1226   1224  0 20:06 ?        00:00:02 keepalived -D
root      94636   3798  0 21:43 pts/0    00:00:00 grep keepalived

As you can see, the Keepalived service on both servers started successfully.

4. Verify that Keepalived bound the virtual IP

Next, let's check whether Keepalived has bound the virtual IP.

  • The binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link 
       valid_lft forever preferred_lft forever

You can see the following line in the output.

inet 192.168.175.110/32 scope global eth0

This shows that Keepalived on the binghe151 server has bound the virtual IP 192.168.175.110.

  • The binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link 
       valid_lft forever preferred_lft forever

As you can see, Keepalived on the binghe154 server has not bound the virtual IP (the 192.168.175.110/24 secondary address on eth0:1 is the one we added by hand earlier; the /32 address managed by Keepalived is absent).

5. Test the drift of the virtual IP

How do we test the drift of the virtual IP? First, stop Keepalived on the binghe151 server, as shown below.

/etc/init.d/keepalived stop

Next, check the virtual IP binding of Keepalived on the binghe154 server, as shown below.

[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link 
       valid_lft forever preferred_lft forever

As you can see, the output contains the following line.

 inet 192.168.175.110/32 scope global eth0

This shows that Keepalived on the binghe154 server has bound the virtual IP 192.168.175.110; the virtual IP has drifted to the binghe154 server.

6. Keepalived on binghe151 grabs the virtual IP back

Next, we start Keepalived on the binghe151 server, as shown below.

/etc/init.d/keepalived start

After it starts successfully, let's look at the virtual IP binding again, as shown below.

  • The binghe151 server
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link 
       valid_lft forever preferred_lft forever
  • The binghe154 server
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link 
       valid_lft forever preferred_lft forever

Because Keepalived on the binghe151 server is configured with a higher priority than Keepalived on the binghe154 server, once Keepalived on binghe151 is started again, it preempts the virtual IP.

Configure MySQL master-slave replication

Here, for simplicity, I configure master-slave replication between MySQL on the binghe154 and binghe155 servers. You can also configure MySQL master-slave replication on other servers according to your actual situation. (Note: the configuration shown here is my master-slave setup.)

1. Edit the my.cnf file

  • The binghe154 server
server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
  • The binghe155 server
server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery

2. Synchronize the MySQL data of the two servers

On the binghe154 server there is only a customer_db database. We use the mysqldump command to export the customer_db database, as shown below.

[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password: 

Next, let's view the binghe154.sql file.

more binghe154.sql

In the file, we can find the following information.

CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;

It shows that the current MySQL binary log file is mysql-bin.000042 and the binary log position is 995.

Next, we copy the binghe154.sql file to the binghe155 server, as shown below.

scp binghe154.sql 192.168.175.155:/usr/local/src

On the binghe155 server, import the binghe154.sql script into MySQL, as shown below.

mysql -uroot -p < /usr/local/src/binghe154.sql

At this point, the data initialization is complete.
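As a quick check, you can confirm on binghe155 that the imported database is present:

mysql -uroot -p -e "SHOW DATABASES LIKE 'customer_db'"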

3. Create the master-slave replication account

In MySQL on the binghe154 server, create the MySQL account used for master-slave replication, as shown below.

mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';                           
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

4. Configure the replication link

Log in to MySQL on the binghe155 server and configure the replication link with the following command.

mysql> change master to 
     > master_host='192.168.175.154',
     > master_port=3306,
     > master_user='repl',
     > master_password='repl123456',
     > MASTER_LOG_FILE='mysql-bin.000042',
     > MASTER_LOG_POS=995;

Here, MASTER_LOG_FILE='mysql-bin.000042' and MASTER_LOG_POS=995 are the values found in the binghe154.sql file.

5. Start the slave

Start the slave on the MySQL command line of the binghe155 server, as shown below.

mysql> start slave;

Check whether the slave started successfully, as shown below.

mysql> SHOW slave STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.175.154
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 1360
               Relay_Log_File: relay-bin.000003
                Relay_Log_Pos: 322
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
################# Part of the output omitted ##################

The result shows that the values of both the Slave_IO_Running and Slave_SQL_Running options are Yes, which means the MySQL master-slave replication environment was set up successfully.

Finally, don't forget to create the account that Mycat uses to connect to MySQL in MySQL on the binghe155 server, as shown below.

CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat'; 
GRANT SELECT, INSERT, UPDATE, DELETE,EXECUTE  ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
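You can optionally verify from one of the Mycat servers, for example binghe151, that this account can log in to binghe155 remotely:

mysql -umycat -pmycat -h192.168.175.155 -P3306 -e "select user();"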

Configure Mycat read/write separation

Modify Mycat's schema.xml file to implement read/write separation between MySQL on the binghe154 and binghe155 servers. In the conf/zkconf directory under the Mycat installation directory, modify the schema.xml file; the modified schema.xml is shown below. The key change is in the binghe154 dataHost: balance="1" enables read load balancing, the readHost pointing at binghe155 takes SELECT traffic, and binghe155 is also listed as a standby writeHost so that Mycat can fail writes over if binghe154 goes down.

<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="shop" checkSQLschema="true" sqlMaxLimit="1000">
        <table name="order_master" dataNode="orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" primaryKey="order_id" autoIncrement="true">
            <childTable name="order_detail" joinKey="order_id" parentKey="order_id" primaryKey="order_detail_id" autoIncrement="true"/>
        </table>
        <table name="order_cart" dataNode="ordb" primaryKey="cart_id"/>
        <table name="order_customer_addr" dataNode="ordb" primaryKey="customer_addr_id"/>
        <table name="region_info" dataNode="ordb,prodb,custdb" primaryKey="region_id" type="global"/>
        <table name="serial" dataNode="ordb" primaryKey="id"/>
        <table name="shipping_info" dataNode="ordb" primaryKey="ship_id"/>
        <table name="warehouse_info" dataNode="ordb" primaryKey="w_id"/>
        <table name="warehouse_proudct" dataNode="ordb" primaryKey="wp_id"/>
        <table name="product_brand_info" dataNode="prodb" primaryKey="brand_id"/>
        <table name="product_category" dataNode="prodb" primaryKey="category_id"/>
        <table name="product_comment" dataNode="prodb" primaryKey="comment_id"/>
        <table name="product_info" dataNode="prodb" primaryKey="product_id"/>
        <table name="product_pic_info" dataNode="prodb" primaryKey="product_pic_id"/>
        <table name="product_supplier_info" dataNode="prodb" primaryKey="supplier_id"/>
        <table name="customer_balance_log" dataNode="custdb" primaryKey="balance_id"/>
        <table name="customer_inf" dataNode="custdb" primaryKey="customer_inf_id"/>
        <table name="customer_level_inf" dataNode="custdb" primaryKey="customer_level"/>
        <table name="customer_login" dataNode="custdb" primaryKey="customer_id"/>
        <table name="customer_login_log" dataNode="custdb" primaryKey="login_id"/>
        <table name="customer_point_log" dataNode="custdb" primaryKey="point_id"/>
    </schema>
 
    <dataNode name="mycat" dataHost="binghe151" database="mycat"/>
    <dataNode name="ordb" dataHost="binghe152" database="order_db"/>
    <dataNode name="prodb" dataHost="binghe153" database="product_db"/>
    <dataNode name="custdb" dataHost="binghe154" database="customer_db"/>
    <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01"/>
    <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02"/>
    <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03"/>
    <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04"/>
 
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe151" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe51" url="192.168.175.151:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe152" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe52" url="192.168.175.152:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe153" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe53" url="192.168.175.153:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe154" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe54" url="192.168.175.154:3306" password="mycat" user="mycat">
            <readHost host="binghe55" url="192.168.175.155:3306" user="mycat" password="mycat"/>
        </writeHost>
        <writeHost host="binghe55" url="192.168.175.155:3306" password="mycat" user="mycat"/>
    </dataHost>
</mycat:schema>

Save and exit the vim editor. Next, initialize the data in Zookeeper, as shown below.

/usr/local/mycat/bin/init_zk_data.sh 

After the above command executes successfully, the configuration is automatically synchronized to the schema.xml file in the conf directory under the Mycat installation directory on the binghe151 and binghe154 servers.

Next, restart the Mycat service on the binghe151 and binghe154 servers respectively.

mycat restart
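After the restart, one way to check that Mycat can reach all of its backends is to query the heartbeat status through Mycat's management port; this sketch assumes the default management port 9066 and the same mycat account defined in server.xml. Each backend dataHost should report a healthy status.

mysql -umycat -pmycat -h192.168.175.151 -P9066 -e "show @@heartbeat;"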

How to access the high availability environment

At this point, the entire high availability environment has been configured. When an upstream application connects to this environment, it must connect to the IP address and port that HAProxy listens on. For example, use the mysql command to connect to the high availability environment as follows.

[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop     |
+----------+
1 row in set (0.10 sec)

mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables in shop        |
+-----------------------+
| customer_balance_log  |
| customer_inf          |
| customer_level_inf    |
| customer_login        |
| customer_login_log    |
| customer_point_log    |
| order_cart            |
| order_customer_addr   |
| order_detail          |
| order_master          |
| product_brand_info    |
| product_category      |
| product_comment       |
| product_info          |
| product_pic_info      |
| product_supplier_info |
| region_info           |
| serial                |
| shipping_info         |
| warehouse_info        |
| warehouse_proudct     |
+-----------------------+
21 rows in set (0.00 sec)
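To spot-check that read traffic for the binghe154 dataHost actually reaches the readHost, one rough approach is to generate a few SELECTs through Mycat and watch the processlist on binghe155. This is only a sketch, assuming balance="1" routes simple SELECTs on custdb tables to the readHost:

# Through HAProxy/Mycat, generate some read traffic
mysql -umycat -pmycat -h192.168.175.110 -P3366 -e "select * from customer_login limit 1;"

# On binghe155, look for connections and query activity from the mycat user
mysql -uroot -p -e "SHOW PROCESSLIST;" | grep mycat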

Here, I have only extended the read/write separation environment for MySQL on the binghe154 server. You can also implement master-slave replication and read/write separation for MySQL on the other servers according to your actual situation. In this way, the whole environment achieves high availability for HAProxy, Mycat, MySQL, Zookeeper, and Keepalived.

OK, that's all for today. I'm Glacier. See you next time!

This article is from the WeChat official account Glacier Technology (hacker-binghe).


Originally published: 2020-11-25


Copyright notice
This article was created by [glacier]. Please include the original link when reprinting. Thanks.
https://cdmana.com/2020/12/20201224160737134E.html
