
Nginx dynamic load balancing (Nginx-1.10.1 + Consul v0.6.4)

I could not find anything like the Socat + HAProxy approach that works with Nginx, and the several Nginx modules I did find each have their own shortcomings. On top of that, under heavy traffic nginx -s reload costs 15% or more in lost throughput, and the old worker processes do not exit until they finish handling their existing connections while new workers are spawned to take over. Using HAProxy as the forwarder instead is painful, isn't it? In the end I felt that nginx_upsync_module meets the need: hosts can be brought online and offline smoothly from the command line. The rest of this article covers how to use it.

https://www.cnblogs.com/beyondbit/p/6063132.html # Thanks to the author

1 ) Overview of the Nginx modules for bringing upstream hosts online/offline:

## 1.1) Nginx itself does not provide an API for taking upstream hosts online or offline; you need OpenResty or third-party extensions, for example:
Tengine's Dyups module (ngx_http_dyups_module).
Sina Weibo's Upsync + Consul dynamic load balancing.
OpenResty's balancer_by_lua (UpYun's open-source slardar is built on Consul + balancer_by_lua).

ngx_http_dyups_module (https://github.com/yzprofile/ngx_http_dyups_module)          # Coarse-grained upstream management: add or delete an entire upstream (see the sketch below).
lua-upstream-nginx-module (https://github.com/openresty/lua-upstream-nginx-module)  # Fine-grained management of individual server IPs; its set_peer_down method can take a single IP in an upstream online or offline.
ngx_dynamic_upstream (https://github.com/cubicdaiya/ngx_dynamic_upstream)           # What these plugins have in common is that they modify the nginx upstream configuration dynamically, without restarting nginx.
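
To give a feel for the first, coarse-grained style, here is a rough sketch of the dyups HTTP interface as I read it from that project's README; the management port 8081, the dyups_interface location, and the upstream name dyhost are illustrative assumptions, not part of this article's setup:

    # Assumes a server block listening on 8081 whose "location /" contains "dyups_interface;"
    curl -d "server 10.10.16.182:80;" http://127.0.0.1:8081/upstream/dyhost   # create/replace the whole upstream "dyhost"
    curl http://127.0.0.1:8081/detail                                         # list all upstreams and their servers
    curl -i -X DELETE http://127.0.0.1:8081/upstream/dyhost                   # delete the upstream "dyhost"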

In the end I chose the Weibo Nginx + Upsync + Consul scheme. The main concerns were configuration persistence and whether production would be affected if the registry goes down; this scheme handles both, so I adopted it.

## 1.2) An open-source Lua + nginx online/offline project on GitHub:
https://github.com/firstep/grayscale

2 ) Experimental environment :

| Hostname | Domain | Port | Software | Intranet IP | Role | OS |
|----------|--------|------|----------|-------------|------|----|
| bj-node-1 | con.linux08.com | 8500 | Consul_0.6.4 | 10.10.78.17 | Registry center | CentOS 7 x64 |
| bj-master-1 | www.linux08.com | 80 | Nginx + upsync | 10.10.123.235 | Nginx proxy | CentOS 7 x64 |
| cli-1 | (none) | 80 | nginx web | 10.10.16.182 | Web server | CentOS 7 x64 |
| cli-2 | (none) | 80 | nginx web | 10.10.185.201 | Web server | CentOS 7 x64 |
2.2) Environment notes:
 2.2.1) The four hosts must be able to reach each other. Install nginx-1.10.1 on cli-1 and cli-2 and give each a home page with different content so the two hosts can be told apart.
 2.2.2) Set up firewall rules in advance. In this experiment all ports are opened only to the office IP. (Mind network security; the Consul web UI in particular has very weak access control.)
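
As a minimal sketch of such a firewall rule on CentOS 7 with firewalld (203.0.113.10 stands in for the office IP, which is not named in this article), restricting the Consul HTTP/UI port might look like this:

    # Allow the Consul HTTP API/UI (8500) only from the office IP, then reload to apply
    firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="203.0.113.10/32" port protocol="tcp" port="8500" accept'
    firewall-cmd --reload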

2.3) Modules used:
nginx-upsync-module                  # Exchanges data with Consul; together with Consul it lets an nginx upstream bring hosts online and offline smoothly (developed by Sina Weibo).
nginx_upstream_check_module          # Health-checks the hosts in an nginx upstream group and displays them on a web status page (developed by Alibaba).

2.4) Version notes:
 Nginx is very picky about module versions here; with other combinations I tried several times and could not get both modules compiled into Nginx. After reading several articles I found the following combination works:
  Nginx-1.10.1
  consul_0.6.4_linux_amd64
  nginx_upstream_check_module      # Every patch in this package is named after the Nginx version it targets, so follow the versions above as closely as possible. This article uses check_1.9.2+.patch.
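
The patches shipped with the check module can simply be listed to pick the one matching your Nginx version (a quick check, run after the download in step 3.2):

    ls /data/src/nginx_upstream_check_module/*.patch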

3 ) Install Nginx and add the modules:

    cd /data/src/
3.1) Download the nginx source:
    wget http://nginx.org/download/nginx-1.10.1.tar.gz

3.2) Download the nginx_upstream_check_module module:
    git clone https://github.com/xiaokai-wang/nginx_upstream_check_module

3.3) Download the nginx-upsync-module module:
    wget https://codeload.github.com/weibocom/nginx-upsync-module/tar.gz/v2.1.2

3.4) Extract the packages:
    tar -zxf nginx-1.10.1.tar.gz 
    tar -zxf v2.1.2 

3.5) Install the build dependencies (CentOS package names):
    yum -y install gcc make patch openssl openssl-devel pcre pcre-devel zlib zlib-devel

3.6) Patch Nginx (nginx_upstream_check_module):
   **  Note  **  This patch targets nginx-1.10+ (be sure to use that version):
    cd /data/src/nginx-1.10.1/
    patch -p0 < /data/src/nginx_upstream_check_module/check_1.9.2+.patch 

    Output like the following shows the patch applied successfully:
   [root@bj-master-1 nginx-1.10.1]#     patch -p0 < /data/src/nginx_upstream_check_module/check_1.9.2+.patch 
    patching file src/http/modules/ngx_http_upstream_hash_module.c
    patching file src/http/modules/ngx_http_upstream_ip_hash_module.c
    patching file src/http/modules/ngx_http_upstream_least_conn_module.c
    patching file src/http/ngx_http_upstream_round_robin.c
    patching file src/http/ngx_http_upstream_round_robin.h

3.7) Compile and install nginx:
    groupadd -g 1001 work
    useradd -u 1001 -g 1001 work
    echo '123456' | passwd --stdin work

    cd /data/src/nginx-1.10.1/ 
    ./configure --user=work --group=work --prefix=/data/work/nginx \
     --with-http_ssl_module --with-pcre \
     --with-http_stub_status_module --with-http_ssl_module \
     --with-http_gzip_static_module \
     --with-http_realip_module --with-http_sub_module \
     --add-module=/data/src/nginx_upstream_check_module \
     --add-module=/data/src/nginx-upsync-module-2.1.2

    make -j 2 && make install 

    **  Note  **  The paths after --add-module= must point to the extracted module source directories. Make sure the version numbers in the paths are correct; if you use newer releases, adjust them yourself.

     Check whether the modules were built into Nginx:

    [root@bj-master-1 sbin]# ./nginx  -V
    nginx version: nginx/1.10.1
    built by gcc 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) 
    built with OpenSSL 1.0.2k-fips  26 Jan 2017
    TLS SNI support enabled
    configure arguments: --user=work --group=work --prefix=/data/work/nginx --with-http_ssl_module --with-pcre --with-http_stub_status_module --with-http_ssl_module --with-http_gzip_static_module --with-http_realip_module --with-http_sub_module --add-module=/data/src/nginx_upstream_check_module --add-module=/data/src/nginx-upsync-module-2.1.2
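
As a quick sanity check (just a small sketch), the configure arguments can be filtered for the two modules instead of reading the whole line:

    ./nginx -V 2>&1 | grep -oE 'nginx_upstream_check_module|nginx-upsync-module[^ ]*'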


4 ) Install Consul_0.6.4 and start it:

4.1)  download Consul_0.6.4:
wget https://releases.hashicorp.com/consul/0.6.4/consul_0.6.4_linux_amd64.zip

4.2)  Install and start Consul_0.6.4:
unzip consul_0.6.4_linux_amd64.zip
mkdir -p /data/soft/consul/data; mv consul /data/soft/consul/
cd /data/soft/consul
./consul agent -server -ui -bootstrap-expect=1 -syslog -bind=10.10.78.17 -client=0.0.0.0 -data-dir=/data/soft/consul/data -log-level=debug &

4.3) Consul startup script:
    cd /data/soft/consul/
    vim start.sh

    #!/bin/bash
    cd /data/soft/consul
    nohup ./consul agent -server -ui -bootstrap-expect=1 -syslog -bind=10.10.78.17 -client=0.0.0.0 \
     -data-dir=/data/soft/consul/data -log-level=debug > nohup.log 2>&1 &

    chmod 755 start.sh
    sh start.sh
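
To confirm the agent is up and, with -bootstrap-expect=1, has elected itself leader, a quick check from the same directory should return the agent's own address:

    ./consul members
    curl -s http://127.0.0.1:8500/v1/status/leader
    # expected: the agent's own address, e.g. "10.10.78.17:8300"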

4.5) Check the Consul startup log:
     tail -f nohup.log
        2020/12/21 18:43:02 [INFO] raft: Node at 10.10.78.17:8300 [Follower] entering Follower state
        2020/12/21 18:43:02 [INFO] serf: EventMemberJoin: bj-node-1 10.10.78.17
        2020/12/21 18:43:02 [INFO] serf: EventMemberJoin: bj-node-1.dc1 10.10.78.17
        2020/12/21 18:43:02 [INFO] consul: adding LAN server bj-node-1 (Addr: 10.10.78.17:8300) (DC: dc1)
        2020/12/21 18:43:02 [INFO] consul: adding WAN server bj-node-1.dc1 (Addr: 10.10.78.17:8300) (DC: dc1)
        2020/12/21 18:43:02 [ERR] agent: failed to sync remote state: No cluster leader
        2020/12/21 18:43:03 [WARN] raft: Heartbeat timeout reached, starting election
        2020/12/21 18:43:03 [INFO] raft: Node at 10.10.78.17:8300 [Candidate] entering Candidate state
        2020/12/21 18:43:03 [DEBUG] raft: Votes needed: 1
        2020/12/21 18:43:03 [DEBUG] raft: Vote granted from 10.10.78.17:8300. Tally: 1
        2020/12/21 18:43:03 [INFO] raft: Election won. Tally: 1
        2020/12/21 18:43:03 [INFO] raft: Node at 10.10.78.17:8300 [Leader] entering Leader state
        2020/12/21 18:43:03 [INFO] consul: cluster leadership acquired
        2020/12/21 18:43:03 [INFO] consul: New leader elected: bj-node-1
        2020/12/21 18:43:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
        2020/12/21 18:43:03 [DEBUG] raft: Node 10.10.78.17:8300 updated peer set (2): [10.10.78.17:8300]
        2020/12/21 18:43:03 [DEBUG] raft: Node 10.10.78.17:8300 updated peer set (2): [10.10.78.17:8300]
        2020/12/21 18:43:03 [DEBUG] consul: reset tombstone GC to index 6
        2020/12/21 18:43:03 [DEBUG] agent: Service 'consul' in sync
        2020/12/21 18:43:03 [INFO] agent: Synced node info

4.6) Visit the Consul web management UI (port 8500):
        http://con.linux08.com:8500/ui/


5 ) Configure the Nginx upstream and wire it to Consul:

## 5.1) Edit the Nginx configuration file:
vim /data/work/nginx/conf/nginx.conf

user work work;
worker_processes  auto;

error_log  /data/work/nginx/logs/error.log;

#pid        logs/nginx.pid;
worker_rlimit_nofile 60000;

events {
    use epoll;
    worker_connections 60000;
}

http {
        include       mime.types;
        default_type  application/octet-stream;
        charset  utf-8;

        log_format  main  '$remote_addr - $remote_user [$time_local]$upstream_addr-$upstream_status-$request_time'
                '-$upstream_response_time-$bytes_sent-$gzip_ratio "$host$request_uri" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

        log_format  upstream  '$time_iso8601 $http_x_forwarded_for $host $upstream_response_time $request $status $upstream_addr';

        access_log  /data/logs/nginx/access.log  main;

        types_hash_max_size 2048;
        sendfile        on;

        ..........  Some configurations are omitted here ..........

######################### Server ##############################

upstream con_server {                                                       # The upstream name matters: it is best to name the Consul key after it.
        server 127.0.0.1:11111;                                             # Placeholder server; nginx will not start without at least one server line here.
        upsync 10.10.78.17:8500/v1/kv/upstreams/con_server/ upsync_timeout=6m upsync_interval=500ms upsync_type=consul strong_dependency=off;
        upsync_dump_path /data/work/nginx/conf/servers/con_server.conf;     # Persist the registry (Consul) content to a local file; the include below must match this path.

        include /data/work/nginx/conf/servers/con_server.conf;              # Include the persisted config so the service keeps running even if the registry goes down. The directory and file must already exist for the registry data to be picked up.
        check interval=5000 rise=1 fall=3 timeout=4000 type=http port=80;   # Required by the upstream check module; this is what the web status page displays.
}

server {
        listen 80;
        server_name www.linux08.com;

        location / {
             proxy_next_upstream http_404 http_500 http_502 http_503 http_504 error timeout invalid_header;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header Host  $host;
             proxy_read_timeout 30s;
             proxy_connect_timeout 10s;
             proxy_pass http://con_server/;
        }

}

server {
    listen 80;
    server_name status.linux08.com;

    location / {
        check_status;
#        allow 0.0.0.0;
#        deny all;

        auth_basic      "login";
        auth_basic_user_file    /data/work/nginx/conf/.htpasswd;
        }
    }
}

## 5.2) Create the dump (persisted) configuration directory and file:
mkdir -p /data/work/nginx/conf/servers/              # Create the dump config directory                                              (must exist)
touch /data/work/nginx/conf/servers/con_server.conf  # Create the dump config file; the name must match upsync_dump_path above       (must exist)
mkdir -p /data/logs/nginx/                           # Create the log directory

## 5.3) Set up authentication for the status page:
yum install httpd-tools -y
htpasswd -bcm /data/work/nginx/conf/.htpasswd root 123456    # The last two arguments are the username and password

[root@bj-master-1 conf]# htpasswd -bcm /data/work/nginx/conf/.htpasswd root 123456
Adding password for user root

## 5.4) Start nginx:
/data/work/nginx/sbin/nginx  -t
/data/work/nginx/sbin/nginx 

## 5.5) Add hosts to the Consul server:
curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":3}' 10.10.78.17:8500/v1/kv/upstreams/con_server/10.10.16.182:80
curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":3}' 10.10.78.17:8500/v1/kv/upstreams/con_server/10.10.185.201:80

[root@bj-master-1 nginx]# curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":3}' 10.10.78.17:8500/v1/kv/upstreams/con_server/10.10.16.182:80
true
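
Once the keys exist in Consul, the upsync module syncs them into the upstream and persists them to the dump file created in 5.2. The exact formatting may differ slightly from build to build, but the file should contain ordinary server directives, roughly like this:

    cat /data/work/nginx/conf/servers/con_server.conf
    # server 10.10.16.182:80 weight=1 max_fails=2 fail_timeout=3s;
    # server 10.10.185.201:80 weight=1 max_fails=2 fail_timeout=3s;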

## 5.6) Take hosts offline from the Consul server:
curl -X DELETE http://10.10.78.17:8500/v1/kv/upstreams/con_server/10.10.16.182:80
curl -X DELETE http://10.10.78.17:8500/v1/kv/upstreams/con_server/10.10.185.201:80
**  Note:  Do not take every host in a group offline, or the group can no longer serve requests.

[root@bj-master-1 nginx]# curl -X DELETE http://10.10.78.17:8500/v1/kv/upstreams/con_server/10.10.16.182:80
true                        ##  If the request is valid, the command returns true. Submitting the same key repeatedly is not an error; the value is simply overwritten.

## 5.7) Query the result from the command line:
curl -s http://10.10.78.17:8500/v1/kv/upstreams/con_server/?recurse

[root@bj-master-1 nginx]# curl -s http://10.10.78.17:8500/v1/kv/upstreams/con_server/?recurse
[{"LockIndex":0,"Key":"upstreams/con_server/10.10.16.182:80","Flags":0,"Value":"eyJ3ZWlnaHQiOjEsICJtYXhfZmFpbHMiOjIsICJmYWlsX3RpbWVvdXQiOjN9","CreateIndex":9616,"ModifyIndex":9623},{"LockIndex":0,"Key":"upstreams/con_server/10.10.185.201:80","Flags":0,"Value":"eyJ3ZWlnaHQiOjEsICJtYXhfZmFpbHMiOjIsICJmYWlsX3RpbWVvdXQiOjN9","CreateIndex":5311,"ModifyIndex":5311}][root@bj-master-1 nginx]# 

## 5.8) Test the result:

 Visiting www.linux08.com alternates the response between web1 and web2. Take one host offline with the DELETE command above and refresh again: only the remaining host is served. At the same time, the status.linux08.com page shows the corresponding state.
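
A quick way to watch the alternation from the command line (a sketch; it assumes the two home pages contain different text, as set up in 2.2.1):

    for i in 1 2 3 4 5 6; do
        curl -s -H 'Host: www.linux08.com' http://10.10.123.235/
    done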


6 ) The Nginx status page:


## 6.1) Status page fields:
server number     # Number of back-end servers
generation        # Number of times Nginx has been reloaded
Index             # Index of the server
Upstream          # Name of the upstream it belongs to in the configuration
Name              # IP address of the server
Status            # Current state of the server
Rise              # Number of consecutive successful checks
Fall              # Number of consecutive failed checks
Check type        # Type of health check
Check port        # Back-end port used specifically for the health check
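
The same information can also be pulled from the command line with the basic-auth credentials created in 5.3 (a sketch; it returns the HTML status page):

    curl -s -u root:123456 -H 'Host: status.linux08.com' http://10.10.123.235/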

7 ) Nginx check/status configuration parameters:


## 7.1) Nginx configuration snippet:
    ......
       check interval=5000 rise=1 fall=3 timeout=4000 type=http port=80;
        # Check the real state of the back-end nodes every 5 seconds; 1 success marks a node up, 3 consecutive failures mark it down; the timeout is 4 seconds and the check type is http

       check_http_send "HEAD / HTTP/1.0\r\n\r\n";           # The HTTP request the load balancer sends to each back-end real server for the health check, much like an LVS probe.
       check_http_expect_alive http_2xx http_3xx;           # If the returned HTTP status code matches the expectation, the check succeeds:
                                                            # 2xx/3xx is considered healthy, any other status code marks the node down
    ......

## 7.2) Parameter syntax:
Syntax: check interval=milliseconds [fall=count] [rise=count] [timeout=milliseconds] [default_down=true|false] [type=tcp|http|ssl_hello|mysql|ajp] [port=check_port]
Default: if no parameters are given, the defaults are: interval=30000 fall=5 rise=2 timeout=1000 default_down=true type=tcp
Context: upstream

The check directive enables health checking of the back-end servers. Its parameters mean:

interval:                         # Interval between health-check packets sent to the back end, in milliseconds.
fall (fall_count):                # If this many consecutive checks fail, the server is considered down.
rise (rise_count):                # If this many consecutive checks succeed, the server is considered up.
timeout:                          # Timeout for the back-end health request, in milliseconds.
default_down:                     # Initial state of the server. true means it starts as down, false means it starts as up. The default is true, i.e. a server is considered unhealthy until it passes the required number of checks.
type:                             # Type of health-check packet. The following types are supported:

        tcp: a plain TCP connection; if the connection succeeds, the back end is considered healthy.
        ssl_hello: send an initial SSL hello packet and expect an SSL hello packet from the server.
        http: send an HTTP request and judge whether the back end is alive from the status of the reply.
        mysql: connect to the MySQL server and judge whether the back end is alive from the greeting packet it sends.
        ajp: send an AJP Cping packet and judge whether the back end is alive by whether a Cpong packet comes back.

port:                             # The port on the back-end server to check.

check_http_send:                 # The HTTP request the load balancer sends to each back-end real server for the health check, much like an LVS probe.
check_http_expect_alive:         # Status codes considered healthy; 2xx/3xx is normal, any other status code marks the node down.
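
As a worked example of these parameters (a sketch only, not part of this article's setup), the same upstream could instead run a plain TCP check every 3 seconds against a hypothetical dedicated health port 8080, with each server starting in the up state:

    upstream con_server {
        ......
        check interval=3000 rise=2 fall=5 timeout=1000 default_down=false type=tcp port=8080;
    }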
