
keepalived High-Availability Cluster

Note: the configuration is identical on CentOS 7 and CentOS 9.

Load balancing (LB cluster, load balance): distributes traffic.

High availability (HA cluster, high availability): mainly provides redundancy for servers.

keepalive: persistent connection, "keep alive".

keepalived: the name of the high-availability software.

Red Hat ships its own high-availability cluster suite, RHCS.

Introduction to keepalived

keepalived is a service that guarantees high availability in cluster management. Its function is similar to heartbeat, and it is used to prevent single points of failure.

Normal operation: the master holds the VIP, the backup stands by.

Split-brain problem: the backup and the master both hold the VIP at the same time.

Solution: STONITH ("shoot the other node in the head"): forcibly reset the faulty node, or restart/stop its keepalived service.
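A minimal sketch of how split-brain can be detected: check whether the VIP is active on more than one node at once. The addresses are the ones used later in this lab; the `ip -4 addr show` output is simulated here with captured sample text (in real use you would run the command on each node, e.g. over ssh).

```shell
#!/bin/bash
# Split-brain check sketch: the VIP must be active on at most one node.
# Assumption: in real use you would capture `ip -4 addr show dev eth0`
# from each director; here the output is simulated with sample text
# representing a split-brain situation.
vip="192.168.122.254"

# hypothetical captured output from the two directors
master_addrs="inet 192.168.122.10/24 scope global eth0
inet 192.168.122.254/24 scope global eth0"
backup_addrs="inet 192.168.122.20/24 scope global eth0
inet 192.168.122.254/24 scope global eth0"

has_vip() {   # does this node's address list contain the VIP?
    grep -q "inet ${vip}/" <<< "$1"
}

if has_vip "$master_addrs" && has_vip "$backup_addrs"; then
    echo "split-brain: VIP ${vip} is active on both nodes"
fi
```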

How keepalived works

keepalived is built on the VRRP protocol, a protocol for making routers highly available. VRRP stands for Virtual Router Redundancy Protocol.

Interview question (how keepalived works): N servers providing the same function form a server group, with one master and multiple backups. The master holds a VIP that provides service to the outside; the other machines in the LAN use this VIP as their default route. The master sends multicast VRRP advertisements; when the backups stop receiving VRRP packets, they assume the master is down, and a backup is then elected to become the new master according to VRRP priority.
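The election step above can be sketched very simply: each candidate advertises a priority and the highest priority wins (real VRRP additionally breaks ties using the higher primary IP address). A toy illustration in bash, with made-up node names and priorities:

```shell
#!/bin/bash
# Toy VRRP election: the candidate with the highest priority wins.
# (Real VRRP also breaks ties by the higher primary IP address.)
elect_master() {
    # input lines: "<priority> <node>"; output: name of the winning node
    sort -rn | head -n1 | awk '{print $2}'
}

winner=$(elect_master <<EOF
100 backup1
150 backup2
90 backup3
EOF
)
echo "new master: $winner"   # prints: new master: backup2
```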

Broadcast / multicast / unicast

keepalived has three main modules: core, check and vrrp. The core module is the heart of keepalived; it is responsible for starting and maintaining the main process and for loading and parsing the global configuration file. The check module performs health checks, including all the common check methods. The vrrp module implements the VRRP protocol.

keepalived deployment

                      +--------+
                      | Client |  192.168.122.1/24 (the host machine acts as the client)
                      +--------+
                           |
                           |   VIP eth0:1 192.168.122.254/24
                           |
    +-----------------+                 +-----------------+
    | Director master |                 | Director backup |
    +-----------------+                 +-----------------+
     DIP eth0 192.168.122.10/24          DIP eth0 192.168.122.20/24
           |_____________________________________|
                              |
            __________________________________
           |                                  |
    +---------------+                 +---------------+
    | Real Server A |                 | Real Server B |
    +---------------+                 +---------------+
     eth0 192.168.122.30/24            eth0 192.168.122.40/24

On Director master and Director backup, deploy the floating resources (VIP, IPVS rules) separately and verify that both Directors work correctly in DR mode. After testing, remove the floating resources from both.


About the order of the DR NICs' route entries

Because on dr2 the ens36 entries are below and the ens33 entries are above, configuring keepalived fails (the two directors disagree on which NIC is preferred). A possible fix is to adjust the order of the route entries, as shown below. If that still does not work, fix the NIC names and IP plan from the very start. Physical servers normally do not hit this problem.

[root@server ~]# ip r l
default via 192.168.26.2 dev ens36 proto dhcp src 192.168.26.132 metric 100 
default via 192.168.26.2 dev ens33 proto dhcp src 192.168.26.131 metric 101 
192.168.26.0/24 dev ens36 proto kernel scope link src 192.168.26.132 metric 100 
192.168.26.0/24 dev ens33 proto kernel scope link src 192.168.26.131 metric 101 

[root@server ~]# ip link set ens36 down
[root@server ~]# ip r l
default via 192.168.26.2 dev ens33 proto dhcp src 192.168.26.131 metric 101 
192.168.26.0/24 dev ens33 proto kernel scope link src 192.168.26.131 metric 101 

[root@server ~]# ip link set ens36 up
[root@server ~]# ip r l
default via 192.168.26.2 dev ens33 proto dhcp src 192.168.26.131 metric 101 
default via 192.168.26.2 dev ens36 proto dhcp src 192.168.26.132 metric 102 
192.168.26.0/24 dev ens33 proto kernel scope link src 192.168.26.131 metric 101 
192.168.26.0/24 dev ens36 proto kernel scope link src 192.168.26.132 metric 102
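The kernel prefers the default route with the lowest metric, which is why the link down/up above pushes the ens36 routes to the bottom (its metric grows to 102). A small sketch that picks the preferred default route from output like the above; the sample lines are copied from the transcript:

```shell
#!/bin/bash
# Which default route wins? The one with the lowest metric.
# Sample lines taken from the `ip r l` output in the transcript above.
routes="default via 192.168.26.2 dev ens36 proto dhcp src 192.168.26.132 metric 100
default via 192.168.26.2 dev ens33 proto dhcp src 192.168.26.131 metric 101"

preferred_dev() {
    # print the device of the default route with the lowest metric
    awk '/^default/ {for(i=1;i<NF;i++){if($i=="dev")d=$(i+1); if($i=="metric")m=$(i+1)}
         if(m<best||best==""){best=m;dev=d}} END{print dev}' <<< "$1"
}

echo "preferred default route: $(preferred_dev "$routes")"
```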


1. Install and configure keepalived on the master

# yum install keepalived -y

2. Edit the configuration file (clear the original contents first)

# cd /etc/keepalived/
# vim keepalived.conf

! Configuration File for keepalived

# global configuration
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id Director1              # identical on both machines
}

# instance configuration
vrrp_instance VI_1 {
    state MASTER                     # BACKUP on the other machine
    interface eth0                   # heartbeat NIC: the NIC that carries the DIP
    virtual_router_id 51
    priority 50                      # priority; the backup's must be lower
    advert_int 1                     # check interval, in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.122.254/24 dev eth0  # the VIP; any free address in the same subnet
    }
}

virtual_server 192.168.122.254 80 {  # LVS: configure the VIP
    delay_loop 3                     # health-check polling interval, in seconds
    lb_algo rr                       # LVS scheduling algorithm
    lb_kind DR                       # LVS cluster mode
    protocol TCP
    real_server 192.168.122.30 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.122.40 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
        }
    }
}
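For reference, the virtual_server block above corresponds roughly to these manually entered ipvsadm rules. This is only a sketch: keepalived installs the equivalent rules itself, and you would normally just inspect them with `ipvsadm -Ln`.

```shell
#!/bin/bash
# Print the ipvsadm rules roughly equivalent to the virtual_server
# block above.
# -A add virtual service, -t TCP service, -s scheduler,
# -a add real server, -r real server address, -g DR mode, -w weight
vip=192.168.122.254
rules=$(cat <<EOF
ipvsadm -A -t ${vip}:80 -s rr
ipvsadm -a -t ${vip}:80 -r 192.168.122.30:80 -g -w 1
ipvsadm -a -t ${vip}:80 -r 192.168.122.40:80 -g -w 1
EOF
)
echo "$rules"
```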

3. Install keepalived on the backup

# yum install keepalived -y

4. Copy keepalived.conf from the master to the backup

# scp keepalived.conf 192.168.122.20:/etc/keepalived/

5. After copying, edit the configuration file: change state to BACKUP and set a priority lower than the master's (e.g. 40).

6. Start the service on both Directors

# systemctl start keepalived

7. Testing:

(1) Check the LVS rule entries.

(2) Check which machine holds the VIP (it may even appear on both machines; as long as the service works correctly, that is fine).

(3) Access the VIP from the client's browser.

(4) Stop keepalived on the master, then access the VIP again.

Summary of the experiment

10.0.0.20  dr1
10.0.0.21  dr2
10.0.0.22  web1 (rs1)
10.0.0.23  web2 (rs2)

1. Real-server configuration: 1.1 install the web server; 1.2 add the VIP 10.0.0.40 on lo; 1.3 set arp_ignore=1 and arp_announce=2.
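Steps 1.2 and 1.3 can be sketched as the following commands, printed here rather than executed (they need root on the real servers). The VIP is the lab's 10.0.0.40; "arp 1 2" means arp_ignore=1 and arp_announce=2.

```shell
#!/bin/bash
# Sketch of the real-server setup: bind the VIP on lo and suppress ARP
# replies/announcements for it (arp_ignore=1, arp_announce=2).
# Printed, not executed, because the commands need root on the RS machines.
vip="10.0.0.40"
rs_setup=$(cat <<EOF
ip addr add ${vip}/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
EOF
)
echo "$rs_setup"
```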

2. Director configuration: 2.1 two NICs, and the NIC names on the two machines must be identical; 2.2 the order of the route entries must be identical; 2.3 the NIC whose route entry is on top is the one that carries the DIP; 2.4 fix the DIP (here .20 and .21 on ens33); 2.5 the VIP NIC (ens37) may or may not have an IP address, as long as the NIC is up; if it does have an address, that address must not be the VIP; 2.6 install ipvsadm.

3. Install keepalived on both Directors and edit keepalived.conf: set the VIP, set the roles (master/backup), and list the real servers and their IP addresses.

Extended experiment: keepalived + MySQL

MySQL can be deployed in any of these three ways:

1. dual master

2. MySQL Cluster

3. Galera cluster

Project environment:

VIP: 192.168.122.100
mysql1: 192.168.122.10, MySQL master, keepalived master
mysql2: 192.168.122.20, MySQL master, keepalived backup

Implementation overview

1. MySQL master-master replication
2. Install keepalived on both MySQL machines
3. keepalived master/backup configuration files
4. MySQL status-check script /root/keepalived_check_mysql.sh
5. Testing and diagnostics

Note: keepalived instances communicate with each other via VRRP multicast, using IP address 224.0.0.18.

Implementation steps

1. MySQL master-master replication: <omitted>
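To confirm that the VRRP advertisements are actually flowing on 224.0.0.18, you can watch the multicast group with tcpdump. The command is only printed here (tcpdump needs root on a director), and the interface name eth0 is the lab's:

```shell
#!/bin/bash
# Sketch: command to watch the VRRP advertisements on the multicast
# group 224.0.0.18 (printed only; run it as root on a director).
check_cmd='tcpdump -ni eth0 host 224.0.0.18'
echo "$check_cmd"
```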

2. Install keepalived

on both machines:
# yum install keepalived -y

3. keepalived master/backup configuration files. The master and backup files differ only in: state and priority.


Master configuration (192.168.122.10)

# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id mysql1                 # the same on both machines
}

vrrp_script check_run {              # define a health-check script block named check_run
    script "/root/keepalived_check_mysql.sh"
    interval 5                       # health-check interval, in seconds
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 100
    advert_int 1                     # interval of keepalived's own VRRP advertisements
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_run                    # must match the vrrp_script block name above
    }
    virtual_ipaddress {
        192.168.122.100 dev eth0
    }
}


Backup (slave) configuration (192.168.122.20)

# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    router_id mysql1
}

vrrp_script check_run {
    script "/root/keepalived_check_mysql.sh"
    interval 5
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 88
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_run
    }
    virtual_ipaddress {
        192.168.122.100
    }
}

4. MySQL status-check script

/root/keepalived_check_mysql.sh (identical script on both MySQL machines)

Version 1, simplest form:
#!/bin/bash
# "ip" here is this machine's own IP address
/usr/bin/mysql -h ip -uroot -p123 -e "show status;" &>/dev/null
if [ $? -ne 0 ] ;then
        systemctl stop keepalived
fi

Version 2: check several times before failing over
#  vim  /root/keepalived_check_mysql.sh
#!/bin/bash
MYSQL=/usr/local/mysql/bin/mysql
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASSWORD=1111
CHECK_TIME=3

# mysql working: MYSQL_OK=1; mysql down: MYSQL_OK=0
MYSQL_OK=1

check_mysql_health (){
    $MYSQL -h $MYSQL_HOST -u $MYSQL_USER -p${MYSQL_PASSWORD} -e "show status;" &>/dev/null
    if [ $? -eq 0 ] ;then
        MYSQL_OK=1
    else
        MYSQL_OK=0
    fi
    return $MYSQL_OK
}

while [ $CHECK_TIME -ne 0 ]
do
        check_mysql_health
        if [ $MYSQL_OK -eq 1 ] ; then
                exit 0
        fi

        if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 1 ];then
                systemctl stop keepalived
                exit 1
        fi
        let CHECK_TIME--
        sleep 1
done

Version 3: check several times, stopping keepalived after the loop
#  vim  /root/keepalived_check_mysql.sh
#!/bin/bash
MYSQL=/usr/local/mysql/bin/mysql
MYSQL_HOST=localhost
MYSQL_USER=root
MYSQL_PASSWORD=1111
CHECK_TIME=3

# mysql working: MYSQL_OK=1; mysql down: MYSQL_OK=0
MYSQL_OK=1

check_mysql_health (){
    $MYSQL -h $MYSQL_HOST -u $MYSQL_USER -p${MYSQL_PASSWORD} -e "show status;" &>/dev/null
    if [ $? -eq 0 ] ;then
        MYSQL_OK=1
    else
        MYSQL_OK=0
    fi
    return $MYSQL_OK
}

while [ $CHECK_TIME -ne 0 ]
do
        check_mysql_health
        if [ $MYSQL_OK -eq 1 ] ; then
                exit 0
        fi

        let CHECK_TIME--
        sleep 1
done
systemctl stop keepalived
exit 1

# chmod 755 /root/keepalived_check_mysql.sh

Start keepalived on both machines.

Check the log to confirm the check script is being executed
# tail -f /var/log/messages
Jun 19 15:20:19 xen1 Keepalived_vrrp[6341]: Using LinkWatch kernel netlink reflector...
Jun 19 15:20:19 xen1 Keepalived_vrrp[6341]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
Jun 19 15:20:19 xen1 Keepalived_vrrp[6341]: VRRP_Script(check_run) succeeded

Extended experiment: keepalived + nginx load balancer

# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id nginx1
}

vrrp_script check_run {
   script "/keepalived_check_nginx_proxy.sh"
   interval 5
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 88
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }

    track_script {
        check_run
    }

    virtual_ipaddress {
        192.168.26.39
    }
}

# cat /keepalived_check_nginx_proxy.sh 
#!/bin/bash
curl 127.1 &>/dev/null
if [ $? -ne 0 ];then
	systemctl stop keepalived
fi