Core modules:
- Checkers: perform health checks on the Real Servers.
- VRRP stack: implements the VRRP protocol plus the vrrp_sync_group extension; it can be used standalone, without LVS (see the first experiment below, keepalived + an nginx reverse proxy).
- IPVS wrapper: translates keepalived's configuration into ipvs rules.
- Netlink reflector: sets the VRRP VIPs on the interfaces.
Prerequisites for configuring an HA cluster:
1. The nodes' clocks must be synchronized (run an NTP server); keepalived will definitely misbehave if time is out of sync.
2. Make sure iptables and SELinux do not get in the way.
3. (Optional) The nodes can reach one another by hostname; each node's name must match the name resolved from the hosts file (the hostname returned by uname -n must equal the resolved name).
4. The nodes trust each other over SSH using key-based authentication.
5. keepalived is installed on every node.
keepalived's program environment:
Main config file: /etc/keepalived/keepalived.conf
Unit file: /usr/lib/systemd/system/keepalived.service
Lab setup: active/standby (master/backup) model
(1) Prepare two nodes and synchronize their time:
node1.gayj.cn 172.16.38.9, VIP: 172.16.38.252
node2.gayj.cn 172.16.38.17
Both nodes sync against the same time server (via an automatic sync job).
(2) Make sure SELinux and firewalld are disabled (or permissive):
[root@node1 ~]# getenforce
Disabled
[root@node1 ~]# systemctl status firewalld
- firewalld.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
(3) Each node's hostname must match what its hosts file resolves, and both nodes must appear in each other's hosts files (the output of uname -n must equal the resolved name):
[root@node1 ~]# vim /etc/hosts
172.16.38.17 node2.gayj.cn
172.16.38.9 node1.gayj.cn
[root@node2 ~]# vim /etc/hosts
172.16.38.17 node2.gayj.cn node2
172.16.38.9 node1.gayj.cn node1
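The hostname/hosts consistency required above can be checked mechanically instead of by eye. A minimal sketch; the `hosts_has` helper is hypothetical, written just for this check:

```shell
# Hypothetical helper: does a hosts-file style mapping contain the given
# hostname? Pure string processing, so it works without a network.
hosts_has() {
  printf '%s\n' "$1" | awk -v h="$2" \
    '{ for (i = 2; i <= NF; i++) if ($i == h) found = 1 } END { exit !found }'
}

hosts="172.16.38.17 node2.gayj.cn node2
172.16.38.9 node1.gayj.cn node1"

# On a real node you would pass "$(cat /etc/hosts)" and "$(uname -n)".
hosts_has "$hosts" "node1.gayj.cn" && echo "hosts entry OK"
```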
(4) Set up key-based SSH authentication between the nodes:
Node1:
[root@node1 ~]# ssh-keygen -t rsa
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2.gayj.cn
Node2:
[root@node2 ~]# ssh-keygen -t rsa
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1.gayj.cn
[root@node1 ~]# date; ssh root@node2.gayj.cn 'date'   # with key-based auth in place, remote commands can be run like this
Wed Mar 16 12:04:17 CST 2016   # the clocks must match to the second, hence the automatic time-server sync
Wed Mar 16 12:04:17 CST 2016
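The to-the-second agreement can also be verified numerically by comparing epoch seconds rather than eyeballing two date lines. A sketch; the remote fetch is shown only in a comment (node2.gayj.cn is the peer from this setup):

```shell
# Compare two Unix timestamps and flag drift above one second.
local_ts=$(date +%s)
remote_ts=$local_ts      # in practice: remote_ts=$(ssh root@node2.gayj.cn 'date +%s')
drift=$(( local_ts - remote_ts ))
[ "$drift" -lt 0 ] && drift=$(( -drift ))
if [ "$drift" -le 1 ]; then
  echo "clocks in sync (drift ${drift}s)"
else
  echo "WARNING: clock drift ${drift}s"
fi
```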
[root@node1 ~]# ifconfig | grep 'inet 172.'; ssh root@node2.gayj.cn 'ifconfig' | grep 'inet 172.'
inet 172.16.38.9 netmask 255.255.0.0 broadcast 172.16.255.255
inet 172.16.38.17 netmask 255.255.0.0 broadcast 172.16.255.255
[root@node1 ~]# iptables -t filter -L -n   # check the iptables rules; moot here since the firewall is off
(5) Install keepalived on every node:
[root@node1 ~]# yum -y install keepalived; ssh root@node2 'yum -y install keepalived'   # install on both nodes at once
[root@node1 ~]# rpm -ql keepalived | grep '/etc/keepalived/keepalived.conf'; ssh root@node2 'rpm -ql keepalived' | grep '/etc/keepalived/keepalived.conf'
/etc/keepalived/keepalived.conf
/etc/keepalived/keepalived.conf
(6) Configuration files
Node1:
[root@node1 keepalived]# vim keepalived.conf
global_defs {
    notification_email {
        23264081@qq.com
        root@localhost
    }
    notification_email_from node1@gayj.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1.gayj.cn
    vrrp_mcast_group4 228.100.38.1
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
}
Node2:
[root@node2 keepalived]# vim keepalived.conf
global_defs {
    notification_email {
        23264081@qq.com
        root@localhost
    }
    notification_email_from node1@gayj.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2.gayj.cn
    vrrp_mcast_group4 228.100.38.1
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 1
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
}
(7) Start the service and check its status
Master node:
[root@node1 ~]# systemctl start keepalived; ssh root@node2 'systemctl start keepalived'
[root@node1 ~]# systemctl status keepalived
Mar 16 13:32:33 node1.gayj.cn Keepalived_vrrp[12988]: VRRP_Instance(VI_1) Transition to MASTER STATE
Mar 16 13:32:34 node1.gayj.cn Keepalived_vrrp[12988]: VRRP_Instance(VI_1) Entering MASTER STATE
Mar 16 13:32:34 node1.gayj.cn Keepalived_vrrp[12988]: VRRP_Instance(VI_1) setting protocol VIPs.
Mar 16 13:32:34 node1.gayj.cn Keepalived_vrrp[12988]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eno167777….252
Mar 16 13:32:34 node1.gayj.cn Keepalived_healthcheckers[12987]: Netlink reflector reports IP 172.16.38.252 added
Mar 16 13:32:39 node1.gayj.cn Keepalived_vrrp[12988]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eno167777….252
Hint: Some lines were ellipsized, use -l to show in full.
[root@node1 ~]# ip addr list
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:15:62:9c brd ff:ff:ff:ff:ff:ff
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736   # the VIP was configured automatically; without an alias label it appears as a secondary address
Standby node:
[root@node2 ~]# systemctl status keepalived
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: Registering Kernel netlink reflector
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: Registering Kernel netlink command channel
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: Registering gratuitous ARP shared channel
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: Opening file '/etc/keepalived/keepalived.conf'.
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: Configuration is using : 62991 Bytes
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: Using LinkWatch kernel netlink reflector…
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: VRRP_Instance(VI_1) Entering BACKUP STATE
Mar 16 13:41:57 node2.gayj.cn Keepalived_vrrp[6945]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Test:
Stop keepalived on one node and check whether the VIP floats to the other.
1) Stop the master node first:
[root@node1 ~]# systemctl stop keepalived
[root@node1 ~]# ip addr list
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:15:62:9c brd ff:ff:ff:ff:ff:ff
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe15:629c/64 scope link   # the VIP is gone
valid_lft forever preferred_lft forever
2) Check the standby node:
[root@node2 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:a8:5f:c7 brd ff:ff:ff:ff:ff:ff
inet 172.16.38.17/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736   # the VIP has floated to Node2
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fea8:5fc7/64 scope link
valid_lft forever preferred_lft forever
3) Start keepalived on Node1 again; the address floats back (keepalived works in preempt mode by default):
[root@node1 ~]# systemctl start keepalived
[root@node1 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:15:62:9c brd ff:ff:ff:ff:ff:ff
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736   # the VIP has floated back to Node1
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe15:629c/64 scope link
valid_lft forever preferred_lft forever
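The preempt behaviour just demonstrated follows from VRRP's election rule: among live routers the highest priority wins, and with preemption enabled (the default here) a returning higher-priority node reclaims the VIP. A toy model of that rule, in plain shell:

```shell
# Toy VRRP election: each argument is "name:priority:up|down"; the live
# node with the highest priority is printed as the elected master.
elect_master() {
  local node name prio state best="" best_prio=-1
  for node in "$@"; do
    name=${node%%:*}
    state=${node##*:}
    prio=${node#*:}; prio=${prio%%:*}
    if [ "$state" = up ] && [ "$prio" -gt "$best_prio" ]; then
      best=$name; best_prio=$prio
    fi
  done
  echo "$best"
}

elect_master node1:100:up node2:99:up     # node1 holds the VIP
elect_master node1:100:down node2:99:up   # node1 down: node2 takes over
elect_master node1:100:up node2:99:up     # node1 back: it preempts again
```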
Note: if both nodes end up holding the VIP and it refuses to float either way, the nodes cannot see each other's advertisements; a common cause is a keepalived process that died without the service being stopped, so the VIP was never released.
Lab setup: dual-master model (each node is master of one instance and backup of the other)
(1) All other steps are the same.
(2) Edit the configuration files on both nodes.   # vim copy tip: .,$y yanks from the current line to the end of the file
Node1 (note the changes):
[root@node1 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        23264081@qq.com
        root@localhost
    }
    notification_email_from node1@gayj.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1.gayj.cn
    vrrp_mcast_group4 228.100.38.1
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
}
vrrp_instance VI_2 {                 # define the second virtual router instance
    state BACKUP                     # node1 is BACKUP for instance 2; node2's config reverses the roles
    interface eno16777736
    virtual_router_id 2              # virtual router id 2 (instance 1 uses 1)
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234               # instance 2 uses its own (random) password
    }
    virtual_ipaddress {
        172.16.38.253/16             # the VIP of instance 2
    }
}
Node2 (note the changes):
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from node2@gayj.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2.gayj.cn
    vrrp_mcast_group4 228.100.38.1   # must match node1's multicast group
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
}
vrrp_instance VI_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        172.16.38.253/16
    }
}
(3) Restart keepalived on each node
[root@node1 ~]# systemctl restart keepalived.service
[root@node2 ~]# systemctl restart keepalived.service
Check:
[root@node2 ~]# ip addr show
inet 172.16.38.17/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.253/16 scope global secondary eno16777736
[root@node1 ~]# ip addr show
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736
(4) Test: stop one node; both IP addresses should move to the other node.
Stop Node1:
[root@node1 ~]# systemctl stop keepalived.service
[root@node2 ~]# ip addr show
inet 172.16.38.17/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.253/16 scope global secondary eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736
Restart Node1 and its own master VIP comes back:
[root@node1 ~]# systemctl restart keepalived.service
[root@node1 ~]# ip addr show
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736
Stop Node2:
[root@node2 ~]# systemctl stop keepalived.service
[root@node1 ~]# ip addr show
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.253/16 scope global secondary eno16777736
======================================== Email notification ========================================
Define a notify script for email notification; the mailx package must be installed first.
vrrp_instance {
    ...
    notify_master <STRING>|<QUOTED-STRING>   # run when this node becomes MASTER
    notify_backup <STRING>|<QUOTED-STRING>   # run when this node becomes BACKUP
    notify_fault <STRING>|<QUOTED-STRING>    # run when this node enters the FAULT state
    notify <STRING>|<QUOTED-STRING>          # run on any transition (used alone; the three above are usually used together)
}
Example notify script for email notification:
#!/bin/bash
# Author: blog.gayj.cn
# Description: An example of notify script
#
# recipient address
contact='root@localhost'

notify() {
    mailsubject="$(hostname) to be $1: vip floating"   # subject: this host's name, and that the VIP floated
    mailbody="$(date +'%F %H:%M:%S'): vrrp transition, $(hostname) changed to be $1"
    # body: when the VRRP transition happened and which host changed state
    echo "$mailbody" | mail -s "$mailsubject" "$contact"   # send the message to the contact address
}

case $1 in        # dispatch on the state argument
master)
    notify master     # this node became MASTER
    exit 0
    ;;
backup)
    notify backup     # this node became BACKUP
    exit 0
    ;;
fault)
    notify fault      # this node entered the FAULT state
    exit 0
    ;;
*)                    # only the states below are accepted
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
Test:
[root@node1 ~]# sh keepalived.sh backup
Message 16802:
From root@node1.gayj.cn Wed Mar 16 16:59:21 2016
Return-Path: <root@node1.gayj.cn>
X-Original-To: root@localhost
Delivered-To: root@localhost.gayj.cn
Date: Wed, 16 Mar 2016 16:59:21 +0800
To: root@localhost.gayj.cn
Subject: node1.gayj.cn to be backup: vip floating
User-Agent: Heirloom mailx 12.5 7/5/10
Content-Type: text/plain; charset=us-ascii
From: root@node1.gayj.cn (root)
Status: R
2016-03-16 16:59:21: vrrp transition, node1.gayj.cn changed to be backup
Edit keepalived.conf to call the script (same on both nodes):
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
    notify_master "/etc/keepalived/notify.sh master"   # hooks for the three states; put the script in this directory and make it executable
    notify_backup "/etc/keepalived/notify.sh backup"   # on each state change the matching invocation sends the notification
    notify_fault "/etc/keepalived/notify.sh fault"
}
Using keepalived to track a service ===================== detecting whether the nginx service is available =============
Add the nginx check block at the same level as vrrp_instance VI_1 (same on both nodes); when keepalived detects that the local nginx service has stopped, the resource is moved to the standby node.
(1) On top of the dual-master setup above, install nginx on both nodes:
[root@node1 ~]# yum install nginx.x86_64 -y --nogpgcheck
[root@node2 ~]# yum install nginx.x86_64 -y --nogpgcheck
Start nginx on each node:
[root@node1 ~]# nginx
[root@node2 ~]# nginx
(2) Edit the default index pages so the two servers are distinguishable:
[root@node1 ~]# vim /usr/share/nginx/html/index.html
[root@node2 ~]# vim /usr/share/nginx/html/index.html
(3) Edit keepalived.conf on each node (pay close attention to the weight setting):
Node1:
vrrp_script chk_nginx {
    script "killall -0 nginx"   # signal 0: only tests whether an nginx process exists
    interval 2
    weight -20                  # important: the deduction must push this node below the backup's priority,
                                # or the resource never moves: 100-20=80 < 90, which is correct. The original
                                # -5 only gave 100-5=95, still above the backup's 90, so the VIP refused to move.
}
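The arithmetic behind the weight choice, plus what `killall -0` actually does (signal 0 delivers nothing; it only tests that a matching process exists), can be sketched in plain shell; `kill -0` on the shell's own PID stands in for the nginx probe so the sketch runs anywhere:

```shell
# Effective priority when the check script fails: priority + weight.
prio=100; weight=-20; backup_prio=90
effective=$(( prio + weight ))
if [ "$effective" -lt "$backup_prio" ]; then
  echo "failover: $effective < $backup_prio"
else
  echo "no failover: $effective >= $backup_prio"
fi

# Signal 0 probes existence without delivering anything: our own shell
# PID always exists; an out-of-range PID does not.
kill -0 $$ 2>/dev/null && echo "process exists"
kill -0 4194999 2>/dev/null || echo "no such process"
```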
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
    state BACKUP
    interface eno16777736
    virtual_router_id 2
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        172.16.38.253/16
    }
}
==========================================================================================
Node2:
vrrp_script chk_nginx {
    script "killall -0 nginx"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.252/16
    }
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
    state MASTER
    interface eno16777736
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1234
    }
    virtual_ipaddress {
        172.16.38.253/16
    }
}
Note: on each node stop keepalived first, start nginx, and only then start keepalived again.
(4) Test: browse to the VIP, 172.16.38.252.
Stop the nginx service on Node1 and watch whether the resource moves:
[root@node1 ~]# ip addr list
inet 172.16.38.9/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe15:629c/64 scope link   # the address has been moved away
[root@node2 ~]# ip addr list
inet 172.16.38.17/16 brd 172.16.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.253/16 scope global secondary eno16777736
valid_lft forever preferred_lft forever
inet 172.16.38.252/16 scope global secondary eno16777736   # the VIP has moved over to this node
Check the web page in a browser: before the failover, and again after stopping nginx on Node1 (the served page switches; screenshots omitted).
To couple the check script tightly with nginx (starting and stopping nginx itself on state changes), extend the case statement as follows:
case $1 in        # dispatch on the state argument
master)
    notify master                     # this node became MASTER
    systemctl start nginx             # bring nginx up along with the VIP
    exit 0
    ;;
backup)
    notify backup                     # this node became BACKUP
    systemctl restart nginx.service
    exit 0
    ;;
fault)
    notify fault                      # this node entered the FAULT state
    systemctl stop nginx
    exit 0
    ;;
*)                                    # only the states below are accepted
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
Using keepalived to track ports ==================== transport-layer (TCP) health checks =============
Transport-layer health check (TCP):
TCP_CHECK
{
    ...
}
Check parameter:
    connect_timeout <INTEGER>
Others:
    connect_ip <IP ADDRESS>
    connect_port <PORT>
    bindto <IP ADDRESS>
    bind_port <PORT>
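A hedged example of how these parameters slot into a real_server block (addresses reused from this lab; the values are illustrative):

```
real_server 172.16.38.2 80 {
    weight 1
    TCP_CHECK {
        connect_timeout 3        # give up the TCP connect after 3 seconds
        connect_port 80          # defaults to the real_server port if omitted
    }
}
```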
==================== Defining an ipvs cluster with keepalived =============================================================
(1) Prepare two RealServer nodes for LVS-DR mode:
RealServer1: 172.16.38.2
RealServer2: 172.16.38.4
Keepalived master: 172.16.38.9
Keepalived backup: 172.16.38.17
VIP: 172.16.38.22
Install the httpd service on both real servers.
(2) Because this is the LVS-DR model, the kernel ARP parameters must be adjusted; use a script.
To get the NIC name:
ifconfig | grep "^[^[:space:]]" | awk -F: '/^e/{print $1}'   # NIC name on CentOS 7
ifconfig | grep "^[^[:space:]]" | awk '/^e/{print $1}'       # NIC name on CentOS 6
RealServer setup script, CentOS 6:
[root@www ~]# cat set.sh
#!/bin/bash
#
iface=$(ifconfig | grep "^[^[:space:]]" | awk '/^e/{print $1}')   # NIC name on CentOS 6
vip='172.16.38.22'                                                # the VIP
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/${iface}/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/${iface}/arp_announce
    # the lines above set the kernel ARP parameters
    ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up  # configure the VIP and its host route
    route add -host $vip dev lo:0
    ;;
stop)   # undo the settings above
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/${iface}/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/${iface}/arp_announce
    ;;
esac
RealServer setup script, CentOS 7:
[root@www ~]# cat set.sh
#!/bin/bash
#
iface=$(ifconfig | grep "^[^[:space:]]" | awk -F: '/^e/{print $1}')   # NIC name on CentOS 7
vip='172.16.38.22'                                                    # the VIP
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/${iface}/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/${iface}/arp_announce
    # the lines above set the kernel ARP parameters
    ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up      # configure the VIP and its host route
    route add -host $vip dev lo:0
    ;;
stop)   # undo the settings above
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/${iface}/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/${iface}/arp_announce
    ;;
esac
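The same settings can be expressed with iproute2/sysctl. As a sketch, here is a dry-run generator that only prints the commands (the `dr_commands` name is made up here, and the output mirrors the script above rather than being keepalived's own mechanism):

```shell
# Print (do not run) the iproute2/sysctl equivalents of the ifconfig
# and route commands used in the RealServer script above.
dr_commands() {
  local vip=$1 iface=$2
  cat <<EOF
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.$iface.arp_ignore=1
sysctl -w net.ipv4.conf.$iface.arp_announce=2
ip addr add $vip/32 dev lo
ip route add $vip dev lo
EOF
}

dr_commands 172.16.38.22 eth0
```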
=========================================================================================================================
(3) Test that the VIP was set up correctly.
Check that the VIP exists:
[root@www ~]# ifconfig | grep 'inet addr:172.16.38.22'
inet addr:172.16.38.22 Mask:255.255.255.255
Important test: ping 172.16.38.22 from a host that is NOT a RealServer. If it is unreachable, the configuration is correct: in DR mode a RealServer's VIP only sources reply packets and must not answer ARP.
[root@node2 ~]# ping 172.16.38.22
PING 172.16.38.22 (172.16.38.22) 56(84) bytes of data.
From 172.16.38.17 icmp_seq=1 Destination Host Unreachable
From 172.16.38.17 icmp_seq=2 Destination Host Unreachable
(4) Configure the two keepalived nodes (keepalived active/standby; add the following fields outside the vrrp_instance block on the master)
Master: 172.16.38.9 node1.gayj.cn
[root@node1 ~]# vim /etc/keepalived/keepalived.conf
virtual_ipaddress {            # change keepalived's original floating address to the new VIP, 172.16.38.22
    172.16.38.22
}
virtual_server 172.16.38.22 80 {                # define the virtual server on the VIP
    delay_loop 6                                # interval between health-check runs, in seconds
    lb_algo rr                                  # scheduling algorithm: round robin
    lb_kind DR                                  # cluster type: the DR model
    protocol TCP                                # TCP-based service
    sorry_server 127.0.0.1 80                   # fallback server (address and port) used when all real servers fail
    real_server 172.16.38.2 80 {
        weight 1                                # weight
        HTTP_GET {                              # application-layer (HTTP GET) health check
            url {
                path /
                status_code 200                 # expect HTTP status 200
                digest 743c19ecde43057b9eea6750216fb25d   # expected MD5 digest of the main index.html
            }
            connect_timeout 3                   # connection timeout, 3 seconds
            nb_get_retry 3                      # number of retries
            delay_before_retry 3                # delay between retries
        }
    }
    real_server 172.16.38.4 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
=====================================================================================
Backup: 172.16.38.17 node2.gayj.cn
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from node2@gayj.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node2.gayj.cn
    vrrp_mcast_group4 228.100.38.1   # must match the master's multicast group
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass cKlSuVZu
    }
    virtual_ipaddress {
        172.16.38.22                 # no prefix length given, so the default /32 (255.255.255.255) applies
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 172.16.38.22 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 172.16.38.2 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.16.38.4 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
To generate the expected digest for the page health check:
[root@node1 keepalived]# curl http://172.16.38.2/
<h1> <font color="red">This is RealServer 1 on CentOS 6</font> </h1>
[root@node1 keepalived]# genhash -s 172.16.38.2 -p 80 -u index.html   # put the MD5SUM value into the digest field above
MD5SUM = 743c19ecde43057b9eea6750216fb25d
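Mechanically, the digest check boils down to hashing the fetched body and comparing it against the configured value. A sketch using a local string instead of an HTTP fetch (so the hash computed here is of this sample string, not the real 743c… digest):

```shell
# Hash a sample body and compare it against an expected digest, the way
# HTTP_GET's digest check conceptually works.
body='<h1> <font color="red">This is RealServer 1 on CentOS 6</font> </h1>'
digest=$(printf '%s' "$body" | md5sum | awk '{print $1}')
expected=$digest                      # in keepalived.conf this would be the genhash value
if [ "$digest" = "$expected" ]; then
  echo "digest match: healthy"
else
  echo "digest mismatch: mark real server down"
fi
```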
=================================================================================================================
(5) Stop keepalived on each node, then start it again:
[root@node1 keepalived]# systemctl stop keepalived
[root@node1 keepalived]# systemctl start keepalived
Test: browser refreshes alternate between the real servers according to the rr rule.
To verify the effect, first inspect the ipvs rule set with ipvsadm:
[root@node1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.38.22:80 rr
-> 172.16.38.2:80 Route 1 0 0
-> 172.16.38.4:80 Route 1 0 0
Stop httpd on RealServer 1.
Now every refresh returns only RealServer 2; check with ipvsadm again:
[root@node1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.38.22:80 rr
-> 172.16.38.4:80 Route 1 0 1
[root@centos6 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.16.38.22 0.0.0.0 255.255.255.255 UH 0 0 0 lo
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 eth0
Note: an extra 169.254.0.0 (zeroconf) route had appeared in the routing table, which made adding the VIP fail even though other hosts could still be pinged.
Delete that route:
[root@centos6 ~]# ip route del 169.254.0.0/16
[root@centos6 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.16.38.22 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.16.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 eth0
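To keep that zeroconf route from coming back after a network restart on CentOS 6, the initscripts can be told not to add it (a standard initscripts setting, offered here as a suggestion):

```
# /etc/sysconfig/network
NOZEROCONF=yes
```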
- Permalink: https://www.gayj.cn/?p=477
- When reposting, please credit: https://www.gayj.cn/