Due to business requirements, we need to build a highly available, load-balanced web service with LVS. The design uses 2 directors (DR), 6 real servers (RS), and 2 MySQL servers.
All ten servers have dual NICs: one provides the external web service (em1 below), the other is used for internal data exchange (em2; since that NIC plays no role in the LVS model, it is ignored in the rest of this article).
The external NIC's primary address is the China Telecom public address, and the China Netcom address is configured as an alias on the same NIC; forwarding between the LVS directors and the RSes also goes over this external NIC.
For confidentiality, the Telecom segment is written here as 10.1.1.0/24 (gw 10.1.1.1) and the Netcom segment as 172.16.10.0/24 (gw 172.16.10.1).
The VIPs are 10.1.1.8 and 172.16.10.8.
Install ipvsadm and keepalived on both directors.
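On a RHEL/CentOS-style system (which the service/chkconfig commands used later in this article assume), the install can be as simple as:
yum install -y ipvsadm keepalived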
RS network configuration:
RS1:
# The following configures the external NIC address and the default route; normally this is done directly in /etc/sysconfig/network-scripts/ifcfg-em1
/sbin/ifconfig em1 10.1.1.11 netmask 255.255.255.0 up
/sbin/route add -net 0.0.0.0/0 gw 10.1.1.1
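For reference, the persistent form of the two commands above on a RHEL/CentOS-style system is roughly the following /etc/sysconfig/network-scripts/ifcfg-em1 (a sketch; HWADDR/UUID and similar lines omitted):
DEVICE=em1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.1.1.11
NETMASK=255.255.255.0
GATEWAY=10.1.1.1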
# The following configures the VIPs on the RS and the host routes for answering on them
/sbin/ifconfig lo down
/sbin/ifconfig lo up
/sbin/ifconfig lo:1 10.1.1.8 broadcast 10.1.1.8 netmask 255.255.255.255 up
/sbin/route add -host 10.1.1.8 dev lo:1
/sbin/ifconfig lo:2 172.16.10.8 broadcast 172.16.10.8 netmask 255.255.255.255 up
/sbin/route add -host 172.16.10.8 dev lo:2
# The following sets the ARP ignore and announce levels for the interfaces
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
That completes the LVS-related configuration on the RS. For RS2 through RS6, only the external NIC IP address in the first step needs to be changed to the corresponding address; the VIP and ARP ignore/announce settings are identical. Put these commands into a script and have it run at boot.
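A minimal boot script for the VIP and ARP settings might look like the following (the path /usr/local/sbin/lvs_rs.sh is a hypothetical choice; hooking it into /etc/rc.d/rc.local assumes a sysvinit-style system, consistent with the service/chkconfig commands used later):
#!/bin/bash
# /usr/local/sbin/lvs_rs.sh (hypothetical path): bind the VIPs on lo and
# suppress ARP replies/announcements for them on every RS
/sbin/ifconfig lo:1 10.1.1.8 broadcast 10.1.1.8 netmask 255.255.255.255 up
/sbin/route add -host 10.1.1.8 dev lo:1
/sbin/ifconfig lo:2 172.16.10.8 broadcast 172.16.10.8 netmask 255.255.255.255 up
/sbin/route add -host 172.16.10.8 dev lo:2
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

Then mark it executable and add it to rc.local:
chmod +x /usr/local/sbin/lvs_rs.sh
echo '/usr/local/sbin/lvs_rs.sh' >> /etc/rc.d/rc.local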
DR configuration:
DR1:
Network configuration:
# The following configures the external NIC address and the default route; normally this is done directly in /etc/sysconfig/network-scripts/ifcfg-em1
/sbin/ifconfig em1 10.1.1.21 netmask 255.255.255.0 up
/sbin/route add -net 0.0.0.0/0 gw 10.1.1.1
# Configure the Netcom address
/sbin/ifconfig em1:0 172.16.10.21 netmask 255.255.255.0 up
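To make the Netcom alias survive a reboot, a matching /etc/sysconfig/network-scripts/ifcfg-em1:0 can be added (a sketch):
DEVICE=em1:0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.10.21
NETMASK=255.255.255.0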
Contents of /etc/keepalived/keepalived.conf:
! Configuration File for keepalived
# Mail notification settings; adjust as you like
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
# Define two scripts used to switch the LVS MASTER/BACKUP state by hand: on the current MASTER, create or delete the file down1 or down2 under /etc/keepalived/; down1 controls instance 1 and down2 controls instance 2 (a usage example follows this configuration listing).
vrrp_script chk_keepalived1 {
script "[ -e /etc/keepalived/down1 ] && exit 1 || exit 0"
interval 1
weight -2
fall 3
rise 1
}
vrrp_script chk_keepalived2 {
script "[ -e /etc/keepalived/down2 ] && exit 1 || exit 0"
interval 1
weight -2
fall 3
rise 1
}
# Define instance 1
vrrp_instance VI_1 {
state MASTER
interface em1
virtual_router_id 88
priority 101
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.8 dev em1 label em1:1
}
track_script {
chk_keepalived1
}
}
# Define instance 2
vrrp_instance VI_2 {
state BACKUP
interface em1
virtual_router_id 98
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
172.16.10.8 dev em1 label em1:2
}
track_script {
chk_keepalived2
}
}
# LVS forwarding for instance 1
virtual_server 10.1.1.8 80 {
delay_loop 6
lb_algo wlc
lb_kind DR
nat_mask 255.255.255.0
# persistence_timeout 50
protocol TCP
# sorry_server 127.0.0.1 80
real_server 10.1.1.11 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.12 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.13 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.14 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.15 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.16 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
}
# LVS forwarding for instance 2
virtual_server 172.16.10.8 80 {
delay_loop 6
lb_algo wlc
lb_kind DR
nat_mask 255.255.255.0
# persistence_timeout 50
protocol TCP
# sorry_server 127.0.0.1 80
real_server 10.1.1.11 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.12 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.13 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.14 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.15 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.16 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
}
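To illustrate the manual-switch scripts defined above (a sketch, run on whichever director currently holds the VIP):
# force instance 1 (the Telecom VIP 10.1.1.8) over to the other director:
touch /etc/keepalived/down1
# after 3 failed checks (fall 3) the priority drops by 2 (101 -> 99),
# so the peer at priority 100 takes over the VIP
# hand it back:
rm -f /etc/keepalived/down1
# instance 2 is switched the same way via /etc/keepalived/down2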
This completes the DR1 configuration. Start the keepalived service: service keepalived start
Enable keepalived at boot:
chkconfig --add keepalived
chkconfig --level 35 keepalived on
DR2:
Network configuration:
# The following configures the external NIC address and the default route; normally this is done directly in /etc/sysconfig/network-scripts/ifcfg-em1
/sbin/ifconfig em1 10.1.1.22 netmask 255.255.255.0 up
/sbin/route add -net 0.0.0.0/0 gw 10.1.1.1
# Configure the Netcom address
/sbin/ifconfig em1:0 172.16.10.22 netmask 255.255.255.0 up
Contents of /etc/keepalived/keepalived.conf:
! Configuration File for keepalived
# Mail notification settings; adjust as you like
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
# Define two scripts used to switch the LVS MASTER/BACKUP state by hand: on the current MASTER, create or delete the file down1 or down2 under /etc/keepalived/; down1 controls instance 1 and down2 controls instance 2.
vrrp_script chk_keepalived1 {
script "[ -e /etc/keepalived/down1 ] && exit 1 || exit 0"
interval 1
weight -2
fall 3
rise 1
}
vrrp_script chk_keepalived2 {
script "[ -e /etc/keepalived/down2 ] && exit 1 || exit 0"
interval 1
weight -2
fall 3
rise 1
}
# Define instance 1
vrrp_instance VI_1 {
state BACKUP
interface em1
virtual_router_id 88
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.1.1.8 dev em1 label em1:1
}
track_script {
chk_keepalived1
}
}
# Define instance 2
vrrp_instance VI_2 {
state MASTER
interface em1
virtual_router_id 98
priority 101
advert_int 1
authentication {
auth_type PASS
auth_pass 2222
}
virtual_ipaddress {
172.16.10.8 dev em1 label em1:2
}
track_script {
chk_keepalived2
}
}
# LVS forwarding for instance 1
virtual_server 10.1.1.8 80 {
delay_loop 6
lb_algo wlc
lb_kind DR
nat_mask 255.255.255.0
# persistence_timeout 50
protocol TCP
# sorry_server 127.0.0.1 80
real_server 10.1.1.11 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.12 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.13 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.14 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.15 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.16 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
}
# LVS forwarding for instance 2
virtual_server 172.16.10.8 80 {
delay_loop 6
lb_algo wlc
lb_kind DR
nat_mask 255.255.255.0
# persistence_timeout 50
protocol TCP
# sorry_server 127.0.0.1 80
real_server 10.1.1.11 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.12 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.13 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.14 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.15 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
real_server 10.1.1.16 80 {
weight 1
TCP_CHECK {
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
connectport 80
}
}
}
This completes the DR2 configuration. Start the keepalived service: service keepalived start
Enable keepalived at boot:
chkconfig --add keepalived
chkconfig --level 35 keepalived on
The dual-master, dual-line LVS setup is now complete. On DR1 and DR2, the following commands verify the configuration and the current state:
ipvsadm -Ln --> shows the details of the current LVS setup, including the list of RSes currently in service
ifconfig --> shows the NIC information; use it to check whether this host is currently the MASTER. If it is, the VIP is present as an alias on the external NIC.
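For orientation, the ipvsadm -Ln output on either director looks roughly like this (version string and connection counters will of course differ):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.1.8:80 wlc
  -> 10.1.1.11:80                 Route   1      0          0
  -> 10.1.1.12:80                 Route   1      0          0
  ...
TCP  172.16.10.8:80 wlc
  -> 10.1.1.11:80                 Route   1      0          0
  ...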
Final notes:
Because the Netcom line's traffic is carried over the Telecom line, the packets the RSes return to clients leave through the Telecom gateway to the public network and back to the client. If necessary, set up policy routing on the gateway router so that packets sourced from 172.16.10.0/24 are steered to the Netcom uplink.
If internal-network forwarding is chosen instead, the RS replies to clients leave via the internal gateway; that gateway device then has to perform NAT and apply source-based policy routing.
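Purely as an illustration, if the gateway in question is itself a Linux box, source-based policy routing of this kind can be expressed with iproute2 along these lines (the table name cnc, the uplink device eth1, and the Netcom next-hop 203.0.113.1 are assumptions for the sketch):
# give routing table 200 a readable name (optional)
echo "200 cnc" >> /etc/iproute2/rt_tables
# packets sourced from the Netcom segment consult table cnc
ip rule add from 172.16.10.0/24 table cnc
# table cnc sends them out through the Netcom uplink
ip route add default via 203.0.113.1 dev eth1 table cnc
ip route flush cache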
Original article: http://www.cnblogs.com/Xhale/p/5100419.html