Host planning
172.16.16.15 node1.ja.com
172.16.16.16 node2.ja.com
I. Preparation
=======================================================================
1. Set the hostname and IP address on each host, and verify the hosts can reach each other
Set the hostname on each node:
node1:
hostname node1.ja.com
node2:
hostname node2.ja.com
Make the hostnames persist across reboots:
node1:
sed -i 's/\(HOSTNAME=\).*/\1node1.ja.com/' /etc/sysconfig/network
node2:
sed -i 's/\(HOSTNAME=\).*/\1node2.ja.com/' /etc/sysconfig/network
Note: during boot, the hostname is read from the configuration file /etc/sysconfig/network and applied.
2. Add hostname resolution: make sure every node can resolve every other node's name
Use the hosts file for name resolution; run the following on each node:
cat >>/etc/hosts<<EOF
172.16.16.15 node1.ja.com
172.16.16.16 node2.ja.com
EOF
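The hosts-file append above adds duplicate entries if the setup is re-run on a node. A minimal sketch of an idempotent variant (the `add_host` helper name is my own, and it is tried on a scratch copy rather than the real /etc/hosts):

```shell
# add_host: append "IP NAME" to a hosts file only if NAME is not already
# present, so the setup step can be re-run safely (helper name is hypothetical).
add_host() {
  local file=$1 ip=$2 name=$3
  grep -q "[[:space:]]$name\$" "$file" || printf '%s %s\n' "$ip" "$name" >> "$file"
}

hosts=$(mktemp)                                # practice on a scratch file first
add_host "$hosts" 172.16.16.15 node1.ja.com
add_host "$hosts" 172.16.16.16 node2.ja.com
add_host "$hosts" 172.16.16.15 node1.ja.com    # duplicate call: no-op
cat "$hosts"                                   # two entries, no duplicates
```

Point `add_host` at /etc/hosts once the behavior looks right.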
3. Keep the clocks on node1 and node2 in sync
Sync the time once from the command line (takes effect immediately, but is a one-off):
ntpdate 172.16.0.1
Set up a cron job (permanent):
echo "*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &>/dev/null;/sbin/hwclock -w" >/var/spool/cron/root
4. Set up passwordless, key-based SSH authentication between the nodes (optional)
Generate a key pair and copy the public key over; run the following on each node:
node1:
ssh-keygen -t rsa -P ''
ssh-copy-id -i ~/.ssh/id_rsa.pub node2.ja.com
node2:
ssh-keygen -t rsa -P ''
ssh-copy-id -i ~/.ssh/id_rsa.pub node1.ja.com
=======================================================================
II. Installing and configuring DRBD
=======================================================================
node1 & node2:
rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-358.el6-8.4.3-33.el6.x86_64.rpm
Create and edit the DRBD configuration files on node1
# cat /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
# cd /etc/drbd.d/
# ls
global_common.conf
Edit global_common.conf so that it ends up with the following content:
# egrep -v '^$|^[[:space:]]*#' global_common.conf
global {
    usage-count yes;
}
common {
    handlers {
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
    }
    net {
        protocol C;
        cram-hmac-alg "sha1";
        shared-secret "mydrbdsecret"; # required when cram-hmac-alg is set; any string, identical on both nodes
    }
    syncer {
        rate 1000M;
    }
}
=======================================================================
Create a disk partition for DRBD (run the same steps on both node1 and node2):
=======================================================================
Check the current partition layout:
# fdisk -l /dev/sda
Disk /dev/sda: 85.9 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00056921
Device Boot Start End Blocks Id System
/dev/sda1 * 1 26 204800 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 26 7859 62914560 8e Linux LVM
/dev/sda3 7859 8512 5252256 83 Linux
Create an extended partition:
# echo -e 'n\ne\n\n\nw\n' | fdisk /dev/sda
Create a new logical partition inside the extended partition:
# echo -e 'n\n\n+5G\nw\n' | fdisk /dev/sda
Check the partitions just added on /dev/sda:
# fdisk -l /dev/sda | grep '/dev/sda[45]'
/dev/sda4 8513 10443 15510757+ 5 Extended
/dev/sda5 8513 9166 5253223+ 83 Linux
The following commands can be run more than once if needed, to make sure the kernel registers the new partitions:
# partx -a /dev/sda
# kpartx -af /dev/sda
Verify that the kernel now sees the new partitions:
# grep 'sda[45]' /proc/partitions
8 4 31 sda4
8 5 5253223 sda5
=======================================================================
Both nodes now provide partitions of equal size.
DRBD listens on the port given in the resource definition (7789 is the conventional choice); protocol C is the default.
2. Define a resource in /etc/drbd.d/mystore1.res with the following content:
# cat /etc/drbd.d/mystore1.res
resource mystore1 {
    on node1.ja.com {
        device    /dev/drbd1;
        disk      /dev/sda5;
        address   172.16.16.15:7789;
        meta-disk internal;
    }
    on node2.ja.com {
        device    /dev/drbd1;
        disk      /dev/sda5;
        address   172.16.16.16:7789;
        meta-disk internal;
    }
}
Make sure node1 and node2 have identical DRBD configuration files:
# scp -p global_common.conf mystore1.res node2.ja.com:/etc/drbd.d/
Initialize the defined resource and start the service on both nodes:
1) Initialize the resource; run on both Node1 and Node2:
# drbdadm create-md mystore1
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
2) Start the service; run on both Node1 and Node2:
# /etc/init.d/drbd start
Starting DRBD resources: [
create res: mystore1
prepare disk: mystore1
adjust disk: mystore1
adjust net: mystore1
]
.
3) Check the status:
# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by gardner@, 2013-05-27 04:30:21
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:5253020
You can also check with the drbd-overview command:
# drbd-overview
  1:mystore1/0  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----
The output above shows that both nodes are currently in the Secondary role, so the next step is to promote one of them to Primary. On the node that should become Primary, run:
In drbd 8.4, the command to make a node Primary for the first time is:
# drbdadm primary --force mystore1
Note: alternatively, you can promote the node with:
# drbdadm -- --overwrite-data-of-peer primary mystore1
Check the status again and you can see that synchronization has started:
node1:
# drbd-overview
1:mystore1/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-
[>....................] sync'ed: 0.1% (5128/5128)M
node2:
# drbd-overview
1:mystore1/0 SyncTarget Secondary/Primary Inconsistent/UpToDate C r-----
[>....................] sync'ed: 2.7% (4996/5128)M
Once synchronization finishes, check the status again: both sides are now up to date (UpToDate/UpToDate), and the nodes have their Primary/Secondary roles:
# drbd-overview
1:mystore1/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
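When scripting around DRBD it helps to pull individual fields out of a /proc/drbd status line like the one shown earlier; a minimal sketch (the `drbd_field` helper name is my own):

```shell
# drbd_field: given a /proc/drbd status line and a field name (cs, ro, or ds),
# print that field's value (helper name is hypothetical).
drbd_field() {
  printf '%s\n' "$1" | grep -o "$2:[^ ]*" | cut -d: -f2-
}

line=' 1: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'
drbd_field "$line" cs   # Connected
drbd_field "$line" ro   # Primary/Secondary
drbd_field "$line" ds   # UpToDate/UpToDate
```

A wait-for-sync loop could, for example, poll the resource's line from /proc/drbd and sleep until `ds` reads UpToDate/UpToDate.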
Now demote node1 to Secondary and promote node2 to Primary, and watch the roles change on the two nodes:
node1:
# drbdadm secondary mystore1
# drbd-overview
1:mystore1/0 Connected Secondary/Primary UpToDate/UpToDate C r-----
node2:
# drbdadm primary mystore1
# drbd-overview
1:mystore1/0 Connected Primary/Secondary UpToDate/UpToDate C r-----
Note: only the node in the Primary role can mount the device and serve reads and writes;
the Secondary can neither read nor write; it only receives data from the Primary.
4. Create the filesystem
The filesystem can only be mounted on the Primary node, so the drbd device can only be formatted after a Primary has been set:
node2:
# mke2fs -t ext4 /dev/drbd1
The error I got when trying to format on the Secondary (node1):
# mke2fs -t ext4 /dev/drbd1
mke2fs 1.41.12 (17-May-2010)
mke2fs: Wrong medium type while trying to determine filesystem size
In a Primary/Secondary drbd setup, only one node can be Primary at any moment. To swap the roles of the two nodes, you must first demote the current Primary to Secondary, and only then promote the former Secondary to Primary.
node2 (Primary role):
# mkdir /drbd
# mount /dev/drbd1 /drbd/
# cp /etc/issue /drbd/
# ls /drbd/
issue lost+found
# sed -i '$a\I am node2' /drbd/issue
# sed -n '$p' /drbd/issue
I am node2
Simulate a failure of node2 (the Primary): unmount first, then demote:
# umount /drbd/
# drbdadm secondary mystore1
Promote node1 to Primary, mount the device, and check whether the data node2 wrote earlier is visible:
# mkdir /drbd
# drbdadm primary mystore1
# mount /dev/drbd1 /drbd/
# sed -n '$p' /drbd/issue
I am node2
That completes the demonstration of manually switching DRBD Primary/Secondary roles; data synchronization works as expected.
Note: drbd cannot switch roles automatically by itself; that requires additional tooling (e.g. a cluster manager).
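The manual switch-over demonstrated above can be captured in a pair of shell functions. This is a sketch under this article's assumptions (resource mystore1, mount point /drbd); the `run`/`demote`/`promote` names are my own, and a DRYRUN switch lets you inspect the command sequence without touching a real device:

```shell
# run: execute a command, or just print it when DRYRUN=1 (inspection mode).
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

demote() {                 # on the current Primary: unmount first, then demote
  run umount /drbd
  run drbdadm secondary mystore1
}

promote() {                # on the other node: promote, then mount
  run drbdadm primary mystore1
  run mount /dev/drbd1 /drbd
}

DRYRUN=1                   # print the sequence instead of executing it
demote                     # prints the umount and drbdadm secondary commands
promote                    # prints the drbdadm primary and mount commands
```

With DRYRUN unset, `demote` runs on the failing Primary and `promote` on the surviving node, mirroring the steps shown above.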
=======================================================================
Implementing manual switching of DRBD Primary/Secondary roles
This article is from the "Enjoy the process" blog; please keep this attribution: http://1757513075.blog.51cto.com/8607255/1405841