A replica set (Replica Sets) is master-slave replication with automatic failover and recovery.
MongoDB officially no longer recommends master-slave mode; replica sets are the recommended replacement.
MongoDB officially recommends a minimum replica set of Three Member Sets: one primary and two secondaries.
Main differences between a replica set and master-slave replication:
1. The cluster has no fixed primary database.
2. When the primary fails (e.g. the host goes down), the cluster elects a new primary to take over, achieving automatic failover.
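The three-member minimum follows from election math: a primary can only be elected (or remain primary) while a majority of voting members is reachable. A minimal sketch of that quorum rule in Python:

```python
def majority(voting_members):
    """Votes needed to elect a primary: strictly more than half."""
    return voting_members // 2 + 1

def can_elect(voting_members, reachable):
    """Can the reachable members still elect a primary?"""
    return reachable >= majority(voting_members)

# With 3 members, losing one still leaves a 2-of-3 quorum:
assert can_elect(3, 2)
# With only 2 members, losing either one halts elections:
assert not can_elect(2, 1)
```

With two members, the loss of either one leaves no majority, so the set cannot fail over; the third member exists purely to keep a majority available.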
~]# mkdir /data/mongodb/node{1,2,3}
~]# mkdir /data/mongodb/logs
#The three instances listen on ports 31234, 32345, and 33456
#The replica set name (here: yujx) is set with the --replSet option
~]#mongod --fork --dbpath /data/mongodb/node1 --logpath /data/mongodb/logs/node1.log --rest --httpinterface --replSet yujx --port 31234
~]#mongod --fork --dbpath /data/mongodb/node2 --logpath /data/mongodb/logs/node2.log --rest --httpinterface --replSet yujx --port 32345
~]#mongod --fork --dbpath /data/mongodb/node3 --logpath /data/mongodb/logs/node3.log --rest --httpinterface --replSet yujx --port 33456
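The three invocations above differ only in data path, log path, and port. A small sketch that generates them (paths, ports, and the set name are taken from the transcript; nothing else is assumed):

```python
# Ports for the three nodes, as used in the transcript above.
nodes = {1: 31234, 2: 32345, 3: 33456}

def mongod_cmd(n, port, replset="yujx"):
    """Build one mongod invocation for node n on the given port."""
    return ("mongod --fork"
            f" --dbpath /data/mongodb/node{n}"
            f" --logpath /data/mongodb/logs/node{n}.log"
            f" --rest --httpinterface"
            f" --replSet {replset} --port {port}")

for n, port in nodes.items():
    print(mongod_cmd(n, port))
```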
#First, build a replica set configuration object
~]# /usr/local/mongodb/bin/mongo --port 31234
MongoDB shell version: 2.6.6
connecting to: 127.0.0.1:31234/test
> use admin
switched to db admin
> rsconf={
... "_id" : "yujx",
... "members" : [
... {
... "_id" : 0,
... "host" : "192.168.211.217:31234"
... }
... ]
... }
{
"_id" : "yujx",
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234"
}
]
}
#Initialize the set with rs.initiate()
> rs.initiate(rsconf)
{
"info" : "Config now saved locally. Should come online in about a minute.",
"ok" : 1
}
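Before handing a config object to rs.initiate(), it is worth checking its shape: a set name, at least one member, a unique _id per member, and a host for each. A hypothetical validator (the field names follow the rsconf object above; the helper itself is not part of MongoDB):

```python
def validate_rs_config(cfg):
    """Minimal sanity checks on a replica-set config document
    before passing it to rs.initiate(). Illustrative only."""
    assert cfg.get("_id") and cfg.get("members"), "need a set name and at least one member"
    ids = [m["_id"] for m in cfg["members"]]
    assert len(ids) == len(set(ids)), "member _ids must be unique"
    assert all("host" in m for m in cfg["members"]), "every member needs host:port"
    return True

# The same shape as the rsconf object in the transcript:
rsconf = {"_id": "yujx",
          "members": [{"_id": 0, "host": "192.168.211.217:31234"}]}
assert validate_rs_config(rsconf)
```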
#Use rs.add() to add the other two mongod instances to the replica set
> rs.add("192.168.211.217:32345")
{ "ok" : 1 }
yujx:PRIMARY> rs.add("192.168.211.217:33456")
{ "ok" : 1 }
#The mongod on port 31234 has become the PRIMARY by default
#rs.conf() shows the replica set configuration
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 3,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234"
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
}
]
}
#rs.status() shows the state of the whole set
yujx:PRIMARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:07:50Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 1, # 1 = healthy, 0 = unhealthy
"state" : 1, # 1 = PRIMARY, 2 = SECONDARY
"stateStr" : "PRIMARY",
"uptime" : 2261,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"electionTime" : Timestamp(1421222174, 1),
"electionDate" : ISODate("2015-01-14T07:56:14Z"),
"self" : true
},
{
"_id" : 1,
"name" : "192.168.211.217:32345",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 607,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:07:49Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:07:50Z"),
"pingMs" : 0,
"syncingTo" : "192.168.211.217:31234"
},
{
"_id" : 2,
"name" : "192.168.211.217:33456",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 596,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:07:50Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:07:50Z"),
"pingMs" : 0,
"syncingTo" : "192.168.211.217:31234"
}
],
"ok" : 1
}
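The rs.status() document is plain JSON, so monitoring scripts can consume it directly. A sketch that extracts the two fields the comments above call out, state and health (the summarize_status helper is illustrative, not a MongoDB API):

```python
def summarize_status(status):
    """Pull the interesting facts out of an rs.status()-style document:
    which member is primary, and which members are unhealthy."""
    primary = [m["name"] for m in status["members"] if m.get("state") == 1]
    unhealthy = [m["name"] for m in status["members"] if m.get("health") != 1]
    return {"primary": primary, "unhealthy": unhealthy}

# Shape taken from the transcript above (fields trimmed):
status = {"set": "yujx", "members": [
    {"name": "192.168.211.217:31234", "health": 1, "state": 1},
    {"name": "192.168.211.217:32345", "health": 1, "state": 2},
    {"name": "192.168.211.217:33456", "health": 1, "state": 2}]}
assert summarize_status(status) == {"primary": ["192.168.211.217:31234"],
                                    "unhealthy": []}
```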
#Stop node1:31234 and check the status again: node3 has been elected the new PRIMARY
yujx:SECONDARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:13:37Z"),
"myState" : 2,
"syncingTo" : "192.168.211.217:33456",
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:13:36Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:13:25Z"),
"pingMs" : 0
},
{
"_id" : 1,
"name" : "192.168.211.217:32345",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2497,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"infoMessage" : "syncing to: 192.168.211.217:33456",
"self" : true
},
{
"_id" : 2,
"name" : "192.168.211.217:33456",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 943,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:13:36Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:13:36Z"),
"pingMs" : 0,
"electionTime" : Timestamp(1421223211, 1),
"electionDate" : ISODate("2015-01-14T08:13:31Z")
}
],
"ok" : 1
}
#Start node1 again and it rejoins as a SECONDARY
yujx:SECONDARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:16:27Z"),
"myState" : 2,
"syncingTo" : "192.168.211.217:33456",
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 20,
"optime" : Timestamp(1421222274, 1),
"optimeDate" : ISODate("2015-01-14T07:57:54Z"),
"infoMessage" : "syncing to: 192.168.211.217:33456",
"self" : true
......
yujx:PRIMARY> db.printSlaveReplicationInfo()
source: 192.168.211.217:31234
syncedTo: Wed Jan 14 2015 15:57:54 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.211.217:32345
syncedTo: Wed Jan 14 2015 15:57:54 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
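The "secs behind the primary" figure is simply the gap between the primary's last applied operation and the secondary's. Illustrated with the optimeDate values from the transcript:

```python
from datetime import datetime

def lag_seconds(primary_optime, secondary_optime):
    """Replication lag as reported by db.printSlaveReplicationInfo():
    the gap between the primary's and the secondary's last applied op."""
    return (primary_optime - secondary_optime).total_seconds()

p = datetime(2015, 1, 14, 7, 57, 54)   # primary optimeDate from the transcript
s = datetime(2015, 1, 14, 7, 57, 54)   # secondary has applied the same op
assert lag_seconds(p, s) == 0.0        # "0 secs (0 hrs) behind the primary"
```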
#Example: raise the priority of node1:31234 so that node1 becomes the PRIMARY
yujx:PRIMARY> rs.conf()
yujx:PRIMARY> cfg=rs.conf();
yujx:PRIMARY> cfg.members[0].priority=10;
yujx:PRIMARY> rs.reconfig(cfg); #Note: priority changes must be run via rs.reconfig() on the current primary
yujx:SECONDARY> rs.status()
{
"set" : "yujx",
"date" : ISODate("2015-01-14T08:51:20Z"),
"myState" : 2,
"syncingTo" : "192.168.211.217:31234",
"members" : [
{
"_id" : 0,
"name" : "192.168.211.217:31234",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 11,
"optime" : Timestamp(1421225469, 1),
"optimeDate" : ISODate("2015-01-14T08:51:09Z"),
"lastHeartbeat" : ISODate("2015-01-14T08:51:19Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T08:51:18Z"),
"pingMs" : 0,
"electionTime" : Timestamp(1421225470, 1),
"electionDate" : ISODate("2015-01-14T08:51:10Z")
}
......
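Priorities bias elections: among healthy, sufficiently up-to-date members, the set prefers the highest priority (default 1), which is why setting priority 10 on node1 pulls the primary role back to it. A toy model of that preference (it deliberately ignores the real protocol's freshness and voting rules):

```python
def preferred_primary(members):
    """Toy model of priority-based elections: among healthy members,
    prefer the highest priority (default 1). Illustration only."""
    healthy = [m for m in members if m["health"] == 1]
    return max(healthy, key=lambda m: m.get("priority", 1))["host"]

# Hosts and the priority value from the transcript above:
members = [{"host": "192.168.211.217:31234", "health": 1, "priority": 10},
           {"host": "192.168.211.217:32345", "health": 1},
           {"host": "192.168.211.217:33456", "health": 1}]
assert preferred_primary(members) == "192.168.211.217:31234"
```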
MongoDB can add nodes seamlessly, without stopping the cluster. It takes just two steps:
1. Start the new mongod and point it at the replica set
2. Add it to the set with rs.add()
~]# mkdir /data/mongodb/node4
~]#mongod --fork --dbpath /data/mongodb/node4 --logpath /data/mongodb/logs/node4.log --rest --httpinterface --replSet yujx --port 34567
yujx:PRIMARY> rs.add("192.168.211.217:34567")
{ "ok" : 1 }
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 5,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234",
"priority" : 10
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
},
{
"_id" : 3,
"host" : "192.168.211.217:34567"
}
]
}
#Remove the node with rs.remove(); the connection errors below are expected: the reconfiguration briefly drops client connections, and the shell reconnects automatically
yujx:PRIMARY> rs.remove("192.168.211.217:34567")
2015-01-14T18:04:34.255+0800 DBClientCursor::init call() failed
2015-01-14T18:04:34.256+0800 Error: error doing query: failed at src/mongo/shell/query.js:81
2015-01-14T18:04:34.269+0800 trying reconnect to 127.0.0.1:31234 (127.0.0.1) failed
2015-01-14T18:04:34.269+0800 reconnect 127.0.0.1:31234 (127.0.0.1) ok
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 6,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234",
"priority" : 10
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
}
]
}
An arbiter stores no data and only casts votes, so it places very little load on its server.
Once a node joins the set as an arbiter it stays an arbiter: you cannot reconfigure an arbiter into a data-bearing member, or vice versa.
A replica set should use at most one arbiter; extra arbiters can slow down the election of a new primary without providing any additional data safety.
#To include an arbiter when initializing the set, configure it as follows
config = { _id:"mvbox",
members:[
{_id:0,host:"192.168.1.1:27017"},
{_id:1,host:"192.168.1.2:27017",arbiterOnly:true},
{_id:2,host:"192.168.1.3:27017"}]
}
Arbiters exist because a replica set needs an odd number of voting members and you may not have enough servers to run another data-bearing node. When servers are plentiful, do not use an arbiter.
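The voting arithmetic behind this advice: an arbiter adds a vote without adding data, turning a two-data-node set into three voters so that losing one data node still leaves a majority. A sketch:

```python
def can_elect(members):
    """Does a majority of voting members remain reachable?
    Arbiters vote but hold no data."""
    votes = len(members)
    reachable = sum(1 for m in members if m["up"])
    return reachable > votes // 2

# Two data nodes plus one arbiter (hosts from the config example above):
# losing one data node still leaves a 2-of-3 majority, so failover works.
members = [{"host": "192.168.1.1:27017", "up": False},
           {"host": "192.168.1.2:27017", "arbiterOnly": True, "up": True},
           {"host": "192.168.1.3:27017", "up": True}]
assert can_elect(members)
```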
#Adding an arbiter online
yujx:PRIMARY> rs.addArb("192.168.211.217:34567")
{ "down" : [ "192.168.211.217:34567" ], "ok" : 1 }
yujx:PRIMARY> rs.conf()
{
"_id" : "yujx",
"version" : 9,
"members" : [
{
"_id" : 0,
"host" : "192.168.211.217:31234",
"priority" : 10
},
{
"_id" : 1,
"host" : "192.168.211.217:32345"
},
{
"_id" : 2,
"host" : "192.168.211.217:33456"
},
{
"_id" : 3,
"host" : "192.168.211.217:34567",
"arbiterOnly" : true
}
]
}
yujx:PRIMARY> rs.status()
......
{
"_id" : 3,
"name" : "192.168.211.217:34567",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 11,
"lastHeartbeat" : ISODate("2015-01-14T10:12:56Z"),
"lastHeartbeatRecv" : ISODate("2015-01-14T10:12:56Z"),
"pingMs" : 0
}
],
"ok" : 1
}
By default a SECONDARY accepts neither reads nor writes; writes always go to the primary. In read-heavy applications, a replica set can provide read/write splitting: with slaveOk enabled at connection time or on the secondary itself, secondaries absorb the read load while the primary handles only the writes.
#To allow reads from a secondary, enable slaveOk in the shell session connected to it:
#run rs.slaveOk() or db.getMongo().setSlaveOk()
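A toy router that mirrors this behaviour: writes always land on the primary, and reads only reach a secondary once the slaveOk flag is set. Real drivers add server selection by latency and tags; this is an illustration only.

```python
class Router:
    """Toy read/write splitter mirroring the slaveOk behaviour above."""
    def __init__(self, primary, secondaries):
        self.primary, self.secondaries = primary, secondaries
        self.slave_ok = False          # rs.slaveOk() not yet issued

    def route(self, op):
        if op == "write" or not self.slave_ok:
            return self.primary        # writes (and default reads) hit the primary
        return self.secondaries[0]     # real drivers pick by latency/tags

r = Router("node1:31234", ["node2:32345", "node3:33456"])
assert r.route("read") == "node1:31234"   # reads default to the primary
r.slave_ok = True                         # the rs.slaveOk() equivalent
assert r.route("read") == "node2:32345"   # now secondaries serve reads
assert r.route("write") == "node1:31234"  # writes still go to the primary
```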
Open question:
The replica set implements automatic failover internally, but how does an application automatically switch to the new primary's IP?
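One common answer, sketched here as a toy (real drivers implement this with the isMaster command and a replica-set-aware connection string): give the driver a seed list of all members, ask each one who the primary is, and redo the discovery whenever the current primary drops.

```python
def find_primary(seed_list, is_master):
    """Toy version of driver server discovery. `is_master` stands in
    for MongoDB's isMaster command: it returns a reply dict, or None
    if the host is down."""
    for host in seed_list:
        reply = is_master(host)
        if reply is None:              # host unreachable, try the next seed
            continue
        if reply.get("ismaster"):
            return host
        if "primary" in reply:         # a secondary names the current primary
            return reply["primary"]
    raise RuntimeError("no primary found")

# Simulated cluster: node1 is down, node2 points at the new primary node3.
replies = {"node2:32345": {"ismaster": False, "primary": "node3:33456"},
           "node3:33456": {"ismaster": True}}
seeds = ["node1:31234", "node2:32345", "node3:33456"]
assert find_primary(seeds, replies.get) == "node3:33456"
```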
Source: http://blog.itpub.net/27000195/viewspace-1402616/