
redis-cluster notes (based on the wiki): setup, adding and removing nodes, failover


Node plan

Three masters and three slaves
          master                  slave
node-1    192.168.0.142 6379      192.168.0.142 26379
node-2    192.168.0.143 6379      192.168.0.143 26379
node-3    192.168.0.144 6379      192.168.0.144 26379
Cluster bus (message) ports: 16379 and 36379 (data port + 10000).

Directory plan

Install directory       /usr/local/redis4
Config file directory   /usr/local/redis4/conf/redis_cluster/
Data directory          /database/redis4/
Log directory           /var/log/redis4/
pidfile directory       /var/run/
Service run user        root

Single-node deployment

Version: redis 4.0.10

The following uses 192.168.0.142 as an example; the other two hosts need the same configuration.

Install directory       /usr/local/redis4
Config file directory   /usr/local/redis4/conf
Data directory          /database/redis4/redis_6379
Log file                /var/log/redis4/redis_6379.log
Service run user        root

Redis instance file naming rules
Type                        Rule                 Example
Instance config file name   redis_<port>.conf    redis_6379.conf
Instance data directory     redis_<port>         redis_6379
Instance log file name      redis_<port>.log     redis_6379.log

Server tuning

These are OS-level optimizations applied before installing the service; apply them as appropriate for your servers. Without them Redis logs WARNING messages at startup, but the service still starts and runs normally.

1. Disable Linux transparent huge pages

cd /etc/init.d
wget http://y-tools.up366.cn/tools/mongodb/disable-transparent-hugepages
chmod 755 /etc/init.d/disable-transparent-hugepages
chkconfig --add disable-transparent-hugepages
/etc/init.d/disable-transparent-hugepages start
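To confirm the change took effect, both files should show [never] selected (on some older RHEL 6 kernels the path is /sys/kernel/mm/redhat_transparent_hugepage instead):

cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag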

2. /proc/sys/net/core/somaxconn

Upper limit of the socket listen() backlog. The system default is 128, which caps the queue of pending new TCP connections; the Redis config default for tcp-backlog is 511.

Edit /etc/sysctl.conf and add:
net.core.somaxconn = 1024
Apply it with: sysctl -p

3. Memory allocation policy (vm.overcommit_memory)

Edit /etc/sysctl.conf and add:
vm.overcommit_memory = 1
Apply it with: sysctl -p
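A quick way to confirm both kernel settings are active after sysctl -p (expected values: 1024 and 1):

sysctl net.core.somaxconn vm.overcommit_memory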


Installation

Unpack

tar -xf redis-4.0.10.tar.gz

Enter the directory, compile and install

# cd redis-4.0.10
# make PREFIX=/usr/local/redis4

Note: make test requires tcl 8.5 or newer, so install it first: yum install tcl -y

# make test

# make install PREFIX=/usr/local/redis4

If compilation fails with "gcc: command not found":

yum install -y gcc epel-release jemalloc-devel
cd deps/
make hiredis jemalloc linenoise lua
cd ..
make PREFIX=/usr/local/redis4
echo $?
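If the build succeeded, echo $? prints 0 and make install places the binaries under /usr/local/redis4/bin; a quick check:

ls /usr/local/redis4/bin/
/usr/local/redis4/bin/redis-server --version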

Create the directories (config directory, data directory, log directory)

[root@node-1 redis-4.0.10]# mkdir -p /usr/local/redis4/conf /database/redis4 /var/log/redis4
# create a directory for one instance
[root@node-1 redis-4.0.10]# mkdir -p /database/redis4/redis_6379

Edit the configuration file

[root@node-1 redis-4.0.10]# cp redis.conf /usr/local/redis4/conf/
[root@node-1 redis-4.0.10]# vim /usr/local/redis4/conf/redis.conf

1. daemonize yes                              # run as a background daemon (yes/no)
2. pidfile /var/run/redis_6379.pid            # pid file path
3. port 6379                                  # instance port
4. bind 192.168.0.142                         # listen address
5. logfile "/var/log/redis4/redis_6379.log"   # log file path
6. dbfilename dump.rdb                        # RDB dump file name
7. dir /database/redis4/redis_6379/           # data directory
8. appendonly yes                             # enable AOF persistence (yes/no)
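Putting the items above together, the edited file looks roughly like this. Note that the cluster section below copies /usr/local/redis4/conf/redis_6379.conf, so keeping a copy under that name (per the naming rule above) is assumed here:

daemonize yes
pidfile /var/run/redis_6379.pid
port 6379
bind 192.168.0.142
logfile "/var/log/redis4/redis_6379.log"
dbfilename dump.rdb
dir /database/redis4/redis_6379/
appendonly yes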

Start the service

/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis.conf
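A quick check that the instance is up:

/usr/local/redis4/bin/redis-cli -h 192.168.0.142 -p 6379 ping
# expected reply: PONG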

Cluster deployment

The following uses 192.168.0.142 as an example; the other two hosts need the same configuration.

Prepare the environment

1. Add the EPEL repository

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm

2. Install the Ruby environment

yum -y install ruby ruby-devel rubygems rpm-build
gem install redis -v 3.3.5        # if this errors out because the Ruby version is too low, see the separate Redis troubleshooting notes

Note: do not install version 4.0 of the redis gem; it causes errors during reshard.

If gem install redis -v 3.3.5 produces no output, install gem manually and then run gem install redis -v 3.3.5 again. Reference: https://blog.csdn.net/wangshuminjava/article/details/80284810

3. Create the data directory

mkdir -p /database/redis4/redis_26379

4. Create the config directories

cd /usr/local/redis4/conf
mkdir redis_cluster/{6379,26379} -p
cp /usr/local/redis4/conf/redis_6379.conf /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf
cp /usr/local/redis4/conf/redis_6379.conf /usr/local/redis4/conf/redis_cluster/26379/redis_26379.conf   (remember to change the port)

Edit redis_6379.conf and redis_26379.conf and add the settings below (change the port accordingly);
the dir, logfile and pidfile paths/ports also need to be adjusted. A full example of the 26379 instance config follows step 5.
cluster-enabled yes
cluster-config-file nodes6379.conf   <-- change the port here too
cluster-node-timeout 10000
cluster-require-full-coverage no
5. Set the PATH environment variable

cp -a /root/redis-4.0.10/src/redis-trib.rb /usr/local/redis4/bin/
vim /etc/profile
PATH=$PATH:/usr/local/redis4/bin
source /etc/profile
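As noted in step 4, a rough sketch of the full 26379 instance config, following the path and naming conventions above (the 6379 file is identical with 6379 substituted everywhere):

port 26379
bind 192.168.0.142
daemonize yes
pidfile /var/run/redis_26379.pid
logfile "/var/log/redis4/redis_26379.log"
dir /database/redis4/redis_26379/
appendonly yes
cluster-enabled yes
cluster-config-file nodes26379.conf
cluster-node-timeout 10000
cluster-require-full-coverage no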

Start all the nodes

 /usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf 

/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/26379/redis_26379.conf 
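Run both commands on every host. A quick loop to confirm all six instances answer (IPs as planned above):

for ip in 192.168.0.142 192.168.0.143 192.168.0.144; do
  for port in 6379 26379; do
    echo -n "$ip:$port -> "; /usr/local/redis4/bin/redis-cli -h $ip -p $port ping
  done
done
# every instance should reply PONG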

Create the cluster with redis-trib.rb, the cluster management tool shipped with Redis (run this on 142 only)

redis-trib.rb create --replicas 1 192.168.0.142:6379 192.168.0.143:6379 192.168.0.144:6379 192.168.0.144:26379 192.168.0.143:26379 192.168.0.142:26379

The --replicas option sets how many slaves each master in the cluster gets; here it is set to 1.

# Can I set the above configuration? (type 'yes' to accept): yes   # one interactive prompt asks you to confirm the node layout; answer yes
.....
.....
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
All 16384 slots are assigned, so the cluster was created successfully. Note: the node addresses given to redis-trib.rb must not hold any slots/data, otherwise it refuses to create the cluster.
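A simple smoke test through cluster mode; -c makes the client follow MOVED redirections (the key name is arbitrary, and the slot/target shown depend on the key):

redis-cli -c -h 192.168.0.142 -p 6379 set foo bar
# -> Redirected to slot [12182] located at 192.168.0.144:6379
redis-cli -c -h 192.168.0.142 -p 6379 get foo
# "bar"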

Check the cluster status

[root@node-1 conf]# redis-trib.rb check 192.168.0.142:6379   # any node will do
>>> Performing Cluster Check (using node 192.168.0.142:6379)
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
   slots: (0 slots) slave
   replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
   slots: (0 slots) slave
   replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
   slots: (0 slots) slave
   replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

View cluster info

[root@node-1 conf]# redis-trib.rb info 192.168.0.142:6379   # any node will do
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 5461 slots | 1 slaves.
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 5462 slots | 1 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.

Adding nodes

Add a master node

1. Repeat the environment-preparation steps for the new instance, adjusting the port (a sketch follows method 2 below).
2. Join the node to the cluster.
Method 1: redis-trib.rb add-node 192.168.0.142:6380 (new master ip:port)  192.168.0.142:6379 (ip:port of an existing node)

Method 2: connect to any existing node:

[root@node-1 6381]# redis-cli -c -h 192.168.0.142 -p 6379

192.168.0.142:6379> cluster meet 192.168.0.142 6380
OK
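A minimal sketch of step 1 for the new 192.168.0.142:6380 instance (names follow the conventions used above):

mkdir -p /database/redis4/redis_6380 /usr/local/redis4/conf/redis_cluster/6380
cp /usr/local/redis4/conf/redis_cluster/6379/redis_6379.conf /usr/local/redis4/conf/redis_cluster/6380/redis_6380.conf
# in redis_6380.conf change: port 6380, pidfile, logfile, dir (/database/redis4/redis_6380/) and cluster-config-file nodes6380.conf
/usr/local/redis4/bin/redis-server /usr/local/redis4/conf/redis_cluster/6380/redis_6380.conf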

Assign slots to the new node

Before the assignment
# note down the node IDs first
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380@16380 master - 0 1596789801413 0 connected
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596789800000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596789800000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596789796000 7 connected 0-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596789802416 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596789800411 6 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596789799408 3 connected 11672-16383

# 6380 is still empty at this point
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 6961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
Start the assignment
[root@node-1 conf]# redis-trib.rb reshard  192.168.0.142:6380       
>>> Performing Cluster Check (using node 192.168.0.142:6380)
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
   slots: (0 slots) master
   0 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
   slots: (0 slots) slave
   replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
   slots:11672-16383 (4712 slots) master
   1 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
   slots: (0 slots) slave
   replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
   slots:6212-10922 (4711 slots) master
   1 additional replica(s)
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
   slots: (0 slots) slave
   replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
   slots:0-6211,10923-11671 (6961 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2000   # first prompt: how many slots to move
What is the receiving node ID? 2e9f699fde48fcfbc566a8f14d21be85c66dc062   # ID of the new node
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:6b5387c7a4e647212b6943cb42b38abfaa45c4a3
Source node #2:done
# 'all' takes slots from every master;
# alternatively, list the IDs of the masters to take slots from, then finish with 'done'

Ready to move 2000 slots.
  Source nodes:
    M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
   slots:0-6211,10923-11671 (6961 slots) master
   1 additional replica(s)
  Destination node:
    M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
   slots: (0 slots) master
   0 additional replica(s)
  Resharding plan:
    Moving slot 0 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
    Moving slot 1 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
    Moving slot 2 from 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
    ..............
    ..............
    
Do you want to proceed with the proposed reshard plan (yes/no)? yes   # confirm the slot migration
..............
................
Assignment complete
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.

Add a slave node

Before adding
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
Add the node
Repeat the environment-preparation steps to create the new slave instance (here 192.168.0.142:26380).

[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380@16380 master - 0 1596793698059 8 connected 0-1999

# add the new node as a slave
Command format: redis-trib.rb add-node --slave --master-id <master node id> <new node ip:port> <ip:port of an existing cluster node>
      
[root@node-1 conf]# redis-trib.rb add-node --slave --master-id 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:26380 192.168.0.142:6379       
>>> Adding node 192.168.0.142:26380 to cluster 192.168.0.142:6379
>>> Performing Cluster Check (using node 192.168.0.142:6379)
M: 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379
   slots:2000-6211,10923-11671 (4961 slots) master
   1 additional replica(s)
M: 2e9f699fde48fcfbc566a8f14d21be85c66dc062 192.168.0.142:6380
   slots:0-1999 (2000 slots) master
   0 additional replica(s)
S: 5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379
   slots: (0 slots) slave
   replicates 6b5387c7a4e647212b6943cb42b38abfaa45c4a3
M: 0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379
   slots:6212-10922 (4711 slots) master
   1 additional replica(s)
S: 5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379
   slots: (0 slots) slave
   replicates 0cbfe1938a16594e35dbff487a49fe224da270b9
S: 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379
   slots: (0 slots) slave
   replicates 4311abf4f943795c0d117babb714b27b8ed1a80e
M: 4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379
   slots:11672-16383 (4712 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.0.142:26380 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.0.142:6380.
[OK] New node added correctly.
Node added
[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 2000 slots | 1 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 4712 slots | 1 slaves.
[OK] 0 keys in 4 masters.

Removing nodes

Remove a master node

1. First use reshard to move all of the master's slots away (currently the slots of the master being removed can only be migrated onto a single node).

redis-trib.rb reshard 192.168.0.142:6379
...
(migration process)
...

[root@node-2 opt]# redis-trib.rb info 192.168.0.143:6379
192.168.0.143:6379 (0cbfe193...) -> 0 keys | 4711 slots | 1 slaves.
192.168.0.142:6379 (6b5387c7...) -> 0 keys | 4961 slots | 1 slaves.
192.168.0.142:6380 (2e9f699f...) -> 0 keys | 0 slots | 0 slaves.
192.168.0.144:6379 (4311abf4...) -> 0 keys | 6712 slots | 2 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
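For reference, the answers that produce the result above are the mirror image of the earlier reshard: receive on 192.168.0.144:6379, source only from the master being removed (node IDs taken from the node table above); a sketch:

redis-trib.rb reshard 192.168.0.142:6379
How many slots do you want to move (from 1 to 16384)? 2000
What is the receiving node ID? 4311abf4f943795c0d117babb714b27b8ed1a80e      # 192.168.0.144:6379
Source node #1:2e9f699fde48fcfbc566a8f14d21be85c66dc062                      # 192.168.0.142:6380, the master being removed
Source node #2:done
Do you want to proceed with the proposed reshard plan (yes/no)? yes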
2. Remove the node itself

Command format: redis-trib.rb del-node 192.168.0.142:6380 '7030164ada8fcabd6f8ecca2d03350a2c436d73a'    <any cluster ip:port> <node id of the node to remove>

[root@node-1 conf]# redis-trib.rb del-node 192.168.0.142:6380 2e9f699fde48fcfbc566a8f14d21be85c66dc062
>>> Removing node 2e9f699fde48fcfbc566a8f14d21be85c66dc062 from cluster 192.168.0.142:6380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes                                  
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797414173 7 connected
33e74c38f9ff08c725702ba0024b916e3f944a20 192.168.0.142:26380@36380 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797411000 9 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797411166 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797406000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797413170 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797412167 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797410000 9 connected 0-1999 11672-16383

Remove a slave node

[root@node-1 conf]# redis-trib.rb del-node 192.168.0.142:26380 33e74c38f9ff08c725702ba0024b916e3f944a20
>>> Removing node 33e74c38f9ff08c725702ba0024b916e3f944a20 from cluster 192.168.0.142:26380
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes                                   
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797623000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797622000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797619000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797623622 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797622619 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797621000 9 connected 0-1999 11672-16383

Failover

Command: CLUSTER FAILOVER triggers a manual failover; it must be executed on a slave node.
e.g.:

192.168.0.144:6380> CLUSTER failover
OK

Before the failover

[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes                                   
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797623000 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797622000 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797619000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797623622 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 slave 4311abf4f943795c0d117babb714b27b8ed1a80e 0 1596797622619 9 connected
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 master - 0 1596797621000 9 connected 0-1999 11672-16383

Start the failover

[root@node-1 conf]# redis-cli  -h 192.168.0.142 -p 26379 
192.168.0.142:26379> 
192.168.0.142:26379> cluster failover
OK
192.168.0.142:26379> 

Failover complete

[root@node-1 conf]# redis-cli -h 192.168.0.142 -p 6379 cluster nodes
5605c1fe0f0cc7ccf2d78ecc2357eb851370eb6f 192.168.0.143:26379@36379 slave 6b5387c7a4e647212b6943cb42b38abfaa45c4a3 0 1596797862201 7 connected
0cbfe1938a16594e35dbff487a49fe224da270b9 192.168.0.143:6379@16379 master - 0 1596797861198 2 connected 6212-10922
6b5387c7a4e647212b6943cb42b38abfaa45c4a3 192.168.0.142:6379@16379 myself,master - 0 1596797852000 7 connected 2000-6211 10923-11671
5aba67cfc49fd788c9025a58d095613487fe1ced 192.168.0.144:26379@36379 slave 0cbfe1938a16594e35dbff487a49fe224da270b9 0 1596797864205 5 connected
6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 192.168.0.142:26379@36379 master - 0 1596797861198 10 connected 0-1999 11672-16383
4311abf4f943795c0d117babb714b27b8ed1a80e 192.168.0.144:6379@16379 slave 6209d272e14ab59f5a1d4a08d43dedeba57ec1ba 0 1596797863203 10 connected
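192.168.0.142:26379 is now the master and 192.168.0.144:6379 its slave. If the original roles are preferred, running the same command on the demoted node (now a slave) switches them back, for example:

redis-cli -h 192.168.0.144 -p 6379
192.168.0.144:6379> cluster failover
OK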


Original: https://www.cnblogs.com/gaojiajun/p/13454731.html
