
Ceph 10.2.10 Installation and Configuration

Published: 2018-04-13 22:22:47

Operating system

CentOS 7.2.1511

Host planning

192.168.0.106 ceph-node1 # ceph-deploy admin node
192.168.0.107 ceph-node2
192.168.0.108 ceph-node3
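Every node must be able to resolve these names. A minimal sketch that generates the matching /etc/hosts entries (written to a temp file here for illustration; on a real node, append the output to /etc/hosts itself):

```shell
# Emit the /etc/hosts entries for the planned nodes.
hosts_entries() {
  printf '%s\n' \
    '192.168.0.106 ceph-node1' \
    '192.168.0.107 ceph-node2' \
    '192.168.0.108 ceph-node3'
}

# For illustration, write them to a temp file; on a real node you would
# append them to /etc/hosts instead.
hosts_entries > /tmp/ceph-hosts.txt
cat /tmp/ceph-hosts.txt
```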

Disk layout

/dev/sda system
/dev/sdb osd
/dev/sdc osd

Disable the firewall and SELinux

# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

Install the EPEL repository

# yum -y install epel-release

Passwordless SSH login (on ceph-node1)

# ssh-keygen -t rsa
# ssh-copy-id ceph-node1
# ssh-copy-id ceph-node2
# ssh-copy-id ceph-node3
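ceph-deploy drives the other nodes over SSH; if you deploy as a dedicated non-root user instead of root, an `~/.ssh/config` entry like this sketch (the `cephuser` name is an assumption, not from the original) avoids passing `--username` on every call:

```shell
# Sketch: pin the SSH user ceph-deploy uses for each node (assumed
# user name; keep root if you deploy as root, as this guide does).
cat >> ~/.ssh/config <<'EOF'
Host ceph-node1 ceph-node2 ceph-node3
    User cephuser
EOF
chmod 600 ~/.ssh/config
```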

Time synchronization

# yum -y install ntp ntpdate
# ntpdate cn.pool.ntp.org

Use a domestic (China) mirror

# export CEPH_DEPLOY_REPO_URL=http://mirrors.163.com/ceph/rpm-jewel/el7
# export CEPH_DEPLOY_GPG_URL=http://mirrors.163.com/ceph/keys/release.asc
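These environment variables are read by `ceph-deploy install`; the same mirror can also be passed explicitly with its `--repo-url`/`--gpg-url` flags, e.g. (a sketch of the equivalent invocation):

```shell
# Equivalent to the environment variables above: point ceph-deploy at
# the 163 mirror explicitly when installing.
ceph-deploy install \
  --repo-url http://mirrors.163.com/ceph/rpm-jewel/el7 \
  --gpg-url  http://mirrors.163.com/ceph/keys/release.asc \
  ceph-node1 ceph-node2 ceph-node3
```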

Install ceph-deploy

# yum -y install ceph-deploy

Quick deployment

# mkdir my-cluster
# cd my-cluster

# ceph-deploy new ceph-node1
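`ceph-deploy new` drops a ceph.conf in the working directory, and optional settings can be appended to it before installing. A sketch (the subnet matches the 192.168.0.x host plan above; `osd_pool_default_size = 2` is an optional choice, not part of the original steps):

```shell
# Append optional settings to the generated ceph.conf (run in my-cluster/).
cat >> ceph.conf <<'EOF'
public_network = 192.168.0.0/24
osd_pool_default_size = 2
EOF
```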

# yum -y update ceph-deploy

# ceph-deploy install ceph-node1 ceph-node2 ceph-node3

# ceph-deploy mon create-initial

# ceph-deploy admin ceph-node1 ceph-node2 ceph-node3

List the disks on a node
# ceph-deploy disk list ceph-node1

# ceph-deploy osd create ceph-node1:/dev/sdb ceph-node1:/dev/sdc

# ceph-deploy osd create ceph-node2:/dev/sdb ceph-node2:/dev/sdc

# ceph-deploy osd create ceph-node3:/dev/sdb ceph-node3:/dev/sdc

[root@ceph-node1 ~]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.08752 root default                                          
-2 0.02917     host ceph-node1                                   
 0 0.01459         osd.0            up  1.00000          1.00000 
 1 0.01459         osd.1            up  1.00000          1.00000 
-3 0.02917     host ceph-node2                                   
 2 0.01459         osd.2            up  1.00000          1.00000 
 3 0.01459         osd.3            up  1.00000          1.00000 
-4 0.02917     host ceph-node3                                   
 4 0.01459         osd.4            up  1.00000          1.00000 
 5 0.01459         osd.5            up  1.00000          1.00000 

Make the admin keyring readable so ceph CLI commands work as non-root
# chmod +r /etc/ceph/ceph.client.admin.keyring

Check cluster health

[root@ceph-node1 ~]# ceph health
HEALTH_OK
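`ceph health` gives only a one-word summary; the standard CLI also offers more detailed views:

```shell
ceph -s    # full status: monitors, OSDs, placement-group states
ceph df    # cluster-wide and per-pool capacity usage
```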

Create a block device

# rbd create rbd1 --size 10240

List the created RBD images

[root@ceph-node1 ~]# rbd list
rbd1

Show RBD image details

[root@ceph-node1 ~]# rbd --image rbd1 info
rbd image 'rbd1':
    size 10240 MB in 2560 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.10372ae8944a
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags: 
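The feature list above is Jewel's full default set, but the stock CentOS 7 kernel rbd client generally supports only layering, so mapping the image fails until the extra features are disabled. A sketch of mapping and mounting (the /dev/rbd0 device name and /mnt/rbd1 mount point are assumptions):

```shell
# Disable the features the CentOS 7 kernel client cannot handle.
rbd feature disable rbd1 exclusive-lock object-map fast-diff deep-flatten

# Map the image to a kernel block device; rbd map prints the device name
# (typically /dev/rbd0 for the first mapping).
rbd map rbd1

# Make a filesystem and mount it (device and mount point assumed).
mkfs.xfs /dev/rbd0
mkdir -p /mnt/rbd1
mount /dev/rbd0 /mnt/rbd1
```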

Original article: http://blog.51cto.com/hzde0128/2103268