
OpenStack Queens Deployment (10) ----- Cinder Block Storage Service


1. Introduction to Cinder


In a typical deployment, cinder-api and cinder-scheduler are installed on the control nodes, while cinder-volume is installed on the storage nodes.

2. Database Configuration

# Create the database on any one control node
mysql -uroot -p12345678


CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder_dbpass';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder_dbpass';
flush privileges;
exit;
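
As an optional sanity check, the new account can be tested directly; this is just a sketch, assuming the database is reachable on the same address used in the connection strings later in this guide (10.1.80.60):

# verify that the cinder account can log in and see its database
mysql -ucinder -pcinder_dbpass -h10.1.80.60 -e "SHOW DATABASES;"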

3. Creating the cinder-api Identity Resources

Create the Keystone credentials for cinder:

# On any one control node

# Admin credentials are required for the OpenStack CLI calls below; just source the environment script
source admin-openrc.sh

Create the cinder user:
openstack user create --domain default --password=cinder_pass cinder    # the password is set to cinder_pass here

Add the cinder user to the service project with the admin role:
openstack role add --project service --user cinder admin
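
An optional check that the role assignment took effect:

openstack role assignment list --user cinder --project service --names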

Create the cinder service entities:

# Create both the v2 and v3 service entities (service types volumev2 and volumev3)
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
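
A quick check that both service entities were registered:

openstack service list | grep -i cinder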

 

# Note: --region must match the region generated when the admin user was initialized;
# the API addresses all use the VIP; if public/internal/admin use different VIPs, adjust accordingly;
# the endpoint service types are volumev2/volumev3;
# the URL suffix is the project ID, which can be listed with "openstack project list"

# v2 public api
[root@controller01 ~]# openstack endpoint create --region RegionTest volumev2 public http://10.1.80.60:8776/v2/%\(project_id\)s
# v2 internal api
[root@controller01 ~]# openstack endpoint create --region RegionTest volumev2 internal http://10.1.80.60:8776/v2/%\(project_id\)s
# v2 admin api
[root@controller01 ~]# openstack endpoint create --region RegionTest volumev2 admin http://10.1.80.60:8776/v2/%\(project_id\)s

# v3 public api
[root@controller01 ~]# openstack endpoint create --region RegionTest volumev3 public http://10.1.80.60:8776/v3/%\(project_id\)s
# v3 internal api
[root@controller01 ~]# openstack endpoint create --region RegionTest volumev3 internal http://10.1.80.60:8776/v3/%\(project_id\)s
# v3 admin api
[root@controller01 ~]# openstack endpoint create --region RegionTest volumev3 admin http://10.1.80.60:8776/v3/%\(project_id\)s
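
To confirm the six endpoints were created as expected, they can be listed per service:

openstack endpoint list --service volumev2
openstack endpoint list --service volumev3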

  

4. Installing and Configuring cinder

# Install the cinder service on all control nodes; controller01 is used as the example
[root@controller01 ~]# yum install openstack-cinder -y

 

Configure cinder.conf

On all control nodes, using controller01 as the example;
note the my_ip parameter, which must be adjusted per node;
note the ownership of cinder.conf: root:cinder

[DEFAULT]
state_path = /var/lib/cinder
my_ip = 10.1.80.60
glance_api_servers = http://10.1.80.60:9292
auth_strategy = keystone
osapi_volume_listen = $my_ip
osapi_volume_listen_port = 8776
log_dir = /var/log/cinder
# With haproxy in front, services connecting to rabbitmq can hit connection timeouts and reconnects; check the individual service logs and the rabbitmq logs;
# transport_url = rabbit://openstack:rabbitmq_pass@controller:5673
# rabbitmq has its own clustering and the official docs recommend connecting to the rabbitmq cluster directly; with that approach services occasionally fail to start for unclear reasons, but if that does not happen, connecting to the rabbitmq cluster directly rather than through the haproxy frontend is strongly recommended
transport_url=rabbit://openstack:rabbitmq_pass@10.1.80.60:5672
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:cinder_dbpass@10.1.80.60/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://10.1.80.60:5000
auth_url = http://10.1.80.60:35357
memcached_servers = 10.1.80.60:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder_pass
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = $state_path/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[service_user]
[ssl]
[vault]
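
As an alternative to editing the file by hand, the same non-default keys can be set from the command line with openstack-config (a crudini wrapper from the openstack-utils package). This is only a sketch covering the main values above; my_ip still has to be adjusted per node:

yum install -y openstack-utils
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 10.1.80.60
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://10.1.80.60:9292
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:rabbitmq_pass@10.1.80.60:5672
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder_dbpass@10.1.80.60/cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder_pass
chown root:cinder /etc/cinder/cinder.conf    # keep the ownership the guide expects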

Configure nova.conf

# On all control nodes, using controller01 as the example;
# only the [cinder] section of nova.conf is touched;
# set the corresponding region
[root@controller01 ~]# vim /etc/nova/nova.conf
[cinder]
os_region_name=RegionTest
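
The same setting can be applied non-interactively (again assuming openstack-utils is installed):

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionTest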

  

5. Synchronizing the cinder Database

# On any one control node;
# some "deprecation" warnings in the output can be ignored
[root@controller01 ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
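
An optional check that the tables were actually created (the host is the VIP used in the connection string above):

mysql -ucinder -pcinder_dbpass -h10.1.80.60 cinder -e "SHOW TABLES;" | head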


 

6. Starting the Services and Verification

# On all control nodes;
# since nova.conf was changed, restart the nova service first
[root@controller01 ~]# systemctl restart openstack-nova-api.service


# Enable the services at boot
[root@controller01 ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# Start the services and check their status
systemctl restart openstack-cinder-api.service
systemctl restart openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service
systemctl status openstack-cinder-scheduler.service

Verification

# List the cinder services/agents;
# alternatively: cinder service-list
[root@controller01 ~]# openstack volume service list

 


7. Cinder Storage Node Configuration

# Install the cinder packages plus the LVM/iSCSI tooling required by the [lvm] backend configured below
yum install -y openstack-cinder python-cinderclient lvm2 targetcli
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
# Note: cinder-api and cinder-scheduler run on the control nodes only; on the storage node,
# the cinder-volume service is enabled and started after cinder.conf has been configured (see below)

  

 

 

[root@linux-node2 ~]# vim /etc/lvm/lvm.conf
filter = [ "a/sda5/", "r/.*/"]

In the filter, "a" means accept and "r" means reject.

In this setup the home partition is not on LVM and its device is /dev/sda5, so /etc/lvm/lvm.conf can be set as above.

If the home partition is on LVM (for example, "df -h" shows its device as /dev/mapper/centos-home),
then /etc/lvm/lvm.conf should instead be configured as:
filter = [ "a|^/dev/mapper/centos-home$|", "r|.*/|" ]

  

 

Edit the cinder configuration file, /etc/cinder/cinder.conf:

 

[DEFAULT]
enabled_backends = lvm
state_path = /var/lib/cinder
my_ip = 10.1.80.63
glance_api_servers = http://10.1.80.60:9292
auth_strategy = keystone
osapi_volume_listen = $my_ip
osapi_volume_listen_port = 8776
log_dir = /var/log/cinder
# With haproxy in front, services connecting to rabbitmq can hit connection timeouts and reconnects; check the individual service logs and the rabbitmq logs;
# transport_url = rabbit://openstack:rabbitmq_pass@controller:5673
# rabbitmq has its own clustering and the official docs recommend connecting to the rabbitmq cluster directly; with that approach services occasionally fail to start for unclear reasons, but if that does not happen, connecting to the rabbitmq cluster directly rather than through the haproxy frontend is strongly recommended
transport_url=rabbit://openstack:rabbitmq_pass@10.1.80.60:5672
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:cinder_dbpass@10.1.80.60/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://10.1.80.60:5000
auth_url = http://10.1.80.60:35357
memcached_servers = 10.1.80.60:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder_pass
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = $state_path/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
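
With cinder.conf in place, enable and start the volume service on the storage node, then verify from a control node that the new backend has registered. This is a sketch based on the packages installed earlier in this section:

systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service

# back on a control node, the storage node should now appear as cinder-volume (host@lvm) with state "up"
openstack volume service list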

  

 


 


Source: https://www.cnblogs.com/jinyuanliu/p/10375200.html
