7 Day-to-Day Troubleshooting and Resolution
7.1 Problems Encountered During Preparation
7.1.1 Physical Host Partitioning Problem
Case:
While installing the operating system on the physical hosts, the default partition layout was left unchanged. After OpenStack was deployed, the cluster's total disk capacity turned out far too small: the installer's default layout makes the /home partition very large and the / partition small, and OpenStack only sees the capacity of the / partition.
Solution:
Using the controller node as an example, first check the partitions:
[root@YUN-11 ~]# fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 299.6 GB, 299563483136 bytes
255 heads, 63 sectors/track, 36419 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       36420   292542463+  ee  GPT
Disk /dev/mapper/vg_YUN2-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_YUN2-lv_swap: 16.9 GB, 16894656512 bytes
255 heads, 63 sectors/track, 2053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_YUN2-lv_home: 228.2 GB, 228241440768 bytes
255 heads, 63 sectors/track, 27748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
As the output shows, /dev/mapper/vg_YUN2-lv_root is far too small while /dev/mapper/vg_YUN2-lv_home is large, yet the cloud instance disks live on the /dev/mapper/vg_YUN2-lv_root partition.
Now adjust the partitions.
Unmount the /home partition (if /home contains any files, back them up first; this step is especially important):
# umount /home
Check the filesystem:
# e2fsck -f /dev/mapper/vg_YUN2-lv_home
Shrink the /home filesystem to 2 GB:
# resize2fs -p /dev/mapper/vg_YUN2-lv_home 2G
Shrink the logical volume /dev/mapper/vg_YUN2-lv_home to match (reduce the volume before remounting, so the volume is never smaller than a mounted filesystem):
# lvreduce -L 2G /dev/mapper/vg_YUN2-lv_home
Remount the /home partition:
# mount /home
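As an aside, newer LVM releases can do the filesystem and volume shrink in one step: lvreduce with the -r (--resizefs) flag calls fsadm, which runs e2fsck and resize2fs itself, so the filesystem and volume sizes cannot get out of step. A minimal sketch, assuming the same volume names as above (back up /home first):
# umount /home
# lvreduce -r -L 2G /dev/mapper/vg_YUN2-lv_home
# mount /home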
Check the partitions:
[root@YUN-11 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_YUN2-lv_root
50G 2.2G 45G 5% /
tmpfs                 16G  4.0K   16G   1% /dev/shm
/dev/sda2            477M   49M  404M  11% /boot
/dev/sda1            200M  260K  200M   1% /boot/efi
/srv/loopback-device/swiftloopback
1.9G 3.1M 1.8G 1% /srv/node/swiftloopback
/dev/mapper/vg_YUN2-lv_home
1.9G 25M 1.8G 2% /home
[root@YUN-11 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_YUN2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  5
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               278.30 GiB
  PE Size               4.00 MiB
  Total PE              71245
  Alloc PE / Size       17340 / 67.73 GiB
  Free  PE / Size       53905 / 210.57 GiB
  VG UUID               gwRY1M-61we-pric-21Wk-pTyk-2bF9-i5koBQ

  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.60 GiB
  PE Size               4.00 MiB
  Total PE              5273
  Alloc PE / Size       0 / 0
  Free  PE / Size       5273 / 20.60 GiB
  VG UUID               sqfFLq-QOvj-awvz-M5HG-MDj3-5SZE-h4Snlc
Note the Free PE / Size line for vg_YUN2 above: there are now 210.57 GiB of free space.
Next, extend the /dev/mapper/vg_YUN2-lv_root volume and grow its filesystem:
# lvextend -L +182.27G /dev/mapper/vg_YUN2-lv_root
# resize2fs -p /dev/mapper/vg_YUN2-lv_root
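If the goal is simply to give the root volume all remaining free space rather than computing an exact figure such as +182.27G, lvextend also accepts extent-based sizing; ext4 can then be grown online, so / need not be unmounted. A sketch with the same volume name as above:
# lvextend -l +100%FREE /dev/mapper/vg_YUN2-lv_root
# resize2fs -p /dev/mapper/vg_YUN2-lv_root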
Check the partitions again:
[root@YUN-11 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_YUN2-lv_root
                      229G  2.2G  215G   2% /
tmpfs                  16G  4.0K   16G   1% /dev/shm
/dev/sda2             477M   49M  404M  11% /boot
/dev/sda1             200M  260K  200M   1% /boot/efi
/srv/loopback-device/swiftloopback
                      1.9G  3.1M  1.8G   1% /srv/node/swiftloopback
/dev/mapper/vg_YUN2-lv_home
                      1.9G   25M  1.8G   2% /home
[root@YUN-11 ~]# vgdisplay
  --- Volume group ---
  VG Name               vg_YUN2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               278.30 GiB
  PE Size               4.00 MiB
  Total PE              71245
  Alloc PE / Size       64002 / 250.01 GiB
  Free  PE / Size       7243 / 28.29 GiB
  VG UUID               gwRY1M-61we-pric-21Wk-pTyk-2bF9-i5koBQ

  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.60 GiB
  PE Size               4.00 MiB
  Total PE              5273
  Alloc PE / Size       0 / 0
  Free  PE / Size       5273 / 20.60 GiB
  VG UUID               sqfFLq-QOvj-awvz-M5HG-MDj3-5SZE-h4Snlc
As you can see, the root partition is now considerably larger.
7.1.2 Network Planning Problem
When planning the network, avoid placing the OpenStack cluster's internal IP addresses in the same subnet as the IP addresses of the physical servers' internal service NICs; sharing one subnet makes the addresses hard to manage. A layout that keeps them apart is sketched below.
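For illustration, a hypothetical scheme on a host with two NICs, where em2 and the 172.16.10.0/24 range are example values, not the ones used in this deployment: the physical servers' service NICs stay on 192.168.0.0/24, while the OpenStack internal network gets its own /24 on the second NIC:
# cat /etc/sysconfig/network-scripts/ifcfg-em2
DEVICE=em2
BOOTPROTO=static
IPADDR=172.16.10.11
NETMASK=255.255.255.0
ONBOOT=yes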
7.2 Problems Encountered During Deployment
7.2.1 Anomalies When Adding a Node
Case 1:
After a new node was added, the nova-compute service on that node showed a status of down.
Error message:
The controller node's log /var/log/nova/compute.log shows:
AMQP server on <server>:5672 is unreachable: Socket closed
Cause:
The network bridge had not been configured on the controller node before the node was added.
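A few checks that help confirm this diagnosis, assuming the controller is 192.168.0.101 as in the other examples here (adjust to your topology). From the new node, verify the AMQP broker is reachable at all:
# telnet 192.168.0.101 5672
On the controller, verify the bridge exists and has the physical NIC attached:
# ovs-vsctl show
With admin credentials loaded, list which services are down:
# nova service-list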
Case 2:
An error occurred while a node was being added.
Error message:
192.168.0.101_keystone.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.0.101_keystone.pp
Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]: Could not evaluate: Execution of '/usr/bin/keystone --os-auth-url http://127.0.0.1:35357/v2.0/ token-get' returned 1: The request you have made requires authentication. (HTTP 401)
You will find full trace in log /var/tmp/packstack/20150209-133712-oZcG_v/manifests/192.168.0.101_keystone.pp.log
Please check log file /var/tmp/packstack/20150209-133712-oZcG_v/openstack-setup.log for more information
Cause:
The admin password was changed in the dashboard but never updated in the answer file.
Solution:
# vi packstack-answers-20150130-201639.txt
CONFIG_KEYSTONE_ADMIN_PW=hndlyptl (set this parameter to the newly chosen password)
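After correcting the answer file, re-run packstack against it so the manifests are applied with the new password; steps that already succeeded are simply re-verified:
# packstack --answer-file=packstack-answers-20150130-201639.txt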
Case 3:
After a node had been added, operational needs forced the physical host behind it to be replaced, but the original node still appeared in the dashboard.
Solution (using the removal of node YUN-14 as an example):
On the expanded node:
# mysql
mysql> use nova;
mysql> select id,service_id,host_ip from compute_nodes;
+----+------------+---------------+
| id | service_id | host_ip       |
+----+------------+---------------+
|  1 |          5 | 192.168.0.101 |
|  2 |          7 | 192.168.0.102 |
|  3 |          8 | 192.168.0.103 |
|  4 |          9 | 192.168.0.104 |
+----+------------+---------------+
4 rows in set (0.00 sec)
mysql> delete from compute_nodes where id=4;
Query OK, 1 row affected (0.01 sec)
mysql> delete from services where host='YUN-14';
Query OK, 1 row affected (0.00 sec)
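Once the rows are deleted, it is worth confirming from the API side that YUN-14 is really gone; with admin credentials loaded, neither listing should mention it any more:
# nova service-list
# nova hypervisor-list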
7.3 Problems Encountered During Later Operations
8 Notes and Caveats
Note 1:
When creating a network, enable DHCP on the internal subnet.
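With the Icehouse-era neutron client, DHCP is enabled by default when a subnet is created and is only lost if --disable-dhcp is passed. A sketch, where the network and subnet names are examples:
# neutron subnet-create private 172.16.10.0/24 --name private-subnet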
Note 2:
Do not change a cloud instance's IP address: Neutron binds the address to the instance's port, so an address changed inside the guest will simply lose connectivity.
Note 3:
After creating instances, check resource usage to confirm that the new instances do not conflict with the resources actually available.
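The aggregate view is available from the command line as well as from the dashboard; the first command sums vCPU, memory, and disk usage across all hypervisors, the second shows a single host (YUN-12 is a placeholder hostname; substitute one of your compute hosts):
# nova hypervisor-stats
# nova hypervisor-show YUN-12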
Note 4:
Handle the firewall on every node with care; in particular, do not simply turn off the firewall on the controller or compute nodes.
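Instead of disabling iptables, open just the ports the services need; Neutron security groups are themselves implemented as iptables rules, so flushing or stopping the firewall breaks them. A minimal sketch for the AMQP port from case 1 above, with 192.168.0.0/24 as an example cluster subnet:
# iptables -I INPUT -p tcp --dport 5672 -s 192.168.0.0/24 -j ACCEPT
# service iptables save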
9 Resource Downloads
Repo file download links:
http://mirrors.163.com/.help/CentOS6-Base-163.repo
http://mirrors.zju.edu.cn/epel/6/x86_64/epel-release-6-8.noarch.rpm
https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
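A sketch of putting the three files above in place on CentOS 6 (the paths are taken directly from the URLs and may change over time):
# wget http://mirrors.163.com/.help/CentOS6-Base-163.repo -O /etc/yum.repos.d/CentOS6-Base-163.repo
# rpm -Uvh http://mirrors.zju.edu.cn/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -Uvh https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-4.noarch.rpm
# yum clean all && yum makecache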
10 Summary
11 References
RDO deployment references:
https://www.rdoproject.org/Neutron_with_existing_external_network
http://www.it165.net/os/html/201410/9532.html
Network tuning references:
https://ask.openstack.org/en/question/25306/slow-network-speed-between-vm-and-external/
http://www.chenshake.com/how-node-installation-centos-6-4-openstack-havana-ovsgre/
OpenStack Cloud Project Implementation, Part 3 (Problems Encountered and Notes)
Original: http://blog.51cto.com/xiaoxiaozhou/2113351