Problems Encountered During an ASM Online LUN Migration
After configuring the raw devices on both nodes, make sure that ll /dev/raw on each node shows the newly bound devices.
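For reference, the binding and permission setup on each node looks roughly like this (a sketch only: the underlying block devices /dev/sdf1 and /dev/sdg1, and the oracle:dba ownership, are assumptions, not taken from the original session):
[root@zxdb01 ~]# raw /dev/raw/raw4 /dev/sdf1                     # bind the new LUNs to raw devices
[root@zxdb01 ~]# raw /dev/raw/raw5 /dev/sdg1
[root@zxdb01 ~]# chown oracle:dba /dev/raw/raw4 /dev/raw/raw5    # ASM must be able to open them
[root@zxdb01 ~]# chmod 660 /dev/raw/raw4 /dev/raw/raw5
[root@zxdb01 ~]# ll /dev/raw                                     # repeat this check on BOTH nodes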
Then, on node 1, I ran a single statement to add the new LUNs and drop the old raw device; after running for a moment it threw an error:
SQL> Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5' drop disk '/dev/raw/raw1' rebalance power 4 nowait;
Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5' drop disk '/dev/raw/raw1' rebalance power 4 nowait
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DEV/RAW/RAW1" does not exist in diskgroup "ZXDG"
SQL>
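In hindsight, the ORA-15054 part was self-inflicted: DROP DISK takes the ASM disk name (here ZXDG_0000, as v$asm_disk shows later), not the device path, which is also why the path appears uppercased in the error message. The drop clause should have read:
SQL> Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5' drop disk ZXDG_0000 rebalance power 4 nowait;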
Querying v$asm_disk on both nodes showed that node 1 could already see the new LUNs, but node 2 could not.
SQL> select disk_number from v$asm_disk;
DISK_NUMBER
-----------
0
1
0
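disk_number alone says little; a more informative check on each node would be something like this (a suggested query, not part of the original session):
SQL> select group_number, disk_number, path, header_status, mount_status from v$asm_disk order by disk_number;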
So I tried the add again, this time on node 2:
SQL>
SQL>
SQL> Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5' ;
Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15031: disk specification '/dev/raw/raw5' matches no disks
ORA-15025: could not open disk '/dev/raw/raw5'
ORA-27041: unable to open file
Linux-x86_64 Error: 6: No such device or address
Additional information: 42
Additional information: 255
Additional information: -750856672
ORA-15031: disk specification '/dev/raw/raw4' matches no disks
ORA-15025: could not open disk '/dev/raw/raw4'
ORA-27041: unable to open file
Linux-x86_64 Error: 6: No such device or address
Additional information: 42
Additional information: 255
Additional information: -750856672
Errors like these usually point to an ownership or permission problem on the raw devices, but they can also mean node 2 has not correctly recognized them (the output of ll /dev/raw can be quite deceptive here).
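Both possibilities are quick to check on node 2 (a sketch; the root prompt is assumed):
[root@zxdb02 ~]# ls -l /dev/raw/raw4 /dev/raw/raw5    # expect owner oracle:dba and mode 660
[root@zxdb02 ~]# raw -qa                              # list the current raw-to-block-device bindings
[root@zxdb02 ~]# partprobe                            # have the kernel re-read the partition tables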
The ownership and permissions checked out fine, so the remaining suspect was node 2 failing to recognize the LUNs. After running partprobe on node 2, I tried the add again:
SQL>
SQL>
SQL> Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5'
  2  ;
Alter diskgroup ZXDG add disk '/dev/raw/raw4','/dev/raw/raw5'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15033: disk '/dev/raw/raw4' belongs to diskgroup "ZXDG"
ORA-15033: disk '/dev/raw/raw5' belongs to diskgroup "ZXDG"
This time the statement failed with ORA-15032/ORA-15033, claiming that raw4 and raw5 already belong to the diskgroup.
But querying v$asm_disk again showed raw4 and raw5 in an abnormal state: group_number 0, mount_status closed, and no name assigned:
group_number  disk_number  mount_status  header_status  mode_status  state     path           name
1             0            cached        member         online       dropping  /dev/raw/raw1  ZXDG_0000
0             1            closed        member         online       normal    /dev/raw/raw4
0             2            closed        member         online       normal    /dev/raw/raw5
Meanwhile raw1 was stuck in dropping state: because raw4/raw5 had never been properly added to the diskgroup, ASM could not rebalance the data off raw1, and a disk cannot leave the diskgroup until its data has been rebalanced away, so it stayed in dropping.
To be on the safe side, I cancelled the pending drop first (UNDROP DISKS cancels drop operations that have not yet completed):
SQL> Alter diskgroup ZXDG undrop disks ;
Diskgroup altered.
Because ASM had never assigned names to raw4/raw5 during the failed add, they could not be dropped the normal way either (DROP DISK expects the ASM disk name, not the device path):
SQL> Alter diskgroup ZXDG drop disk '/dev/raw/raw4';
Alter diskgroup ZXDG drop disk '/dev/raw/raw4'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15054: disk "/DEV/RAW/RAW4" does not exist in diskgroup "ZXDG"
Reading the metadata of the three raw devices with kfed showed that, fortunately, nothing in the metadata was inconsistent:
[oracle@zxdb01 ~]$ kfed read /dev/raw/raw1 > /tmp/raw1
[oracle@zxdb01 ~]$ kfed read /dev/raw/raw4 > /tmp/raw4
[oracle@zxdb01 ~]$ kfed read /dev/raw/raw5 > /tmp/raw5
[oracle@zxdb01 ~]$ diff /tmp/raw4 /tmp/raw5
6c6
< kfbh.block.obj: 2147483649 ; 0x008: TYPE=0x8 NUMB=0x1
---
> kfbh.block.obj: 2147483650 ; 0x008: TYPE=0x8 NUMB=0x2
20c20
< kfdhdb.dsknum: 1 ; 0x024: 0x0001
---
> kfdhdb.dsknum: 2 ; 0x024: 0x0002
23c23
< kfdhdb.dskname: ZXDG_0001 ; 0x028: length=10
---
> kfdhdb.dskname: ZXDG_0002 ; 0x028: length=10
25c25
< kfdhdb.fgname: ZXDG_0001 ; 0x068: length=10
---
> kfdhdb.fgname: ZXDG_0002 ; 0x068: length=10
[oracle@zxdb01 ~]$ diff /tmp/raw1 /tmp/raw4
6,7c6,7
< kfbh.block.obj: 2147483648 ; 0x008: TYPE=0x8 NUMB=0x0
< kfbh.check: 203544188 ; 0x00c: 0x0c21d67c
---
> kfbh.block.obj: 2147483649 ; 0x008: TYPE=0x8 NUMB=0x1
> kfbh.check: 3389207210 ; 0x00c: 0xca0332aa
20c20
< kfdhdb.dsknum: 0 ; 0x024: 0x0000
---
> kfdhdb.dsknum: 1 ; 0x024: 0x0001
23c23
< kfdhdb.dskname: ZXDG_0000 ; 0x028: length=10
---
> kfdhdb.dskname: ZXDG_0001 ; 0x028: length=10
25c25
< kfdhdb.fgname: ZXDG_0000 ; 0x068: length=10
---
> kfdhdb.fgname: ZXDG_0001 ; 0x068: length=10
27,30c27,30
< kfdhdb.crestmp.hi: 32971218 ; 0x0a8: HOUR=0x12 DAYS=0xe MNTH=0x6 YEAR=0x7dc
< kfdhdb.crestmp.lo: 449324032 ; 0x0ac: USEC=0x0 MSEC=0x209 SECS=0x2c MINS=0x6
< kfdhdb.mntstmp.hi: 32993540 ; 0x0b0: HOUR=0x4 DAYS=0x8 MNTH=0xc YEAR=0x7dd
< kfdhdb.mntstmp.lo: 3706305536 ; 0x0b4: USEC=0x0 MSEC=0x26f SECS=0xe MINS=0x37
---
> kfdhdb.crestmp.hi: 33000305 ; 0x0a8: HOUR=0x11 DAYS=0x1b MNTH=0x2 YEAR=0x7de
> kfdhdb.crestmp.lo: 2735401984 ; 0x0ac: USEC=0x0 MSEC=0x2bb SECS=0x30 MINS=0x28
> kfdhdb.mntstmp.hi: 33000305 ; 0x0b0: HOUR=0x11 DAYS=0x1b MNTH=0x2 YEAR=0x7de
> kfdhdb.mntstmp.lo: 2735433728 ; 0x0b4: USEC=0x0 MSEC=0x2da SECS=0x30 MINS=0x28
35,36c35,36
< kfdhdb.dsksize: 204797 ; 0x0c4: 0x00031ffd
< kfdhdb.pmcnt: 3 ; 0x0c8: 0x00000003
---
> kfdhdb.dsksize: 102398 ; 0x0c4: 0x00018ffe
> kfdhdb.pmcnt: 2 ; 0x0c8: 0x00000002
39c39
< kfdhdb.f1b1locn: 2 ; 0x0d4: 0x00000002
---
> kfdhdb.f1b1locn: 0 ; 0x0d4: 0x00000000
[oracle@zxdb01 ~]$
[oracle@zxdb01 ~]$ diff /tmp/raw1 /tmp/raw5
6,7c6,7
< kfbh.block.obj: 2147483648 ; 0x008: TYPE=0x8 NUMB=0x0
< kfbh.check: 203544188 ; 0x00c: 0x0c21d67c
---
> kfbh.block.obj: 2147483650 ; 0x008: TYPE=0x8 NUMB=0x2
> kfbh.check: 3389207210 ; 0x00c: 0xca0332aa
20c20
< kfdhdb.dsknum: 0 ; 0x024: 0x0000
---
> kfdhdb.dsknum: 2 ; 0x024: 0x0002
23c23
< kfdhdb.dskname: ZXDG_0000 ; 0x028: length=10
---
> kfdhdb.dskname: ZXDG_0002 ; 0x028: length=10
25c25
< kfdhdb.fgname: ZXDG_0000 ; 0x068: length=10
---
> kfdhdb.fgname: ZXDG_0002 ; 0x068: length=10
27,30c27,30
< kfdhdb.crestmp.hi: 32971218 ; 0x0a8: HOUR=0x12 DAYS=0xe MNTH=0x6 YEAR=0x7dc
< kfdhdb.crestmp.lo: 449324032 ; 0x0ac: USEC=0x0 MSEC=0x209 SECS=0x2c MINS=0x6
< kfdhdb.mntstmp.hi: 32993540 ; 0x0b0: HOUR=0x4 DAYS=0x8 MNTH=0xc YEAR=0x7dd
< kfdhdb.mntstmp.lo: 3706305536 ; 0x0b4: USEC=0x0 MSEC=0x26f SECS=0xe MINS=0x37
---
> kfdhdb.crestmp.hi: 33000305 ; 0x0a8: HOUR=0x11 DAYS=0x1b MNTH=0x2 YEAR=0x7de
> kfdhdb.crestmp.lo: 2735401984 ; 0x0ac: USEC=0x0 MSEC=0x2bb SECS=0x30 MINS=0x28
> kfdhdb.mntstmp.hi: 33000305 ; 0x0b0: HOUR=0x11 DAYS=0x1b MNTH=0x2 YEAR=0x7de
> kfdhdb.mntstmp.lo: 2735433728 ; 0x0b4: USEC=0x0 MSEC=0x2da SECS=0x30 MINS=0x28
35,36c35,36
< kfdhdb.dsksize: 204797 ; 0x0c4: 0x00031ffd
< kfdhdb.pmcnt: 3 ; 0x0c8: 0x00000003
---
> kfdhdb.dsksize: 102398 ; 0x0c4: 0x00018ffe
> kfdhdb.pmcnt: 2 ; 0x0c8: 0x00000002
39c39
< kfdhdb.f1b1locn: 2 ; 0x0d4: 0x00000002
---
> kfdhdb.f1b1locn: 0 ; 0x0d4: 0x00000000
[oracle@zxdb01 ~]$
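When eyeballing kfed output, the fields worth spot-checking are the diskgroup name, the header status, and the disk name; a quick filter (a suggested check, not from the original session):
[oracle@zxdb01 ~]$ kfed read /dev/raw/raw4 | grep -E 'grpname|hdrsts|dskname'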
In this situation it is entirely safe to force the disks into the diskgroup with ADD DISK ... FORCE (their headers already read MEMBER, so a plain ADD DISK refuses them; FORCE tells ASM to reuse such disks anyway):
SQL> Alter diskgroup ZXDG add disk '/dev/raw/raw4' name ZXDG_01 force ;
Diskgroup altered.
SQL> Alter diskgroup ZXDG add disk '/dev/raw/raw5' name ZXDG_02 force ;
Diskgroup altered.
Checking the status again showed that both disks had been added successfully:
name       group_number  disk_number  mount_status  header_status  mode_status  state   path
ZXDG_0000  1             0            CACHED        MEMBER         ONLINE       NORMAL  /dev/raw/raw1
ZXDG_02    1             2            CACHED        MEMBER         ONLINE       NORMAL  /dev/raw/raw5
ZXDG_01    1             1            CACHED        MEMBER         ONLINE       NORMAL  /dev/raw/raw4
Now raw1 could safely be dropped, with the rebalance power set to 10:
SQL>
SQL> Alter diskgroup ZXDG drop disk ZXDG_0000 rebalance power 10 nowait;
Diskgroup altered.
SQL>
At this point the rebalance progress can be tracked through v$asm_disk:
name       free_mb
ZXDG_0000  155446
ZXDG_02    55532
ZXDG_01    54982
It went quite fast.
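For a more direct view of the rebalance, v$asm_operation reports the power actually in use and an estimated time to completion (the query itself is a suggestion, not from the original session):
SQL> select operation, state, power, sofar, est_work, est_minutes from v$asm_operation;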
Fortunately, the metadata survived this incident intact. Had the metadata been corrupted, it would have needed to be repaired, which is a very tedious process.
Of course, given plenty of downtime, the diskgroup could also simply be rebuilt to get out of this situation.
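Such a rebuild would look roughly like this (a sketch only: it assumes EXTERNAL redundancy and that every file in the group has first been backed up or relocated, e.g. with RMAN):
SQL> drop diskgroup ZXDG including contents;
SQL> create diskgroup ZXDG external redundancy disk '/dev/raw/raw4','/dev/raw/raw5';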
Those used to running WINDOWS servers will probably feel a strong urge to just restart ASM when they hit a situation like this. But if the metadata has been corrupted, do not attempt to restart ASM (and above all never restart the ASM instances on all nodes at the same time):
there is a real chance the diskgroup will fail to mount afterwards, which only adds unnecessary trouble to solving the problem.
Source: http://blog.csdn.net/bjtu08301097/article/details/23886009