12C Oracle ASM Filter Driver

Oracle ASM Filter Driver (Oracle ASMFD) simplifies the configuration and management of disk devices by eliminating the need to rebind the disk devices used by Oracle ASM each time the system is restarted. Oracle ASMFD is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks: the driver rejects any invalid I/O request, which prevents accidental overwrites of Oracle ASM disks from corrupting the disks and files in a disk group. For example, the filter driver filters out all non-Oracle I/O operations that could accidentally overwrite a disk. As of Oracle 12.2, Oracle ASMFD cannot be installed on a system where Oracle ASMLIB is installed; if you want to install and configure Oracle ASMFD, you must deinstall Oracle ASMLIB first. In 12.2, ASMFD does not support extended partition tables.
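Because 12.2 will not configure ASMFD while ASMLIB is present, a quick pre-check is worth running first. A minimal sketch, assuming a Linux host where an installed ASMLIB loads the oracleasm kernel module and ships an oracleasm utility:

```shell
#!/bin/sh
# Pre-check before configuring ASMFD: Oracle ASMLIB must not be installed (12.2).
if lsmod 2>/dev/null | grep -q '^oracleasm' || command -v oracleasm >/dev/null 2>&1; then
    ASMLIB_PRESENT=yes
    echo "ASMLIB detected - deinstall it before installing/configuring Oracle ASMFD"
else
    ASMLIB_PRESENT=no
    echo "ASMLIB not detected - OK to configure Oracle ASMFD"
fi
```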

Configuring Oracle ASM Filter Driver
Oracle ASM Filter Driver (Oracle ASMFD) can be configured persistently for disk devices either during or after the installation of Oracle Grid Infrastructure.

Configuring Oracle ASM Filter Driver During Oracle Grid Infrastructure Installation
During the installation of Oracle Grid Infrastructure, you can choose to enable the automatic installation and configuration of Oracle ASM Filter Driver. If udev is not used on the system where Oracle Grid Infrastructure is being installed, you can prepare the disks for Oracle ASMFD by performing the steps below. These steps must be run after the Oracle Grid Infrastructure software has been extracted into the Grid home, but before ASMFD is configured.

1. To configure shared disks for use with Oracle ASM Filter Driver, as the root user set the environment variable $ORACLE_HOME to the Grid home and $ORACLE_BASE to a temporary directory:

# export ORACLE_HOME=/u01/app/oracle/12.2.0/grid
# export ORACLE_BASE=/tmp

Setting ORACLE_BASE to a temporary directory avoids creating diagnostic or trace files in the Grid home before Oracle Grid Infrastructure is installed. Make sure the following commands are run from the $ORACLE_HOME/bin directory.

2. Use the ASMCMD afd_label command to prepare the disks for Oracle ASM Filter Driver:

# asmcmd afd_label DATA1 /dev/disk1a --init

3. Use the ASMCMD afd_lslbl command to verify that the disks have been labeled for use with Oracle ASMFD:

# asmcmd afd_lslbl /dev/disk1a

Check a single disk:

[root@cs1 ~]# ./asmcmd afd_lslbl /dev/asmdisk01
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
CRS2                                  /dev/asmdisk01

List all disks that have been labeled for use with Oracle ASMFD:

[grid@jytest1 ~]$ asmcmd afd_lslbl 
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
CRS1                                  /dev/asmdisk02
CRS2                                  /dev/asmdisk01
DATA1                                 /dev/asmdisk03
DATA2                                 /dev/asmdisk04
FRA1                                  /dev/asmdisk07
TEST1                                 /dev/asmdisk05
TEST2                                 /dev/asmdisk06

4. After the disks have been prepared for Oracle ASMFD, unset the ORACLE_BASE variable:

# unset ORACLE_BASE

5. Run the installer script (gridSetup.sh) to install Oracle Grid Infrastructure and enable the Oracle ASM Filter Driver configuration.
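A sketch of this step, assuming the software was extracted into the Grid home used above (gridSetup.sh is interactive; the ASMFD option is selected on the installer's storage screen):

```shell
#!/bin/sh
# Launch the Grid Infrastructure installer from the Grid home (path is an example).
GRID_HOME=${GRID_HOME:-/u01/app/oracle/12.2.0/grid}
if [ -x "$GRID_HOME/gridSetup.sh" ]; then
    "$GRID_HOME/gridSetup.sh"
else
    echo "gridSetup.sh not found under $GRID_HOME - adjust GRID_HOME first"
fi
```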

Configuring Oracle ASM Filter Driver After Oracle Grid Infrastructure Installation
If Oracle ASMFD configuration was not enabled during the Grid Infrastructure installation, you can manually configure the Oracle ASM devices to use Oracle ASMFD.

To configure Oracle ASMFD for an Oracle Grid Infrastructure Clusterware environment, proceed as follows:
1. As the Oracle Grid Infrastructure user, update the Oracle ASM disk discovery string so that Oracle ASMFD can discover the disks.
First check the current discovery string, then update it:

[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm* 
profile:/dev/sd*,/dev/asm* 

Add 'AFD:*' to the disk discovery string:

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'
[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

2. As the Oracle Grid Infrastructure user, retrieve the list of nodes and node roles in the cluster:

[grid@cs1 ~]$ olsnodes -a
cs1     Hub
cs2     Hub

3. On each Hub and Leaf node, perform the following steps, either in rolling or in non-rolling mode.
3.1 As the root user, stop Oracle Grid Infrastructure:

# $ORACLE_HOME/bin/crsctl stop crs

If the command returns an error, force-stop Oracle Grid Infrastructure:

# $ORACLE_HOME/bin/crsctl stop crs -f

3.2 As the root user, configure Oracle ASMFD on the node:

# $ORACLE_HOME/bin/asmcmd afd_configure

3.3 As the Oracle Grid Infrastructure user, verify the state of Oracle ASMFD:

[grid@cs2 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs2.jy.net'

3.4 As the root user, start the Oracle Clusterware stack:

# $ORACLE_HOME/bin/crsctl start crs

3.5 As the Oracle Grid Infrastructure user, set the Oracle ASM disk discovery string back to the original value retrieved in step 1:

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*'
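Steps 3.1 through 3.5 lend themselves to a small per-node helper. The sketch below assumes it is run as root on one node at a time (rolling mode) and that the Grid home path matches your environment:

```shell
#!/bin/sh
# Enable ASMFD on one cluster node, mirroring steps 3.1-3.4 above (run as root).
GRID_HOME=${GRID_HOME:-/u01/app/product/12.2.0/crs}   # assumed Grid home

configure_asmfd_on_node() {
    "$GRID_HOME/bin/crsctl" stop crs ||
        "$GRID_HOME/bin/crsctl" stop crs -f    # 3.1: stop GI, force if needed
    "$GRID_HOME/bin/asmcmd" afd_configure      # 3.2: configure ASMFD on the node
    "$GRID_HOME/bin/asmcmd" afd_state          # 3.3: expect LOADED / ENABLED
    "$GRID_HOME/bin/crsctl" start crs          # 3.4: restart the Clusterware stack
}
```

After the helper has run on every node, reset the ASM discovery string as shown in step 3.5.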

Migrating a Disk Group That Does Not Contain OCR or Voting Files to Oracle ASMFD
1. Perform the following steps as the Oracle Grid Infrastructure user.

2. List the existing disk groups:

[grid@cs2 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960     1544                0            1544              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      860                0             860              0             N  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    40704                0           20352              0             N  DN/

3. List the associated disks:

[grid@cs2 ~]$ asmcmd lsdsk -G DN
Path
/dev/asmdisk03
/dev/asmdisk05

The query below shows that the LABEL column is empty for /dev/asmdisk03 and /dev/asmdisk05:

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS2                                               /dev/asmdisk01
           0           1                                CRS1                                               /dev/asmdisk02
           0           2                                DATA1                                              /dev/asmdisk04
           3           0 DN_0000                                                                           /dev/asmdisk03
           3           1 DN_0001                                                                           /dev/asmdisk05
           1           0 CRS1                           CRS1                                               AFD:CRS1
           2           0 DATA1                          DATA1                                              AFD:DATA1
           1           1 CRS2                           CRS2                                               AFD:CRS2

4. Check whether Oracle ASM is active:

[grid@cs2 ~]$ srvctl status asm
ASM is running on cs1,cs2

5. Stop the databases and dismount the disk group on all nodes:

[grid@cs2 ~]$ srvctl stop diskgroup -diskgroup DN -f
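This step stops the databases as well as the disk group, but the transcript only shows the disk group. A sketch of the full sequence; the database name orcl is a placeholder (list the real names with srvctl config database):

```shell
#!/bin/sh
# Stop a database using the disk group, then dismount the group on all nodes.
stop_db_and_diskgroup() {
    db=$1; dg=$2
    srvctl stop database -db "$db"               # stops all instances cluster-wide
    srvctl stop diskgroup -diskgroup "$dg" -f    # dismounts the group on all nodes
}
# Example with placeholder names: stop_db_and_diskgroup orcl DN
```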

6. On each Hub node, run the following commands to label all existing disks in the disk group:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03 --migrate
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05 --migrate

7. Scan the disks on all Hub nodes:

[grid@cs1 ~]$ asmcmd afd_scan
[grid@cs2 ~]$ asmcmd afd_scan

8. Start the databases and mount the disk group on all nodes:

[grid@cs2 ~]$ srvctl start diskgroup -diskgroup DN

The query below now shows DN1 and DN2 in the LABEL column for /dev/asmdisk03 and /dev/asmdisk05:

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS2                                               /dev/asmdisk01
           0           1                                DN2                                                /dev/asmdisk05
           0           2                                DN1                                                /dev/asmdisk03
           0           3                                CRS1                                               /dev/asmdisk02
           0           4                                DATA1                                              /dev/asmdisk04
           1           1 CRS2                           CRS2                                               AFD:CRS2
           2           0 DATA1                          DATA1                                              AFD:DATA1
           1           0 CRS1                           CRS1                                               AFD:CRS1
           3           0 DN_0000                        DN1                                                AFD:DN1
           3           1 DN_0001                        DN2                                                AFD:DN2

Now the original udev rules file can be removed. This must be done on every node. The next time the servers are rebooted, AFD takes over completely.

[root@cs1 bin]# cd /etc/udev/rules.d/
[root@cs1 rules.d]# ls -lrt
total 16
-rw-r--r--. 1 root root  709 Mar  6  2015 70-persistent-ipoib.rules
-rw-r--r--  1 root root 1416 Mar  9 12:23 99-my-asmdevices.rules
-rw-r--r--  1 root root  224 Mar  9 15:52 53-afd.rules
-rw-r--r--  1 root root  190 Mar  9 15:54 55-usm.rules

[root@cs1 rules.d]# mv 99-my-asmdevices.rules 99-my-asmdevices.rules.bak

[root@cs1 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"


[root@cs1 rules.d]# ls -l /dev/oracleafd/disks
total 20
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS1
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS2
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 DATA1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN2

[root@cs2 bin]# cd /etc/udev/rules.d/
[root@cs2 rules.d]# ls -lrt
total 16
-rw-r--r--. 1 root root  709 Mar  6  2015 70-persistent-ipoib.rules
-rw-r--r--  1 root root 1416 Mar  9 12:23 99-my-asmdevices.rules
-rw-r--r--  1 root root  224 Mar  9 15:52 53-afd.rules
-rw-r--r--  1 root root  190 Mar  9 15:54 55-usm.rules

[root@cs2 rules.d]# mv 99-my-asmdevices.rules 99-my-asmdevices.rules.bak

[root@cs2 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"


[root@cs2 rules.d]# ls -l /dev/oracleafd/disks
total 20
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS1
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 CRS2
-rwxrwx--- 1 grid oinstall 15 Aug 30 14:30 DATA1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN1
-rwxrwx--- 1 grid oinstall 15 Aug 30 17:42 DN2

In fact, AFD itself also relies on udev.

Migrating a Disk Group That Contains OCR or Voting Files to Oracle ASMFD
1. As the root user, list the disk groups that contain the OCR and voting files:

[root@cs1 ~]# cd /u01/app/product/12.2.0/crs/bin
[root@cs1 bin]# sh ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :       +CRS
         Device/File Name         :      +DATA
[root@cs1 bin]# sh crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   750a78e1ae984fcdbfb4dbf44d337a77 (/dev/asmdisk02) [CRS]
Located 1 voting disk(s).

2. As the Oracle Grid Infrastructure user, list the disks associated with the disk group:

[grid@cs2 ~]$ asmcmd lsdsk -G CRS
Path
/dev/asmdisk01
/dev/asmdisk02

3. As the root user, stop the databases and Oracle Clusterware on all nodes:

[root@cs1 bin]# ./crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'cs2'
CRS-2673: Attempting to stop 'ora.crsd' on 'cs1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs1'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs1'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'cs2'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs2'
CRS-2673: Attempting to stop 'ora.gns' on 'cs2'
CRS-2677: Stop of 'ora.gns' on 'cs2' succeeded
CRS-2677: Stop of 'ora.cs.db' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.cvu' on 'cs2'
CRS-2673: Attempting to stop 'ora.gns.vip' on 'cs2'
CRS-2677: Stop of 'ora.cs.db' on 'cs1' succeeded
CRS-2677: Stop of 'ora.CRS.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.DATA.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'cs2'
CRS-2677: Stop of 'ora.gns.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.cs2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.chad' on 'cs1'
CRS-2677: Stop of 'ora.cvu' on 'cs2' succeeded
CRS-2677: Stop of 'ora.chad' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs2'
CRS-2677: Stop of 'ora.ons' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs2'
CRS-2677: Stop of 'ora.net1.network' on 'cs2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs2' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs2'
CRS-2673: Attempting to stop 'ora.storage' on 'cs2'
CRS-2677: Stop of 'ora.storage' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.ctssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.chad' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'cs1'
CRS-2677: Stop of 'ora.mgmtdb' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'cs1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs1'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs1.vip' on 'cs1'
CRS-2677: Stop of 'ora.cs1.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs2'
CRS-2677: Stop of 'ora.cssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs1'
CRS-2677: Stop of 'ora.ons' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs1'
CRS-2677: Stop of 'ora.net1.network' on 'cs1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs1' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs1'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs1'
CRS-2673: Attempting to stop 'ora.storage' on 'cs1'
CRS-2677: Stop of 'ora.storage' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.ctssd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs1'
CRS-2677: Stop of 'ora.cssd' on 'cs1' succeeded

4. As the Oracle Grid Infrastructure user, run the following commands on each Hub node to label the disks in the disk group:

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05

5. As the Oracle Grid Infrastructure user, rescan the disks on each Hub node:

[grid@cs1 ~]$ asmcmd afd_scan
[grid@cs2 ~]$ asmcmd afd_scan

6. As the root user, start the Oracle Clusterware stack on all nodes; this mounts the disk groups containing the OCR and voting files and starts the databases:

[root@cs1 bin]# ./crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs2'
CRS-2672: Attempting to start 'ora.evmd' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs1'
CRS-2676: Start of 'ora.diskmon' on 'cs1' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs2'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs2'
CRS-2676: Start of 'ora.diskmon' on 'cs2' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cssd' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2676: Start of 'ora.cssd' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2676: Start of 'ora.ctssd' on 'cs1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs1'
CRS-2676: Start of 'ora.storage' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs1'
CRS-2676: Start of 'ora.crsd' on 'cs1' succeeded
CRS-2676: Start of 'ora.storage' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs2'
CRS-2676: Start of 'ora.crsd' on 'cs2' succeeded

Determining Whether Oracle ASM Filter Driver Has Been Configured
You can determine whether Oracle ASMFD is configured by checking the AFD_STATE attribute of the SYS_ASMFD_PROPERTIES context in the Oracle ASM instance. You can also use the ASMCMD afd_state command to check the state of Oracle ASMFD:

[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'cs1.jy.net'

[grid@cs2 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs2.jy.net'

In the query below, a value of NOT AVAILABLE for AFD_STATE would mean that Oracle ASMFD is not configured; here it returns CONFIGURED:

SQL> select sys_context('SYS_ASMFD_PROPERTIES', 'AFD_STATE') from dual;

SYS_CONTEXT('SYS_ASMFD_PROPERTIES','AFD_STATE')
--------------------------------------------------------------------------- 
CONFIGURED

Setting the Oracle ASM Filter Driver AFD_DISKSTRING Parameter
The AFD_DISKSTRING parameter specifies the Oracle ASMFD disk discovery string, which identifies the disks managed by Oracle ASMFD. You can use the ASMCMD afd_dsset and afd_dsget commands to set and display AFD_DISKSTRING:

[grid@cs1 ~]$ asmcmd afd_dsset '/dev/sd*','/dev/asm*','AFD:*'

[grid@cs2 ~]$ asmcmd afd_dsset '/dev/sd*','/dev/asm*','AFD:*'

[grid@cs1 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

[grid@cs2 ~]$ asmcmd dsget
parameter:/dev/sd*, /dev/asm*, AFD:*
profile:/dev/sd*,/dev/asm*,AFD:*

You can also set AFD_DISKSTRING with an ALTER SYSTEM statement. The labels that have been written to the disk headers allow the Oracle ASMFD disk discovery string to identify the disks:

SQL> ALTER SYSTEM AFD_DISKSTRING SET '/dev/sd*','/dev/asm*','AFD:*';
System altered.

SQL> SELECT SYS_CONTEXT('SYS_ASMFD_PROPERTIES', 'AFD_DISKSTRING') FROM DUAL;

SYS_CONTEXT('SYS_ASMFD_PROPERTIES','AFD_DISKSTRING')
-----------------------------------------------------------------------------------
/dev/sd*,/dev/asm*,AFD:*

Setting the Oracle ASM ASM_DISKSTRING Parameter for Oracle ASM Filter Driver Disks
You can update the Oracle ASM disk discovery string to add or remove Oracle ASMFD label names in the ASM_DISKSTRING parameter, using either an ALTER SYSTEM statement or the asmcmd dsset command:

SQL> show parameter asm_diskstring

NAME                                 TYPE                   VALUE
------------------------------------ ---------------------- ------------------------------
asm_diskstring                       string                 /dev/sd*, /dev/asm*, AFD:*

[grid@cs1 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'
[grid@cs2 ~]$ asmcmd dsset '/dev/sd*','/dev/asm*','AFD:*'

Testing the Filter Functionality
First check whether filtering is enabled:

[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'cs1.jy.net'

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

The output above shows that filtering is enabled. To disable it, run asmcmd afd_filter -d:

[grid@cs1 ~]$ asmcmd afd_filter -d
[grid@cs1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'cs1.jy.net'
[grid@cs1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.

To re-enable filtering, run asmcmd afd_filter -e; afd_lsdsk then shows the disks with filtering ENABLED again:

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

First, use kfed to read the disk header of AFD:TEST1 in the TEST disk group to verify that it is intact:

[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3275580027 ; 0x00c: 0xc33d627b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Now try to zero the disk header directly with dd. The dd command itself returns no error:

[root@cs1 ~]# dd if=/dev/zero of=/dev/asmdisk03 bs=1024 count=10000
10000+0 records in
10000+0 records out
10240000 bytes (10 MB) copied, 1.24936 s, 8.2 MB/s

Back up the first 1024 bytes of the disk before zeroing them (this must be done as root; ordinary users cannot read the device):

[root@jytest1 ~]# dd if=/dev/asmdisk05 of=asmdisk05_header bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000282638 s, 3.6 MB/s
[root@jytest1 ~]# ls -lrt
 
-rw-r--r--  1 root root 1024 Aug 31 01:22 asmdisk05_header


[root@jytest1 ~]# dd if=/dev/zero of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000318516 s, 3.2 MB/s

Read the disk header of AFD:TEST1 with kfed again; it is still intact, because AFD filtered out the writes:

[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  3275580027 ; 0x00c: 0xc33d627b
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Test that the TEST disk group can be dismounted and then mounted again; both operations succeed:

[grid@jytest1 ~]$ asmcmd umount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
[grid@jytest1 ~]$ asmcmd mount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    11128                0            5564              0             N  TEST/

Now disable filtering for the disk /dev/asmdisk05:

[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                       ENABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06
[grid@jytest1 ~]$ asmcmd afd_filter -d /dev/asmdisk05
[grid@jytest1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk03
DATA2                       ENABLED   /dev/asmdisk04
FRA1                        ENABLED   /dev/asmdisk07
TEST1                      DISABLED   /dev/asmdisk05
TEST2                       ENABLED   /dev/asmdisk06

Zero the first 1024 bytes of the disk:
[root@jytest1 ~]# dd if=/dev/zero of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000318516 s, 3.2 MB/s

[grid@jytest1 ~]$ asmcmd umount TEST
[grid@jytest1 ~]$ asmcmd mount TEST
ORA-15032: not all alterations performed
ORA-15017: diskgroup "TEST" cannot be mounted
ORA-15040: diskgroup is incomplete (DBD ERROR: OCIStmtExecute)


[grid@jytest1 ~]$ kfed read AFD:TEST1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
000000000 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

As you can see, once filtering is disabled the protection is lost: the write went through and corrupted the disk header, and the disk group can no longer be mounted.

Use the 1024-byte header backup taken earlier to restore the disk header:

[root@jytest1 ~]# dd if=asmdisk05_header of=/dev/asmdisk05 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000274822 s, 3.7 MB/s

[grid@jytest1 ~]$ kfed  read /dev/asmdisk05
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1645917758 ; 0x00c: 0x621ab63e
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:    ORCLDISKTEST1 ; 0x000: length=13
kfdhdb.driver.reserved[0]:   1414743380 ; 0x008: 0x54534554
kfdhdb.driver.reserved[1]:           49 ; 0x00c: 0x00000031
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                   TEST1 ; 0x028: length=5
kfdhdb.grpname:                    TEST ; 0x048: length=4
kfdhdb.fgname:                    TEST1 ; 0x068: length=5

Mount the TEST disk group again:

[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
[grid@jytest1 ~]$ asmcmd mount TEST
[grid@jytest1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     40960      264                0             264              0             Y  CRS/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    24732                0           24732              0             N  DATA/
MOUNTED  EXTERN  N         512             512   4096  4194304     40960    18452                0           18452              0             N  FRA/
MOUNTED  NORMAL  N         512             512   4096  4194304     40960    11120                0            5560              0             N  TEST/

Setting, Clearing, and Scanning Oracle ASM Filter Driver Labels
Setting a label on a disk places that disk under Oracle ASMFD management. Use the ASMCMD commands afd_label, afd_unlabel, and afd_scan to add, clear, and scan labels. Listing the labeled disks below shows that /dev/asmdisk03 and /dev/asmdisk05 have not been labeled yet.

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                              PATH
------------ ----------- ------------------------------ -------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                               AFD:CRS1
           0           1                                                                                   /dev/asmdisk05
           0           2                                DATA1                                              AFD:DATA1
           0           3                                                                                   /dev/asmdisk03
           0           4                                CRS2                                               AFD:CRS2
           1           0 CRS1                           CRS1                                               /dev/asmdisk02
           1           1 CRS2                           CRS2                                               /dev/asmdisk01
           2           0 DATA1                          DATA1                                              /dev/asmdisk04

Set labels

[grid@cs2 ~]$ asmcmd afd_label DN1 /dev/asmdisk03
[grid@cs2 ~]$ asmcmd afd_label DN2 /dev/asmdisk05

Listing the labeled disks again shows that /dev/asmdisk03 and /dev/asmdisk05 are now labeled.

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04
DN1                         ENABLED   /dev/asmdisk03
DN2                         ENABLED   /dev/asmdisk05

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                                          PATH
------------ ----------- ------------------------------ -------------------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                                           AFD:CRS1
           0           1                                DN2                                                            /dev/asmdisk05
           0           2                                DN1                                                            AFD:DN1
           0           3                                DATA1                                                          AFD:DATA1
           0           4                                DN1                                                            /dev/asmdisk03
           0           6                                CRS2                                                           AFD:CRS2
           0           5                                DN2                                                            AFD:DN2
           1           1 CRS2                           CRS2                                                           /dev/asmdisk01
           1           0 CRS1                           CRS1                                                           /dev/asmdisk02
           2           0 DATA1                          DATA1                                                          /dev/asmdisk04

Clear labels

[grid@cs1 ~]$ asmcmd afd_unlabel 'DN1'
[grid@cs1 ~]$ asmcmd afd_unlabel 'DN2'

Note that a label cannot be cleared if the disk it marks is already used in a disk group, for example:

[grid@cs1 ~]$ asmcmd afd_unlabel 'TEST1'
disk AFD:TEST1 is already provisioned for ASM
No devices to be unlabeled.
ASMCMD-9514: ASM disk label clear operation failed.

Scan labels

[grid@cs1 ~]$ asmcmd afd_scan

Listing the labeled disks shows that the labels on /dev/asmdisk03 and /dev/asmdisk05 have been cleared.

[grid@cs1 ~]$ asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
CRS1                        ENABLED   /dev/asmdisk02
CRS2                        ENABLED   /dev/asmdisk01
DATA1                       ENABLED   /dev/asmdisk04

SQL> select group_number,disk_number,name,label,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           LABEL                                                          PATH
------------ ----------- ------------------------------ -------------------------------------------------------------- --------------------------------------------------
           0           0                                CRS1                                                           AFD:CRS1
           0           1                                                                                               /dev/asmdisk05
           0           2                                DATA1                                                          AFD:DATA1
           0           3                                                                                               /dev/asmdisk03
           0           4                                CRS2                                                           AFD:CRS2
           1           1 CRS2                           CRS2                                                           /dev/asmdisk01
           1           0 CRS1                           CRS1                                                           /dev/asmdisk02
           2           0 DATA1                          DATA1                                                          /dev/asmdisk04

Using the asmcmd cp Command to Perform Remote Copies in Oracle ASM

The syntax of the cp command is:

cp src_file [--target target_type] [--service service_name] [--port port_num] [connect_str:]tgt_file

--target target_type specifies the target type of the instance that the asmcmd copy operation must connect to. Valid options are ASM, IOS, and APX.
--service service_name specifies the Oracle ASM instance name when it is not the default +ASM.
--port port_num specifies the listener port; the default is 1521.

connect_str specifies the connection string used to connect to a remote instance. connect_str is not needed for a copy on the local instance. For a remote copy it must be specified, and you are prompted for a password. Its format is:
user@host.SID
user, host, and SID are all required. The default port is 1521 and can be changed with the --port option. The connection privilege (SYSASM or SYSDBA) is determined by the --privilege option used when asmcmd was started.

src_file is the name of the source file to be copied; it must be a fully qualified file name or an Oracle ASM alias. When asmcmd copies a file into a disk group, Oracle ASM creates an OMF file such as:
diskgroup/db_unique_name/file_type/file_name.#.#
where db_unique_name is set to ASM and # is a number. During the copy, the cp command creates the directory structure at the target and creates an alias to the OMF file it actually creates.

tgt_file is the name of the target file created by the copy operation, or an alias under an alias directory.

Note that the cp command cannot copy files between two remote instances; the local Oracle ASM instance must be either the source or the target of the copy.

The cp command supports three kinds of copy operations:
1. Copying a file from a disk group to the operating system
2. Copying a file from one disk group to another disk group
3. Copying a file from the operating system into a disk group

Note that some files cannot be copied this way, for example the OCR and the SPFILE. To back up, copy, or move an Oracle ASM SPFILE, use the spbackup, spcopy, or spmove commands. To copy an OCR backup file, the source must be a disk group.
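As an illustrative sketch of the SPFILE commands mentioned above (the registry file number and all paths here are hypothetical; check the ASMCMD reference for the exact spbackup/spcopy syntax in your release):

```text
ASMCMD> spbackup +DATA/asm/asmparameterfile/registry.253.974466047 +DATA/spfileBackASM.bak
ASMCMD> spcopy +DATA/asm/asmparameterfile/registry.253.974466047 /home/grid/spfileASM.ora
```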

If the files are stored in Oracle ASM disk groups, the copy can cross endianness (little-endian and big-endian); Oracle ASM converts the file format automatically. Copying between non-Oracle ASM files and Oracle ASM disk groups across platforms of different endianness is also possible; run the appropriate conversion command on the file after the copy completes.

First, list all files in the +data/cs/datafile directory.

ASMCMD [+data/cs/datafile] > ls -lt
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    jy01.dbf => +DATA/cs/DATAFILE/JY.331.976296525
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    USERS.275.970601909
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS2.284.970602381
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS1.274.970601905
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    TEST.326.976211663
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSTEM.272.970601831
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSAUX.273.970601881
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    JY.331.976296525
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    USERS.261.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    UNDOTBS1.260.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSTEM.258.970598233
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSAUX.259.970598293

Copy the file +data/cs/datafile/JY.331.976296525 from the disk group to the operating system.

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 /home/grid/JY.bak
copying +data/cs/datafile/JY.331.976296525 -> /home/grid/JY.bak

Copy an operating system file into the disk group.

ASMCMD [+] > cp /home/grid/JY.bak +data/cs/datafile/JY.bak
copying /home/grid/JY.bak -> +data/cs/datafile/JY.bak

ASMCMD [+] > ls -lt  +data/cs/datafile/
Type      Redund  Striped  Time             Sys  Name
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    jy01.dbf => +DATA/cs/DATAFILE/JY.331.976296525
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    USERS.275.970601909
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS2.284.970602381
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    UNDOTBS1.274.970601905
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    TEST.326.976211663
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSTEM.272.970601831
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    SYSAUX.273.970601881
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  N    JY.bak => +DATA/ASM/DATAFILE/JY.bak.453.984396007
DATAFILE  UNPROT  COARSE   AUG 17 11:00:00  Y    JY.331.976296525
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    USERS.261.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    UNDOTBS1.260.970598319
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSTEM.258.970598233
DATAFILE  UNPROT  COARSE   MAR 12 18:00:00  Y    SYSAUX.259.970598293

Copy +data/cs/datafile/JY.331.976296525 from the local disk group to a disk group of a remote ASM instance.

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 sys@10.138.130.175.+ASM1:+TEST/JY.bak
Enter password: ***********
copying +data/cs/datafile/JY.331.976296525 -> 10.138.130.175:+TEST/JY.bak

ASMCMD [+test] > ls -lt
Type      Redund  Striped  Time             Sys  Name
                                            N    rman_backup/
                                            N    arch/
                                            Y    JY/
                                            Y    DUP/
                                            Y    CS_DG/
                                            Y    ASM/
DATAFILE  MIRROR  COARSE   AUG 17 16:00:00  N    JY.bak => +TEST/ASM/DATAFILE/JY.bak.342.984413875

Copy +data/cs/datafile/JY.331.976296525 from the disk group to the operating system of the server hosting the remote ASM instance.

ASMCMD [+] > cp +data/cs/datafile/JY.331.976296525 sys@10.138.130.175.+ASM1:/home/grid/JY.bak
Enter password: ***********
copying +data/cs/datafile/JY.331.976296525 -> 10.138.130.175:/home/grid/JY.bak

[grid@jytest1 ~]$ ls -lrt
-rw-r----- 1 grid oinstall 104865792 Aug 17 16:21 JY.bak

Using the asmcmd cp command is more convenient than using dbms_file_transfer.
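For comparison, the closest dbms_file_transfer equivalent of a local disk-group copy is sketched below; the directory object names and paths are hypothetical, and the directory objects (which may point into ASM disk groups) must exist before the copy:

```sql
-- Hypothetical directory objects pointing into ASM disk groups
CREATE DIRECTORY src_dir AS '+DATA/cs/datafile';
CREATE DIRECTORY dst_dir AS '+TEST';

-- Copy a file between the two directory objects
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'JY.331.976296525',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'JY.bak');
END;
/
```

A remote copy through dbms_file_transfer.put_file additionally requires a database link, which is why asmcmd cp, needing only a connect string, is usually the simpler tool.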

Oracle 12C Database File Mapping for Oracle ASM Files

To understand I/O performance you need detailed knowledge of the storage hierarchy in which your files reside. Oracle provides a set of dynamic performance views that show the mapping from a file, through the intermediate logical volume layers, down to the actual physical devices. Using these views you can locate the actual physical disk on which any block of a file resides. Oracle Database uses a background process named FMON to manage the mapping information, and provides the PL/SQL package dbms_storage_map to populate the mapping views. Oracle Database file mapping does not require third-party dynamic libraries when mapping Oracle ASM files, and mapping of Oracle ASM files is supported on all operating system platforms.

Enabling File Mapping for Oracle ASM Files
To enable file mapping, set the file_mapping initialization parameter to true. The database instance does not need to be shut down to do this. Set the parameter with the following alter system statement:

SQL> alter system set file_mapping=true scope=both sid='*';

System altered.

Run the appropriate dbms_storage_map mapping procedure
- In a cold start scenario, the database has just started and no mapping operation has been invoked yet. Run the dbms_storage_map.map_all procedure to build mapping information for the entire I/O subsystem associated with the database. For example, the following command builds mapping information, allowing up to 10000 files to be mapped:

SQL> execute dbms_storage_map.map_all(10000);

PL/SQL procedure successfully completed.

- In a warm start scenario, the database has already built mapping information. You can optionally run the dbms_storage_map.map_save procedure to save that information into the data dictionary; this procedure is invoked by default from dbms_storage_map.map_all, and it forces all mapping information in the SGA to be flushed to disk. After the database is restarted, use the dbms_storage_map.restore() procedure to restore the mapping information into the SGA. If needed, dbms_storage_map.map_all() can be run again to refresh the mapping information.
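The warm-start flow described above can be sketched as follows (run in SQL*Plus as a suitably privileged user; the restore call happens after the instance restart):

```sql
-- Flush the mapping information currently in the SGA to the data dictionary
EXECUTE DBMS_STORAGE_MAP.MAP_SAVE;

-- ... shut down and restart the database instance ...

-- Load the saved mapping information from the data dictionary back into the SGA
EXECUTE DBMS_STORAGE_MAP.RESTORE;
```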

The mapping information generated by the dbms_storage_map package is captured in dynamic performance views, including v$map_comp_list, v$map_element, v$map_ext_element, v$map_file, v$map_file_extent, v$map_file_io_stack, v$map_library, and v$map_subelement.
Query v$map_file to view file mapping information:

SQL> select file_map_idx, substr(file_name,1,45), file_type, file_structure from v$map_file;

FILE_MAP_IDX SUBSTR(FILE_NAME,1,45)                                                                     FILE_TYPE   FILE_STRU
------------ ------------------------------------------------------------------------------------------ ----------- ---------
           0 +DATA/CS/DATAFILE/system.272.970601831                                                     DATAFILE    ASMFILE
           1 +DATA/CS/DATAFILE/sysaux.273.970601881                                                     DATAFILE    ASMFILE
           2 +DATA/CS/DATAFILE/undotbs1.274.970601905                                                   DATAFILE    ASMFILE
           3 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           4 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           5 +DATA/CS/DATAFILE/users.275.970601909                                                      DATAFILE    ASMFILE
           6 +DATA/CS/4700A987085B3DFAE05387E5E50A8C7B/DAT                                              DATAFILE    ASMFILE
           7 +DATA/CS/DATAFILE/undotbs2.284.970602381                                                   DATAFILE    ASMFILE
           8 +DATA/CS/DATAFILE/test.326.976211663                                                       DATAFILE    ASMFILE
           9 +DATA/CS/DATAFILE/jy.331.976296525                                                         DATAFILE    ASMFILE
          10 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          11 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          12 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/DAT                                              DATAFILE    ASMFILE
          13 +DATA/CS/ONLINELOG/group_2.277.970601985                                                   LOGFILE     ASMFILE
          14 +DATA/CS/ONLINELOG/group_1.278.970601985                                                   LOGFILE     ASMFILE
          15 +DATA/CS/ONLINELOG/group_3.285.970602759                                                   LOGFILE     ASMFILE
          16 +DATA/CS/ONLINELOG/group_4.286.970602761                                                   LOGFILE     ASMFILE
          17 +DATA/CS/ONLINELOG/redo05.log                                                              LOGFILE     ASMFILE
          18 +DATA/CS/ONLINELOG/redo06.log                                                              LOGFILE     ASMFILE
          19 +DATA/CS/ONLINELOG/redo07.log                                                              LOGFILE     ASMFILE
          20 +DATA/CS/ONLINELOG/redo08.log                                                              LOGFILE     ASMFILE
          21 +DATA/CS/ONLINELOG/redo09.log                                                              LOGFILE     ASMFILE
          22 +DATA/CS/ONLINELOG/redo10.log                                                              LOGFILE     ASMFILE
          23 +DATA/CS/TEMPFILE/temp.279.970602003                                                       TEMPFILE    ASMFILE
          24 +DATA/CS/67369AA1C9AA3E71E053BE828A0A8262/TEM                                              TEMPFILE    ASMFILE
          25 +DATA/CS/6C61AD7B443C2CD2E053BE828A0A2A74/TEM                                              TEMPFILE    ASMFILE
          26 +DATA/arch/1_222_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          27 +DATA/arch/1_223_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          28 +DATA/arch/2_277_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          29 +DATA/arch/2_278_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          30 +DATA/arch/2_279_970601983.dbf                                                             ARCHIVEFILE ASMFILE
          31 +DATA/CS/CONTROLFILE/current.276.970601979                                                 CONTROLFILE ASMFILE

31 rows selected.

You can use the procedures in the dbms_storage_map PL/SQL package to control mapping operations. For example, the dbms_storage_map.map_object procedure builds mapping information for a database object identified by its name, owner, and type. After dbms_storage_map.map_object has run, query the map_object view for the mapping information:

SQL> execute dbms_storage_map.map_object('T1','C##TEST','TABLE');

PL/SQL procedure successfully completed.

SQL> select io.object_name o_name, io.object_owner o_owner, io.object_type o_type,
  2  mf.file_name, me.elem_name, io.depth,
  3  (sum(io.cu_size * (io.num_cu - decode(io.parity_period, 0, 0,
  4  trunc(io.num_cu / io.parity_period)))) / 2) o_size
  5  from map_object io, v$map_element me, v$map_file mf
  6  where io.object_name = 'T1'
  7  and io.object_owner = 'C##TEST'
  8  and io.object_type = 'TABLE'
  9  and me.elem_idx = io.elem_idx
 10  and mf.file_map_idx = io.file_map_idx
 11  group by io.elem_idx, io.file_map_idx, me.elem_name, mf.file_name, io.depth,
 12  io.object_name, io.object_owner, io.object_type
 13  order by io.depth;

O_NAME               O_OWNER              O_TYP FILE_NAME                                          ELEM_NAME                 DEPTH     O_SIZE
-------------------- -------------------- ----- -------------------------------------------------- -------------------- ---------- ----------
T1                   C##TEST              TABLE +DATA/CS/DATAFILE/users.275.970601909              +/dev/asmdisk04               0         64

Managing Oracle ASM Audit Files with syslog on Oracle Linux 7

Using syslog to manage Oracle ASM audit files
If the audit file directory of an Oracle ASM instance is not maintained regularly, it will accumulate a large number of audit files. Too many audit files can exhaust the file system's disk space or inodes, slow Oracle down because of file system scalability limits, and can even cause the Oracle ASM instance to hang at startup. This section describes how to use the Linux syslog facility to manage Oracle ASM audit records, so that the operating system's syslog records them instead of a separate audit_dump_dest directory. The steps below must be performed on every node of a RAC environment.
1. Set the audit_syslog_level and audit_sys_operations parameters on the Oracle ASM instance

SQL> show parameter audit_sys_

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string

SQL> alter system set AUDIT_SYSLOG_LEVEL='local0.info' scope=spfile sid='*';

System altered.

Because audit_sys_operations is enabled by default, it does not need to be set.

2. Configure /etc/syslog.conf for Oracle ASM auditing
Make the following two changes to the syslog configuration file, /etc/syslog.conf or /etc/rsyslog.conf:
2.1 Add the following line to /etc/syslog.conf or /etc/rsyslog.conf

local0.info   /var/log/oracle_asm_audit.log

2.2 Add local0.none to the /var/log/messages line in /etc/syslog.conf or /etc/rsyslog.conf; after the change the line looks like this:

*.info;mail.none;authpriv.none;cron.none;local0.none   /var/log/messages
[root@cs1 ~]# vi /etc/rsyslog.conf
 
 ....(omitted)....

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                            /var/log/oracle_asm_audit.log


[root@cs2 ~]# vi /etc/rsyslog.conf
 ....(omitted)....

# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none;local0.none    /var/log/messages
local0.info                                            /var/log/oracle_asm_audit.log

3. Configure logrotate to manage the syslog log file
The Linux logrotate utility manages the size and number of the syslog log files used for Oracle ASM auditing. Create the file /etc/logrotate.d/oracle_asm_audit and add the following contents:

/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}
[root@cs1 ~]# cd /etc/logrotate.d/
[root@cs1 logrotate.d]# pwd
/etc/logrotate.d
[root@cs1 logrotate.d]# vi oracle_asm_audit
/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}

[root@cs2 ~]# cd /etc/logrotate.d/
[root@cs2 logrotate.d]# pwd
/etc/logrotate.d
[root@cs2 logrotate.d]# vi oracle_asm_audit
/var/log/oracle_asm_audit.log {
  weekly
  rotate 4
  compress
  copytruncate
  delaycompress
  notifempty
}
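Before waiting a week for the first rotation, you can check the new policy with a logrotate dry run (assuming logrotate is installed; the -d flag parses the configuration and only prints what would be done, without rotating anything):

```text
[root@cs1 ~]# logrotate -d /etc/logrotate.d/oracle_asm_audit
```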

4. Restart the Oracle ASM instance and the rsyslog service
For these changes to take effect, the Oracle ASM instance and the rsyslog service must be restarted. You can restart the Oracle ASM instances by running crsctl stop cluster -all and crsctl start cluster -all on any RAC node; note that this also shuts down the database instances.

[root@cs1 bin]# /u01/app/product/12.2.0/crs/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'cs1'
CRS-2673: Attempting to stop 'ora.crsd' on 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'cs1'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs2'
CRS-2673: Attempting to stop 'ora.cs.db' on 'cs1'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'cs1'
CRS-2673: Attempting to stop 'ora.gns' on 'cs1'
CRS-2677: Stop of 'ora.gns' on 'cs1' succeeded
CRS-2677: Stop of 'ora.cs.db' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs2' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cs2.vip' on 'cs2'
CRS-2673: Attempting to stop 'ora.chad' on 'cs1'
CRS-2677: Stop of 'ora.chad' on 'cs2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'cs2'
CRS-2677: Stop of 'ora.cs.db' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'cs1'
CRS-2673: Attempting to stop 'ora.cvu' on 'cs1'
CRS-2673: Attempting to stop 'ora.gns.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'cs1' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'cs1'
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'cs1'
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2'
CRS-2677: Stop of 'ora.cs2.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.gns.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'cs2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs2'
CRS-2677: Stop of 'ora.scan2.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ons' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs2'
CRS-2677: Stop of 'ora.net1.network' on 'cs2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs2' has completed
CRS-2677: Stop of 'ora.chad' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'cs1'
CRS-2677: Stop of 'ora.crsd' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs2'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs2'
CRS-2673: Attempting to stop 'ora.storage' on 'cs2'
CRS-2677: Stop of 'ora.cvu' on 'cs1' succeeded
CRS-2677: Stop of 'ora.storage' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs2'
CRS-2677: Stop of 'ora.ctssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.mgmtdb' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'cs1'
CRS-2673: Attempting to stop 'ora.CRS.dg' on 'cs1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'cs1'
CRS-2677: Stop of 'ora.CRS.dg' on 'cs1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.evmd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'cs1' succeeded
CRS-2677: Stop of 'ora.MGMTLSNR' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cs1.vip' on 'cs1'
CRS-2677: Stop of 'ora.cs1.vip' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs2'
CRS-2677: Stop of 'ora.cssd' on 'cs2' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'cs1'
CRS-2677: Stop of 'ora.ons' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'cs1'
CRS-2677: Stop of 'ora.net1.network' on 'cs1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'cs1' has completed
CRS-2677: Stop of 'ora.crsd' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'cs1'
CRS-2673: Attempting to stop 'ora.evmd' on 'cs1'
CRS-2673: Attempting to stop 'ora.storage' on 'cs1'
CRS-2677: Stop of 'ora.storage' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'cs1'
CRS-2677: Stop of 'ora.evmd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'cs1' succeeded
CRS-2677: Stop of 'ora.asm' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'cs1'
CRS-2677: Stop of 'ora.cssd' on 'cs1' succeeded


[root@cs1 bin]# /u01/app/product/12.2.0/crs/bin/crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs1'
CRS-2672: Attempting to start 'ora.evmd' on 'cs2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs2'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs2'
CRS-2676: Start of 'ora.cssdmonitor' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'cs1'
CRS-2672: Attempting to start 'ora.diskmon' on 'cs1'
CRS-2676: Start of 'ora.diskmon' on 'cs1' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'cs2' succeeded
CRS-2676: Start of 'ora.evmd' on 'cs2' succeeded
CRS-2676: Start of 'ora.cssd' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs2'
CRS-2676: Start of 'ora.cssd' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'cs1'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'cs1'
CRS-2676: Start of 'ora.ctssd' on 'cs2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'cs1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs1'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs2'
CRS-2676: Start of 'ora.asm' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'cs1'
CRS-2676: Start of 'ora.storage' on 'cs1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs1'
CRS-2676: Start of 'ora.crsd' on 'cs1' succeeded
CRS-2676: Start of 'ora.storage' on 'cs2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'cs2'
CRS-2676: Start of 'ora.crsd' on 'cs2' succeeded

Run the service rsyslog restart command to restart the rsyslog service.

[root@cs1 bin]# service rsyslog restart
Redirecting to /bin/systemctl restart  rsyslog.service
[root@cs1 bin]# service rsyslog status
Redirecting to /bin/systemctl status  rsyslog.service
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Wed 2018-08-01 15:13:22 CST; 12s ago
 Main PID: 23011 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─23011 /usr/sbin/rsyslogd -n

Aug 01 15:13:22 cs1.jy.net systemd[1]: Started System Logging Service.

[root@cs2 logrotate.d]#  service rsyslog restart
Redirecting to /bin/systemctl restart  rsyslog.service
[root@cs2 logrotate.d]# service rsyslog status
Redirecting to /bin/systemctl status  rsyslog.service
rsyslog.service - System Logging Service
   Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled)
   Active: active (running) since Wed 2018-08-01 15:13:54 CST; 7s ago
 Main PID: 9809 (rsyslogd)
   CGroup: /system.slice/rsyslog.service
           └─9809 /usr/sbin/rsyslogd -n

Aug 01 15:13:54 cs2.jy.net systemd[1]: Started System Logging Service.

5. Verify that Oracle ASM audit records are written to /var/log/oracle_asm_audit.log

[root@cs1 bin]# tail -f /var/log/oracle_asm_audit.log
Aug  1 15:13:46 cs1 journal: Oracle Audit[23601]: LENGTH : '317' ACTION :[80] 'begin dbms_diskgroup.close(:handle); exception when others then   raise;   end;
Aug  1 15:13:48 cs1 journal: Oracle Audit[23610]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '494' ACTION :[257] 'select name_kfgrp, number_kfgrp, incarn_kfgrp, compat_kfgrp, dbcompat_kfgrp, state_kfgrp, flags32_kfgrp, type_kfgrp, refcnt_kfgrp, sector_kfgrp, blksize_kfgrp, ausize_kfgrp , totmb_kfgrp, freemb_kfgrp, coldmb_kfgrp, hotmb_kfgrp, minspc_kfgrp, usable_kfgrp, ' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:13:50 cs1 journal: Oracle Audit[23654]: LENGTH : '308' ACTION :[071] 'offline_kfgrp, lflags_kfgrp  , logical_sector_kfgrp  from x$kfgrp_stat
Aug  1 15:13:55 cs1 journal: Oracle Audit[23681]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs1.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '370' ACTION :[132] 'begin dbms_diskgroup.openpwfile(:NAME,:lblksize,:fsz,:handle,:pblksz,:fmode,:genfname);  exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '355' ACTION :[117] 'begin dbms_diskgroup.read(:handle,:offset,:length,:buffer,:reason,:mirr); exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '355' ACTION :[117] 'begin dbms_diskgroup.read(:handle,:offset,:length,:buffer,:reason,:mirr); exception when others then   raise;   end;
Aug  1 15:13:56 cs1 journal: Oracle Audit[23681]: LENGTH : '317' ACTION :[80] 'begin dbms_diskgroup.close(:handle); exception when others then   raise;   end;


[root@cs2 logrotate.d]# tail -f /var/log/oracle_asm_audit.log
Aug  1 15:14:46 cs2 journal: Oracle Audit[9928]: LENGTH : '299' ACTION :[51] 'BEGIN DBMS_SESSION.USE_DEFAULT_EDITION_ALWAYS; END;' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '47'
Aug  1 15:14:46 cs2 journal: Oracle Audit[9928]: LENGTH : '287' ACTION :[39] 'ALTER SESSION SET "_notify_crs" = false' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '42'
Aug  1 15:14:46 cs2 journal: Oracle Audit[9926]: LENGTH : '287' ACTION :[39] 'ALTER SESSION SET "_notify_crs" = false' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[2] '42'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:14:47 cs2 journal: Oracle Audit[9928]: LENGTH : '292' ACTION :[45] 'SELECT value FROM v$parameter WHERE name = :1' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSRAC' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[10] '1386528187' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[1] '3'
Aug  1 15:15:01 cs2 journal: Oracle Audit[9944]: LENGTH : '244' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[0] '' STATUS:[1] '0' DBID:[0] '' SESSIONID:[10] '4294967295' USERHOST:[10] 'cs2.jy.net' CLIENT ADDRESS:[0] '' ACTION NUMBER:[3] '100'

可以看到Oracle ASM审计记录已经被记录到了/var/log/oracle_asm_audit.log文件中。

Oracle Linux 7使用cron来管理Oracle ASM审计文件目录的增长

如果不对Oracle ASM实例的审计文件目录进行定期维护,那么其中会积累大量的审计文件。大量的审计文件可能会耗尽文件系统的磁盘空间或inodes,或者由于文件系统的扩展性限制而导致Oracle运行缓慢,还可能造成Oracle ASM实例在启动时hang住。这里将介绍如何使用Linux的cron工具来控制Oracle ASM审计文件目录中的文件数量。

下面介绍具体的操作,这些操作必须在RAC环境中的每个节点上执行。
1.识别Oracle ASM审计目录
有三个目录可能存放Oracle ASM的审计文件,这三个目录都需要加以控制,避免其过度增长。其中两个缺省目录取决于Oracle ASM实例启动时的环境变量设置。为了确定系统上的缺省目录,以安装Grid Infrastructure软件的用户(grid)登录系统,设置环境变量(以便能够连接到Oracle ASM实例),然后运行echo命令。

[grid@cs1 ~]$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/grid

[grid@cs1 ~]$ echo $ORACLE_HOME/rdbms/audit
/u01/app/product/12.2.0/crs/rdbms/audit

[grid@cs1 ~]$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/grid/admin/+ASM1/adump


[grid@cs2 ~]$ . /usr/local/bin/oraenv
ORACLE_SID = [+ASM2] ? 
The Oracle base remains unchanged with value /u01/app/grid

[grid@cs2 ~]$ echo $ORACLE_HOME/rdbms/audit
/u01/app/product/12.2.0/crs/rdbms/audit

[grid@cs2 ~]$ echo $ORACLE_BASE/admin/$ORACLE_SID/adump
/u01/app/grid/admin/+ASM2/adump

第三个Oracle ASM审计目录可以使用SQL*Plus登录Oracle ASM实例后进行查询

[grid@cs1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Wed Aug 1 14:13:47 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select value from v$parameter where name = 'audit_file_dest';

VALUE
--------------------------------------------------------------------------------
/u01/app/product/12.2.0/crs/rdbms/audit

这里第三个目录与第一个目录是相同的

2.授予Grid Infrastructure软件用户使用cron的权限
Oracle ASM的审计文件是由Grid Infrastructure软件用户(通常为oracle或grid)创建的,移动或删除审计文件的命令必须由该用户来执行。在Oracle Linux中,如果/etc/cron.allow文件存在,那么只有登录名出现在该文件中的用户才可以使用crontab命令,root用户的登录名也必须出现在cron.allow文件中;如果/etc/cron.deny文件存在,并且用户的登录名列在其中,那么这些用户将不能执行crontab命令;如果只有/etc/cron.deny文件存在,那么登录名没有出现在该文件中的任何用户都可以使用crontab命令。在Oracle Linux 7.1中只有/etc/cron.deny文件,而且该文件中没有列出任何用户,也就是说所有用户都能执行crontab命令。
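上述cron.allow/cron.deny的判断逻辑可以用一个简单的shell脚本来概括(仅为示意性的沙盒脚本,其中用户名grid为假设,实际判断以系统的crontab文档为准):

```shell
#!/bin/sh
# 判断某用户能否使用 crontab 的典型规则(Oracle Linux);用户名 grid 为假设
user=grid
if [ -f /etc/cron.allow ]; then
    # cron.allow 存在:只有列在其中的用户可以使用 crontab
    grep -qx "$user" /etc/cron.allow && result=allowed || result=denied
elif [ -f /etc/cron.deny ]; then
    # 只有 cron.deny 存在:未列在其中的用户都可以使用 crontab
    grep -qx "$user" /etc/cron.deny && result=denied || result=allowed
else
    # 两个文件都不存在时,通常只有 root 能使用 crontab
    result=denied
fi
echo "$result"
```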

[root@cs1 etc]# cat cron.deny

[root@cs1 etc]# ls -lrt crontab
-rw-r--r--. 1 root root 451 Apr 29  2014 crontab

[root@cs1 etc]# chmod 777 crontab
[root@cs1 etc]# ls -lrt crontab
-rwxrwxrwx. 1 root root 451 Apr 29  2014 crontab

3.添加命令到crontab来管理审计文件
以Grid Infrastructure软件用户来向crontab文件增加命令

[grid@cs1 ~]$ crontab -e

0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete

这个crontab条目在每个星期日的上午6点执行find命令,从三个审计目录中找出保存时间超过30天的所有审计文件并将其删除。如果想将审计文件保留更长的时间,可以让find命令将相关审计文件移动到归档目录而不是直接删除,例如:


0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -execdir /bin/mv {} /archived_audit_dir \;

检查crontab

[grid@cs1 ~]$ crontab -l

0 6 * * sun /usr/bin/find /u01/app/product/12.2.0/crs/rdbms/audit /u01/app/grid/admin/+ASM1/adump /u01/app/product/12.2.0/crs/rdbms/audit -maxdepth 1 -name '*.aud' -mtime +30 -delete
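正式启用-delete之前,可以先在一个临时目录中验证find按修改时间清理*.aud文件的逻辑(沙盒演示,不触及真实审计目录;touch -d为GNU coreutils语法):

```shell
#!/bin/sh
# 在临时目录中验证 find -mtime +30 -delete 的清理逻辑(不涉及真实审计目录)
dir=$(mktemp -d)
touch "$dir/new.aud"                    # 刚创建的审计文件,应被保留
touch -d '40 days ago' "$dir/old.aud"   # 40 天前的审计文件,应被删除
find "$dir" -maxdepth 1 -name '*.aud' -mtime +30 -delete
remaining=$(ls "$dir")
echo "$remaining"                       # 应只剩 new.aud
rm -rf "$dir"
```

确认输出只剩下新文件后,再把真实审计目录和-delete写入crontab。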

Oracle 12CR2 Oracle Restart – ASM Startup fails with PRCR-1079

操作系统为Oracle Linux 7.1,数据库版本为12.2.0.1。Oracle Restart所在主机重启之后不能正常启动ASM实例:

[grid@jytest3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  OFFLINE      jytest3                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       jytest3                  STABLE
ora.asm
               ONLINE  OFFLINE      jytest3                  STABLE
ora.ons
               OFFLINE OFFLINE      jytest3                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.evmd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.jy.db
      1        ONLINE  OFFLINE                               Instance Shutdown,ST
                                                             ABLE
--------------------------------------------------------------------------------

尝试手动启动ASM实例,提示无效的用户与密码

[grid@jytest3 ~]$ srvctl start asm -startoption MOUNT -f
PRKO-2002 : Invalid command line option: -f
[grid@jytest3 ~]$ srvctl start asm -startoption MOUNT 
PRCR-1079 : Failed to start resource ora.asm
CRS-5017: The resource action "ora.asm start" encountered the following error: 
ORA-01017: invalid username/password; logon denied
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/jytest3/crs/trace/ohasd_oraagent_grid.trc".

CRS-2674: Start of 'ora.asm' on 'jytest3' failed
ORA-01017: invalid username/password; logon denied

尝试使用SQL*Plus来启动ASM实例,仍然提示无效的用户名与密码

[grid@jytest3 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.2.0.1.0 Production on Tue May 2 20:30:15 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied

检查错误跟踪文件内容如下:

2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} Agent received the message: RESOURCE_START[ora.asm jytest3 1] ID 4098:1530
2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} Preparing START command for: ora.asm jytest3 1
2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} ora.asm jytest3 1 state changed from: OFFLINE to: STARTING
2017-05-02 21:47:39.181 :    AGFW:380081920: {0:0:169} RECYCLE_AGENT attribute not found
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] (:CLSN00107:) clsn_agent::start {
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 000 { 
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start stopConnection 020
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::broadcastEvent 000 entry { OHSid:/u01/app/grid/product/12.2.0/crs+ASM s_ohSidEventMapLock:0x139e930 action:2
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection connection count 0
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection freed 0
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection sid +ASM status  1
2017-05-02 21:47:39.182 : USRTHRD:388486912: {0:0:169} ConnectionPool::~ConnectionPool  destructor this:f4061e90 m_oracleHome:/u01/app/grid/product/12.2.0/crs, m_oracleSid:+ASM,  m_usrOraEnv:  m_pResState:0x7f10f4049000
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::refresh
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::refresh ORACLE_HOME = /u01/app/grid/product/12.2.0/crs
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.182 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib 2 getResAttrib USR_ORA_INST_NAME oracleSid:+ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib oracleSid:+ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::refresh ORACLE_SID = +ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::ConnectionPool 2 constructor this:406d700 m_oracleHome:/u01/app/grid/product/12.2.0/crs, m_oracleSid:+ASM, m_usrOraEnv: m_instanceType:2 m_instanceVersion:12.2.0.1.0
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::ConnectionPool 2 constructor m_pResState:0x7f1104033540
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent::setOracleSidAttrib updating GEN_USR_ORA_INST_NAME to +ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::setResAttrib nonPerX current value GEN_USR_ORA_INST_NAME value +ASM
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::setResAttrib clsagfw_modify_attribute attr GEN_USR_ORA_INST_NAME value +ASM retCode 0
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sModifyConfig entry resversion:12.2.0.1.0 compId:+ASM comment:StartOption[1] {
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sInitFile entry pathname:/etc finename:oratab
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile pathname:/etc backupPath:/u01/app/grid/product/12.2.0/crs/srvm/admin/ filename:oratab pConfigF:0x7f11040752d8
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::ConfigFile constructor name:oratab
2017-05-02 21:47:39.183 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::lock s_siha_mtex, got lock CLSN.oratab.jytest3
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse parseLine name: value: comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:+asm nameWithCase:+ASM value:/u01/app/grid/product/12.2.0/crs:N comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:jy nameWithCase:jy value:/u01/app/oracle/product/12.2.0/db:N comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::unlock s_siha_mtex, released lock CLSN.oratab.jytest3
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::setAltName this:0x7f11040d80e0 altName:+ASM
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::setAltValue altValue:/u01/app/grid/product/12.2.0/crs:N
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile resType:ora.asm.type setAltName(compId):+ASM setAltValue(oracleHome):/u01/app/grid/product/12.2.0/crs:N
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib 2 getResAttrib USR_ORA_INST_NAME oracleSid:+ASM
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib oracleSid:+ASM
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getAltName this:0x7f11040d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile:getAltName altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile dbname:+ASM altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getComment name:+asm comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getAltName this:0x7f11040d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile:getAltName altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getAltName this:0x7f11040d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile:getAltName altName:+asm
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::getComment name:+asm comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sInitFile exit dbname:+ASM startup comment: startup altName:+asm comment:
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start pConfigF:40d80e0
2017-05-02 21:47:39.184 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::lock s_siha_mtex, got lock CLSN.oratab.jytest3
2017-05-02 21:47:39.185 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse parseLine name: value: comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:+asm nameWithCase:+ASM value:/u01/app/grid/product/12.2.0/crs:N comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:jy nameWithCase:jy value:/u01/app/oracle/product/12.2.0/db:N comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sCleanEntry - delete alt(sid) entry
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::cmdIdIsStart CmdId:257
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib 2 getResAttrib USR_ORA_INST_NAME oracleSid:+ASM
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] AsmAgent:getOracleSidAttrib oracleSid:+ASM
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sCleanEntry - key = +ASM
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse parseLine name: value: comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:+asm nameWithCase:+ASM value:/u01/app/grid/product/12.2.0/crs:N comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::parse mmap name:jy nameWithCase:jy value:/u01/app/oracle/product/12.2.0/db:N comment:
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConfigFile::updateInPlace file /etc/oratab is not modified
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CssLock::unlock s_siha_mtex, released lock CLSN.oratab.jytest3
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sclsnInstAgent::sCleanEntry - key[+ASM] is cleaned
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] sModifyConfig exit for compId:+ASM }
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::getResAttrib entry attribName:USR_ORA_OPI required:0 loglevel:1
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::getResAttrib: attribname USR_ORA_OPI value false len 5
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::getResAttrib attribname:USR_ORA_OPI value:false exit
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs attrib: REASON compare value: user attribute value: user
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs returns 1
2017-05-02 21:47:39.186 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 100 getGenRestart
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::getGenRestart exit GEN_RESTART:StartOption[1] }
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 120 comment:StartOption[1]
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs attrib: REASON compare value: failure attribute value: user
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Agent::valueOfAttribIs returns 0
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::checkState 030 new gimh oracleSid:+ASM
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Gimh::constructor ohome:/u01/app/grid/product/12.2.0/crs sid:+ASM
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::resetConnection  s_statusOfConnectionMap:0x139ea20
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::resetConnection sid +ASM status  2
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Gimh::check condition changes to (GIMH_NEXT_NUM) 0(Abnormal Termination) exists
2017-05-02 21:47:39.187 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] CLS_DRUID_REF(CLSN00006) AsmAgent::gimhChecks 100 failed gimh state 0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] (:CLSN00006:)InstAgent::checkState 110 return unplanned offline
2017-05-02 21:47:39.188 : USRTHRD:388486912: {0:0:169} Gimh::destructor gimh_dest_query_ctx rc=0
2017-05-02 21:47:39.188 : USRTHRD:388486912: {0:0:169} Gimh::destructor gimh_dest_inst_ctx rc=0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::broadcastEvent 000 entry { OHSid:/u01/app/grid/product/12.2.0/crs+ASM s_ohSidEventMapLock:0x139e930 action:2
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection connection count 0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::removeConnection freed 0
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ConnectionPool::stopConnection sid +ASM status  1
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::checkState 200 prev clsagfw_res_status 3 current clsagfw_res_status 1
2017-05-02 21:47:39.188 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 180 startOption:StartOption[1]
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnUtils::setResAttrib nonPerX current value GEN_RESTART value StartOption[1]
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Utils::setResAttrib clsagfw_modify_attribute attr GEN_RESTART value StartOption[1] retCode 0
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::setGenRestart updating GEN_RESTART to StartOption[1] retcode:0 ohasd resource:1
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 200 StartInstance with startoption:1 pfile:null 
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::start 400 startInstance
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::startInstance 000 { startOption:1
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstConnection:InstConnection: init:040d5ef0 oracleHome:/u01/app/grid/product/12.2.0/crs oracleSid:+ASM instanceType:2 instanceVersion:12.2.0.1.0
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnInstConnection::makeConnectStr UsrOraEnv  m_oracleHome /u01/app/grid/product/12.2.0/crs Crshome /u01/app/grid/product/12.2.0/crs
2017-05-02 21:47:39.189 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] clsnInstConnection::makeConnectStr = (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/grid/product/12.2.0/crs/bin/oracle)(ARGV0=oracle+ASM)(ENVS='ORACLE_HOME=/u01/app/grid/product/12.2.0/crs,ORACLE_SID=+ASM')(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(CONNECT_DATA=(SID=+ASM))))
2017-05-02 21:47:39.191 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] Container:start oracle home /u01/app/grid/product/12.2.0/crs
2017-05-02 21:47:39.191 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstAgent::startInstance 020 connect logmode:8008
2017-05-02 21:47:39.191 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] InstConnection::connectInt 020 server not attached
2017-05-02 21:47:39.694 :  CLSDMC:357324544: command 0 failed with status 1
2017-05-02 21:47:39.695 :CLSDYNAM:357324544: [ora.evmd]{0:0:2} [check] DaemonAgent::check returned 0
2017-05-02 21:47:39.695 :CLSDYNAM:357324544: [ora.evmd]{0:0:2} [check] Deep check returned 1
2017-05-02 21:47:40.210 :CLSDYNAM:388486912: [ ora.asm]{0:0:169} [start] ORA-01017: invalid username/password; logon denied

关于ASM启动时报无效用户名与密码的问题,MOS上有一篇相关文档"ASM not Starting With: ORA-01017: invalid username/password; logon denied (Doc ID 1918617.1)",其中说原因是修改了sqlnet.ora文件,正确的设置应该如下:

SQLNET.AUTHENTICATION_SERVICES= (NTS)

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)

但我这里并不是这种情况。从另一篇文档"Oracle Restart – ASM Startup fails with PRCR-1079 (Doc ID 1904942.1)"中找到了类似的错误信息,其原因是ASM资源的ACL(访问控制列表)权限发生了改变。

检查asm资源的访问控制列表权限

[grid@jytest3 dbs]$  crsctl stat res ora.asm -p
NAME=ora.asm
TYPE=ora.asm.type
ACL=owner:grid:rwx,pgrp:asmdba:r-x,other::r--
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
ASM_DISKSTRING=AFD:*
AUTO_START=restore
CHECK_INTERVAL=1
CHECK_TIMEOUT=30
CLEAN_TIMEOUT=60
CSS_CRITICAL=no
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle ASM resource
ENABLED=1
GEN_RESTART=StartOption[1]
GEN_USR_ORA_INST_NAME=+ASM
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
OS_CRASH_THRESHOLD=0
OS_CRASH_UPTIME=0
PRESENCE=standard
PWFILE=+DATA/orapwasm
PWFILE_BACKUP=
REGISTERED_TYPE=srvctl
RESOURCE_GROUP=
RESTART_ATTEMPTS=5
RESTART_DELAY=0
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SPFILE=+DATA/ASM/ASMPARAMETERFILE/registry.253.938201547
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.cssd) weak(ora.LISTENER.lsnr)
START_DEPENDENCIES_RTE_INTERNAL=
START_TIMEOUT=900
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(ora.cssd)
STOP_DEPENDENCIES_RTE_INTERNAL=
STOP_TIMEOUT=600
TARGET_DEFAULT=default
TYPE_VERSION=1.2
UPTIME_THRESHOLD=1d
USER_WORKLOAD=no
USR_ORA_ENV=
USR_ORA_INST_NAME=+ASM
USR_ORA_OPEN_MODE=mount  
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
WORKLOAD_CPU=0
WORKLOAD_CPU_CAP=0
WORKLOAD_MEMORY_MAX=0
WORKLOAD_MEMORY_TARGET=0

从上面的信息ACL=owner:grid:rwx,pgrp:asmdba:r-x,other::r--可知,属组(pgrp)变成了asmdba组,而正常情况下应该是grid用户所属的oinstall组。尝试进行修改:

[grid@jytest3 dbs]$ crsctl setperm resource ora.asm -g oinstall
CRS-4995:  The command 'Setperm  resource' is invalid in crsctl. Use srvctl for this command.

修改时出错了:在12.2.0.1中crsctl setperm被提示是无效的crsctl命令,并建议使用srvctl,但srvctl并没有setperm命令,于是加上-unsupported参数再次执行修改:

[grid@jytest3 dbs]$ crsctl setperm resource ora.asm -g oinstall -unsupported

重启Oracle Restart,ASM实例正常启动

[grid@jytest3 lib]$ crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'jytest3'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'jytest3'
CRS-2673: Attempting to stop 'ora.evmd' on 'jytest3'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'jytest3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'jytest3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'jytest3'
CRS-2677: Stop of 'ora.cssd' on 'jytest3' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'jytest3'
CRS-2677: Stop of 'ora.driver.afd' on 'jytest3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'jytest3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[grid@jytest3 ~]$ crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[grid@jytest3 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       jytest3                  STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       jytest3                  STABLE
ora.asm
               ONLINE  ONLINE       jytest3                  Started,STABLE
ora.ons
               OFFLINE OFFLINE      jytest3                  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.evmd
      1        ONLINE  ONLINE       jytest3                  STABLE
ora.jy.db
      1        ONLINE  ONLINE       jytest3                  Open,HOME=/u01/app/o
                                                             racle/product/12.2.0
                                                             /db,STABLE
--------------------------------------------------------------------------------

Oracle ASM Renaming Disks Groups

renamedg工具可以用来修改磁盘组的名称。在对磁盘组执行renamedg之前,磁盘组必须在所有节点上dismount。renamedg工具重命名磁盘组由两个阶段组成:
1.第一阶段生成一个配置文件,供第二阶段使用
2.第二阶段使用第一阶段生成的配置文件来重命名磁盘组

renamedg的语法如下:

[grid@jyrac1 ~]$ renamedg -help
NOTE: No asm libraries found in the system

Parsing parameters..
phase                           Phase to execute, 
                                (phase=ONE|TWO|BOTH), default BOTH

dgname                          Diskgroup to be renamed

newdgname                       New name for the diskgroup

config                          intermediate config file

check                           just check-do not perform actual operation,
                                (check=TRUE/FALSE), default FALSE

confirm                         confirm before committing changes to disks,
                                (confirm=TRUE/FALSE), default FALSE

clean                           ignore errors,
                                (clean=TRUE/FALSE), default TRUE

asm_diskstring                  ASM Diskstring (asm_diskstring='discoverystring',
                                'discoverystring1' ...)

verbose                         verbose execution, 
                                (verbose=TRUE|FALSE), default FALSE

keep_voting_files               Voting file attribute, 
                                (keep_voting_files=TRUE|FALSE), default FALSE


phase=(ONE|TWO|BOTH):指定要执行的阶段,取值为one、two或both。这是一个可选参数,缺省值为both,通常直接使用both。如果第二阶段出现问题,可以使用第一阶段生成的配置文件单独重新执行第二(two)阶段。

dgname=diskgroup:指定要被重新命名的磁盘组

newdgname=newdiskgroup:指定新磁盘组名

config=configfile:指定第一阶段所生成的配置文件的路径,或第二阶段所使用的配置文件的路径。这是一个可选参数,缺省的配置文件名为renamedg_config,它存储在执行renamedg命令的目录中。在有些平台上配置文件路径可能需要使用单引号。

asm_diskstring=discoverystring,discoverystring...:指定Oracle ASM发现字符串。如果Oracle ASM磁盘不在操作系统的缺省位置,就必须指定asm_diskstring参数。在有些平台上发现字符串可能需要使用单引号,特别是包含通配符时。

clean=(true|false):指定是否容忍并忽略错误。这是一个可选参数,缺省值为true。

check=(true|false):指定一个在第二阶段使用的布尔值。如果为true,那么renamedg工具只打印将对磁盘执行的修改列表,而不执行任何写操作。这是一个可选参数,缺省值为false。

confirm=(true|false):指定一个在第二阶段使用的布尔值。如果为true,那么renamedg工具会打印将要执行的改变,并在实际执行之前要求确认。这是一个可选参数,缺省值为false。如果check参数被设置为true,那么这个参数是多余的。

verbose=(true|false):指定是否输出详细的执行信息。缺省值为false。

keep_voting_files=(true|false):指定voting文件是否保留在被重命名的磁盘组中。缺省值为false,即从被重命名的磁盘组中删除voting文件。
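结合上述参数,分阶段执行的典型用法可以概括为如下脚本(仅为示意:磁盘组名test/new_test、磁盘路径/dev/raw/raw*以及配置文件路径/tmp/renamedg_config均为假设,且假定磁盘组已在所有节点dismount;在没有安装Grid Infrastructure的环境中脚本会跳过执行):

```shell
#!/bin/sh
# renamedg 分两阶段执行的示例;没有 renamedg 命令的环境中跳过
if command -v renamedg >/dev/null 2>&1; then
    # 第一阶段:检查磁盘并生成配置文件(/tmp/renamedg_config 为假设路径)
    renamedg phase=one dgname=test newdgname=new_test \
        asm_diskstring='/dev/raw/raw*' config=/tmp/renamedg_config verbose=true
    # 第二阶段:使用第一阶段生成的配置文件实际执行重命名
    renamedg phase=two dgname=test newdgname=new_test \
        config=/tmp/renamedg_config verbose=true
    status=done
else
    echo "renamedg not found; skipping"
    status=skipped
fi
```

这样做的好处是:如果第二阶段失败,只需凭借已有的配置文件重跑phase=two,而不必从头再来。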

renamedg工具不会更新集群资源,也不会更新数据库所使用的任何文件路径。因此,在完成第二阶段操作之后,原来的磁盘组资源不会被自动删除。可以通过Oracle Clusterware Control(crsctl)的crsctl stat res -t命令来检查旧磁盘组资源的状态,然后使用Server Control Utility(srvctl)的srvctl remove diskgroup命令将其删除。
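重命名之后的资源清理可以按如下顺序进行(示例脚本,磁盘组名test/new_test沿用上文的假设;脚本在没有srvctl/crsctl命令的环境中跳过执行):

```shell
#!/bin/sh
# 重命名完成后:挂载新磁盘组、检查旧资源状态并删除旧的磁盘组资源
# (仅在存在 srvctl/crsctl 命令的 Grid Infrastructure 环境中执行)
if command -v srvctl >/dev/null 2>&1 && command -v crsctl >/dev/null 2>&1; then
    srvctl start diskgroup -diskgroup new_test   # 挂载重命名后的磁盘组
    crsctl stat res -t                           # 旧的磁盘组资源仍会显示为 OFFLINE
    srvctl remove diskgroup -diskgroup test      # 删除旧磁盘组资源
    status=done
else
    echo "srvctl/crsctl not found; skipping"
    status=skipped
fi
```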

下面将使用两个例子来演示renamedg工具的使用方法
1.创建一个test磁盘组,然后使用带有asm_diskstring与verbose参数的renamedg命令将其重命名为new_test磁盘组。

SQL> create diskgroup test normal redundancy disk '/dev/raw/raw12','/dev/raw/raw13';

Diskgroup created.

SQL> select group_number,name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 ACFS
           2 ARCHDG
           3 CRSDG
           4 DATADG
           5 TEST

SQL> select group_number,disk_number,name,path from v$asm_disk;

GROUP_NUMBER DISK_NUMBER NAME                           PATH
------------ ----------- ------------------------------ ------------------------------
           0           3                                /dev/raw/raw14
           1           2 ACFS_0002                      /dev/raw/raw7
           5           1 TEST_0001                      /dev/raw/raw13
           5           0 TEST_0000                      /dev/raw/raw12
           4           0 DATADG_0001                    /dev/raw/raw11
           1           0 ACFS_0000                      /dev/raw/raw5
           4           3 DATADG_0000                    /dev/raw/raw10
           2           1 ARCHDG_0001                    /dev/raw/raw9
           3           1 CRSDG_0001                     /dev/raw/raw8
           1           1 ACFS_0001                      /dev/raw/raw6
           2           0 ARCHDG_0000                    /dev/raw/raw2
           4           1 DATADG_0003                    /dev/raw/raw4
           4           2 DATADG_0002                    /dev/raw/raw3
           3           0 CRSDG_0000                     /dev/raw/raw1

14 rows selected.

Dismount the test disk group on both nodes:

SQL> alter diskgroup test dismount;

Diskgroup altered.

SQL> alter diskgroup test dismount;

Diskgroup altered.

Rename disk group test to new_test:

[grid@jyrac1 ~]$ renamedg  phase=both dgname=test newdgname=new_test asm_diskstring='/dev/raw/raw*' verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : TEST 
         New DG name          : NEW_TEST 
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null) 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=both dgname=test newdgname=new_test asm_diskstring='/dev/raw/raw*' verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for /dev/raw/raw12
Modifying the header
Looking for /dev/raw/raw13
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b0307e120a0

Inspect the generated configuration file:

[grid@jyrac1 ~]$ ls -lrt
total 58
-rw-r--r-- 1 grid oinstall       58 Feb  9 10:04 renamedg_config
[grid@jyrac1 ~]$ cat renamedg_config
/dev/raw/raw12 TEST NEW_TEST
/dev/raw/raw13 TEST NEW_TEST
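
Each line of the generated file holds a disk path, the old disk group name, and the new name, separated by whitespace. A minimal parser sketch for this format (based only on the file shown above; the real renamedg tool reads it internally):

```python
def parse_renamedg_config(text):
    """Parse renamedg's phase-one config file.

    Each non-empty line holds: disk path, old disk group name,
    new disk group name, separated by whitespace.
    """
    entries = []
    for line in text.splitlines():
        if not line.strip():
            continue
        path, old, new = line.split()
        entries.append((path, old, new))
    return entries

config = """/dev/raw/raw12 TEST NEW_TEST
/dev/raw/raw13 TEST NEW_TEST"""
print(parse_renamedg_config(config))
```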

Mount the renamed disk group new_test on both nodes:

SQL> alter diskgroup new_test mount;

Diskgroup altered.


SQL> alter diskgroup new_test mount;

Diskgroup altered.


ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     15360     8763             5120            1821              0             N  ACFS/
MOUNTED  NORMAL  N         512   4096  1048576     10240     2152                0            1076              0             N  ARCHDG/
MOUNTED  EXTERN  N         512   4096  1048576     10240     9842                0            9842              0             Y  CRSDG/
MOUNTED  NORMAL  N         512   4096  1048576     20480    12419             5120            3649              0             N  DATADG/
MOUNTED  NORMAL  N         512   4096  1048576     10240    10054                0            5027              0             N  NEW_TEST/

Check the resource status: the original disk group test still exists but is OFFLINE. It must be removed with the srvctl remove diskgroup command.

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.NEW_TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.TEST.dg
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2                       

Use the srvctl remove diskgroup command to delete the resource ora.TEST.dg (disk group test):

[grid@jyrac1 ~]$ srvctl remove diskgroup -g test

Check the resource status again: the original disk group test is gone.

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.NEW_TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2           

2. The following example renames disk group new_test back to test, running the two renamedg phases separately with the asm_diskstring and verbose parameters.

Run phase one only, to generate the configuration file needed by phase two:

[grid@jyrac1 ~]$ renamedg phase=one dgname=new_test newdgname=test asm_diskstring='/dev/raw/raw*' config=/home/grid/new_test.conf verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : NEW_TEST 
         New DG name          : TEST 
         Phases               :
                 Phase 1
         Discovery str        : /dev/raw/raw* 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=one dgname=new_test newdgname=test asm_diskstring=/dev/raw/raw* config=/home/grid/new_test.conf verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:0
KFNDG-00405: file not found; arguments: [NEW_TEST]

The KFNDG-00405 error occurred because disk group new_test was not dismounted on all nodes before running renamedg.

Dismount disk group new_test on both nodes:

SQL> alter diskgroup new_test dismount;

Diskgroup altered.


SQL> alter diskgroup new_test dismount;

Diskgroup altered.

Terminating kgfd context 0x2b34496f80a0
[grid@jyrac1 ~]$ renamedg phase=one dgname=new_test newdgname=test asm_diskstring='/dev/raw/raw*' config=/home/grid/new_test.conf verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : NEW_TEST 
         New DG name          : TEST 
         Phases               :
                 Phase 1
         Discovery str        : /dev/raw/raw* 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=one dgname=new_test newdgname=test asm_diskstring=/dev/raw/raw* config=/home/grid/new_test.conf verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:/dev/raw/raw*
Identified disk UFS:/dev/raw/raw12 with disk number:0 and timestamp (33048873 -682272768)
Identified disk UFS:/dev/raw/raw13 with disk number:1 and timestamp (33048873 -682272768)
Checking if the diskgroup is mounted or used by CSS 
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Terminating kgfd context 0x2b1c202a80a0

Run phase two of the disk group rename:

[grid@jyrac1 ~]$ renamedg phase=two dgname=new_test newdgname=test config=/home/grid/new_test.conf verbose=true
NOTE: No asm libraries found in the system

Parsing parameters..

Parameters in effect:

         Old DG name       : NEW_TEST 
         New DG name          : TEST 
         Phases               :
                 Phase 2
         Discovery str        : (null) 
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: phase=two dgname=new_test newdgname=test config=/home/grid/new_test.conf verbose=true
Executing phase 2
Looking for /dev/raw/raw12
Modifying the header
Looking for /dev/raw/raw13
Modifying the header
Completed phase 2
Terminating kgfd context 0x2b8da14950a0

Verify that disk group new_test was successfully renamed to test:

SQL> select group_number,name from v$asm_diskgroup;

GROUP_NUMBER NAME
------------ ------------------------------
           1 ACFS
           2 ARCHDG
           3 CRSDG
           4 DATADG
           0 TEST

Mount disk group test on both nodes:

SQL> alter diskgroup test mount;

Diskgroup altered.


SQL> alter diskgroup test mount;

Diskgroup altered.

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  1048576     15360     8763             5120            1821              0             N  ACFS/
MOUNTED  NORMAL  N         512   4096  1048576     10240     1958                0             979              0             N  ARCHDG/
MOUNTED  EXTERN  N         512   4096  1048576     10240     9842                0            9842              0             Y  CRSDG/
MOUNTED  NORMAL  N         512   4096  1048576     20480    12419             5120            3649              0             N  DATADG/
MOUNTED  NORMAL  N         512   4096  1048576     10240    10054                0            5027              0             N  TEST/

Check the resource status: the original disk group new_test still exists but is OFFLINE, while the renamed disk group test is ONLINE.

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.NEW_TEST.dg
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2                  

Remove the old new_test disk group resource:

[grid@jyrac1 ~]$ srvctl remove diskgroup -g new_test

Check the resource status again: the old disk group new_test is gone.

[grid@jyrac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFS.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ARCHDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.CRSDG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.DATADG.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.TEST.dg
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.asm
               ONLINE  ONLINE       jyrac1                   Started             
               ONLINE  ONLINE       jyrac2                   Started             
ora.gsd
               OFFLINE OFFLINE      jyrac1                                       
               OFFLINE OFFLINE      jyrac2                                       
ora.net1.network
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.ons
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
ora.registry.acfs
               ONLINE  ONLINE       jyrac1                                       
               ONLINE  ONLINE       jyrac2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       jyrac1                                       
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       jyrac2                                       
ora.cvu
      1        ONLINE  ONLINE       jyrac2                                       
ora.jyrac.db
      1        ONLINE  ONLINE       jyrac1                   Open                
      2        ONLINE  ONLINE       jyrac2                   Open                
ora.jyrac1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.jyrac2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.oc4j
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       jyrac1                                       
ora.scan2.vip
      1        ONLINE  ONLINE       jyrac2                                       
ora.scan3.vip
      1        ONLINE  ONLINE       jyrac2                        

Oracle Find block in ASM

To make it easier to locate and extract Oracle data file blocks from ASM, you can create a Perl script, find_block.pl, that automates the process; you only supply the file name and the block number. find_block.pl generates dd or kfed commands. It works with all Linux and Unix ASM versions, in both single-instance ASM and RAC environments.

The script must be run as the ASM/Grid Infrastructure user. In a RAC environment it can be run on any node. Before running it, set up the ASM environment and make sure ORACLE_SID, ORACLE_HOME, LD_LIBRARY_PATH, and so on are set correctly. For ASM 10g and 11gR1, also set the PERL5LIB environment variable, for example:

export PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl

Run the script as follows:

$ORACLE_HOME/perl/bin/perl find_block.pl filename block

Here filename is the name of the file containing the block to be extracted; for a data file it can be obtained by running select name from v$datafile in the database instance. block is the number of the block to extract from ASM.

The output looks like this:

dd if=[ASM disk path] ... of=block_N.dd

or, on Exadata:

kfed read dev=[ASM disk path] ... > block_N.txt

If the disk group uses external redundancy, the script generates a single command. For a file in a normal redundancy disk group the script generates two commands, and for a file in a high redundancy disk group, three.
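
The rule is one command per mirror copy of the file's extents, which can be sketched as (illustrative; not the script's actual Perl code):

```python
def commands_per_block(redundancy):
    """Number of dd/kfed commands find_block.pl emits per block:
    one per mirror copy of the file's extents."""
    copies = {"external": 1, "normal": 2, "high": 3}
    return copies[redundancy.lower()]

print(commands_per_block("normal"))  # → 2
```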

The following uses the find_block.pl script to extract a data block from ASM in an Oracle 10g RAC environment.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
UNDOTBS1
SYSAUX
USERS
TEMP
EXAMPLE
UNDOTBS2
YB
TEST

9 rows selected.

SQL> create table t1 (name varchar2(16)) tablespace TEST;

Table created.

SQL> insert into t1 values ('CAT');

1 row created.

SQL> insert into t1 values ('DOG');

1 row created.

SQL> commit;

Commit complete.

SQL> select rowid,name from t1;

ROWID              NAME
------------------ ----------------
AAAN8qAAIAAAAAUAAA CAT
AAAN8qAAIAAAAAUAAB DOG

SQL> select dbms_rowid.rowid_block_number('AAAN8qAAIAAAAAUAAA') "block" from dual;

     block
----------
        20
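
DBMS_ROWID is the authoritative way to get the block number, but the extended ROWID encoding can also be decoded by hand: 18 base-64 characters laid out as data object number (6), relative file number (3), block number (6), and row number (3), using the alphabet A-Za-z0-9+/. A sketch:

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def rowid_block_number(rowid):
    """Decode the block-number field (characters 10-15) of an
    extended Oracle ROWID laid out as OOOOOOFFFBBBBBBRRR."""
    value = 0
    for ch in rowid[9:15]:
        value = value * 64 + ALPHABET.index(ch)
    return value

print(rowid_block_number("AAAN8qAAIAAAAAUAAA"))  # → 20
```

This reproduces the DBMS_ROWID.ROWID_BLOCK_NUMBER result for the rows above: both rows of T1 sit in block 20.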

SQL> select t.name "Tablespace", f.name "Datafile" from v$tablespace t, v$datafile f where t.ts#=f.ts# and t.name='TEST';

Tablespace                     Datafile
------------------------------ --------------------------------------------------
TEST                           +DATADG/test/datafile/test.269.930512093

Switch to the ASM environment, set PERL5LIB, and run the script:

[oracle@jyrac3 bin]$ export ORACLE_SID=+ASM1
[oracle@jyrac3 bin]$ export PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl

[oracle@jyrac3 bin]$ $ORACLE_HOME/perl/bin/perl find_block.pl +DATADG/test/datafile/test.269.930512093 20
dd if=/dev/raw/raw3 bs=8192 count=1 skip=266260 of=block_20.dd
dd if=/dev/raw/raw4 bs=8192 count=1 skip=266260 of=block_20.dd

The output shows that the file is in a normal redundancy disk group, so the script generated two dd commands. Run one of them:

[root@jyrac3 ~]# dd if=/dev/raw/raw3 bs=8192 count=1 skip=266260 of=block_20.dd
1+0 records in
1+0 records out
8192 bytes (8.2 kB) copied, 0.00608323 seconds, 1.3 MB/s
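
The skip value is simply the block's linear position on the disk in bs-sized units: the disk allocation unit (AU) holding the file's extent, times blocks per AU, plus the block's offset within the AU. With a 1 MB AU and 8 KB blocks (128 blocks per AU), skip=266260 implies the extent sits in disk AU 2080; that AU number is inferred from the generated command above, not shown directly by the script. A sketch of the arithmetic:

```python
def dd_skip(disk_au, block_num, au_size=1024 * 1024, block_size=8192):
    """Offset (in block_size units) of a file block inside an ASM disk,
    given the disk AU that holds the file's extent."""
    blocks_per_au = au_size // block_size
    return disk_au * blocks_per_au + block_num % blocks_per_au

# disk AU 2080 is inferred from the dd command generated above
print(dd_skip(2080, 20))  # → 266260
```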

Examine the contents of block_20.dd with the od utility; the rows inserted into the table are visible:

[root@jyrac3 ~]# od -c block_20.dd | tail -3
0017740   S   O   R   T   =   '   B   I   N   A   R   Y   '  \b   , 001
0017760 001 003   D   O   G   , 001 001 003   C   A   T 001 006  \r 203
0020000

Both DOG and CAT are visible.

Example with ASM version 12.1.0.1 in Exadata
On Exadata, the dd command cannot be used to extract blocks because the disks are not visible to the database servers. To get at database blocks, use the kfed tool instead; find_block.pl generates kfed commands accordingly.


$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on [date]

SQL> alter pluggable database BR_PDB open;

Pluggable database altered.

SQL> show pdbs

CON_ID CON_NAME OPEN MODE   RESTRICTED
------ -------- ----------- ----------
       2 PDB$SEED READ ONLY   NO
...
       5 BR_PDB   READ WRITE  NO

SQL>

$ sqlplus bane/welcome1@BR_PDB

SQL*Plus: Release 12.1.0.1.0 Production on [date]

SQL> create table TAB1 (n number, name varchar2(16)) tablespace USERS;

Table created.

SQL> insert into TAB1 values (1, 'CAT');

1 row created.

SQL> insert into TAB1 values (2, 'DOG');

1 row created.

SQL> commit;

Commit complete.

SQL> select t.name "Tablespace", f.name "Datafile"
from v$tablespace t, v$datafile f
where t.ts#=f.ts# and t.name='USERS';

Tablespace Datafile
---------- ---------------------------------------------
USERS      +DATA/CDB/054.../DATAFILE/users.588.860861901

SQL> select ROWID, NAME from TAB1;

ROWID              NAME
------------------ ----
AAAWYEABfAAAACDAAA CAT
AAAWYEABfAAAACDAAB DOG

SQL> select DBMS_ROWID.ROWID_BLOCK_NUMBER('AAAWYEABfAAAACDAAA') "Block number" from dual;

Block number
------------
       131

SQL>

Switch to the ASM environment and run the script:

$ $ORACLE_HOME/perl/bin/perl find_block.pl +DATA/CDB/0548068A10AB14DEE053E273BB0A46D1/DATAFILE/users.588.860861901 131
kfed read dev=o/192.168.1.9/DATA_CD_03_exacelmel05 ausz=4194304 aunum=16212 blksz=8192 blknum=131 | grep -iv ^kf > block_131.txt
kfed read dev=o/192.168.1.11/DATA_CD_09_exacelmel07 ausz=4194304 aunum=16267 blksz=8192 blknum=131 | grep -iv ^kf > block_131.txt

Note that find_block.pl generated two commands because the data file is in a normal redundancy disk group. Run one of them:

$ kfed read dev=o/192.168.1.9/DATA_CD_03_exacelmel05 ausz=4194304 aunum=16212 blksz=8192 blknum=131 | grep -iv ^kf > block_131.txt
$

Examine the contents of block_131.txt; DOG and CAT are visible:

$ more block_131.txt
...
FD5106080 00000000 00000000 ...  [................]
      Repeat 501 times
FD5107FE0 00000000 00000000 ...  [........,......D]
FD5107FF0 012C474F 02C10202 ...  [OG,......CAT..,-]
$

Find any block
The find_block.pl script can extract blocks from any file stored in ASM. Run the following to extract an arbitrary block (block 5) from a control file:

$ $ORACLE_HOME/perl/bin/perl find_block.pl +DATA/CDB/CONTROLFILE/current.289.843047837 5
kfed read dev=o/192.168.1.9/DATA_CD_10_exacelmel05 ausz=4194304 aunum=73 blksz=16384 blknum=5 | grep -iv ^kf > block_5.txt
kfed read dev=o/192.168.1.11/DATA_CD_01_exacelmel07 ausz=4194304 aunum=66 blksz=16384 blknum=5 | grep -iv ^kf > block_5.txt
kfed read dev=o/192.168.1.10/DATA_CD_04_exacelmel06 ausz=4194304 aunum=78 blksz=16384 blknum=5 | grep -iv ^kf > block_5.txt
$

The script shows the correct control file block size (16K) and generates three different commands: although the DATA disk group uses normal redundancy, control files are high redundancy (control files default to high redundancy in ASM).

Summary:
find_block.pl is a Perl script that generates the dd or kfed commands needed to extract blocks from files stored in ASM. In most cases we want to extract blocks from data files, but the script can also extract blocks from control files, redo logs, or any other file.

If the file is stored in an external redundancy disk group, the script generates a single command, which extracts the block from an ASM disk.

If the file is stored in a normal redundancy disk group, the script generates two commands, which extract the same copy of the block from two different ASM disks.

If the file is stored in a high redundancy disk group, the script generates three commands.

Oracle ASM REQUIRED_MIRROR_FREE_MB

REQUIRED_MIRROR_FREE_MB and USABLE_FILE_MB are two very meaningful columns in the V$ASM_DISKGROUP[_STAT] views. Many questions come up about these two columns and how they are calculated.

How much space can I use
ASM does not stop you from using all of the available space in an external redundancy disk group, half of the total space in a normal redundancy disk group, or a third of the total space in a high redundancy disk group. But if you fill a disk group to the brim, there is no room left for any file to grow or be added, and after a disk failure there is no space to restore redundancy for the failed disk's data until the disk is replaced and the rebalance completes.
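
The upper bounds per redundancy level can be sketched as:

```python
def raw_usable_mb(total_mb, redundancy):
    """Upper bound on file data a disk group can hold, ignoring
    REQUIRED_MIRROR_FREE_MB: all of the space for external redundancy,
    half for normal (2 copies), a third for high (3 copies)."""
    copies = {"external": 1, "normal": 2, "high": 3}[redundancy]
    return total_mb // copies

print(raw_usable_mb(30720, "normal"))  # → 15360
```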

11gR2 ASM in Exadata
In Exadata with ASM 11gR2, required_mirror_free_mb is reported as the size of the largest failure group in the disk group. The following example uses ASM 11.2.0.4 on Exadata.

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.4.0 Production on [date]

SQL> select NAME, GROUP_NUMBER from v$asm_diskgroup_stat;

NAME      GROUP_NUMBER
--------- ------------
DATA                 1
DBFS_DG              2
RECO                 3

SQL>

Let's look at disk group DBFS_DG. Normally each failure group of DBFS_DG has 10 disks; a few disks were dropped here to show that REQUIRED_MIRROR_FREE_MB reports the size of the largest failure group.

SQL> select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"
from v$asm_disk_stat
where GROUP_NUMBER=2
group by FAILGROUP
order by 3;

FAILGROUP       Disks         MB
---------- ---------- ----------
EXACELL04           7     180096
EXACELL01           8     205824
EXACELL02           9     231552
EXACELL03          10     257280

SQL>

The total size of the largest failure group is 257280 MB.

Finally, compare that with required_mirror_free_mb:

SQL> select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB
from v$asm_diskgroup_stat
where GROUP_NUMBER=2;

NAME         TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
---------- ---------- ---------- ----------------------- --------------
DBFS_DG        874752     801420                  257280         272070

ASM calculates USABLE_FILE_MB using the following formula:

USABLE_FILE_MB=(FREE_MB-REQUIRED_MIRROR_FREE_MB)/2
              =(801420-257280)/2=544140/2=272070 
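
Using the failure-group sizes queried above, the 11gR2 Exadata behavior can be reproduced with a short sketch:

```python
failgroups = {  # TOTAL_MB per failure group, from v$asm_disk_stat above
    "EXACELL04": 180096,
    "EXACELL01": 205824,
    "EXACELL02": 231552,
    "EXACELL03": 257280,
}
free_mb = 801420

required_mirror_free_mb = max(failgroups.values())         # largest failure group
usable_file_mb = (free_mb - required_mirror_free_mb) // 2  # normal redundancy

print(required_mirror_free_mb, usable_file_mb)  # → 257280 272070
```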

Exadata with ASM version 12cR1
In Exadata with ASM 12cR1, required_mirror_free_mb is reported as the size of the largest disk in the disk group.

[grid@exadb03 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on [date]

SQL> select NAME, GROUP_NUMBER from v$asm_diskgroup_stat;

NAME     GROUP_NUMBER
-------- ------------
DATA                1
DBFS_DG             2
RECO                3

SQL> select FAILGROUP, count(NAME) "Disks", sum(TOTAL_MB) "MB"
from v$asm_disk_stat
where GROUP_NUMBER=2
group by FAILGROUP
order by 3;

FAILGROUP       Disks         MB
---------- ---------- ----------
EXACELL05           8     238592
EXACELL07           9     268416
EXACELL06          10     298240

The total size of the largest failure group is 298240 MB, yet this time required_mirror_free_mb shows 29824 MB.

SQL> select NAME, TOTAL_MB, FREE_MB, REQUIRED_MIRROR_FREE_MB, USABLE_FILE_MB
from v$asm_diskgroup_stat
where GROUP_NUMBER=2;

NAME         TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB
---------- ---------- ---------- ----------------------- --------------
DBFS_DG        805248     781764                   29824         375970

Now check the size of the largest disk in the disk group:

SQL> select max(TOTAL_MB) from v$asm_disk_stat where GROUP_NUMBER=2;

MAX(TOTAL_MB)
-------------
        29824

ASM calculates usable_file_mb as follows:

USABLE_FILE_MB = (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2
               =(781764-29824)/2=375970
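The two calculations above can be sketched as a small helper (a hypothetical function, not anything ASM ships; the divisor of 2 assumes NORMAL redundancy, HIGH redundancy would divide by 3):

```python
def usable_file_mb(free_mb, required_mirror_free_mb, redundancy="normal"):
    """Compute USABLE_FILE_MB the way ASM reports it.

    NORMAL redundancy keeps 2 copies of each extent, so the divisor
    is 2; HIGH redundancy keeps 3 copies, so the divisor is 3.
    """
    divisor = {"normal": 2, "high": 3}[redundancy]
    return (free_mb - required_mirror_free_mb) // divisor

# 11gR2 Exadata example: required = largest failure group (257280 MB)
print(usable_file_mb(801420, 257280))  # 272070

# 12cR1 Exadata example: required = largest disk (29824 MB)
print(usable_file_mb(781764, 29824))   # 375970
```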

Summary:
required_mirror_free_mb and usable_file_mb help DBAs and storage administrators plan disk group capacity and redundancy. They are reporting values only; ASM does not enforce them. In Exadata with ASM 12cR1, required_mirror_free_mb is simply the size of the largest disk in the disk group. This design reflects field experience: the typical failure is a single disk, not an entire storage cell.

Oracle ASM Disk Group Attributes

Disk group attributes were introduced in ASM 11.1. They belong to disk groups, not to the ASM instance. Some attributes can only be set at disk group creation, some only after creation, and some at any time.

ACCESS_CONTROL.ENABLED
This attribute determines whether Oracle ASM File Access Control is enabled for a disk group. It can be set to TRUE or FALSE (the default). If set to TRUE, access to ASM files is subject to access control; if FALSE, any user can access the files in the disk group. All other operations are independent of this attribute. It can only be set when altering a disk group.

ACCESS_CONTROL.UMASK
This attribute determines which permissions are masked out when an ASM file is created, for the file owner, the group, and others. It applies to all files in the disk group. The value is a three-digit combination of {0|2|6} {0|2|6} {0|2|6}; the default is 066. A digit of 0 masks nothing, 2 masks write permission, and 6 masks both read and write. ACCESS_CONTROL.ENABLED must be set to TRUE before ACCESS_CONTROL.UMASK can be set.
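As a rough illustration (plain Python, not Oracle code), each umask digit can be modeled as masking bits out of a full read+write (6) grant for the owner, group, and others:

```python
def effective_perms(umask="066"):
    """Model the permissions left after applying ACCESS_CONTROL.UMASK.

    Each digit masks bits out of a full read (4) + write (2) grant:
    0 masks nothing, 2 masks write, 6 masks both read and write.
    """
    full = 6  # read (4) + write (2)
    return [full & ~int(d) for d in umask]

# Default umask 066: owner keeps rw (6), group and others get nothing
print(effective_perms("066"))  # [6, 0, 0]
# umask 026: owner rw (6), group read-only (4), others nothing
print(effective_perms("026"))  # [6, 4, 0]
```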

AU_SIZE
This attribute controls the allocation unit (AU) size and can only be set at disk group creation. Each disk group can have a different AU size.

CELL.SMART_SCAN_CAPABLE[Exadata]
This attribute is used on Exadata when creating disk groups on grid disks in the storage cells. It enables smart scan for objects stored in the disk group.

COMPATIBLE.ASM
The compatible.asm attribute of a disk group determines the minimum software version an ASM instance must run to use the disk group. This setting also affects the format of the ASM metadata structures. The default compatible.asm is 10.1 when the disk group is created with the CREATE DISKGROUP statement, the ASMCMD mkdg command, or the EM Create Disk Group page. When the disk group is created with ASMCA, the default is 11.2 in ASM 11gR2 and 12.1 in ASM 12c.

COMPATIBLE.RDBMS
This attribute determines the minimum value of the COMPATIBLE parameter for any database instance that uses the disk group. Before advancing compatible.rdbms, make sure the COMPATIBLE parameter of every database that accesses the disk group is set to at least the new compatible.rdbms value.
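A minimal sketch of the implied check (hypothetical helper names, not an Oracle API): a database's COMPATIBLE setting must be at least the disk group's compatible.rdbms before the database can use the disk group. Version strings compare component by component, not lexically:

```python
def version_tuple(v):
    """Parse a version string like '11.2.0.4' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def db_can_use_diskgroup(db_compatible, compatible_rdbms):
    """True if the database COMPATIBLE meets the disk group's compatible.rdbms."""
    return version_tuple(db_compatible) >= version_tuple(compatible_rdbms)

print(db_can_use_diskgroup("11.2.0.4", "11.2.0.0.0"))  # True
print(db_can_use_diskgroup("10.2.0.5", "11.2.0.0.0"))  # False
```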

COMPATIBLE.ADVM
This attribute determines whether the disk group can hold ASM volumes. It must be set to 11.2 or higher. Before it can be set, compatible.asm must be 11.2 or higher and the ADVM volume driver must be loaded in the supported environment. By default, compatible.advm is empty.

CONTENT.CHECK[12c]
This attribute enables or disables content checking when the disk group is rebalanced. It can be set to TRUE or FALSE. Content checking includes Hardware Assisted Resilient Data (HARD) checks on user data, validation of file types against the file directory, and comparison of file directory information with the mirror copies. When set to TRUE, content checking is performed for all rebalance operations. The feature also serves as a disk scrubbing facility.

CONTENT.TYPE[11.2.0.3,Exadata]
This attribute identifies the disk group type: DATA, RECOVERY, or SYSTEM. It determines the distance to the nearest partner disk/failure group. The default is DATA, which specifies a distance of 1; RECOVERY specifies a distance of 3, and SYSTEM a distance of 5. A distance of 1 means ASM considers all disks as candidate partners; a distance of 3 means every third disk is considered a candidate partner, and a distance of 5 every fifth disk.

The attribute can be set when creating or altering a disk group. If content.type is set or changed with ALTER DISKGROUP, the new configuration does not take effect until a disk group rebalance is explicitly run and completes.

The content.type attribute is valid only for NORMAL and HIGH redundancy disk groups. compatible.asm must be set to 11.2.0.3 or higher to enable it.

DISK_REPAIR_TIME
DISK_REPAIR_TIME determines how long ASM keeps a disk offline before dropping it. The attribute is related to the fast mirror resync feature, requires compatible.asm 11.1 or higher, and can only be set when altering a disk group.

FAILGROUP_REPAIR_TIME[12c]
This attribute specifies the default repair time for the failure groups in a disk group. The failure group repair time is used when ASM determines that an entire failure group has failed. The default is 24 hours. If a repair time is specified for an individual disk, for example with ALTER DISKGROUP OFFLINE DISK ... DROP AFTER, the disk repair time overrides the failure group repair time.

This attribute can only be set when altering NORMAL and HIGH redundancy disk groups.

IDP.BOUNDARY and IDP.TYPE[Exadata]
These two attributes are used to configure Exadata storage and are related to the Intelligent Data Placement feature.

PHYS_META_REPLICATED[12c]
This attribute tracks the replication status of a disk group. When a disk group's ASM compatibility is set to 12.1 or higher, the physical metadata of each disk is replicated. This metadata includes the disk header, free space table blocks, and allocation table blocks. Replication is performed online and asynchronously. A value of true means the physical metadata of every disk in the disk group has been replicated.

This attribute can only be defined for a disk group when compatible.asm is set to 12.1 or higher. It is read-only and cannot be set or modified by users. Its value is true or false.

SECTOR_SIZE
This attribute specifies the sector size of the disks in the disk group and can only be set at disk group creation. SECTOR_SIZE can be 512, 4096, or 4K (provided the disks support those values). The default depends on the platform. compatible.asm and compatible.rdbms must be set to 11.2 or higher before a non-default sector size can be specified. ACFS does not support 4 KB sector drives.

STORAGE.TYPE
This attribute specifies the type of the disks in the disk group. Possible values are exadata, pillar, zfsas, and other. If the attribute is set to exadata, pillar, or zfsas, all disks in the disk group must be of that type. If it is set to other, the disks in the disk group can be of any type.

If the storage.type disk group attribute is set to pillar or zfsas, Hybrid Columnar Compression (HCC) can be enabled for objects in the disk group. Exadata supports HCC natively.

Note: ZFS storage must be provisioned through Direct NFS (dNFS), and Pillar Axiom storage must be provisioned through a SCSI or Fibre Channel interface.

To set the storage.type attribute, the compatible.asm and compatible.rdbms disk group attributes must be set to 11.2.0.3 or higher. For maximum support with ZFS storage, set compatible.asm and compatible.rdbms to 11.2.0.4 or higher.

The storage.type attribute can be set when creating or altering a disk group. It cannot be set while clients are connected to the disk group. For example, it cannot be set while the disk group contains an ADVM volume in use.

The attribute is not visible in the v$asm_attribute view or ASMCMD lsattr output until it has been set.

THIN_PROVISIONED[12c]
This attribute enables or disables discarding unused storage space after a disk group rebalance completes. It can be set to true or false (the default).

Storage vendor products that support thin provisioning can reuse the discarded space more efficiently.

APPLIANCE.MODE[11.2.0.4,Exadata]
The APPLIANCE.MODE attribute improves disk rebalance completion time when one or more ASM disks are dropped, which means redundancy is restored faster after a disk failure. The attribute is enabled automatically when a new disk group is created on Exadata; for existing disk groups it must be set explicitly with ALTER DISKGROUP. This feature is known as fixed partnering.

This attribute can be enabled for a disk group when all of the following conditions are met:
The Oracle ASM disk group attribute compatible.asm is set to 11.2.0.4 or higher; CELL.SMART_SCAN_CAPABLE is set to TRUE; all disks in the disk group are of the same type (for example, all hard disks or all flash disks); all disks in the disk group are the same size; all failure groups in the disk group have the same number of disks; and no disk in the disk group is offline.

Minimum software: Oracle Exadata Storage Server Software release 11.2.3.3, running with Oracle Database 11g Release 2 (11.2) release 11.2.0.4.

Note: this feature is not available in Oracle Database version 12.1.0.1.

Hidden disk group attributes

_REBALANCE_COMPACT
This attribute relates to the compacting phase of rebalance. It can be TRUE (the default) or FALSE. Setting it to FALSE disables the compacting phase of disk group rebalance.

_EXTENT_COUNTS
The _EXTENT_COUNTS attribute, related to variable extent sizes, determines at which points the extent size grows. Its value is "20000 20000 2147483647", meaning the first 20000 extents are 1 AU each, the size of the next 20000 extents is given by the second value of the _extent_sizes attribute, and the size of all remaining extents is given by the third value of _extent_sizes.

_EXTENT_SIZES
This attribute is the second hidden parameter related to variable extent sizes; it determines the extent size progression, expressed in numbers of AUs.

In ASM 11.1 the value was "1 8 64". In ASM 11.2 and later it is "1 4 16".
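Putting the two hidden attributes together, the extent size for a given file extent can be sketched like this (a toy model assuming the ASM 11.2 defaults; extent numbers are zero-based here):

```python
def extent_size_in_aus(extent_number,
                       counts=(20000, 20000, 2147483647),  # _extent_counts
                       sizes=(1, 4, 16)):                  # _extent_sizes (11.2+)
    """Return the size, in AUs, of a given file extent number.

    The first counts[0] extents are sizes[0] AUs each, the next
    counts[1] extents are sizes[1] AUs each, and the rest are
    sizes[2] AUs each.
    """
    boundary = 0
    for count, size in zip(counts, sizes):
        boundary += count
        if extent_number < boundary:
            return size
    return sizes[-1]

print(extent_size_in_aus(0))      # 1  (first 20000 extents)
print(extent_size_in_aus(20000))  # 4  (next 20000 extents)
print(extent_size_in_aus(40000))  # 16 (all remaining extents)
```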

The v$asm_attribute view and the ASMCMD lsattr command
Disk group attributes can be viewed through the v$asm_attribute view and the asmcmd lsattr command.

Here is one way to display the attributes of disk group DATADG:

[grid@jyrac1 ~]$ asmcmd lsattr -G DATADG -l
Name                     Value       
access_control.enabled   FALSE       
access_control.umask     066         
au_size                  1048576     
cell.smart_scan_capable  FALSE       
compatible.asm           11.2.0.0.0  
compatible.rdbms         11.2.0.0.0  
disk_repair_time         3.6h        
sector_size              512         

Disk group attributes can be modified with the ALTER DISKGROUP SET ATTRIBUTE statement, the ASMCMD setattr command, or ASMCA. Here is an example of using ASMCMD setattr to change the disk_repair_time attribute:

[grid@jyrac1 ~]$ asmcmd setattr -G DATADG disk_repair_time '4.5 H'

Check the new attribute value:

[grid@jyrac1 ~]$ asmcmd lsattr -G DATADG -l disk_repair_time
Name              Value  
disk_repair_time  4.5 H  

Summary:
Disk group attributes, introduced in ASM 11.1, are a fine-grained way to tune disk group behavior. Some attributes are Exadata-specific, and some are available only in particular versions. Most disk group attributes can be viewed through the v$asm_attribute view.

Query